Science.gov

Sample records for a-weighted sound level

  1. A-Weighted Sound Levels in Cockpits of Fixed- and Rotary-Wing Aircraft.

    DTIC Science & Technology

    fixed-wing vehicles and from 98 to 106 dB for helicopters. Means and standard deviations are reported by octave-bands, all-pass (flat), A-levels, and...preferred speech interference levels (PSIL, average of 500, 1000 and 2000 Hz). Also, at-the-ear A-levels are reported for generalized amounts of attenuation provided by headsets commonly worn in aircraft. (Author)

  2. Comparing Average Levels and Peak Occurrence of Overnight Sound in the Medical Intensive Care Unit on A-weighted and C-weighted Decibel Scales

    PubMed Central

    Knauert, Melissa; Jeon, Sangchoon; Murphy, Terrence E.; Yaggi, H. Klar; Pisani, Margaret A.; Redeker, Nancy S.

    2016-01-01

    Purpose: Sound levels in the intensive care unit (ICU) are universally elevated and are believed to contribute to sleep and circadian disruption. The purpose of this study is to compare overnight ICU sound levels and peak occurrence on A- versus C-weighted scales. Materials and Methods: This was a prospective observational study of overnight sound levels in 59 medical ICU patient rooms. Sound level was recorded every 10 seconds on A- and C-weighted decibel scales. Equivalent sound level (Leq) and sound peaks were reported for full and partial night periods. Results: The overnight A-weighted Leq of 53.6 dBA was well above World Health Organization (WHO) recommendations; overnight C-weighted Leq was 63.1 dBC (no WHO recommendations). Peak sound occurrence ranged from 1.8 to 23.3 times per hour. Illness severity, mechanical ventilation and delirium were not associated with Leq or peak occurrence. Leq and peak measures for A- and C-weighted decibel scales were significantly different from each other. Conclusions: Sound levels in the medical ICU are high throughout the night. Patient factors were not associated with Leq or peak occurrence. Significant discordance between A- and C-weighted values suggests that low frequency sound is a meaningful factor in the medical ICU environment. PMID:27546739
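
    The equivalent sound level (Leq) reported above is an energy average of the interval samples, not an arithmetic mean of decibel values. A minimal sketch of that calculation and of an hourly peak count, assuming hypothetical arrays of 10-second A- and C-weighted samples (the study's actual peak criterion is not restated here):

    ```python
    import numpy as np

    def leq(levels_db):
        """Energy-average equal-duration sound level samples (dB)."""
        return 10 * np.log10(np.mean(10 ** (np.asarray(levels_db) / 10)))

    def peaks_per_hour(levels_db, threshold_db, samples_per_hour=360):
        """Rate of samples exceeding a threshold; 360/h = one reading every 10 s."""
        x = np.asarray(levels_db)
        return np.sum(x > threshold_db) / (x.size / samples_per_hour)

    # Hypothetical overnight samples (one value every 10 s for 8 hours)
    dba = np.random.normal(53, 4, size=8 * 360)
    dbc = dba + np.random.normal(9, 2, size=dba.size)  # extra low-frequency energy raises dBC
    print(leq(dba), leq(dbc), peaks_per_hour(dba, 65))
    ```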

  3. Developing a Weighted Measure of Speech Sound Accuracy

    ERIC Educational Resources Information Center

    Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.

    2011-01-01

    Purpose: To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound…

  4. Developing a weighted measure of speech sound accuracy.

    PubMed

    Preston, Jonathan L; Ramsdell, Heather L; Oller, D Kimbrough; Edwards, Mary Louise; Tobin, Stephen J

    2011-02-01

    To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound Accuracy (WSSA) score. The authors then evaluate the reliability and validity of this measure. Phonetic transcriptions were analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy was validated against existing measures, was used to discriminate typical and disordered speech production, and was evaluated to examine sensitivity to changes in phonetic accuracy over time. Reliability between transcribers and consistency of scores among different word sets and testing points are compared. Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners' judgments of the severity of a child's speech disorder. The measure separates children with and without speech sound disorders and captures growth in phonetic accuracy in toddlers' speech over time. The measure correlates highly across transcribers, word lists, and testing points. Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children's speech.
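
    The abstract does not state the actual weighting scheme, so the following is only a hypothetical illustration of how differentially weighted speech sound errors could be scored; the categories and weights are invented and are not the published WSSA formula:

    ```python
    # Hypothetical per-sound weights; NOT the authors' published WSSA scheme.
    WEIGHTS = {"correct": 1.0, "distortion": 0.75, "substitution": 0.5, "omission": 0.0}

    def weighted_accuracy(codes):
        """Average the per-sound weights to yield a 0-1 accuracy score."""
        return sum(WEIGHTS[c] for c in codes) / len(codes)

    print(weighted_accuracy(["correct", "correct", "distortion", "omission"]))  # 0.6875
    ```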

  5. Developing a Weighted Measure of Speech Sound Accuracy

    PubMed Central

    Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.

    2010-01-01

    Purpose: To develop a system for numerically quantifying a speaker’s phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, we describe a system for differentially weighting speech sound errors based on various levels of phonetic accuracy with a Weighted Speech Sound Accuracy (WSSA) score. We then evaluate the reliability and validity of this measure. Method: Phonetic transcriptions are analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy is compared to existing measures, is used to discriminate typical and disordered speech production, and is evaluated to determine whether it is sensitive to changes in phonetic accuracy over time. Results: Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners’ judgments of severity of a child’s speech disorder. The measure separates children with and without speech sound disorders. WSSA scores also capture growth in phonetic accuracy in toddlers’ speech over time. Conclusion: Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children’s speech. PMID:20699344

  6. School Sound Level Study.

    ERIC Educational Resources Information Center

    California State Dept. of Education, Sacramento.

    California has conducted on-site sound surveys of 36 different schools to determine the degree of noise, and thus disturbance, within the learning environment. This report provides the methodology and results of the survey, including descriptive charts and graphs illustrating typical desirable and undesirable sound levels. Results are presented…

  7. A Description of Methodologies Used in Estimation of A-Weighted Sound Levels for FAA Advisory Circular AC-36-3B.

    DTIC Science & Technology

    1982-01-01

    second) Dia propeller diameter (expressed in inches), T°F air temperature in degrees Fahrenheit, T°C air temperature in degrees Celsius, T:dBA total dBA...empirical function to the absolute noise level ordinate. The term 240 log(MH) is the most sensitive and important part of the equation. The constant (240...standard day, zero wind, dry, zero-gradient runway, at a sea-level airport. 2. All aircraft operate at maximum takeoff gross weight. 3. All aircraft climb

  8. Sound Levels in East Texas Schools.

    ERIC Educational Resources Information Center

    Turner, Aaron Lynn

    A survey of sound levels was taken in several Texas schools to determine the amount of noise and sound present by size of class, type of activity, location of building, and the presence of air conditioning and large amounts of glass. The data indicate that class size and relative amounts of glass have no significant bearing on the production of…

  9. Sound pressure level in a municipal preschool

    PubMed Central

    Kemp, Adriana Aparecida Tahara; Delecrode, Camila Ribas; Guida, Heraldo Lorena; Ribeiro, André Knap; Cardoso, Ana Claúdia Vieira

    2013-01-01

    Summary Aim: To evaluate the sound pressure level to which preschool students are exposed. Method: This was a prospective, quantitative, nonexperimental, and descriptive study. To achieve the aim of the study we used an audio dosimeter. The sound pressure level (SPL) measurements were obtained for two age-based classrooms, Preschool I and II. The measurements were obtained over 4 days in 8-hour sessions, totaling 1920 minutes. Results: The measured SPL ranged from 40.6 dB(A) to 105.8 dB(A). The frequency spectrum of the SPL was concentrated in the range between 500 Hz and 4000 Hz. The older children produced higher SPLs than the younger ones, and the levels varied according to the activity performed. Painting and writing were the quietest activities, while free activity periods and games were the noisiest. Conclusion: The SPLs measured at the preschool exceeded the maximum permitted levels according to the reference standards. Therefore, the implementation of actions that aim to minimize the negative impact of noise in this environment is essential. PMID:25992013

  10. Sound level exposure of high-risk infants in different environmental conditions.

    PubMed

    Byers, Jacqueline F; Waugh, W Randolph; Lowman, Linda B

    2006-01-01

    To provide descriptive information about the sound levels to which high-risk infants are exposed in various actual environmental conditions in the NICU, including the impact of physical renovation on sound levels, and to assess the contributions of various types of equipment, alarms, and activities to sound levels in simulated conditions in the NICU. Descriptive and comparative design. Convenience sample of 134 infants at a southeastern quaternary children's hospital. A-weighted decibel (dBA) sound levels under various actual and simulated environmental conditions. The renovated NICU was, on average, 4-6 dBA quieter across all environmental conditions than a comparable nonrenovated room, representing a significant sound level reduction. Sound levels remained above consensus recommendations despite physical redesign and staff training. Respiratory therapy equipment, alarms, staff talking, and infant fussiness contributed to higher sound levels. Evidence-based sound-reducing strategies are proposed. Findings were used to plan environment management as part of a developmental, family-centered care, performance improvement program and in new NICU planning.

  11. High sound pressure levels in Bavarian discotheques remain after introduction of voluntary agreements.

    PubMed

    Twardella, Dorothee; Wellhoefer, Andrea; Brix, Jutta; Fromme, Hermann

    2008-01-01

    While no legal rules or regulations exist in Germany, voluntary measures were introduced to achieve a reduction of sound pressure levels in discotheques to levels below 100 dB(A). To evaluate the current levels in Bavarian discotheques and to find out whether these voluntary measures ensured compliance with the recommended limits, sound pressure levels were measured in 20 Bavarian discotheques between 11 p.m. and 2 a.m. With respect to the equivalent continuous A-weighted sound pressure level for each 30-minute period (LAeq,30min), only 4/20 discotheques remained below the limit of 100 dB(A) in all time periods. Ten discotheques had sound pressure levels below 100 dB(A) for the total measurement period (LAeq,180min). None of the evaluated factors (weekday, size, estimated age of attendees, the use of voluntary measures such as participation of disc jockeys in a tutorial, or the availability of a sound level meter for the DJs) was significantly associated with the maximal LAeq,30min. Thus, the introduction of voluntary measures was not sufficient to ensure compliance with the recommended limits of sound pressure levels.
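
    The 180-minute level is the energy average of the six consecutive 30-minute levels, which is why a venue can exceed 100 dB(A) in a single period yet stay below it overall. A minimal sketch of that aggregation (the six values below are invented for illustration):

    ```python
    import numpy as np

    def combine_leq(leq_values_db):
        """Energy-average equal-duration Leq values into one overall Leq (dB)."""
        return 10 * np.log10(np.mean(10 ** (np.asarray(leq_values_db) / 10)))

    half_hour_laeq = [97.0, 99.5, 101.2, 98.4, 100.6, 99.0]  # hypothetical LAeq,30min values
    print(round(combine_leq(half_hour_laeq), 1))  # overall LAeq,180min of about 99.5 dB(A)
    ```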

  12. Decoding sound level in the marmoset primary auditory cortex.

    PubMed

    Sun, Wensheng; Marongelli, Ellisha N; Watkins, Paul V; Barbour, Dennis L

    2017-10-01

    Neurons that respond favorably to a particular sound level have been observed throughout the central auditory system, becoming steadily more common at higher processing areas. One theory about the role of these level-tuned or nonmonotonic neurons is the level-invariant encoding of sounds. To investigate this theory, we simulated various subpopulations of neurons by drawing from real primary auditory cortex (A1) neuron responses and surveyed their performance in forming different sound level representations. Pure nonmonotonic subpopulations did not provide the best level-invariant decoding; instead, mixtures of monotonic and nonmonotonic neurons provided the most accurate decoding. For level-fidelity decoding, the inclusion of nonmonotonic neurons slightly improved or did not change decoding accuracy until they constituted a high proportion. These results indicate that nonmonotonic neurons fill an encoding role that is complementary to, rather than an alternative to, that of monotonic neurons. NEW & NOTEWORTHY Neurons with nonmonotonic rate-level functions are unique to the central auditory system. These level-tuned neurons have been proposed to account for invariant sound perception across sound levels. Through systematic simulations based on real neuron responses, this study shows that neuron populations perform sound encoding optimally when containing both monotonic and nonmonotonic neurons. The results indicate that instead of working independently, nonmonotonic neurons complement the function of monotonic neurons in different sound-encoding contexts. Copyright © 2017 the American Physiological Society.

  13. The influence of neonatal intensive care unit design on sound level.

    PubMed

    Chen, Hsin-Li; Chen, Chao-Huei; Wu, Chih-Chao; Huang, Hsiu-Jung; Wang, Teh-Ming; Hsu, Chia-Chi

    2009-12-01

    Excessive noise in nurseries has been found to cause adverse effects in infants, especially preterm infants in neonatal intensive care units (NICUs). The NICU design may influence the background sound level. We compared the sound level in two differently designed spaces in one NICU. We hypothesized that the sound level in an enclosed space would be quieter than in an open space. Sound levels were measured continuously 24 hours a day in two separate spaces at the same time, one enclosed and one open. Sound-level meters were placed near beds in each room. Sound levels were expressed as decibels, A-weighted (dBA) and presented as hourly L(eq), L(max), L(10), and L(90). The hourly L(eq) in the open space (50.8-57.2 dB) was greater than that of the enclosed space (45.9-51.7 dB), with a difference of 0.4-10.4 dB, and a mean difference of 4.5 dB (p<0.0001). The hourly L(10), L(90), and L(max) in the open space also exceeded that in the enclosed space (p<0.0001). The sound level measured in the enclosed space was quieter than in the open space. The design of bed space should be taken into consideration when building a new NICU. Besides the design of NICU architecture, continuous monitoring of sound level in the NICU is important to maintain a quiet environment.
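
    For reference, Leq is the energy-averaged level over the hour, Lmax the highest sample, and L10 and L90 the levels exceeded 10% and 90% of the time (the 90th and 10th percentiles of the sampled levels). A minimal sketch of these hourly statistics over a hypothetical array of dBA samples:

    ```python
    import numpy as np

    def hourly_stats(samples_dba):
        """Return (Leq, Lmax, L10, L90) for one hour of sound level samples in dBA."""
        x = np.asarray(samples_dba, dtype=float)
        leq = 10 * np.log10(np.mean(10 ** (x / 10)))
        l10 = np.percentile(x, 90)   # level exceeded 10% of the time
        l90 = np.percentile(x, 10)   # level exceeded 90% of the time
        return leq, x.max(), l10, l90

    samples = np.random.normal(52, 3, size=3600)  # hypothetical 1-s samples for one hour
    print(hourly_stats(samples))
    ```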

  14. Analysis of sound pressure levels emitted by children's toys.

    PubMed

    Sleifer, Pricila; Gonçalves, Maiara Santos; Tomasi, Marinês; Gomes, Erissandra

    2013-06-01

    To verify the levels of sound pressure emitted by non-certified children's toys. Cross-sectional study of sound toys available at popular retail stores of the so-called informal sector. Electronic, mechanical, and musical toys were analyzed. The measurement of each product was carried out by an acoustic engineer in an acoustically isolated booth, using a decibel meter. To obtain the sound parameters of intensity and frequency, the toys were set to produce sounds at distances of 10 and 50 cm from the researcher's ear. The intensity of sound pressure [dB(A)] and the frequency in hertz (Hz) were measured. Forty-eight toys were evaluated. The mean sound pressure 10 cm from the ear was 102±10 dB(A), and at 50 cm, 94±8 dB(A), with p<0.05. The level of sound pressure emitted by the majority of toys was above 85 dB(A). The frequency ranged from 413 to 6,635 Hz, with 56.3% of toys emitting frequencies higher than 2,000 Hz. The majority of toys assessed in this research emitted a high level of sound pressure.

  15. Recovery of Neonatal Head Turning to Decreased Sound Pressure Level.

    ERIC Educational Resources Information Center

    Tarquinio, Nancy; And Others

    1990-01-01

    Investigated newborns' responses to decreased sound pressure level (SPL) by means of a localized head turning habituation procedure. Findings, which demonstrated recovery of neonatal head turning to decreased SPL, were inconsistent with the selective receptor adaptation model. (RH)

  16. Determination of a sound level for railroad horn regulatory compliance.

    DOT National Transportation Integrated Search

    2002-10-31

    The Federal Railroad Administration (FRA) has undertaken a rulemaking process to address the use of locomotive horns at public highway-railroad grade crossings. This rule includes a provision to regulate the sound level output of railroad horns. This...

  17. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations.

    PubMed

    Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia

    2016-01-01

    Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning ("opponent channel model"). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. © The Author 2015. Published by Oxford University Press.
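
    A toy numerical sketch of the opponent-channel idea described above: azimuth is read out from the difference between the pooled responses of a right-preferring and a left-preferring population, and a normalized difference is insensitive to an overall gain change such as a change in sound level. The tuning curves and gain values below are invented for illustration and are not the study's fMRI data:

    ```python
    import numpy as np

    azimuths = np.linspace(-90, 90, 181)  # degrees; negative = left hemifield

    def hemifield_response(azimuth_deg, preferred_side, gain=1.0):
        """Broad sigmoidal tuning favoring one hemifield (hypothetical shape)."""
        s = 1 if preferred_side == "right" else -1
        return gain / (1 + np.exp(-s * azimuth_deg / 20))

    for gain in (1.0, 2.0):  # overall response scaling, e.g. with sound level
        right = hemifield_response(azimuths, "right", gain)
        left = hemifield_response(azimuths, "left", gain)
        opponent = (right - left) / (right + left)  # normalized channel difference
        print(gain, np.round(opponent[[0, 90, 180]], 3))  # identical for both gains
    ```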

  18. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations

    PubMed Central

    Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia

    2016-01-01

    Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning (“opponent channel model”). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. PMID:26545618

  19. Sound Levels and Risk Perceptions of Music Students During Classes.

    PubMed

    Rodrigues, Matilde A; Amorim, Marta; Silva, Manuela V; Neves, Paula; Sousa, Aida; Inácio, Octávio

    2015-01-01

    It is well recognized that professional musicians are at risk of hearing damage due to the exposure to high sound pressure levels during music playing. However, it is important to recognize that the musicians' exposure may start early in the course of their training as students in the classroom and at home. Studies regarding sound exposure of music students and their hearing disorders are scarce and do not take into account important influencing variables. Therefore, this study aimed to describe sound level exposures of music students at different music styles, classes, and according to the instrument played. Further, this investigation attempted to analyze the perceptions of students in relation to exposure to loud music and consequent health risks, as well as to characterize preventive behaviors. The results showed that music students are exposed to high sound levels in the course of their academic activity. This exposure is potentiated by practice outside the school and other external activities. Differences were found between music style, instruments, and classes. Tinnitus, hyperacusis, diplacusis, and sound distortion were reported by the students. However, students were not entirely aware of the health risks related to exposure to high sound pressure levels. These findings reflect the importance of starting intervention in relation to noise risk reduction at an early stage, when musicians are commencing their activity as students.

  20. Behavioral response of manatees to variations in environmental sound levels

    USGS Publications Warehouse

    Miksis-Olds, Jennifer L.; Wagner, Tyler

    2011-01-01

    Florida manatees (Trichechus manatus latirostris) inhabit coastal regions because they feed on the aquatic vegetation that grows in shallow waters, which are the same areas where human activities are greatest. Noise produced from anthropogenic and natural sources has the potential to affect these animals by eliciting responses ranging from mild behavioral changes to extreme aversion. Sound levels were calculated from recordings made throughout behavioral observation periods. An information theoretic approach was used to investigate the relationship between behavior patterns and sound level. Results indicated that elevated sound levels affect manatee activity and are a function of behavioral state. The proportion of time manatees spent feeding and milling changed in response to sound level. When ambient sound levels were highest, more time was spent in the directed, goal-oriented behavior of feeding, whereas less time was spent engaged in undirected behavior such as milling. This work illustrates how shifts in activity of individual manatees may be useful parameters for identifying impacts of noise on manatees and might inform population level effects.

  1. Wind turbine sound pressure level calculations at dwellings.

    PubMed

    Keith, Stephen E; Feder, Katya; Voicescu, Sonia A; Soukhovtsev, Victor; Denning, Allison; Tsang, Jason; Broner, Norm; Leroux, Tony; Richarz, Werner; van den Berg, Frits

    2016-03-01

    This paper provides calculations of outdoor sound pressure levels (SPLs) at dwellings for 10 wind turbine models, to support Health Canada's Community Noise and Health Study. Manufacturer-supplied and measured wind turbine sound power levels were used to calculate outdoor SPL at 1238 dwellings using ISO 9613-2:1996 (Acoustics) and a Swedish noise propagation method. Both methods yielded statistically equivalent results. The A- and C-weighted results were highly correlated over the 1238 dwellings (Pearson's linear correlation coefficient r > 0.8). Calculated wind turbine SPLs were compared to ambient SPLs from other sources, estimated using guidance documents from the United States and Alberta, Canada.
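
    A highly simplified sketch of the kind of outdoor propagation estimate such calculations involve: spherical spreading from a point source plus a flat atmospheric-absorption allowance. This is not the full ISO 9613-2 procedure (which also treats ground effect, barriers, and meteorology), and the turbine sound power and distance below are invented:

    ```python
    import math

    def spl_at_distance(sound_power_db, distance_m, atm_absorption_db_per_km=1.5):
        """Simplified free-field estimate: Lp = Lw - 20*log10(r) - 11 - A_atm."""
        geometric = 20 * math.log10(distance_m) + 11  # spherical spreading, point source
        atmospheric = atm_absorption_db_per_km * distance_m / 1000.0
        return sound_power_db - geometric - atmospheric

    # Hypothetical turbine with a 105 dB(A) sound power level, dwelling at 600 m
    print(round(spl_at_distance(105.0, 600.0), 1))  # roughly 37.5 dB(A)
    ```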

  2. Audio spectrum and sound pressure levels vary between pulse oximeters.

    PubMed

    Chandra, Deven; Tessler, Michael J; Usher, John

    2006-01-01

    The variable-pitch pulse oximeter is an important intraoperative patient monitor. Our ability to hear its auditory signal depends on its acoustical properties and our hearing. This study quantitatively describes the audio spectrum and sound pressure levels of the monitoring tones produced by five variable-pitch pulse oximeters. We compared the Datex-Ohmeda Capnomac Ultima, Hewlett-Packard M1166A, Datex-Engstrom AS/3, Ohmeda Biox 3700, and Datex-Ohmeda 3800 oximeters. Three machines of each of the five models were assessed for sound pressure levels (using a precision sound level meter) and audio spectrum (using a Hanning-windowed fast Fourier transform of three beats at saturations of 99%, 90%, and 85%). The widest range of sound pressure levels was produced by the Hewlett-Packard M1166A (46.5 +/- 1.74 dB to 76.9 +/- 2.77 dB). The loudest model was the Datex-Engstrom AS/3 (89.2 +/- 5.36 dB). Three oximeters, when set to the lower ranges of their volume settings, were indistinguishable from background operating room noise. Each model produced sounds with different audio spectra. Although each model produced a fundamental tone with multiple harmonic overtones, the number of harmonics varied with each model; from three harmonic tones on the Hewlett-Packard M1166A, to 12 on the Ohmeda Biox 3700. There were variations between models, and individual machines of the same model with respect to the fundamental tone associated with a given saturation. There is considerable variance in the sound pressure and audio spectrum of commercially-available pulse oximeters. Further studies are warranted in order to establish standards.
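
    A minimal sketch of the analysis described (a Hanning-windowed FFT used to locate a tone's fundamental and harmonics); the synthetic tone, sampling rate, and harmonic amplitudes are assumptions for illustration, not the oximeter signals themselves:

    ```python
    import numpy as np

    fs = 44100                       # assumed sampling rate, Hz
    t = np.arange(0, 0.2, 1 / fs)    # 200 ms of signal
    f0 = 880.0                       # hypothetical fundamental for one saturation value
    tone = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
               for k, a in enumerate([1.0, 0.5, 0.25]))  # fundamental + two harmonics

    windowed = tone * np.hanning(tone.size)
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(tone.size, 1 / fs)
    print(freqs[np.argmax(spectrum)])  # peak bin lies near the 880 Hz fundamental
    ```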

  3. Range of sound levels in the outdoor environment

    Treesearch

    Lewis S. Goodfriend

    1977-01-01

    Current methods of measuring and rating noise in a metropolitan area are examined, including real-time spectrum analysis and sound-level integration, producing a single-number value representing the noise impact for each hour or each day. Methods of noise rating for metropolitan areas are reviewed, and the various measures from multidimensional rating methods such as...

  4. Sound absorption of a porous material with a perforated facing at high sound pressure levels

    NASA Astrophysics Data System (ADS)

    Peng, Feng

    2018-07-01

    A semi-empirical model is proposed to predict the sound absorption of an acoustical unit consisting of a rigid-porous material layer with a perforated facing under normal incidence at high sound pressure levels (SPLs) of pure tones. The nonlinearity of the perforated facing and the porous material, and the interference between them, are considered in the model. The sound absorptive performance of the acoustical unit is tested at different incident SPLs and in three typical configurations: 1) when the perforated panel (PP) is in direct contact with the porous layer, 2) when the PP is separated from the porous layer by an air gap, and 3) when an air cavity is set between the porous material and the hard backing wall. The test results agree well with the corresponding theoretical predictions. Moreover, the results show that the interference effect is correlated with the width of the air gap between the PP and the porous layer, which alters not only the linear acoustic impedance but also the nonlinear acoustic impedance of the unit and hence its sound absorptive properties.

  5. Attention modifies sound level detection in young children.

    PubMed

    Sussman, Elyse S; Steinschneider, Mitchell

    2011-07-01

    Have you ever shouted your child's name from the kitchen while they were watching television in the living room to no avail, so you shout their name again, only louder? Yet, still no response. The current study provides evidence that young children process loudness changes differently than pitch changes when they are engaged in another task such as watching a video. Intensity level changes were physiologically detected only when they were behaviorally relevant, but frequency level changes were physiologically detected without task relevance in younger children. This suggests that changes in pitch rather than changes in volume may be more effective in evoking a response when sounds are unexpected. Further, even though behavioral ability may appear to be similar in younger and older children, attention-based physiologic responses differ from automatic physiologic processes in children. Results indicate that 1) the automatic auditory processes leading to more efficient higher-level skills continue to become refined through childhood; and 2) there are different time courses for the maturation of physiological processes encoding the distinct acoustic attributes of sound pitch and sound intensity. The relevance of these findings to sound perception in real-world environments is discussed.

  6. A Sound Pressure-level Meter Without Amplification

    NASA Technical Reports Server (NTRS)

    Stowell, E Z

    1937-01-01

    The N.A.C.A. has developed a simple pressure-level meter for the measurement of sound-pressure levels above 70 dB. The instrument employs a carbon microphone but has no amplification. The source of power is five flashlight batteries. Measurements may be made up to the threshold of feeling with an accuracy of plus or minus 2 dB; band analysis of complex spectra may be made if desired.

  7. Operating room sound level hazards for patients and physicians.

    PubMed

    Fritsch, Michael H; Chacko, Chris E; Patterson, Emily B

    2010-07-01

    Exposure to certain new surgical instruments and operating room devices during procedures could cause hearing damage to patients and personnel. Surgical instruments and related equipment generate significant sound levels during routine usage. Both patients and physicians are exposed to these levels during the operative cases, many of which can last for hours. The noise loads during cases are cumulative. Occupational Safety and Health Administration (OSHA) and National Institute for Occupational Safety and Health (NIOSH) standards are inconsistent in their appraisals of potential damage. Implications of the newer power instruments are not widely recognized. Bruel and Kjaer sound meter spectral recordings for 20 major instruments from 5 surgical specialties were obtained at the ear levels of the patient and the surgeon, between 32 Hz and 20 kHz. Routinely used instruments generated sound levels as high as 131 dB. Patient and operator exposures differed. There were unilateral dominant exposures. Many instruments had levels that became hazardous well within the length of an average surgical procedure. The OSHA and NIOSH systems gave contradicting results when applied to individual instruments and types of cases. Background noise, especially in its intermittent form, was also of significant nature. Some patients and personnel have additional predisposing physiologic factors. Instrument noise levels for average length surgical cases may exceed OSHA and NIOSH recommendations for hearing safety. Specialties such as Otolaryngology, Orthopedics, and Neurosurgery use instruments that regularly exceed limits. General operating room noise also contributes to overall personnel exposures. Innovative countermeasures are suggested.

  8. Sound levels in conservative dentistry and endodontics clinic

    PubMed Central

    Dutta, Arindam; Mala, Kundabala; Acharya, Shashi Rashmi

    2013-01-01

    Aim: To evaluate the sound levels generated in dental clinics of conservative dentistry and endodontics. Material and Methods: A decibel-meter with digital readout was used to measure sound levels at different time intervals at the chairside and at the center of the clinic. Minimum and maximum readings during a 3 min interval were recorded. Results: In the post-graduate (PG) clinic, there was a significant difference in noise levels between the chairside (66-81 dB[A]) and the center of the clinic (66-67 dB[A]) at certain times. In the undergraduate (UG) clinic, noise levels with suction and either high/slow speed handpieces (67-80 dB[A]) were significantly higher than at the center of the clinic. Suction alone in the UG clinic (63-75 dB[A]) was significantly quieter than in the PG clinic (69-79 dB[A]). Conclusions: (1) Mean sound levels in the working clinics ranged from 63.0 dB[A] to 81.5 dB[A]. These are within the recommended range for dental equipment. (2) With suction and either low/high speed handpiece combination, the PG clinic was significantly noisier than the UG clinic at several time periods. PMID:23716962

  9. Prediction of light aircraft interior sound pressure level from the measured sound power flowing in to the cabin

    NASA Technical Reports Server (NTRS)

    Atwal, Mahabir S.; Heitman, Karen E.; Crocker, Malcolm J.

    1986-01-01

    The validity of the room equation of Crocker and Price (1982) for predicting the cabin interior sound pressure level was experimentally tested using a specially constructed setup for simultaneous measurements of transmitted sound intensity and interior sound pressure levels. Using measured values of the reverberation time and transmitted intensities, the equation was used to predict the space-averaged interior sound pressure level for three different fuselage conditions. The general agreement between the room equation and experimental test data is considered good enough for this equation to be used for preliminary design studies.

  10. The softest sound levels of the human voice in normal subjects.

    PubMed

    Šrámková, Hana; Granqvist, Svante; Herbst, Christian T; Švec, Jan G

    2015-01-01

    Accurate measurement of the softest sound levels of phonation presents technical and methodological challenges. This study aimed at (1) reliably obtaining normative data on sustained softest sound levels for the vowel [a:] at comfortable pitch; (2) comparing the results for different frequency and time weighting methods; and (3) refining the Union of European Phoniatricians' recommendation on allowed background noise levels for scientific and equipment manufacturers' purposes. Eighty healthy untrained participants (40 females, 40 males) were investigated in quiet rooms using a head-mounted microphone and a sound level meter at 30 cm distance. The one-second-equivalent sound levels were more stable and more representative for evaluating the softest sustained phonations than the fast-time-weighted levels. At 30 cm, these levels were in the range of 48-61 dB(C)/41-53 dB(A) for females and 49-64 dB(C)/35-53 dB(A) for males (5% to 95% quantile range). These ranges may serve as reference data in evaluating vocal normality. In order to reach a signal-to-noise ratio of at least 10 dB for more than 95% of the normal population, the background noise should be below 25 dB(A) and 38 dB(C), respectively, for the softest phonation measurements at 30 cm distance. For the A-weighting, this is 15 dB lower than the previously recommended value.
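
    The recommended ceilings follow directly from the quietest phonations observed: to preserve at least a 10 dB signal-to-noise ratio for the softest levels reported (35 dB(A) and 48 dB(C) at the 5% quantiles), the background noise must satisfy roughly

    \[ L_{\text{noise,A}} \le 35\,\text{dB(A)} - 10\,\text{dB} = 25\,\text{dB(A)}, \qquad L_{\text{noise,C}} \le 48\,\text{dB(C)} - 10\,\text{dB} = 38\,\text{dB(C)}. \]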

  11. 49 CFR 325.57 - Location and operation of sound level measurement systems; stationary test.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    § 325.57 Location and operation of sound level measurement systems; stationary test. (a) The microphone of a sound level measurement system that conforms to the rules in § 325.23 shall be located at a...

  12. 49 CFR 325.57 - Location and operation of sound level measurement systems; stationary test.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    § 325.57 Location and operation of sound level measurement systems; stationary test. (a) The microphone of a sound level measurement system that conforms to the rules in § 325.23 shall be located at a...

  13. 49 CFR 325.37 - Location and operation of sound level measurement system; highway operations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    § 325.37 Location and operation of sound level measurement system; highway operations. (a) The microphone of a sound level measurement system that conforms to the rules in § 325.23 of this...

  14. 49 CFR 325.57 - Location and operation of sound level measurement systems; stationary test.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    § 325.57 Location and operation of sound level measurement systems; stationary test. (a) The microphone of a sound level measurement system that conforms to the rules in § 325.23 shall be located at a...

  15. Urban noise functional stratification for estimating average annual sound level.

    PubMed

    Rey Gozalo, Guillermo; Barrigón Morillas, Juan Miguel; Prieto Gajardo, Carlos

    2015-06-01

    Road traffic noise causes many health problems and the deterioration of the quality of urban life; thus, adequate spatial and temporal noise assessment methods are required. Different methods have been proposed for the spatial evaluation of noise in cities, including the categorization method. Until now, this method has only been applied for the study of spatial variability with measurements taken over a week. In this work, continuous measurements of 1 year carried out in 21 different locations in Madrid (Spain), which has more than three million inhabitants, were analyzed. The annual average sound levels and the temporal variability were studied in the proposed categories. The results show that the three proposed categories highlight the spatial noise stratification of the studied city in each period of the day (day, evening, and night) and in the overall indicators (LAdn, LAden, and LA24). Also, significant differences between the diurnal and nocturnal sound levels show functional stratification in these categories. Therefore, this functional stratification offers advantages from both spatial and temporal perspectives by reducing the sampling points and the measurement time.
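
    For reference, the day-evening-night indicator written here as LAden is conventionally computed, under the standard European definition (which this study presumably follows), as an energy average of the day, evening, and night levels with 5 dB and 10 dB penalties and default period durations of 12, 4, and 8 hours:

    \[ L_{den} = 10\log_{10}\left[\frac{1}{24}\left(12\cdot 10^{L_{d}/10} + 4\cdot 10^{(L_{e}+5)/10} + 8\cdot 10^{(L_{n}+10)/10}\right)\right] \]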

  16. Effects of sound-field frequency modulation amplification on reducing teachers' sound pressure level in the classroom.

    PubMed

    Sapienza, C M; Crandell, C C; Curtis, B

    1999-09-01

    Voice problems are a frequent difficulty that teachers experience. Common complaints by teachers include vocal fatigue and hoarseness. One possible explanation for these symptoms is prolonged elevations in vocal loudness within the classroom. This investigation examined the effectiveness of sound-field frequency modulation (FM) amplification on reducing the sound pressure level (SPL) of the teacher's voice during classroom instruction. Specifically, SPL was examined during speech produced in a classroom lecture by 10 teachers with and without the use of sound-field amplification. Results indicated a significant 2.42-dB decrease in SPL with the use of sound-field FM amplification. These data support the use of sound-field amplification in the vocal hygiene regimen recommended to teachers by speech-language pathologists.

  17. Prediction of light aircraft interior sound pressure level using the room equation

    NASA Technical Reports Server (NTRS)

    Atwal, M.; Bernhard, R.

    1984-01-01

    The room equation is investigated for predicting interior sound level. The method makes use of an acoustic power balance, by equating net power flow into the cabin volume to power dissipated within the cabin using the room equation. The sound power level transmitted through the panels was calculated by multiplying the measured space averaged transmitted intensity for each panel by its surface area. The sound pressure level was obtained by summing the mean square sound pressures radiated from each panel. The data obtained supported the room equation model in predicting the cabin interior sound pressure level.
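
    A minimal sketch of a room-equation style estimate of the kind described: interior sound pressure level from radiated sound power using the classic direct-plus-reverberant relation. The specific variant used by the authors is not given in the abstract, and the cabin values below are invented:

    ```python
    import math

    def interior_spl(sound_power_db, distance_m, surface_m2, avg_absorption, directivity=1.0):
        """Classic room equation: Lp = Lw + 10*log10(Q/(4*pi*r^2) + 4/Rc)."""
        room_constant = surface_m2 * avg_absorption / (1 - avg_absorption)
        return sound_power_db + 10 * math.log10(
            directivity / (4 * math.pi * distance_m ** 2) + 4 / room_constant)

    # Hypothetical cabin: 90 dB transmitted power, receiver 1 m away, 20 m^2 surface, alpha = 0.3
    print(round(interior_spl(90.0, 1.0, 20.0, 0.3), 1))  # about 87.4 dB
    ```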

  18. Sound power and vibration levels for two different piano soundboards

    NASA Astrophysics Data System (ADS)

    Squicciarini, Giacomo; Valiente, Pablo Miranda; Thompson, David J.

    2016-09-01

    This paper compares the sound power and vibration levels of two different soundboards for upright pianos. One of them is made of laminated spruce and the other of solid spruce (tone-wood). They differ also in the number of ribs and manufacturing procedure. The methodology used is defined in two major steps: (i) the acoustic power due to a unit force is obtained reciprocally by measuring the acceleration response of the piano soundboards when excited by acoustic waves in a reverberant field; (ii) impact tests are adopted to measure the driving-point and spatially averaged mean-square transfer mobility. The results show that, in the mid-to-high frequency range, the soundboard made of solid spruce has a greater vibrational and acoustic response than the laminated soundboard. The effect of string tension is also addressed, showing that it is only relevant at low frequencies.

  19. Sound pressure level gain in an acoustic metamaterial cavity.

    PubMed

    Song, Kyungjun; Kim, Kiwon; Hur, Shin; Kwak, Jun-Hyuk; Park, Jihyun; Yoon, Jong Rak; Kim, Jedo

    2014-12-11

    The inherent attenuation of a homogeneous viscous medium limits radiation propagation, thereby restricting the use of many high-frequency acoustic devices to only short-range applications. Here, we design and experimentally demonstrate an acoustic metamaterial localization cavity which is used for sound pressure level (SPL) gain using double coiled up space like structures thereby increasing the range of detection. This unique behavior occurs within a subwavelength cavity that is 1/10th of the wavelength of the incident acoustic wave, which provides up to a 13 dB SPL gain. We show that the amplification results from the Fabry-Perot resonance of the cavity, which has a simultaneously high effective refractive index and effective impedance. We also experimentally verify the SPL amplification in an underwater environment at higher frequencies using a sample with an identical unit cell size. The versatile scalability of the design shows promising applications in many areas, especially in acoustic imaging and underwater communication.
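
    For scale, the reported 13 dB gain corresponds to a pressure-amplitude ratio of about

    \[ 10^{13/20} \approx 4.5, \]

    i.e. roughly a 4.5-fold increase in sound pressure (about a factor of 20 in intensity) inside the subwavelength cavity.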

  20. Sound

    NASA Astrophysics Data System (ADS)

    Capstick, J. W.

    2013-01-01

    1. The nature of sound; 2. Elasticity and vibrations; 3. Transverse waves; 4. Longitudinal waves; 5. Velocity of longitudinal waves; 6. Reflection and refraction. Doppler's principle; 7. Interference. Beats. Combination tones; 8. Resonance and forced vibrations; 9. Quality of musical notes; 10. Organ pipes; 11. Rods. Plates. Bells; 12. Acoustical measurements; 13. The phonograph, microphone and telephone; 14. Consonance; 15. Definition of intervals. Scales. Temperament; 16. Musical instruments; 17. Application of acoustical principles to military purposes; Questions; Answers to questions; Index.

  1. Assessment of sound levels in a neonatal intensive care unit in Tabriz, Iran.

    PubMed

    Valizadeh, Sousan; Bagher Hosseini, Mohammad; Alavi, Nasrinsadat; Asadollahi, Malihe; Kashefimehr, Siamak

    2013-03-01

    High levels of sound have several negative effects, such as noise-induced hearing loss and delayed growth and development, on premature infants in neonatal intensive care units (NICUs). In order to reduce sound levels, they should first be measured. This study was performed to assess sound levels and determine sources of noise in the NICU of Alzahra Teaching Hospital (Tabriz, Iran). In a descriptive study, 24 hours in 4 workdays were randomly selected. Equivalent continuous sound level (Leq), sound level that is exceeded only 10% of the time (L10), maximum sound level (Lmax), and peak instantaneous sound pressure level (Lzpeak) were measured by CEL-440 sound level meter (SLM) at 6 fixed locations in the NICU. Data was collected using a questionnaire. SPSS13 was then used for data analysis. Mean values of Leq, L10, and Lmax were determined as 63.46 dBA, 65.81 dBA, and 71.30 dBA, respectively. They were all higher than standard levels (Leq < 45 dB, L10 ≤50 dB, and Lmax ≤65 dB). The highest Leq was measured at the time of nurse rounds. Leq was directly correlated with the number of staff members present in the ward. Finally, sources of noise were ordered based on their intensity. Considering that sound levels were higher than standard levels in our studied NICU, it is necessary to adopt policies to reduce sound.

  2. Assessment of Sound Levels in a Neonatal Intensive Care Unit in Tabriz, Iran

    PubMed Central

    Valizadeh, Sousan; Bagher Hosseini, Mohammad; Alavi, Nasrinsadat; Asadollahi, Malihe; Kashefimehr, Siamak

    2013-01-01

    Introduction: High levels of sound have several negative effects, such as noise-induced hearing loss and delayed growth and development, on premature infants in neonatal intensive care units (NICUs). In order to reduce sound levels, they should first be measured. This study was performed to assess sound levels and determine sources of noise in the NICU of Alzahra Teaching Hospital (Tabriz, Iran). Methods: In a descriptive study, 24 hours in 4 workdays were randomly selected. Equivalent continuous sound level (Leq), sound level that is exceeded only 10% of the time (L10), maximum sound level (Lmax), and peak instantaneous sound pressure level (Lzpeak) were measured by CEL-440 sound level meter (SLM) at 6 fixed locations in the NICU. Data was collected using a questionnaire. SPSS13 was then used for data analysis. Results: Mean values of Leq, L10, and Lmax were determined as 63.46 dBA, 65.81 dBA, and 71.30 dBA, respectively. They were all higher than standard levels (Leq < 45 dB, L10 ≤50 dB, and Lmax ≤65 dB). The highest Leq was measured at the time of nurse rounds. Leq was directly correlated with the number of staff members present in the ward. Finally, sources of noise were ordered based on their intensity. Conclusion: Considering that sound levels were higher than standard levels in our studied NICU, it is necessary to adopt policies to reduce sound. PMID:25276706

  3. Level of interest in a weight management program among adult U.S. military dependents

    USDA-ARS?s Scientific Manuscript database

    There is little information on the extent to which different challenged populations with high rates of overweight and obesity have interest in participating in weight management programs. The purpose of this study was to identify potential rates of enrollment in a weight management program among adu...

  4. Using Lighting Levels to Control Sound Levels in a College Library.

    ERIC Educational Resources Information Center

    Hronek, Beth

    1997-01-01

    Many libraries have noise problems that can't be fixed with ceiling and carpet treatments, physical arrangement, or sound barriers. This study at Henderson Community College (Henderson KY) attempted to confirm results from an earlier study suggesting that reducing light levels led to reduced noise. The data showed mixed results, but overall the…

  5. Neural population encoding and decoding of sound source location across sound level in the rabbit inferior colliculus

    PubMed Central

    Delgutte, Bertrand

    2015-01-01

    At lower levels of sensory processing, the representation of a stimulus feature in the response of a neural population can vary in complex ways across different stimulus intensities, potentially changing the amount of feature-relevant information in the response. How higher-level neural circuits could implement feature decoding computations that compensate for these intensity-dependent variations remains unclear. Here we focused on neurons in the inferior colliculus (IC) of unanesthetized rabbits, whose firing rates are sensitive to both the azimuthal position of a sound source and its sound level. We found that the azimuth tuning curves of an IC neuron at different sound levels tend to be linear transformations of each other. These transformations could either increase or decrease the mutual information between source azimuth and spike count with increasing level for individual neurons, yet population azimuthal information remained constant across the absolute sound levels tested (35, 50, and 65 dB SPL), as inferred from the performance of a maximum-likelihood neural population decoder. We harnessed evidence of level-dependent linear transformations to reduce the number of free parameters in the creation of an accurate cross-level population decoder of azimuth. Interestingly, this decoder predicts monotonic azimuth tuning curves, broadly sensitive to contralateral azimuths, in neurons at higher levels in the auditory pathway. PMID:26490292
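
    A toy sketch of a maximum-likelihood population decoder of azimuth of the kind referred to above, assuming Poisson spike counts and invented tuning curves; it is not the authors' decoder, only an illustration of the computation:

    ```python
    import numpy as np
    from scipy.stats import poisson

    azimuths = np.linspace(-90, 90, 37)      # candidate source azimuths (deg)
    rng = np.random.default_rng(0)

    def tuning(az, pref, peak=30.0, width=40.0):
        """Hypothetical bell-shaped mean spike count for one neuron."""
        return 2.0 + peak * np.exp(-0.5 * ((az - pref) / width) ** 2)

    prefs = rng.uniform(-90, 90, size=50)                    # 50 model neurons
    rates = np.array([tuning(azimuths, p) for p in prefs])   # neurons x azimuths

    true_index = 25
    counts = rng.poisson(rates[:, true_index])               # one simulated trial

    loglik = poisson.logpmf(counts[:, None], rates).sum(axis=0)
    print(azimuths[true_index], azimuths[np.argmax(loglik)])  # true vs decoded azimuth
    ```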

  6. Sound Pressure Level Gain in an Acoustic Metamaterial Cavity

    PubMed Central

    Song, Kyungjun; Kim, Kiwon; Hur, Shin; Kwak, Jun-Hyuk; Park, Jihyun; Yoon, Jong Rak; Kim, Jedo

    2014-01-01

    The inherent attenuation of a homogeneous viscous medium limits radiation propagation, thereby restricting the use of many high-frequency acoustic devices to only short-range applications. Here, we design and experimentally demonstrate an acoustic metamaterial localization cavity which is used for sound pressure level (SPL) gain using double coiled up space like structures thereby increasing the range of detection. This unique behavior occurs within a subwavelength cavity that is 1/10th of the wavelength of the incident acoustic wave, which provides up to a 13 dB SPL gain. We show that the amplification results from the Fabry-Perot resonance of the cavity, which has a simultaneously high effective refractive index and effective impedance. We also experimentally verify the SPL amplification in an underwater environment at higher frequencies using a sample with an identical unit cell size. The versatile scalability of the design shows promising applications in many areas, especially in acoustic imaging and underwater communication. PMID:25502279

  7. Exterior sound level measurements of snowcoaches at Yellowstone National Park

    DOT National Transportation Integrated Search

    2010-04-01

    Sounds associated with oversnow vehicles, such as snowmobiles and snowcoaches, are an important management concern at Yellowstone and Grand Teton National Parks. The John A. Volpe National Transportation Systems Centers Environmental Measurement a...

  8. The Importance of Ambient Sound Level to Characterise Anuran Habitat

    PubMed Central

    Goutte, Sandra; Dubois, Alain; Legendre, Frédéric

    2013-01-01

    Habitat characterisation is a pivotal step of any animal ecology study. The choice of variables used to describe habitats is crucial and needs to be relevant to the ecology and behaviour of the species, in order to reflect biologically meaningful distribution patterns. In many species, acoustic communication is critical to individuals’ interactions, and it is expected that ambient acoustic conditions impact their local distribution. Yet, classic animal ecology rarely integrates an acoustic dimension in habitat descriptions. Here we show that ambient sound pressure level (SPL) is a strong predictor of calling site selection in acoustically active frog species. In comparison to six other habitat-related variables (i.e. air and water temperature, depth, width and slope of the stream, substrate), SPL had the most important explanatory power in microhabitat selection for the 34 sampled species. Ambient noise was particularly useful in differentiating two stream-associated guilds: torrent-dwelling and calmer-stream-dwelling species. Guild definitions were strongly supported by SPL, whereas slope, which is commonly used in stream-associated habitat, had a weak explanatory power. Moreover, slope measures are non-standardized across studies and are difficult to assess at small scale. We argue that including an acoustic descriptor will improve habitat-species analyses for many acoustically active taxa. SPL integrates habitat topology and temporal information (such as weather and hour of the day, for example) and is a simple and precise measure. We suggest that habitat description in animal ecology should include an acoustic measure such as noise level because it may explain previously misunderstood distribution patterns. PMID:24205070

  9. Common sole larvae survive high levels of pile-driving sound in controlled exposure experiments.

    PubMed

    Bolle, Loes J; de Jong, Christ A F; Bierman, Stijn M; van Beek, Pieter J G; van Keeken, Olvin A; Wessels, Peter W; van Damme, Cindy J G; Winter, Hendrik V; de Haan, Dick; Dekeling, René P A

    2012-01-01

    In view of the rapid extension of offshore wind farms, there is an urgent need to improve our knowledge on possible adverse effects of underwater sound generated by pile-driving. Mortality and injuries have been observed in fish exposed to loud impulse sounds, but knowledge on the sound levels at which (sub-)lethal effects occur is limited for juvenile and adult fish, and virtually non-existent for fish eggs and larvae. A device was developed in which fish larvae can be exposed to underwater sound. It consists of a rigid-walled cylindrical chamber driven by an electro-dynamical sound projector. Samples of up to 100 larvae can be exposed simultaneously to a homogeneously distributed sound pressure and particle velocity field. Recorded pile-driving sounds could be reproduced accurately in the frequency range between 50 and 1000 Hz, at zero to peak pressure levels up to 210 dB re 1 µPa² (zero to peak pressures up to 32 kPa) and single pulse sound exposure levels up to 186 dB re 1 µPa²s. The device was used to examine lethal effects of sound exposure in common sole (Solea solea) larvae. Different developmental stages were exposed to various levels and durations of pile-driving sound. The highest cumulative sound exposure level applied was 206 dB re 1 µPa²s, which corresponds to 100 strikes at a distance of 100 m from a typical North Sea pile-driving site. The results showed no statistically significant differences in mortality between exposure and control groups at sound exposure levels which were well above the US interim criteria for non-auditory tissue damage in fish. Although our findings cannot be extrapolated to fish larvae in general, as interspecific differences in vulnerability to sound exposure may occur, they do indicate that previous assumptions and criteria may need to be revised.
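
    The cumulative exposure figure quoted above follows from energy addition of identical pulses: for N strikes each with single-pulse sound exposure level SEL_ss,

    \[ SEL_{cum} = SEL_{ss} + 10\log_{10} N = 186 + 10\log_{10}(100) = 206\ \text{dB re } 1\,\mu\text{Pa}^2\text{s}, \]

    consistent with the 100-strike scenario described in the abstract.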

  10. Common Sole Larvae Survive High Levels of Pile-Driving Sound in Controlled Exposure Experiments

    PubMed Central

    Bolle, Loes J.; de Jong, Christ A. F.; Bierman, Stijn M.; van Beek, Pieter J. G.; van Keeken, Olvin A.; Wessels, Peter W.; van Damme, Cindy J. G.; Winter, Hendrik V.; de Haan, Dick; Dekeling, René P. A.

    2012-01-01

    In view of the rapid extension of offshore wind farms, there is an urgent need to improve our knowledge on possible adverse effects of underwater sound generated by pile-driving. Mortality and injuries have been observed in fish exposed to loud impulse sounds, but knowledge on the sound levels at which (sub-)lethal effects occur is limited for juvenile and adult fish, and virtually non-existent for fish eggs and larvae. A device was developed in which fish larvae can be exposed to underwater sound. It consists of a rigid-walled cylindrical chamber driven by an electro-dynamical sound projector. Samples of up to 100 larvae can be exposed simultaneously to a homogeneously distributed sound pressure and particle velocity field. Recorded pile-driving sounds could be reproduced accurately in the frequency range between 50 and 1000 Hz, at zero to peak pressure levels up to 210 dB re 1 µPa² (zero to peak pressures up to 32 kPa) and single pulse sound exposure levels up to 186 dB re 1 µPa²s. The device was used to examine lethal effects of sound exposure in common sole (Solea solea) larvae. Different developmental stages were exposed to various levels and durations of pile-driving sound. The highest cumulative sound exposure level applied was 206 dB re 1 µPa²s, which corresponds to 100 strikes at a distance of 100 m from a typical North Sea pile-driving site. The results showed no statistically significant differences in mortality between exposure and control groups at sound exposure levels which were well above the US interim criteria for non-auditory tissue damage in fish. Although our findings cannot be extrapolated to fish larvae in general, as interspecific differences in vulnerability to sound exposure may occur, they do indicate that previous assumptions and criteria may need to be revised. PMID:22431996

  11. Measurements of humpback whale song sound levels received by a calf in association with a singer.

    PubMed

    Chen, Jessica; Pack, Adam A; Au, Whitlow W L; Stimpert, Alison K

    2016-11-01

    Male humpback whales produce loud "songs" on the wintering grounds and some sing while escorting mother-calf pairs, exposing them to near-continuous sounds at close proximity. An Acousonde acoustic and movement recording tag deployed on a calf off Maui, Hawaii captured sounds produced by a singing male escort. Root-mean-square received levels ranged from 126 to 158 dB re 1 μPa. These levels represent rare direct measurements of sound to which a newly born humpback calf may be naturally exposed by a conspecific, and may provide a basis for informed decisions regarding anthropogenic sound levels projected near calves.

  12. L-type calcium channels refine the neural population code of sound level.

    PubMed

    Grimsley, Calum Alex; Green, David Brian; Sivaramakrishnan, Shobhana

    2016-12-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. Copyright © 2016 the American Physiological Society.

  13. L-type calcium channels refine the neural population code of sound level

    PubMed Central

    Grimsley, Calum Alex; Green, David Brian

    2016-01-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. PMID:27605536
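
    As a toy illustration of the multiplicative and divisive gain changes described above (a conceptual sketch with arbitrary parameter values, not the authors' model or data), a logistic rate-level function can be scaled up or down in output gain, as in the following Python snippet.

        import numpy as np

        def rlf(level_db, r_max=100.0, l50=50.0, slope=8.0):
            """Monotonic (logistic) rate-level function: spikes/s versus sound level in dB SPL."""
            return r_max / (1.0 + np.exp(-(level_db - l50) / slope))

        levels = np.arange(0, 91, 5)
        baseline = rlf(levels)

        # Multiplicative gain change: the whole curve is scaled and the maximum rate increases.
        with_gain = 1.4 * baseline

        # Divisive gain change: the output is divided by a constant, compressing the response range.
        divided = baseline / 1.4

        print(baseline.max(), with_gain.max(), divided.max())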

  14. Accuracy of assessing the level of impulse sound from distant sources.

    PubMed

    Wszołek, Tadeusz; Kłaczyński, Maciej

    2007-01-01

    Impulse sound events are characterised by ultra-high pressures and low frequencies. Lower frequency sounds are generally less attenuated over a given distance in the atmosphere than higher frequencies. Thus, impulse sounds can be heard over greater distances and are more strongly affected by the environment. To calculate a long-term average immission level it is necessary to apply weighting factors such as the probability of occurrence of each weather condition during the relevant time period. This means that when measuring impulse noise at a long distance it is necessary to monitor environmental parameters at many points along the propagation path and to maintain a long-term database of sound transfer functions. The paper analyses the uncertainty of immission measurement results for impulse sound generated during the cladding and destruction of explosive materials. The influence of environmental conditions on the sound propagation path is the focus of the paper.
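
    One standard way to apply such weighting factors (an energy average over weather classes, stated here as common practice rather than quoted from the paper) is

        $L_{LT} = 10\log_{10}\left(\sum_i p_i\, 10^{L_i/10}\right)$

    where $L_i$ is the immission level measured or predicted for weather class $i$ and $p_i$ is the probability of that class occurring during the relevant time period.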

  15. Aftereffects of Intense Low-Frequency Sound on Spontaneous Otoacoustic Emissions: Effect of Frequency and Level.

    PubMed

    Jeanson, Lena; Wiegrebe, Lutz; Gürkov, Robert; Krause, Eike; Drexl, Markus

    2017-02-01

    The presentation of intense, low-frequency (LF) sound to the human ear can cause very slow, sinusoidal oscillations of cochlear sensitivity after LF sound offset, coined the "Bounce" phenomenon. Changes in level and frequency of spontaneous otoacoustic emissions (SOAEs) are a sensitive measure of the Bounce. Here, we investigated the effect of LF sound level and frequency on the Bounce. Specifically, the level of SOAEs was tracked for minutes before and after a 90-s LF sound exposure. Trials were carried out with several LF sound levels (93 to 108 dB SPL corresponding to 47 to 75 phons at a fixed frequency of 30 Hz) and different LF sound frequencies (30, 60, 120, 240 and 480 Hz at a fixed loudness level of 80 phons). At an LF sound frequency of 30 Hz, a minimal sound level of 102 dB SPL (64 phons) was sufficient to elicit a significant Bounce. In some subjects, however, 93 dB SPL (47 phons), the lowest level used, was sufficient to elicit the Bounce phenomenon and actual thresholds could have been even lower. Measurements with different LF sound frequencies showed a mild reduction of the Bounce phenomenon with increasing LF sound frequency. This indicates that the strength of the Bounce not only is a simple function of the spectral separation between SOAE and LF sound frequency but also depends on absolute LF sound frequency, possibly related to the magnitude of the AC component of the outer hair cell receptor potential.

  16. Behavioral and modeling studies of sound localization in cats: effects of stimulus level and duration

    PubMed Central

    Ruhland, Janet L.; Yin, Tom C. T.; Tollin, Daniel J.

    2013-01-01

    Sound localization accuracy in elevation can be affected by sound spectrum alteration. Correspondingly, any stimulus manipulation that causes a change in the peripheral representation of the spectrum may degrade localization ability in elevation. The present study examined the influence of sound duration and level on localization performance in cats with the head unrestrained. Two cats were trained using operant conditioning to indicate the apparent location of a sound via gaze shift, which was measured with a search-coil technique. Overall, neither sound level nor duration had a notable effect on localization accuracy in azimuth, except at near-threshold levels. In contrast, localization accuracy in elevation improved as sound duration increased, and sound level also had a large effect on localization in elevation. For short-duration noise, the performance peaked at intermediate levels and deteriorated at low and high levels; for long-duration noise, this “negative level effect” at high levels was not observed. Simulations based on an auditory nerve model were used to explain the above observations and to test several hypotheses. Our results indicated that neither the flatness of sound spectrum (before the sound reaches the inner ear) nor the peripheral adaptation influences spectral coding at the periphery for localization in elevation, whereas neural computation that relies on “multiple looks” of the spectral analysis is critical in explaining the effect of sound duration, but not level. The release of negative level effect observed for long-duration sound could not be explained at the periphery and, therefore, is likely a result of processing at higher centers. PMID:23657278

  17. Evaluation of the Effects of Various Sound Pressure Levels on the Level of Serum Aldosterone Concentration in Rats

    PubMed Central

    Nassiri, Parvin; Zare, Sajad; Monazzam, Mohammad R.; Pourbakht, Akram; Azam, Kamal; Golmohammadi, Taghi

    2017-01-01

    Introduction: Noise exposure may have anatomical, nonauditory, and auditory influences. Considering nonauditory impacts, noise exposure can cause alterations in the automatic nervous system, including increased pulse rates, heightened blood pressure, and abnormal secretion of hormones. The present study aimed at examining the effect of various sound pressure levels (SPLs) on the serum aldosterone level among rats. Materials and Methods: A total of 45 adult male rats with an age range of 3 to 4 months and a weight of 200 ± 50 g were randomly divided into 15 groups of three. Three groups were considered as the control groups and the rest (i.e., 12 groups) as the case groups. Rats of the case groups were exposed to SPLs of 85, 95, and 105 dBA. White noise was used as the noise to which the rats were exposed. To measure the level of rats’ serum aldosterone, 3 mL of each rat’s sample blood was directly taken from the heart of anesthetized animals by using syringes. The taken blood samples were put in labeled test tubes that contained anticoagulant Ethylenediaminetetraacetic acid. In the laboratory, the level of aldosterone was assessed through Enzyme-linked immunosorbent assay protocol. The collected data were analyzed by the use of Statistical Package for Social Sciences (SPSS) version 18. Results: The results revealed that there was no significant change in the level of rats’ serum aldosterone as a result of exposure to SPLs of 65, 85, and 95 dBA. However, the level of serum aldosterone experienced a remarkable increase after exposure to the SPL of 105 dBA (P < 0.001). Thus, the SPL had a significant impact on the serum aldosterone level (P < 0.001). In contrast, the exposure time and the level of potassium in the used water did not have any measurable influence on the level of serum aldosterone (P = 0.25 and 0.39). Conclusion: The findings of this study demonstrated that serum aldosterone can be used as a biomarker in the face of sound exposure. PMID

  18. Simulating cartilage conduction sound to estimate the sound pressure level in the external auditory canal

    NASA Astrophysics Data System (ADS)

    Shimokura, Ryota; Hosoi, Hiroshi; Nishimura, Tadashi; Iwakura, Takashi; Yamanaka, Toshiaki

    2015-01-01

    When the aural cartilage is made to vibrate it generates sound directly into the external auditory canal which can be clearly heard. Although the concept of cartilage conduction can be applied to various speech communication and music industrial devices (e.g. smartphones, music players and hearing aids), the conductive performance of such devices has not yet been defined because the calibration methods are different from those currently used for air and bone conduction. Thus, the aim of this study was to simulate the cartilage conduction sound (CCS) using a head and torso simulator (HATS) and a model of aural cartilage (polyurethane resin pipe) and compare the results with experimental ones. Using the HATS, we found the simulated CCS at frequencies above 2 kHz corresponded to the average measured CCS from seven subjects. Using a model of skull bone and aural cartilage, we found that the simulated CCS at frequencies lower than 1.5 kHz agreed with the measured CCS. Therefore, a combination of these two methods can be used to estimate the CCS with high accuracy.

  19. Attenuation of Outdoor Sound Propagation Levels by a Snow Cover

    DTIC Science & Technology

    1993-11-01

    Effective flow resistivities on the order of 20 kN s m⁻⁴ are reported. Calculations of ground motion induced by the atmospheric sound waves were made using a viscoelastic model of the ground, with ground impedance described in terms of material parameters such as effective flow resistivity and porosity. The analysis draws on published models of acoustic pulse waveforms over ground (Don and Cramond 1987), sound absorption in fibrous materials, and particle motion induced by a point source above a poroelastic half-space.

  20. Source levels of social sounds in migrating humpback whales (Megaptera novaeangliae).

    PubMed

    Dunlop, Rebecca A; Cato, Douglas H; Noad, Michael J; Stokes, Dale M

    2013-07-01

    The source level of an animal sound is important in communication, since it affects the distance over which the sound is audible. Several measurements of source levels of whale sounds have been reported, but the accuracy of many is limited because the distance to the source and the acoustic transmission loss were estimated rather than measured. This paper presents measurements of source levels of social sounds (surface-generated and vocal sounds) of humpback whales from a sample of 998 sounds recorded from 49 migrating humpback whale groups. Sources were localized using a wide baseline five hydrophone array and transmission loss was measured for the site. Social vocalization source levels were found to range from 123 to 183 dB re 1 μPa @ 1 m with a median of 158 dB re 1 μPa @ 1 m. Source levels of surface-generated social sounds ("breaches" and "slaps") were narrower in range (133 to 171 dB re 1 μPa @ 1 m) but slightly higher in level (median of 162 dB re 1 μPa @ 1 m) compared to vocalizations. The data suggest that group composition has an effect on group vocalization source levels in that singletons and mother-calf-singing escort groups tend to vocalize at higher levels compared to other group compositions.

  1. Sound levels and their effects on children in a German primary school.

    PubMed

    Eysel-Gosepath, Katrin; Daut, Tobias; Pinger, Andreas; Lehmacher, Walter; Erren, Thomas

    2012-12-01

    Considerable sound levels are produced in primary schools by children's voices and resonance effects. As a consequence, hearing loss and mental impairment may occur. In a Cologne primary school, sound levels were measured in three different classrooms, each with 24 children, 8-10 years old, and one teacher. Sound dosimeters were positioned in the room and near the teacher's ear. Additional measurements were done in one classroom fully equipped with sound-absorbing materials. A questionnaire containing 12 questions about noise at school was distributed to 100 children, 8-10 years old. Measurements were repeated after children had been taught about noise damage and while "noise lights" were used. Mean sound levels over the 5-h daily measuring period were 78 dB(A) near the teacher's ear and 70 dB(A) in the room. The average of all measured 1-s maximum sound levels was 105 dB(A) for teachers and 100 dB(A) for rooms. In the soundproofed classroom, the Leq was 66 dB(A). The questionnaire showed that the children could judge which situations involve high sound levels and could develop their own ideas for noise reduction. However, no clear reduction in sound levels was identified after the noise education or while "noise lights" were used during lessons. Children and their teachers are equally exposed to high sound levels at school. Early sensitization to noise and the possible installation of sound-absorbing materials can be important means of preventing noise-associated hearing loss and mental impairment.

  2. A national project to evaluate and reduce high sound pressure levels from music.

    PubMed

    Ryberg, Johanna Bengtsson

    2009-01-01

    The highest recommended sound pressure levels for leisure sounds (music) in Sweden are 100 dB LAeq and 115 dB LAFmax for adults, and 97 dB LAeq and 110 dB LAFmax where children under the age of 13 have access. For arrangements intended for children, levels should be consistently less than 90 dB LAeq. In 2005, a national project was carried out with the aim of improving environments with high sound pressure levels from music, such as concert halls, restaurants, and cinemas. The project covered both live and recorded music. Of Sweden's 290 municipalities, 134 took part in the project, and 93 of these carried out sound measurements. Four hundred and seventy one establishments were investigated, 24% of which exceeded the highest recommended sound pressure levels for leisure sounds in Sweden. Of festival and concert events, 42% exceeded the recommended levels. Those who visit music events/establishments thus run a relatively high risk of exposure to harmful sound levels. Continued supervision in this field is therefore crucial.

  3. Calculating far-field radiated sound pressure levels from NASTRAN output

    NASA Technical Reports Server (NTRS)

    Lipman, R. R.

    1986-01-01

    FAFRAP is a computer program which calculates far field radiated sound pressure levels from quantities computed by a NASTRAN direct frequency response analysis of an arbitrarily shaped structure. Fluid loading on the structure can be computed directly by NASTRAN or an added-mass approximation to fluid loading on the structure can be used. Output from FAFRAP includes tables of radiated sound pressure levels and several types of graphic output. FAFRAP results for monopole and dipole sources compare closely with an explicit calculation of the radiated sound pressure level for those sources.
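
    For context, the explicit far-field benchmark mentioned above can be illustrated with the textbook free-field monopole relation. The Python sketch below is not part of FAFRAP; the fluid density, reference pressure, and source parameters are assumptions chosen purely for illustration.

        import numpy as np

        def monopole_spl(freq_hz, q_peak, r_m, rho=1026.0, p_ref=1e-6):
            """Far-field SPL of a free-field monopole with peak volume velocity q_peak (m^3/s).

            Pressure magnitude: |p| = rho * omega * |Q| / (4 * pi * r).
            rho defaults to seawater density; p_ref defaults to 1 uPa (underwater convention).
            """
            omega = 2.0 * np.pi * freq_hz
            p_peak = rho * omega * q_peak / (4.0 * np.pi * r_m)
            p_rms = p_peak / np.sqrt(2.0)
            return 20.0 * np.log10(p_rms / p_ref)

        # Example: a 100 Hz monopole with 1e-3 m^3/s peak volume velocity, observed at 100 m.
        print(monopole_spl(100.0, 1e-3, 100.0))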

  4. Hearing Tests on Mobile Devices: Evaluation of the Reference Sound Level by Means of Biological Calibration.

    PubMed

    Masalski, Marcin; Kipiński, Lech; Grysiński, Tomasz; Kręcicki, Tomasz

    2016-05-30

    Hearing tests carried out in home setting by means of mobile devices require previous calibration of the reference sound level. Mobile devices with bundled headphones create a possibility of applying the predefined level for a particular model as an alternative to calibrating each device separately. The objective of this study was to determine the reference sound level for sets composed of a mobile device and bundled headphones. Reference sound levels for Android-based mobile devices were determined using an open access mobile phone app by means of biological calibration, that is, in relation to the normal-hearing threshold. The examinations were conducted in 2 groups: an uncontrolled and a controlled one. In the uncontrolled group, the fully automated self-measurements were carried out in home conditions by 18- to 35-year-old subjects, without prior hearing problems, recruited online. Calibration was conducted as a preliminary step in preparation for further examination. In the controlled group, audiologist-assisted examinations were performed in a sound booth, on normal-hearing subjects verified through pure-tone audiometry, recruited offline from among the workers and patients of the clinic. In both the groups, the reference sound levels were determined on a subject's mobile device using the Bekesy audiometry. The reference sound levels were compared between the groups. Intramodel and intermodel analyses were carried out as well. In the uncontrolled group, 8988 calibrations were conducted on 8620 different devices representing 2040 models. In the controlled group, 158 calibrations (test and retest) were conducted on 79 devices representing 50 models. Result analysis was performed for 10 most frequently used models in both the groups. The difference in reference sound levels between uncontrolled and controlled groups was 1.50 dB (SD 4.42). The mean SD of the reference sound level determined for devices within the same model was 4.03 dB (95% CI 3

  5. Effect of sound level on virtual and free-field localization of brief sounds in the anterior median plane.

    PubMed

    Marmel, Frederic; Marrufo-Pérez, Miriam I; Heeren, Jan; Ewert, Stephan; Lopez-Poveda, Enrique A

    2018-06-14

    The detection of high-frequency spectral notches has been shown to be worse at 70-80 dB sound pressure level (SPL) than at higher levels up to 100 dB SPL. The performance improvement at levels higher than 70-80 dB SPL has been related to an 'ideal observer' comparison of population auditory nerve spike trains to stimuli with and without high-frequency spectral notches. Insofar as vertical localization partly relies on information provided by pinna-based high-frequency spectral notches, we hypothesized that localization would be worse at 70-80 dB SPL than at higher levels. Results from a first experiment using a virtual localization set-up and non-individualized head-related transfer functions (HRTFs) were consistent with this hypothesis, but a second experiment using a free-field set-up showed that vertical localization deteriorates monotonically with increasing level up to 100 dB SPL. These results suggest that listeners use different cues when localizing sound sources in virtual and free-field conditions. In addition, they confirm that the worsening in vertical localization with increasing level continues beyond 70-80 dB SPL, the highest levels tested by previous studies. Further, they suggest that vertical localization, unlike high-frequency spectral notch detection, does not rely on an 'ideal observer' analysis of auditory nerve spike trains. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Principal cells of the brainstem's interaural sound level detector are temporal differentiators rather than integrators.

    PubMed

    Franken, Tom P; Joris, Philip X; Smith, Philip H

    2018-06-14

    The brainstem's lateral superior olive (LSO) is thought to be crucial for localizing high-frequency sounds by coding interaural sound level differences (ILD). Its neurons weigh contralateral inhibition against ipsilateral excitation, making their firing rate a function of the azimuthal position of a sound source. Since the very first in vivo recordings, LSO principal neurons have been reported to give sustained and temporally integrating 'chopper' responses to sustained sounds. Neurons with transient responses were observed but largely ignored and even considered a sign of pathology. Using the Mongolian gerbil as a model system, we have obtained the first in vivo patch clamp recordings from labeled LSO neurons and find that principal LSO neurons, the most numerous projection neurons of this nucleus, only respond at sound onset and show fast membrane features suggesting an importance for timing. These results provide a new framework to interpret previously puzzling features of this circuit. © 2018, Franken et al.
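
    A minimal sketch of the classic subtractive ILD scheme attributed to LSO neurons is given below in Python; it is a conceptual caricature with arbitrary parameters, not the recording or analysis method of this study.

        import numpy as np

        def lso_rate(ipsi_db, contra_db, gain_e=2.0, gain_i=2.0, spont=5.0, r_max=200.0):
            """Firing rate driven by ipsilateral excitation minus contralateral inhibition."""
            drive = spont + gain_e * ipsi_db - gain_i * contra_db
            return float(np.clip(drive, 0.0, r_max))

        # A source moving toward the ipsilateral side raises ipsi_db relative to contra_db,
        # so the rate grows with ILD = ipsi_db - contra_db.
        for ild in (-20, -10, 0, 10, 20):
            print(ild, lso_rate(60 + ild / 2, 60 - ild / 2))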

  7. Sensitivity of the mouse to changes in azimuthal sound location: Angular separation, spectral composition, and sound level

    PubMed Central

    Allen, Paul D.; Ison, James R.

    2010-01-01

    Auditory spatial acuity was measured in mice using prepulse inhibition (PPI) of the acoustic startle reflex (ASR) as the indicator response for stimulus detection. The prepulse was a “speaker swap” (SSwap), shifting a noise between two speakers located along the azimuth. Their angular separation, and the spectral composition and sound level of the noise were varied, as was the interstimulus interval (ISI) between SSwap and ASR elicitation. In Experiment 1 a 180° SSwap of wide band noise (WBN) was compared with WBN Onset and Offset. SSwap and WBN Onset had near equal effects, but less than Offset. In Experiment 2 WBN SSwap was measured with speaker separations of 15°, 22.5°, 45°, and 90°. Asymptotic level and the growth rate of PPI increased with increased separation from 15° to 90°, but even the 15° SSwap provided significant PPI for the mean performance of the group. SSwap in Experiment 3 used octave band noise (2–4, 4–8, 8–16, or 16–32 kHz) and separations of 7.5° to 180°. SSwap was most effective for the highest frequencies, with no significant PPI for SSwap below 8–16 kHz, or for separations of 7.5°. In Experiment 4 SSwap had WBN sound levels from 40 to 78 dB SPL, and separations of 22.5°, 45°, 90° and 180°: PPI increased with level, this effect varying with ISI and angular separation. These experiments extend the prior findings on sound localization in mice, and the dependence of PPI on ISI adds a reaction-time-like dimension to this behavioral analysis. PMID:20364886
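
    PPI in paradigms like this one is conventionally expressed as the percentage reduction of the startle response caused by the prepulse; the standard definition (assumed here rather than quoted from the paper) is

        $\%PPI = 100 \times \frac{ASR_{\mathrm{alone}} - ASR_{\mathrm{prepulse}}}{ASR_{\mathrm{alone}}}$

    where $ASR_{\mathrm{alone}}$ is the startle amplitude to the eliciting stimulus presented alone and $ASR_{\mathrm{prepulse}}$ is the amplitude when the prepulse (here, the speaker swap) precedes it.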

  8. EEG oscillations entrain their phase to high-level features of speech sound.

    PubMed

    Zoefel, Benedikt; VanRullen, Rufin

    2016-01-01

    Phase entrainment of neural oscillations, the brain's adjustment to rhythmic stimulation, is a central component in recent theories of speech comprehension: the alignment between brain oscillations and speech sound improves speech intelligibility. However, phase entrainment to everyday speech sound could also be explained by oscillations passively following the low-level periodicities (e.g., in sound amplitude and spectral content) of auditory stimulation-and not by an adjustment to the speech rhythm per se. Recently, using novel speech/noise mixture stimuli, we have shown that behavioral performance can entrain to speech sound even when high-level features (including phonetic information) are not accompanied by fluctuations in sound amplitude and spectral content. In the present study, we report that neural phase entrainment might underlie our behavioral findings. We observed phase-locking between electroencephalogram (EEG) and speech sound in response not only to original (unprocessed) speech but also to our constructed "high-level" speech/noise mixture stimuli. Phase entrainment to original speech and speech/noise sound did not differ in the degree of entrainment, but rather in the actual phase difference between EEG signal and sound. Phase entrainment was not abolished when speech/noise stimuli were presented in reverse (which disrupts semantic processing), indicating that acoustic (rather than linguistic) high-level features play a major role in the observed neural entrainment. Our results provide further evidence for phase entrainment as a potential mechanism underlying speech processing and segmentation, and for the involvement of high-level processes in the adjustment to the rhythm of speech. Copyright © 2015 Elsevier Inc. All rights reserved.
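
    Phase-locking between EEG and a sound signal of the kind reported above is often quantified with a phase-locking value computed from instantaneous phases. The Python sketch below is a generic illustration of that approach; the band limits and the use of the Hilbert transform are assumptions, not details taken from the study.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def phase_locking_value(eeg, envelope, fs, band=(3.0, 8.0)):
            """Generic PLV between an EEG channel and a speech envelope, both sampled at fs."""
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
            phase_eeg = np.angle(hilbert(filtfilt(b, a, eeg)))
            phase_env = np.angle(hilbert(filtfilt(b, a, envelope)))
            return np.abs(np.mean(np.exp(1j * (phase_eeg - phase_env))))

        # Example with synthetic data: a noisy copy of the same theta-band signal.
        fs = 250
        t = np.arange(0, 10, 1 / fs)
        env = np.sin(2 * np.pi * 5 * t)
        eeg = env + 0.5 * np.random.randn(t.size)
        print(phase_locking_value(eeg, env, fs))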

  9. Effects of sound level fluctuations on annoyance caused by aircraft-flyover noise

    NASA Technical Reports Server (NTRS)

    Mccurdy, D. A.

    1979-01-01

    A laboratory experiment was conducted to determine the effects of variations in the rate and magnitude of sound level fluctuations on the annoyance caused by aircraft-flyover noise. The effects of tonal content, noise duration, and sound pressure level on annoyance were also studied. An aircraft-noise synthesis system was used to synthesize 32 aircraft-flyover noise stimuli representing the factorial combinations of 2 tone conditions, 2 noise durations, 2 sound pressure levels, 2 level fluctuation rates, and 2 level fluctuation magnitudes. Thirty-two test subjects made annoyance judgements on a total of 64 stimuli in a subjective listening test facility simulating an outdoor acoustic environment. Variations in the rate and magnitude of level fluctuations were found to have little, if any, effect on annoyance. Tonal content, noise duration, sound pressure level, and the interaction of tonal content with sound pressure level were found to affect the judged annoyance significantly. The addition of tone corrections and/or duration corrections significantly improved the annoyance prediction ability of noise rating scales.

  10. MP3 player listening sound pressure levels among 10 to 17 year old students.

    PubMed

    Keith, Stephen E; Michaud, David S; Feder, Katya; Haider, Ifaz; Marro, Leonora; Thompson, Emma; Marcoux, Andre M

    2011-11-01

    Using a manikin, equivalent free-field sound pressure level measurements were made from the portable digital audio players of 219 subjects, aged 10 to 17 years (93 males) at their typical and "worst-case" volume levels. Measurements were made in different classrooms with background sound pressure levels between 40 and 52 dBA. After correction for the transfer function of the ear, the median equivalent free field sound pressure levels and interquartile ranges (IQR) at typical and worst-case volume settings were 68 dBA (IQR = 15) and 76 dBA (IQR = 19), respectively. Self-reported mean daily use ranged from 0.014 to 12 h. When typical sound pressure levels were considered in combination with the average daily duration of use, the median noise exposure level, Lex, was 56 dBA (IQR = 18) and 3.2% of subjects were estimated to exceed the most protective occupational noise exposure level limit in Canada, i.e., 85 dBA Lex. Under worst-case listening conditions, 77.6% of the sample was estimated to listen to their device at combinations of sound pressure levels and average daily durations for which there is no known risk of permanent noise-induced hearing loss, i.e., ≤  75 dBA Lex. Sources and magnitudes of measurement uncertainties are also discussed.
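
    The normalized exposure level used above combines the listening level with the daily listening duration through the standard normalization to an 8-h day:

        $L_{ex,8h} = L_{Aeq,T} + 10\log_{10}\!\left(\frac{T}{8\,\mathrm{h}}\right)$

    For example (an illustrative duration, not a study statistic), listening at 68 dBA for half an hour per day gives 68 + 10 log10(0.5/8) ≈ 56 dBA Lex, consistent with the median values reported above.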

  11. Sound levels in a neonatal intensive care unit significantly exceeded recommendations, especially inside incubators.

    PubMed

    Parra, Johanna; de Suremain, Aurelie; Berne Audeoud, Frederique; Ego, Anne; Debillon, Thierry

    2017-12-01

    This study measured sound levels in a 2008-built French neonatal intensive care unit (NICU) and compared them to the 2007 American Academy of Pediatrics (AAP) recommendations. The ultimate aim was to identify factors that could influence noise levels. The study measured sound in 17 single or double rooms in the NICU. Two dosimeters were installed in each room, one inside and one outside the incubators, and these conducted measurements over a 24-hour period. The noise metrics measured were the equivalent continuous sound level (Leq), the maximum noise level (Lmax) and the noise level exceeded for 10% of the measurement period (L10). The mean Leq, L10 and Lmax were 60.4, 62.1 and 89.1 decibels (dBA), which exceeded the recommended levels of 45, 50 and 65 dBA (p < 0.001), respectively. The Leq inside the incubator was significantly higher than in the room (+8 dBA, p < 0.001). None of the newborns' characteristics, the environment or medical care was correlated to an increased noise level, except for a postconceptional age below 32 weeks. The sound levels significantly exceeded the AAP recommendations, particularly inside incubators. A multipronged strategy is required to improve the sound environment and protect the neonates' sensory development. ©2017 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
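
    The three metrics reported above can be computed from a dosimeter's short-interval A-weighted samples as in the Python sketch below (a generic calculation with an assumed 1-s logging interval and simulated data, not the study's software).

        import numpy as np

        def noise_metrics(levels_dba):
            """Leq, L10 and Lmax from a series of short-interval A-weighted levels (dBA)."""
            levels = np.asarray(levels_dba, dtype=float)
            leq = 10.0 * np.log10(np.mean(10.0 ** (levels / 10.0)))   # energy average
            l10 = np.percentile(levels, 90.0)                         # level exceeded 10% of the time
            lmax = levels.max()
            return leq, l10, lmax

        # Example: one hour of simulated 1-s samples around 60 dBA with occasional peaks.
        rng = np.random.default_rng(0)
        samples = 60 + 3 * rng.standard_normal(3600)
        samples[::300] += 25  # sporadic alarms or handling noise
        print(noise_metrics(samples))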

  12. Tutorial and Guidelines on Measurement of Sound Pressure Level in Voice and Speech.

    PubMed

    Švec, Jan G; Granqvist, Svante

    2018-03-15

    Sound pressure level (SPL) measurement of voice and speech is often considered a trivial matter, but the measured levels are often reported incorrectly or incompletely, making them difficult to compare among various studies. This article aims at explaining the fundamental principles behind these measurements and providing guidelines to improve their accuracy and reproducibility. Basic information is put together from standards, technical, voice and speech literature, and practical experience of the authors and is explained for nontechnical readers. Variation of SPL with distance, sound level meters and their accuracy, frequency and time weightings, and background noise topics are reviewed. Several calibration procedures for SPL measurements are described for stand-mounted and head-mounted microphones. SPL of voice and speech should be reported together with the mouth-to-microphone distance so that the levels can be related to vocal power. Sound level measurement settings (i.e., frequency weighting and time weighting/averaging) should always be specified. Classified sound level meters should be used to assure measurement accuracy. Head-mounted microphones placed at the proximity of the mouth improve signal-to-noise ratio and can be taken advantage of for voice SPL measurements when calibrated. Background noise levels should be reported besides the sound levels of voice and speech.

  13. Detection System of Sound Noise Level (SNL) Based on Condenser Microphone Sensor

    NASA Astrophysics Data System (ADS)

    Rajagukguk, Juniastel; Eka Sari, Nurdieni

    2018-03-01

    The research aims to measure noise levels using an Arduino Uno to process input data from a sensor, in a device called the Sound Noise Level (SNL) detector. The instrument works as a noise detector that reports the measured noise level on an LCD indicator and in audiovisual form. Noise is detected with a condenser microphone sensor and an LM567 IC, assembled so that sounds captured by the sensor are converted into a sinusoidal electrical signal that can be processed by the Arduino Uno. The device is equipped with a set of indicator LEDs and a sound alert, as well as a text notification on a 16*2 LCD. The indicators are configured so that if the measured noise exceeds 75 dB, the alert beeps and the red LED lights up, indicating danger. If the value shown on the LCD exceeds 56 dB, the alert beeps and the yellow LED lights up, indicating a noisy environment. If the measured noise is below 55 dB, the alert remains silent, indicating a quiet environment. The results show that the SNL is capable of detecting and displaying noise levels over a measuring range of 50-100 dB and of delivering audiovisual noise notifications.

  14. Aging Affects Adaptation to Sound-Level Statistics in Human Auditory Cortex.

    PubMed

    Herrmann, Björn; Maess, Burkhard; Johnsrude, Ingrid S

    2018-02-21

    Optimal perception requires efficient and adaptive neural processing of sensory input. Neurons in nonhuman mammals adapt to the statistical properties of acoustic feature distributions such that they become sensitive to sounds that are most likely to occur in the environment. However, whether human auditory responses adapt to stimulus statistical distributions and how aging affects adaptation to stimulus statistics is unknown. We used MEG to study how exposure to different distributions of sound levels affects adaptation in auditory cortex of younger (mean: 25 years; n = 19) and older (mean: 64 years; n = 20) adults (male and female). Participants passively listened to two sound-level distributions with different modes (either 15 or 45 dB sensation level). In a control block with long interstimulus intervals, allowing neural populations to recover from adaptation, neural response magnitudes were similar between younger and older adults. Critically, both age groups demonstrated adaptation to sound-level stimulus statistics, but adaptation was altered for older compared with younger people: in the older group, neural responses continued to be sensitive to sound level under conditions in which responses were fully adapted in the younger group. The lack of full adaptation to the statistics of the sensory environment may be a physiological mechanism underlying the known difficulty that older adults have with filtering out irrelevant sensory information. SIGNIFICANCE STATEMENT Behavior requires efficient processing of acoustic stimulation. Animal work suggests that neurons accomplish efficient processing by adjusting their response sensitivity depending on statistical properties of the acoustic environment. Little is known about the extent to which this adaptation to stimulus statistics generalizes to humans, particularly to older humans. We used MEG to investigate how aging influences adaptation to sound-level statistics. Listeners were presented with sounds drawn from

  15. Differences between Outdoor and Indoor Sound Levels for Open, Tilted, and Closed Windows

    PubMed Central

    Locher, Barbara; Piquerez, André; Habermacher, Manuel; Ragettli, Martina; Cajochen, Christian; Vienneau, Danielle; Foraster, Maria; Müller, Uwe; Wunderli, Jean Marc

    2018-01-01

    Noise exposure prediction models for health effect studies normally estimate free field exposure levels outside. However, to assess the noise exposure inside dwellings, an estimate of indoor sound levels is necessary. To date, little field data is available about the difference between indoor and outdoor noise levels and factors affecting the damping of outside noise. This is a major cause of uncertainty in indoor noise exposure prediction and may lead to exposure misclassification in health assessments. This study aims to determine sound level differences between the indoors and the outdoors for different window positions and how this sound damping is related to building characteristics. For this purpose, measurements were carried out at home in a sample of 102 Swiss residents exposed to road traffic noise. Sound pressure level recordings were performed outdoors and indoors, in the living room and in the bedroom. Three scenarios (open, tilted, and closed windows) were recorded for three minutes each. For each situation, data on additional parameters such as the orientation towards the source, floor, and room, as well as sound insulation characteristics were collected. On that basis, linear regression models were established. The median outdoor–indoor sound level differences were 10 dB(A) for open, 16 dB(A) for tilted, and 28 dB(A) for closed windows. For open and tilted windows, the most relevant parameters affecting the outdoor–indoor differences were the position of the window, the type and volume of the room, and the age of the building. For closed windows, the relevant parameters were the sound level outside, the material of the window frame, the existence of window gaskets, and the number of windows. PMID:29346318

  16. Differences between Outdoor and Indoor Sound Levels for Open, Tilted, and Closed Windows.

    PubMed

    Locher, Barbara; Piquerez, André; Habermacher, Manuel; Ragettli, Martina; Röösli, Martin; Brink, Mark; Cajochen, Christian; Vienneau, Danielle; Foraster, Maria; Müller, Uwe; Wunderli, Jean Marc

    2018-01-18

    Noise exposure prediction models for health effect studies normally estimate free field exposure levels outside. However, to assess the noise exposure inside dwellings, an estimate of indoor sound levels is necessary. To date, little field data is available about the difference between indoor and outdoor noise levels and factors affecting the damping of outside noise. This is a major cause of uncertainty in indoor noise exposure prediction and may lead to exposure misclassification in health assessments. This study aims to determine sound level differences between the indoors and the outdoors for different window positions and how this sound damping is related to building characteristics. For this purpose, measurements were carried out at home in a sample of 102 Swiss residents exposed to road traffic noise. Sound pressure level recordings were performed outdoors and indoors, in the living room and in the bedroom. Three scenarios (open, tilted, and closed windows) were recorded for three minutes each. For each situation, data on additional parameters such as the orientation towards the source, floor, and room, as well as sound insulation characteristics were collected. On that basis, linear regression models were established. The median outdoor-indoor sound level differences were 10 dB(A) for open, 16 dB(A) for tilted, and 28 dB(A) for closed windows. For open and tilted windows, the most relevant parameters affecting the outdoor-indoor differences were the position of the window, the type and volume of the room, and the age of the building. For closed windows, the relevant parameters were the sound level outside, the material of the window frame, the existence of window gaskets, and the number of windows.
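
    As a rough application of the median differences reported above, an indoor level can be approximated by subtracting the window-position-specific damping from the outdoor level. The Python sketch below uses only the study's median values and deliberately ignores the building-specific factors identified in the regression models.

        # Median outdoor-indoor differences reported in the study, in dB(A).
        MEDIAN_DAMPING_DB = {"open": 10.0, "tilted": 16.0, "closed": 28.0}

        def estimate_indoor_level(outdoor_dba: float, window_state: str) -> float:
            """Crude indoor level estimate from an outdoor facade level and window position."""
            return outdoor_dba - MEDIAN_DAMPING_DB[window_state]

        # Example: a 65 dB(A) facade level with a tilted window.
        print(estimate_indoor_level(65.0, "tilted"))  # -> 49.0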

  17. Tinnitus is associated with reduced sound level tolerance in adolescents with normal audiograms and otoacoustic emissions

    PubMed Central

    Sanchez, Tanit Ganz; Moraes, Fernanda; Casseb, Juliana; Cota, Jaci; Freire, Katya; Roberts, Larry E.

    2016-01-01

    Recent neuroscience research suggests that tinnitus may reflect synaptic loss in the cochlea that does not express in the audiogram but leads to neural changes in auditory pathways that reduce sound level tolerance (SLT). Adolescents (N = 170) completed a questionnaire addressing their prior experience with tinnitus, potentially risky listening habits, and sensitivity to ordinary sounds, followed by psychoacoustic measurements in a sound booth. Among all adolescents 54.7% reported by questionnaire that they had previously experienced tinnitus, while 28.8% heard tinnitus in the booth. Psychoacoustic properties of tinnitus measured in the sound booth corresponded with those of chronic adult tinnitus sufferers. Neither hearing thresholds (≤15 dB HL to 16 kHz) nor otoacoustic emissions discriminated between adolescents reporting or not reporting tinnitus in the sound booth, but loudness discomfort levels (a psychoacoustic measure of SLT) did so, averaging 11.3 dB lower in adolescents experiencing tinnitus in the acoustic chamber. Although risky listening habits were near universal, the teenagers experiencing tinnitus and reduced SLT tended to be more protective of their hearing. Tinnitus and reduced SLT could be early indications of a vulnerability to hidden synaptic injury that is prevalent among adolescents and expressed following exposure to high level environmental sounds. PMID:27265722

  18. Non-auditory health effects among air force crew chiefs exposed to high level sound.

    PubMed

    Jensen, Anker; Lund, Søren Peter; Lücke, Thorsten Høgh; Clausen, Ole Voldum; Svendsen, Jørgen Torp

    2009-01-01

    The possibility of non-auditory health effects in connection with occupational exposure to high-level sound has been suggested by some researchers, but is still debated. Crew chiefs on airfields are exposed to high-level aircraft sound when working close to aircraft with running engines. We compared their health status with that of a similar control group who were not subject to this specific sound exposure. Health records of 42 crew chiefs were compared to health records of 42 aircraft mechanics and 17 former crew chiefs. The specific sound exposure of crew chiefs was assessed. The number of reported disease cases was generally small, but slightly higher among mechanics than among crew chiefs. Diseases of the ear were more frequent among crew chiefs (not significant). Former crew chiefs reported fewer diseases of the ear and more airway infections (both significant). The sound exposure during launch was up to 144 dB (peak) and 124 dB (Leq), but only for limited periods. The study did not reveal a higher disease frequency in general among crew chiefs. However, it did reveal a tendency toward ear diseases, possibly due to their exposure to high-level sound.

  19. Matched Behavioral and Neural Adaptations for Low Sound Level Echolocation in a Gleaning Bat, Antrozous pallidus.

    PubMed

    Measor, Kevin R; Leavell, Brian C; Brewton, Dustin H; Rumschlag, Jeffrey; Barber, Jesse R; Razak, Khaleel A

    2017-01-01

    In active sensing, animals make motor adjustments to match sensory inputs to specialized neural circuitry. Here, we describe an active sensing system for sound level processing. The pallid bat uses downward frequency-modulated (FM) sweeps as echolocation calls for general orientation and obstacle avoidance. The bat's auditory cortex contains a region selective for these FM sweeps (FM sweep-selective region, FMSR). We show that the vast majority of FMSR neurons are sensitive and strongly selective for relatively low levels (30-60 dB SPL). Behavioral testing shows that when a flying bat approaches a target, it reduces output call levels to keep echo levels between ∼30 and 55 dB SPL. Thus, the pallid bat behaviorally matches echo levels to an optimized neural representation of sound levels. FMSR neurons are more selective for sound levels of FM sweeps than tones, suggesting that across-frequency integration enhances level tuning. Level-dependent timing of high-frequency sideband inhibition in the receptive field shapes increased level selectivity for FM sweeps. Together with previous studies, these data indicate that the same receptive field properties shape multiple filters (sweep direction, rate, and level) for FM sweeps, a sound common in multiple vocalizations, including human speech. The matched behavioral and neural adaptations for low-intensity echolocation in the pallid bat will facilitate foraging with reduced probability of acoustic detection by prey.

  20. Matched Behavioral and Neural Adaptations for Low Sound Level Echolocation in a Gleaning Bat, Antrozous pallidus

    PubMed Central

    Measor, Kevin R.; Leavell, Brian C.; Brewton, Dustin H.; Rumschlag, Jeffrey; Barber, Jesse R.

    2017-01-01

    Abstract In active sensing, animals make motor adjustments to match sensory inputs to specialized neural circuitry. Here, we describe an active sensing system for sound level processing. The pallid bat uses downward frequency-modulated (FM) sweeps as echolocation calls for general orientation and obstacle avoidance. The bat’s auditory cortex contains a region selective for these FM sweeps (FM sweep-selective region, FMSR). We show that the vast majority of FMSR neurons are sensitive and strongly selective for relatively low levels (30-60 dB SPL). Behavioral testing shows that when a flying bat approaches a target, it reduces output call levels to keep echo levels between ∼30 and 55 dB SPL. Thus, the pallid bat behaviorally matches echo levels to an optimized neural representation of sound levels. FMSR neurons are more selective for sound levels of FM sweeps than tones, suggesting that across-frequency integration enhances level tuning. Level-dependent timing of high-frequency sideband inhibition in the receptive field shapes increased level selectivity for FM sweeps. Together with previous studies, these data indicate that the same receptive field properties shape multiple filters (sweep direction, rate, and level) for FM sweeps, a sound common in multiple vocalizations, including human speech. The matched behavioral and neural adaptations for low-intensity echolocation in the pallid bat will facilitate foraging with reduced probability of acoustic detection by prey. PMID:28275715

  1. Four odontocete species change hearing levels when warned of impending loud sound.

    PubMed

    Nachtigall, Paul E; Supin, Alexander Ya; Pacini, Aude F; Kastelein, Ronald A

    2018-03-01

    Hearing sensitivity change was investigated when a warning sound preceded a loud sound in the false killer whale (Pseudorca crassidens), the bottlenose dolphin (Tursiops truncatus), the beluga whale (Delphinapterus leucas) and the harbor porpoise (Phocoena phocoena). Hearing sensitivity was measured using pip-train test stimuli and auditory evoked potential recording. When the test/warning stimuli preceded a loud sound, hearing thresholds before the loud sound increased relative to the baseline by 13 to 17 dB. Experiments with multiple frequencies of exposure and shift provided evidence of different amounts of hearing change depending on frequency, indicating that the hearing sensation level changes were not likely due to a simple stapedial reflex. © 2017 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  2. High Level Impulse Sounds and Human Hearing: Standards, Physiology, Quantification

    DTIC Science & Technology

    2012-05-01

    As a result of this change, the piston-like movements of the stapes are replaced by a tilting action, which is much less effective in pushing the cochlear fluids. Above this threshold, high noise levels result in a turbulent flow of air through the nonlinear element of the protector, effectively dissipating the incoming acoustic energy. The report also presents electrical-circuit diagrams of earplug and earmuff models (Kalb, 2011), in which the energy flow through the hearing protection device (HPD) propagates along three parallel paths.

  3. An analysis of collegiate band directors' exposure to sound pressure levels

    NASA Astrophysics Data System (ADS)

    Roebuck, Nikole Moore

    Noise-induced hearing loss (NIHL) is a significant and unfortunately common occupational hazard. The purpose of the current study was to measure the magnitude of sound pressure levels generated within a collegiate band room and determine if those sound pressure levels are of a magnitude that exceeds the policy standards and recommendations of the Occupational Safety and Health Administration (OSHA) and the National Institute for Occupational Safety and Health (NIOSH). In addition, reverberation times were measured and analyzed in order to determine the appropriateness of acoustical conditions for the band rehearsal environment. Sound pressure measurements were taken from the rehearsals of seven collegiate marching bands. Single-sample t tests were conducted to compare the sound pressure levels of all bands to the noise exposure standards of OSHA and NIOSH. Multiple regression analyses were conducted to determine the effect of the band room's conditions on the sound pressure levels and reverberation times. Time-weighted averages (TWA), noise percentage doses, and peak levels were also collected. The mean Leq for all band directors was 90.5 dBA. The total accumulated noise percentage dose for all band directors was 77.6% of the maximum allowable daily noise dose under the OSHA standard, and the total calculated TWA for all band directors was 88.2 dBA under the OSHA standard. The total accumulated noise percentage dose for all band directors was 152.1% of the maximum allowable daily noise dose under the NIOSH standard, and the total calculated TWA for all band directors was 93 dBA under the NIOSH standard. Multiple regression analysis revealed that the room volume, the level of acoustical treatment and the mean room reverberation time predicted 80% of the variance in sound pressure levels in this study.
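
    The dose and TWA figures above follow from the standard OSHA and NIOSH formulas for permissible duration, percentage dose, and dose-to-TWA conversion. The Python sketch below restates those published formulas with an illustrative exposure (the 90.5 dBA mean level from the study combined with an assumed 2-h daily rehearsal), not with the study's measured durations.

        import math

        def permissible_hours(level_dba, criterion, exchange_rate):
            """Allowed daily duration at a given level (OSHA: 90 dBA / 5 dB; NIOSH: 85 dBA / 3 dB)."""
            return 8.0 / (2.0 ** ((level_dba - criterion) / exchange_rate))

        def dose_percent(exposures, criterion, exchange_rate):
            """Exposures is a list of (level_dba, hours) pairs; returns percent of allowable dose."""
            return 100.0 * sum(h / permissible_hours(l, criterion, exchange_rate)
                               for l, h in exposures)

        def twa(dose, criterion, exchange_rate):
            """Equivalent 8-h TWA (dBA) from a percentage dose."""
            return (exchange_rate / math.log10(2.0)) * math.log10(dose / 100.0) + criterion

        # Illustrative 2-h rehearsal at 90.5 dBA; quieter hours below threshold are ignored.
        exposure = [(90.5, 2.0)]
        for name, criterion, rate in (("OSHA", 90.0, 5.0), ("NIOSH", 85.0, 3.0)):
            d = dose_percent(exposure, criterion, rate)
            print(name, round(d, 1), round(twa(d, criterion, rate), 1))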

  4. Tutorial and Guidelines on Measurement of Sound Pressure Level in Voice and Speech

    ERIC Educational Resources Information Center

    Švec, Jan G.; Granqvist, Svante

    2018-01-01

    Purpose: Sound pressure level (SPL) measurement of voice and speech is often considered a trivial matter, but the measured levels are often reported incorrectly or incompletely, making them difficult to compare among various studies. This article aims at explaining the fundamental principles behind these measurements and providing guidelines to…

  5. Diversity in sound pressure levels and estimated active space of resident killer whale vocalizations.

    PubMed

    Miller, Patrick J O

    2006-05-01

    Signal source intensity and detection range, which integrates source intensity with propagation loss, background noise and receiver hearing abilities, are important characteristics of communication signals. Apparent source levels were calculated for 819 pulsed calls and 24 whistles produced by free-ranging resident killer whales by triangulating the angles-of-arrival of sounds on two beamforming arrays towed in series. Levels in the 1-20 kHz band ranged from 131 to 168 dB re 1 microPa at 1 m, with differences in the means of different sound classes (whistles: 140.2+/-4.1 dB; variable calls: 146.6+/-6.6 dB; stereotyped calls: 152.6+/-5.9 dB), and among stereotyped call types. Repertoire diversity carried through to estimates of active space, with "long-range" stereotyped calls all containing overlapping, independently-modulated high-frequency components (mean estimated active space of 10-16 km in sea state zero) and "short-range" sounds (5-9 km) included all stereotyped calls without a high-frequency component, whistles, and variable calls. Short-range sounds are reported to be more common during social and resting behaviors, while long-range stereotyped calls predominate in dispersed travel and foraging behaviors. These results suggest that variability in sound pressure levels may reflect diverse social and ecological functions of the acoustic repertoire of killer whales.
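
    Active-space estimates of the kind reported above come from the passive sonar equation: a call is detectable out to the range where the received level falls to the background-noise-limited threshold. The Python sketch below is a simplified version with an assumed geometric spreading law and illustrative noise and detection-threshold values; it is not the measured propagation or hearing model used in the study.

        import math

        def active_space_m(source_level_db, noise_level_db, detection_threshold_db=0.0,
                           spreading_coefficient=15.0):
            """Range (m) at which SL - k*log10(r) drops to NL + DT, for TL = k*log10(r)."""
            excess = source_level_db - noise_level_db - detection_threshold_db
            return 10.0 ** (excess / spreading_coefficient)

        # Illustrative values: a 158 dB re 1 uPa call against an assumed 95 dB re 1 uPa band noise.
        print(round(active_space_m(158.0, 95.0)))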

  6. Sounds and source levels from bowhead whales off Pt. Barrow, Alaska.

    PubMed

    Cummings, W C; Holliday, D V

    1987-09-01

    Sounds were recorded from bowhead whales migrating past Pt. Barrow, AK, to the Canadian Beaufort Sea. They mainly consisted of various low-frequency (25- to 900-Hz) moans and well-defined sound sequences organized into "song" (20-5000 Hz) recorded with our 2.46-km hydrophone array suspended from the ice. Songs were composed of up to 20 repeated phrases (mean, 10) which lasted up to 146 s (mean, 66.3). Several bowhead whales often were within acoustic range of the array at once, but usually only one sang at a time. Vocalizations exhibited diurnal peaks of occurrence (0600-0800, 1600-1800 h). Sounds which were located in the horizontal plane had peak source spectrum levels as follows--44 moans: 129-178 dB re: 1 microPa, 1 m (median, 159); 3 garglelike utterances: 152, 155, and 169 dB; 33 songs: 158-189 dB (median, 177), all presumably from different whales. Based on ambient noise levels, measured total propagation loss, and whale sound source levels, our detection of whale sounds was theoretically noise-limited beyond 2.5 km (moans) and beyond 10.7 km (songs), a model supported by actual localizations. This study showed that over much of the shallow Arctic and sub-Arctic waters, underwater communications of the bowhead whale would be limited to much shorter ranges than for other large whales in lower latitude, deep-water regions.

  7. Physical activity levels of overweight or obese breast cancer survivors: Correlates at entry into a weight loss intervention study

    PubMed Central

    Liu, Fred X.; Flatt, Shirley W.; Pakiz, Bilgé; Sedjo, Rebecca L.; Wolin, Kathleen Y.; Blair, Cindy K.; Demark-Wahnefried, Wendy; Rock, Cheryl L.

    2015-01-01

    Purpose Physical activity is associated with reduced risk and progression of breast cancer, and exercise can improve physical function, quality of life and fatigue in cancer survivors. Evidence on factors associated with cancer survivors’ adherence to physical activity guidelines from the American Cancer Society and the U.S. Department of Health and Human Services is mixed. This study seeks to help fill this gap in knowledge by examining correlates with physical activity among breast cancer survivors. Methods Overweight or obese breast cancer survivors (N=692) were examined at enrollment into a weight loss intervention study. Questionnaires and medical record review ascertained data on education, race, ethnicity, menopausal status, physical activity, and medical history. Measures of anthropometrics and fitness level were conducted. Regression analysis examined associations between physical activity and demographic, clinical, and lifestyle factors. Results Overall, 23% of women met current guidelines. Multivariate analysis revealed that body mass index (p=0.03), emergency room visits in the past year (p=0.04), and number of co-morbidities (p=0.02) were associated with less physical activity. Geographic region also was associated with level of physical activity (p=0.02), with women in Alabama reporting significantly less activity than those in other participating regions. Conclusions The majority of overweight/obese breast cancer survivors did not meet physical activity recommendations. Physical activity levels were associated with degree of adiposity, geographic location, and number of co-morbidities. The majority of overweight breast cancer survivors should be encouraged to increase their level of physical activity. Individualizing exercise prescriptions according to medical co-morbidities may improve adherence. PMID:25975675

  8. Predicted and Measured Modal Sound Power Levels for a Fan Ingesting Distorted Inflow

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle

    2010-01-01

    Refinements have been made to a method for estimating the modal sound power levels of a ducted fan ingesting distorted inflow. By assuming that each propagating circumferential mode consists only of a single radial mode (the one with the highest cut-off ratio), circumferential mode sound power levels can be computed for a variety of inflow distortion patterns and operating speeds. Predictions from the refined theory have been compared to data from an experiment conducted in the Advanced Noise Control Fan at NASA Glenn Research Center. The inflow to the fan was distorted by inserting cylindrical rods radially into the inlet duct. The rods were placed at an axial location one rotor chord length upstream of the fan and arranged in both regular and irregular circumferential patterns. The fan was operated at 2000, 1800, and 1400 rpm. Acoustic pressure levels were measured in the fan inlet and exhaust ducts using the Rotating Rake fan mode measurement system. Far field sound pressure levels were also measured. It is shown that predicted trends in circumferential mode sound power levels closely match the experimental data for all operating speeds and distortion configurations tested. Insight gained through this work is being used to develop more advanced tools for predicting fan inflow distortion tone noise levels.

  9. The influence of the level of formants on the perception of synthetic vowel sounds

    NASA Astrophysics Data System (ADS)

    Kubzdela, Henryk; Owsianny, Mariuz

    A computer model of a generator of periodic complex sounds simulating consonants was developed. The system allows independent regulation of the level of each formant and instant generation of the sound. A trapezoid approximates the curve of the spectrum within the range of each formant. Using this model, each of six listeners experimentally selected synthesis parameters for six sounds that seemed to be optimal approximations of Polish consonants. From these, another six sounds were selected that were identified by a majority of the six listeners and several additional listeners as being best qualified to serve as prototypes of Polish consonants. These prototypes were then used to randomly create sounds with various combinations of second- and third-formant levels, and these were presented to seven listeners for identification. The results of the identifications are presented in table form in three variants and are described from the point of view of the requirements of automatic recognition of consonants in continuous speech.

  10. Stapes Displacement and Intracochlear Pressure in Response to Very High Level, Low Frequency Sounds

    PubMed Central

    Greene, Nathaniel T.; Jenkins, Herman A.; Tollin, Daniel J.; Easter, James R.

    2018-01-01

    The stapes is held in the oval window by the stapedial annular ligament (SAL), which restricts total peak-to-peak displacement of the stapes. Previous studies have suggested that for moderate (< 130 dB SPL) sound levels intracochlear pressure (PIC), measured at the base of the cochlea far from the basilar membrane, increases directly proportionally with stapes displacement (DStap), thus a current model of impulse noise exposure (the Auditory Hazard Assessment Algorithm for Humans, or AHAAH) predicts that peak PIC will vary linearly with DStap up to some saturation point. However, no direct tests of DStap, or of the relationship with PIC during such motion, have been performed during acoustic stimulation of the human ear. In order to examine the relationship between DStap and PIC to very high level sounds, measurements of DStap and PIC were made in cadaveric human temporal bones. Specimens were prepared by mastoidectomy and extended facial recess to expose the ossicular chain. Measurements of PIC were made in scala vestibuli (PSV) and scala tympani (PST), along with the SPL in the external auditory canal (PEAC), concurrently with laser Doppler vibrometry (LDV) measurements of stapes velocity (VStap). Stimuli were moderate (~100 dB SPL) to very high level (up to ~170 dB SPL), low frequency tones (20–2560 Hz). Both DStap and PSV increased proportionally with sound pressure level in the ear canal up to approximately ~150 dB SPL, above which both DStap and PSV showed a distinct deviation from proportionality with PEAC. Both DStap and PSV approached saturation: DStap at a value exceeding 150 μm, which is substantially higher than has been reported for small mammals, while PSV showed substantial frequency dependence in the saturation point. The relationship between PSV and DStap remained constant, and cochlear input impedance did not vary across the levels tested, consistent with prior measurements at lower sound levels. These results suggest that PSV sound pressure

  11. Stapes displacement and intracochlear pressure in response to very high level, low frequency sounds.

    PubMed

    Greene, Nathaniel T; Jenkins, Herman A; Tollin, Daniel J; Easter, James R

    2017-05-01

    The stapes is held in the oval window by the stapedial annular ligament (SAL), which restricts total peak-to-peak displacement of the stapes. Previous studies have suggested that for moderate (<130 dB SPL) sound levels intracochlear pressure (PIC), measured at the base of the cochlea far from the basilar membrane, increases directly proportionally with stapes displacement (DStap), thus a current model of impulse noise exposure (the Auditory Hazard Assessment Algorithm for Humans, or AHAAH) predicts that peak PIC will vary linearly with DStap up to some saturation point. However, no direct tests of DStap, or of the relationship with PIC during such motion, have been performed during acoustic stimulation of the human ear. In order to examine the relationship between DStap and PIC to very high level sounds, measurements of DStap and PIC were made in cadaveric human temporal bones. Specimens were prepared by mastoidectomy and extended facial recess to expose the ossicular chain. Measurements of PIC were made in scala vestibuli (PSV) and scala tympani (PST), along with the SPL in the external auditory canal (PEAC), concurrently with laser Doppler vibrometry (LDV) measurements of stapes velocity (VStap). Stimuli were moderate (∼100 dB SPL) to very high level (up to ∼170 dB SPL), low frequency tones (20-2560 Hz). Both DStap and PSV increased proportionally with sound pressure level in the ear canal up to approximately ∼150 dB SPL, above which both DStap and PSV showed a distinct deviation from proportionality with PEAC. Both DStap and PSV approached saturation: DStap at a value exceeding 150 μm, which is substantially higher than has been reported for small mammals, while PSV showed substantial frequency dependence in the saturation point. The relationship between PSV and DStap remained constant, and cochlear input impedance did not vary across the levels tested, consistent with prior measurements at lower sound levels. These

  12. Development of neural responsivity to vocal sounds in higher level auditory cortex of songbirds

    PubMed Central

    Miller-Sims, Vanessa C.

    2014-01-01

    Like humans, songbirds learn vocal sounds from “tutors” during a sensitive period of development. Vocal learning in songbirds therefore provides a powerful model system for investigating neural mechanisms by which memories of learned vocal sounds are stored. This study examined whether NCM (caudo-medial nidopallium), a region of higher level auditory cortex in songbirds, serves as a locus where a neural memory of tutor sounds is acquired during early stages of vocal learning. NCM neurons respond well to complex auditory stimuli, and evoked activity in many NCM neurons habituates such that the response to a stimulus that is heard repeatedly decreases to approximately one-half its original level (stimulus-specific adaptation). The rate of neural habituation serves as an index of familiarity, being low for familiar sounds, but high for novel sounds. We found that response strength across different song stimuli was higher in NCM neurons of adult zebra finches than in juveniles, and that only adult NCM responded selectively to tutor song. The rate of habituation across both tutor song and novel conspecific songs was lower in adult than in juvenile NCM, indicating higher familiarity and a more persistent response to song stimuli in adults. In juvenile birds that have memorized tutor vocal sounds, neural habituation was higher for tutor song than for a familiar conspecific song. This unexpected result suggests that the response to tutor song in NCM at this age may be subject to top-down influences that maintain the tutor song as a salient stimulus, despite its high level of familiarity. PMID:24694936

  13. 20 Years of sea-levels, accretion, and vegetation on two Long Island Sound salt marshes

    EPA Science Inventory

    The long-term 1939-2013 rate of RSLR (Relative Sea-Level Rise) at the New London, CT tide gauge is ~2.6 mm/yr, near the maximum rate of salt marsh accretion reported in eastern Long Island Sound salt marshes. Consistent with recent literature RSLR at New London has accelerated si...

  14. The Measurement of the Oral and Nasal Sound Pressure Levels of Speech

    ERIC Educational Resources Information Center

    Clarke, Wayne M.

    1975-01-01

    A nasal separator was used to measure the oral and nasal components in the speech of a normal adult Australian population. Results indicated no difference in oral and nasal sound pressure levels for read versus spontaneous speech samples; however, females tended to have a higher nasal component than did males. (Author/TL)

  15. 49 CFR 325.37 - Location and operation of sound level measurement system; highway operations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... system; highway operations. 325.37 Section 325.37 Transportation Other Regulations Relating to...; Highway Operations § 325.37 Location and operation of sound level measurement system; highway operations..., the holder must orient himself/herself relative to the highway in a manner consistent with the...

  16. 49 CFR 325.37 - Location and operation of sound level measurement system; highway operations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... system; highway operations. 325.37 Section 325.37 Transportation Other Regulations Relating to...; Highway Operations § 325.37 Location and operation of sound level measurement system; highway operations..., the holder must orient himself/herself relative to the highway in a manner consistent with the...

  17. 49 CFR 325.37 - Location and operation of sound level measurement system; highway operations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... system; highway operations. 325.37 Section 325.37 Transportation Other Regulations Relating to...; Highway Operations § 325.37 Location and operation of sound level measurement system; highway operations..., the holder must orient himself/herself relative to the highway in a manner consistent with the...

  18. 49 CFR 325.57 - Location and operation of sound level measurement systems; stationary test.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 5 2014-10-01 2014-10-01 false Location and operation of sound level measurement systems; stationary test. 325.57 Section 325.57 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL MOTOR CARRIER SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION GENERAL REGULATIONS COMPLIANCE WITH INTERSTATE MOTOR...

  19. 49 CFR 325.57 - Location and operation of sound level measurement systems; stationary test.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the vehicle at an angle that is consistent with the recommendation of the system's manufacturer. If... systems; stationary test. 325.57 Section 325.57 Transportation Other Regulations Relating to...; Stationary Test § 325.57 Location and operation of sound level measurement systems; stationary test. (a) The...

  20. 49 CFR 325.37 - Location and operation of sound level measurement system; highway operations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... recommendation of the manufacturer of the sound level measurement system. (2) In no case shall the holder or... angle that is consistent with the recommendation of the system's manufacturer. If the manufacturer of... system; highway operations. 325.37 Section 325.37 Transportation Other Regulations Relating to...

  1. A comparison of radiosity with current methods of sound level prediction in commercial spaces

    NASA Astrophysics Data System (ADS)

    Beamer, C. Walter, IV; Muehleisen, Ralph T.

    2002-11-01

    The ray tracing and image methods (and variations thereof) are widely used for the computation of sound fields in architectural spaces. The ray tracing and image methods are best suited for spaces with mostly specular reflecting surfaces. The radiosity method, a method based on solving a system of energy balance equations, is best applied to spaces with mainly diffusely reflective surfaces. Because very few spaces are either purely specular or purely diffuse, all methods must deal with both types of reflecting surfaces. A comparison of the radiosity method to other methods for the prediction of sound levels in commercial environments is presented. [Work supported by NSF.]
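
    To make the energy-balance idea concrete, acoustic radiosity can be written as a linear system B = E + R F B, where B holds the energy radiated by each diffusely reflecting patch, E the direct contribution from the source, R the diagonal matrix of diffuse reflection coefficients, and F the patch-to-patch form factors. The toy sketch below solves that system; the patch count, reflectances, and form factors are made-up placeholders, not a real room model.

    ```python
    # Minimal sketch of an acoustic-radiosity energy balance: B = E + R F B,
    # where B is the energy radiated per patch, E the direct source term,
    # R a diagonal matrix of diffuse reflection coefficients, and F the
    # patch-to-patch form factors.  All numbers are illustrative placeholders.
    import numpy as np

    n = 3                                   # three wall patches (toy example)
    E = np.array([1.0, 0.0, 0.0])           # direct energy from the source hits patch 0
    R = np.diag([0.8, 0.6, 0.7])            # diffuse reflection coefficients
    F = np.array([[0.0, 0.5, 0.5],          # form factors (rows sum to <= 1)
                  [0.5, 0.0, 0.5],
                  [0.5, 0.5, 0.0]])

    # Solve (I - R F) B = E for the steady-state radiated energy of each patch.
    B = np.linalg.solve(np.eye(n) - R @ F, E)
    level_db = 10 * np.log10(B / B.max())   # relative levels, dB re the loudest patch
    print(B, level_db)
    ```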

  2. Incorporating Midbrain Adaptation to Mean Sound Level Improves Models of Auditory Cortical Processing

    PubMed Central

    Schoppe, Oliver; King, Andrew J.; Schnupp, Jan W.H.; Harper, Nicol S.

    2016-01-01

    Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear–nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it
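
    A minimal sketch of the preprocessing stage described above: subtract a running estimate of the mean level in each frequency channel (a high-pass filter with a frequency-dependent time constant, implemented here as an exponential moving average), then half-wave rectify before passing the result to a standard LN model. The spectrogram shape, frame step, and time-constant range are assumptions.

    ```python
    # Sketch of the "IC Adaptation" preprocessing described above: subtract a
    # running (exponential) estimate of the mean level in each frequency channel
    # -- i.e. high-pass filter with a frequency-dependent time constant -- then
    # half-wave rectify before handing the result to a standard LN model.
    # The time-constant range and spectrogram shape are assumptions.
    import numpy as np

    def ic_adaptation(spectrogram, dt=0.005, tau=None):
        """spectrogram: (n_freq, n_time) log-power; dt: frame step in seconds."""
        n_freq, n_time = spectrogram.shape
        if tau is None:                        # assumed: slower adaptation at low freq
            tau = np.linspace(0.4, 0.1, n_freq)
        alpha = dt / tau                       # EMA coefficient per channel
        mean_est = spectrogram[:, 0].copy()
        out = np.zeros_like(spectrogram)
        for t in range(n_time):
            mean_est += alpha * (spectrogram[:, t] - mean_est)
            out[:, t] = np.maximum(spectrogram[:, t] - mean_est, 0.0)  # half-wave rect.
        return out

    # toy usage: random "spectrogram" with a sudden step in mean level
    S = np.random.rand(32, 400) + np.concatenate([np.zeros(200), np.ones(200) * 5])
    adapted = ic_adaptation(S)
    ```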

  3. Noise levels in neonatal intensive care unit and use of sound absorbing panel in the isolette.

    PubMed

    Altuncu, E; Akman, I; Kulekci, S; Akdas, F; Bilgen, H; Ozek, E

    2009-07-01

    The purposes of this study were to measure the noise level of a busy neonatal intensive care unit (NICU) and to determine the effect of sound absorbing panel (SAP) on the level of noise inside the isolette. The sound pressure levels (SPL) of background noise, baby crying, alarms and closing of isolette's door/portholes were measured by a Brüel & Kjaer 2235 sound level meter. Readings were repeated after applying SAP (3D pyramidal shaped open cell polyurethane foam) to the three lateral walls and ceiling of the isolette. The median SPL of background noise inside the NICU was 56 dBA and it decreased to 47 dBA inside the isolette. The median SPL of monitor alarms and baby crying inside the isolette were not different than SPL measured under radiant warmer (p>0.05). With SAP, the median SPL of temperature alarm inside the isolette decreased significantly from 82 to 72 dBA, monitor alarm from 64 to 56 dBA, porthole closing from 81 to 74 dBA, and isolette door closing from 80 to 68 dBA (p<0.01). There was a significant reduction in the noise produced by baby crying when SAP was used in the isolette (79 dBA vs 69 dBA, respectively) (p<0.0001). There was also significant attenuation effect of panel on the environmental noise. The noise level in our NICU is significantly above the universally recommended levels. Being inside the isolette protects infants from noise sources produced outside the isolette. However, very high noises are produced inside the isolette as well. Sound absorbing panel can be a simple solution and it attenuated the noise levels inside the isolette.

  4. Sound Pressure Levels Measured in a University Concert Band: A Risk of Noise-Induced Hearing Loss?

    ERIC Educational Resources Information Center

    Holland, Nicholas V., III

    2008-01-01

    Researchers have reported public school band directors as experiencing noise-induced hearing loss. Little research has focused on collegiate band directors and university student musicians. The present study measures the sound pressure levels generated within a university concert band and compares sound levels with the criteria set by the…

  5. Excessive exposure of sick neonates to sound during transport

    PubMed Central

    Buckland, L; Austin, N; Jackson, A; Inder, T

    2003-01-01

    Objective: To determine the levels of sound to which infants are exposed during routine transport by ambulance, aircraft, and helicopter. Design: Sound levels during 38 consecutive journeys from a regional level III neonatal intensive care unit were recorded using a calibrated data logging sound meter (Quest 2900). The meter was set to record "A" weighted slow response integrated sound levels, which emulates the response of the human ear, and "C" weighted response sound levels as a measure of total sound level exposure for all frequencies. The information was downloaded to a computer using MS HyperTerminal. The resulting data were stored, and a graphical profile was generated for each journey using SigmaPlot software. Setting: Eight journeys involved ambulance transport on country roads, 24 involved fixed wing aircraft, and four were by helicopter. Main outcome measures: Relations between decibel levels and events or changes in transport mode were established by correlating the time logged on the sound meter with the standard transport documentation sheet. Results: The highest sound levels were recorded during air transport. However, mean sound levels for all modes of transport exceeded the recommended levels for neonatal intensive care. The maximum sound levels recorded were extremely high at greater than 80 dB in the "A" weighted hearing range and greater than 120 dB in the total frequency range. Conclusions: This study raises major concerns about the excessive exposure of the sick newborn to sound during transportation. PMID:14602701

  6. Pre-slaughter sound levels and pre-slaughter handling from loading at the farm till slaughter influence pork quality.

    PubMed

    Vermeulen, L; Van de Perre, V; Permentier, L; De Bie, S; Verbeke, G; Geers, R

    2016-06-01

    This study investigates the relationship between sound levels, pre-slaughter handling during loading and pork quality. Pre-slaughter variables were investigated from loading till slaughter. A total of 3213 pigs were measured 30 min post-mortem for pH(30LT) (M. Longissimus thoracis). First, a sound level model for the risk to develop PSE meat was established. The difference in maximum and mean sound level during loading, mean sound level during lairage and mean sound level prior to stunning remained significant within the model. This indicated that sound levels during loading had a significant added value to former sound models. Moreover, this study completed the global classification checklist (Vermeulen et al., 2015a) by developing a linear mixed model for pH(30LT) and PSE prevalence, with the difference in maximum and mean sound level measured during loading, the feed withdrawal period and the difference in temperature during loading and lairage. Hence, this study provided new insights over previous research where loading procedures were not included. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Factors affecting measured aircraft sound levels in the vicinity of start-of-takeoff roll

    NASA Astrophysics Data System (ADS)

    Richard, Horonjeff; Fleming, Gregg G.; Rickley, Edward J.; Connor, Thomas L.

    This paper presents the findings of a recently conducted measurement and analysis program of jet transport aircraft sound levels in the vicinity of the start-of-takeoff roll. The purpose of the program was two-fold: (1) to evaluate the computational accuracy of the Federal Aviation Administration's Integrated Noise Model (INM) in the vicinity of start-of-takeoff roll with a recently updated database (INM 3.10), and (2) to provide guidance for future model improvements. Focusing on the second of these two goals, this paper examines several factors affecting Sound Exposure Levels (SELs) in the hemicircular area behind the aircraft brake release point at the start of takeoff. In addition to the aircraft type itself, these factors included the geometric relationship of the measurement site to the runway, the wind velocity (speed and direction), aircraft gross weight, and start-of-roll mode (static or rolling start).

  8. [Preventive effects of sound insulation windows on the indoor noise levels in a street residential building in Beijing].

    PubMed

    Guo, Bin; Huang, Jing; Guo, Xin-biao

    2015-06-18

    To evaluate the preventive effects of sound insulation windows on traffic noise. Indoor noise levels of the residential rooms (on both the North 4th Ring Road side and the campus side) with closed sound insulation windows were measured using a sound level meter, and comparisons with the simultaneously measured outdoor noise levels were made. In addition, differences in indoor noise levels between rooms with closed sound insulation windows and open sound insulation windows were also compared. The average outdoor noise level on the North 4th Ring Road side was higher than 70 dB(A), which exceeded the limit stated in the "Environmental Quality Standard for Noise" (GB 3096-2008) in our country. However, with the sound insulation windows closed, the indoor noise levels were reduced significantly to below 35 dB(A) (P<0.05), which complied with the indoor noise level standards in our country. The closed or open state of the sound insulation windows had a significant influence on the indoor noise levels (P<0.05). Compared with the open state, when the sound insulation windows were closed the indoor noise levels were reduced by 18.8 dB(A) and 8.3 dB(A) in residential rooms facing the North 4th Ring Road side and the campus side, respectively. The results indicated that installation of sound insulation windows had significant noise reduction effects in street residential buildings, especially in rooms facing major traffic roads. Installation of sound insulation windows has significant preventive effects on indoor noise in street residential buildings.

  9. Behind Start of Take-Off Roll Aircraft Sound Level Directivity Study - Revision 1

    NASA Technical Reports Server (NTRS)

    Lau, Michael C.; Roof, Christopher J.; Fleming, Gregg G.; Rapoza, Amanda S.; Boeker, Eric R.; McCurdy, David A.; Shepherd, Kevin P.

    2015-01-01

    The National Aeronautics and Space Administration (NASA), Langley Research Center (LaRC) and the Environmental Measurement and Modeling Division of the Department of Transportation's Volpe National Transportation Systems Center (Volpe) conducted a noise measurement study to examine aircraft sound level directivity patterns behind the start-of-takeoff roll. The study was conducted at Washington Dulles International Airport (IAD) from October 4 through 20, 2004.

  10. Acceptable range of speech level in noisy sound fields for young adults and elderly persons.

    PubMed

    Sato, Hayato; Morimoto, Masayuki; Ota, Ryo

    2011-09-01

    The acceptable range of speech level as a function of background noise level was investigated on the basis of word intelligibility scores and listening difficulty ratings. In the present study, the acceptable range is defined as the range that maximizes word intelligibility scores and simultaneously does not cause a significant increase in listening difficulty ratings from the minimum ratings. Listening tests with young adult and elderly listeners demonstrated the following. (1) The acceptable range of speech level for elderly listeners overlapped that for young listeners. (2) The lower limit of the acceptable speech level for both young and elderly listeners was 65 dB (A-weighted) for noise levels of 40 and 45 dB (A-weighted), a level with a speech-to-noise ratio of +15 dB for noise levels of 50 and 55 dB, and a level with a speech-to-noise ratio of +10 dB for noise levels from 60 to 70 dB. (3) The upper limit of the acceptable speech level for both young and elderly listeners was 80 dB for noise levels from 40 to 55 dB and 85 dB or above for noise levels from 55 to 70 dB. © 2011 Acoustical Society of America
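
    The reported limits form a simple piecewise rule, restated below as a small function. Levels are A-weighted dB; since the abstract assigns both the 80 dB and the "85 dB or above" upper limit to a 55 dB noise level, the boundary chosen below at 55 dB is an assumption.

    ```python
    # Restating the reported acceptable speech-level range as a piecewise rule.
    # Noise and speech levels are A-weighted dB.  The abstract assigns both the
    # 80 dB and the ">= 85 dB" upper limit to a 55 dB noise level; the boundary
    # chosen below is therefore an assumption.
    def acceptable_speech_range(noise_dba):
        if 40 <= noise_dba < 50:
            lower = 65.0
        elif 50 <= noise_dba < 60:
            lower = noise_dba + 15.0          # +15 dB speech-to-noise ratio
        elif 60 <= noise_dba <= 70:
            lower = noise_dba + 10.0          # +10 dB speech-to-noise ratio
        else:
            raise ValueError("outside the 40-70 dB noise range studied")
        upper = 80.0 if noise_dba < 55 else 85.0   # ">= 85 dB" reported for higher noise
        return lower, upper

    for n in (40, 50, 60, 70):
        print(n, acceptable_speech_range(n))
    ```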

  11. A geospatial model of ambient sound pressure levels in the contiguous United States.

    PubMed

    Mennitt, Daniel; Sherrill, Kirk; Fristrup, Kurt

    2014-05-01

    This paper presents a model that predicts measured sound pressure levels using geospatial features such as topography, climate, hydrology, and anthropogenic activity. The model utilizes random forest, a tree-based machine learning algorithm, which does not incorporate a priori knowledge of source characteristics or propagation mechanics. The response data encompasses 270 000 h of acoustical measurements from 190 sites located in National Parks across the contiguous United States. The explanatory variables were derived from national geospatial data layers and cross validation procedures were used to evaluate model performance and identify variables with predictive power. Using the model, the effects of individual explanatory variables on sound pressure level were isolated and quantified to reveal systematic trends across environmental gradients. Model performance varies by the acoustical metric of interest; the seasonal L50 can be predicted with a median absolute deviation of approximately 3 dB. The primary application for this model is to generalize point measurements to maps expressing spatial variation in ambient sound levels. An example of this mapping capability is presented for Zion National Park and Cedar Breaks National Monument in southwestern Utah.
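
    A sketch of the modeling approach described here, a random-forest regression of a measured level metric (such as the seasonal L50) on geospatial explanatory variables, is shown below. The feature names, synthetic data, and hyperparameters are placeholders, not the study's actual explanatory layers or settings.

    ```python
    # Sketch of the modeling approach described above: a random-forest regression
    # of a measured sound-level metric (e.g. seasonal L50) on geospatial features.
    # The feature names, synthetic data, and hyperparameters are placeholders --
    # not the study's actual explanatory layers or settings.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_sites = 190
    X = pd.DataFrame({
        "elevation_m":      rng.uniform(0, 3500, n_sites),
        "dist_to_road_km":  rng.exponential(10, n_sites),
        "annual_precip_mm": rng.uniform(100, 2000, n_sites),
        "pct_tree_cover":   rng.uniform(0, 100, n_sites),
    })
    # synthetic "measured" L50: quieter far from roads, plus noise
    y = 35 + 5 * np.log1p(1 / (X["dist_to_road_km"] + 0.1)) + rng.normal(0, 3, n_sites)

    model = RandomForestRegressor(n_estimators=500, random_state=0)
    scores = cross_val_score(model, X, y, cv=5, scoring="neg_median_absolute_error")
    print("median absolute deviation per fold (dB):", -scores)

    model.fit(X, y)
    print(dict(zip(X.columns, model.feature_importances_.round(3))))
    ```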

  12. Noise trauma induced by a mousetrap--sound pressure level measurement of vole captive bolt devices.

    PubMed

    Frank, Matthias; Napp, Matthias; Lange, Joern; Grossjohann, Rico; Ekkernkamp, Axel; Beule, Achim G

    2010-05-01

    While ballistic parameters of vole captive bolt devices have been reported, there is no investigation on their hazardous potential to cause noise trauma. The aim of this experimental study was to measure the sound pressure levels of vole captive bolt devices. Two different shooting devices were examined with a modular precision sound level meter on an outdoor firing range. Measurements were taken in a semi-circular configuration with measuring points 0 degrees in front of the muzzle, 90 degrees at right angle of the muzzle, and 180 degrees behind the shooting device. Distances between muzzle and microphone were 0.5, 1, 2, 10, and 20 m. Sound pressure levels exceeded 130 dB(C) at any measuring point within the 20-m area. Highest measurements (more than 172 dB[C]) were taken in the 0 degrees direction at the 0.5-m distance for both shooting devices proving the hazardous potential of these gadgets to cause noise trauma.

  13. Influence of the steady background turbulence level on second sound dynamics in He II II

    NASA Astrophysics Data System (ADS)

    Dalban-Canassy, M.; Hilton, D. K.; Sciver, S. W. Van

    2007-01-01

    We report complementary results to our previous publication [Dalban-Canassy M, Hilton DK, Van Sciver SW. Influence of the steady background turbulence level on second sound dynamics in He II. Adv Cryo Eng 2006;51:371-8], both of which are aimed at determining the influence of background turbulence on the breakpoint energy of second sound pulses in He II. The apparatus consists of a channel 175 mm long and 242 mm 2 in cross section immersed in a saturated bath of He II at 1.7 K. A heater at the bottom end generates both background turbulence, through a low level steady heat flux (up to qs = 2.6 kW/m 2), and high intensity square second sound pulses ( qp = 100 or 200 kW/m 2) of variable duration Δ t0 (up to 1 ms). Two superconducting filament sensors, located 25.4 mm and 127 mm above the heater, measure the temperature profiles of the traveling pulses. We present here an analysis of the measurements gathered on the top sensor, and compare them to similar results for the bottom sensor [1]. The strong dependence of the breakpoint energy on the background heat flux previously illustrated is also observed on the top sensor. The present work shows that the ratio of energy received at the top sensor to that at the bottom sensor diminishes with increasing background heat flux.

  14. Exploratory investigation of sound pressure level in the wake of an oscillating airfoil in the vicinity of stall

    NASA Technical Reports Server (NTRS)

    Gray, R. B.; Pierce, G. A.

    1972-01-01

    Wind tunnel tests were performed on two oscillating two-dimensional lifting surfaces. The first of these models had an NACA 0012 airfoil section while the second simulated the classical flat plate. Both of these models had a mean angle of attack of 12 degrees while being oscillated in pitch about their midchord with a double amplitude of 6 degrees. Wake surveys of sound pressure level were made over a frequency range from 16 to 32 Hz and at various free stream velocities up to 100 ft/sec. The sound pressure level spectrum indicated significant peaks in sound intensity at the oscillation frequency and its first harmonic near the wake of both models. From a comparison of these data with that of a sound level meter, it is concluded that most of the sound intensity is contained within these peaks and no appreciable peaks occur at higher harmonics. It is concluded that within the wake the sound intensity is largely pseudosound while at one chord length outside the wake, it is largely true vortex sound. For both the airfoil and flat plate the peaks appear to be more strongly dependent upon the airspeed than on the oscillation frequency. Therefore reduced frequency does not appear to be a significant parameter in the generation of wake sound intensity.

  15. Sound level-dependent growth of N1m amplitude with low and high-frequency tones.

    PubMed

    Soeta, Yoshiharu; Nakagawa, Seiji

    2009-04-22

    The aim of this study was to determine whether the amplitude and/or latency of the N1m deflection of auditory-evoked magnetic fields are influenced by the level and frequency of sound. The results indicated that the amplitude of the N1m increased with sound level. The growth in amplitude with increasing sound level was almost constant with low frequencies (250-1000 Hz); however, this growth decreased with high frequencies (>2000 Hz). The behavior of the amplitude may reflect a difference in the increase in the activation of the peripheral and/or central auditory systems.

  16. Using Clinically Accessible Tools to Measure Sound Levels and Sleep Disruption in the ICU: A Prospective Multicenter Observational Study.

    PubMed

    Litton, Edward; Elliott, Rosalind; Thompson, Kelly; Watts, Nicola; Seppelt, Ian; Webb, Steven A R

    2017-06-01

    To use clinically accessible tools to determine unit-level and individual patient factors associated with sound levels and sleep disruption in a range of representative ICUs. A cross-sectional, observational study. Australian and New Zealand ICUs. All patients 16 years or over occupying an ICU bed on one of two Point Prevalence study days in 2015. Ambient sound was measured for 1 minute using an application downloaded to a personal mobile device. Bedside nurses also recorded the total time awake and number of awakenings for each patient overnight. The study included 539 participants from 39 ICUs, with sound levels recorded using an application downloaded to a personal mobile device. Maximum and mean sound levels were 78 dB (SD, 9) and 62 dB (SD, 8), respectively. Maximum sound levels were higher in ICUs with a sleep policy or protocol compared with those without: 81 dB (95% CI, 79-83) versus 77 dB (95% CI, 77-78); mean difference, 4 dB (95% CI, 0-2); p < 0.001. There was no significant difference in sound levels regardless of single room occupancy, mechanical ventilation status, or illness severity. Clinical nursing staff in all 39 ICUs were able to record sleep assessment in 15-minute intervals. The median time awake and number of prolonged disruptions were 3 hours (interquartile range, 1-4) and three (interquartile range, 2-5), respectively. Across a large number of ICUs, patients were exposed to high sound levels and substantial sleep disruption irrespective of factors including previous implementation of a sleep policy. Sound and sleep measurement using simple and accessible tools can facilitate future studies and could feasibly be implemented into clinical practice.

  17. Assessing Acoustic Sound Levels Associated with Active Source Seismic Surveys in Shallow Marine Environments

    NASA Astrophysics Data System (ADS)

    Bohnenstiehl, D. R.; Tolstoy, M.; Thode, A.; Diebold, J. B.; Webb, S. C.

    2004-12-01

    The potential effect of active source seismic research on marine mammal populations is a topic of increasing concern, and controversy surrounding such operations has begun to impact the planning and permitting of academic surveys [e.g., Malakoff, 2002 Science]. Although no causal relationship between marine mammal strandings and seismic exploration has been proven, any circumstantial evidence must be thoroughly investigated. A 2002 stranding of two beaked whales in the Gulf of California within 50 km of an R/V Ewing seismic survey has been a subject of concern for both marine seismologists and environmentalists. In order to better understand possible received levels for whales in the vicinity of these operations, modeling is combined with ground-truth calibration measurements. A wide-angle parabolic equation model, which is capable of including shear within the sediment and basement layers, is used to generate predictive models of low-frequency transmission loss within the Gulf of California. This work incorporates range-dependent bathymetry, sediment thickness, sound velocity structure and sub-bottom properties. Oceanic sound speed profiles are derived from the U.S. Navy's seasonal GDEM model and sediment thicknesses are taken from NOAA's worldwide database. The spectral content of the Ewing's 20-airgun seismic array is constrained by field calibration in the spring of 2003 [Tolstoy et al., 2004 GRL], indicating peak energies at frequencies below a few hundred Hz, with energy spectral density showing an approximate power-law decrease at higher frequencies (being ~40 dB below peak at 1 kHz). Transmission loss is estimated along a series of radials extending from multiple positions along the ship's track, with the directivity of the array accounted for by phase-shifting point sources that are scaled by the cube root of the individual airgun volumes. This allows the time-space history of low-frequency received levels to be reconstructed within the Gulf of California
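
    The directivity treatment mentioned above, summing phase-shifted point sources weighted by the cube root of the individual airgun volumes, can be sketched in a few lines. The gun positions, volumes, and evaluation frequency below are illustrative placeholders rather than the Ewing array's actual layout.

    ```python
    # Sketch of the stated directivity treatment: sum phase-shifted point sources,
    # each weighted by the cube root of its airgun volume, to get the far-field
    # pattern of the array at a given frequency.  Gun positions, volumes, and the
    # evaluation frequency are illustrative placeholders, not the Ewing's layout.
    import numpy as np

    c = 1500.0                                   # sound speed, m/s
    f = 100.0                                    # evaluation frequency, Hz
    k = 2 * np.pi * f / c                        # wavenumber

    x = np.linspace(-7.5, 7.5, 20)               # along-track gun positions (m), placeholder
    V = np.full(20, 145.0)                       # airgun volumes (cubic inches), placeholder
    w = np.cbrt(V)                               # amplitude ~ cube root of volume

    theta = np.radians(np.linspace(-90, 90, 361))
    # far-field sum of phase-shifted point sources along the array axis
    D = np.abs(np.exp(1j * k * np.outer(np.sin(theta), x)) @ w)
    D_db = 20 * np.log10(D / D.max())            # directivity relative to the peak

    print(f"width of the -3 dB main lobe: ~{np.degrees(np.ptp(theta[D_db > -3])):.1f} deg")
    ```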

  18. Concerns of the Institute of Transport Study and Research for reducing the sound level inside completely repaired buses. [noise and vibration control

    NASA Technical Reports Server (NTRS)

    Groza, A.; Calciu, J.; Nicola, I.; Ionasek, A.

    1974-01-01

    Sound level measurements on noise sources on buses are used to observe the effects of attenuating acoustic pressure levels inside the bus by sound-proofing during complete repair. A spectral analysis of the sound level as a function of motor speed, bus speed along the road, and the category of the road is reported.

  19. Topography of sound level representation in the FM sweep selective region of the pallid bat auditory cortex.

    PubMed

    Measor, Kevin; Yarrow, Stuart; Razak, Khaleel A

    2018-05-26

    Sound level processing is a fundamental function of the auditory system. To determine how the cortex represents sound level, it is important to quantify how changes in level alter the spatiotemporal structure of cortical ensemble activity. This is particularly true for echolocating bats that have control over, and often rapidly adjust, call level to actively change echo level. To understand how cortical activity may change with sound level, here we mapped response rate and latency changes with sound level in the auditory cortex of the pallid bat. The pallid bat uses a 60-30 kHz downward frequency modulated (FM) sweep for echolocation. Neurons tuned to frequencies between 30 and 70 kHz in the auditory cortex are selective for the properties of FM sweeps used in echolocation forming the FM sweep selective region (FMSR). The FMSR is strongly selective for sound level between 30 and 50 dB SPL. Here we mapped the topography of level selectivity in the FMSR using downward FM sweeps and show that neurons with more monotonic rate level functions are located in caudomedial regions of the FMSR overlapping with high frequency (50-60 kHz) neurons. Non-monotonic neurons dominate the FMSR, and are distributed across the entire region, but there is no evidence for amplitopy. We also examined how first spike latency of FMSR neurons change with sound level. The majority of FMSR neurons exhibit paradoxical latency shift wherein the latency increases with sound level. Moreover, neurons with paradoxical latency shifts are more strongly level selective and are tuned to lower sound level than neurons in which latencies decrease with level. These data indicate a clustered arrangement of neurons according to monotonicity, with no strong evidence for finer scale topography, in the FMSR. The latency analysis suggests mechanisms for strong level selectivity that is based on relative timing of excitatory and inhibitory inputs. Taken together, these data suggest how the spatiotemporal

  20. Distribution of standing-wave errors in real-ear sound-level measurements.

    PubMed

    Richmond, Susan A; Kopun, Judy G; Neely, Stephen T; Tan, Hongyang; Gorga, Michael P

    2011-05-01

    Standing waves can cause measurement errors when sound-pressure level (SPL) measurements are performed in a closed ear canal, e.g., during probe-microphone system calibration for distortion-product otoacoustic emission (DPOAE) testing. Alternative calibration methods, such as forward-pressure level (FPL), minimize the influence of standing waves by calculating the forward-going sound waves separate from the reflections that cause errors. Previous research compared test performance (Burke et al., 2010) and threshold prediction (Rogers et al., 2010) using SPL and multiple FPL calibration conditions, and surprisingly found no significant improvements when using FPL relative to SPL, except at 8 kHz. The present study examined the calibration data collected by Burke et al. and Rogers et al. from 155 human subjects in order to describe the frequency location and magnitude of standing-wave pressure minima to see if these errors might explain trends in test performance. Results indicate that while individual results varied widely, pressure variability was larger around 4 kHz and smaller at 8 kHz, consistent with the dimensions of the adult ear canal. The present data suggest that standing-wave errors are not responsible for the historically poor (8 kHz) or good (4 kHz) performance of DPOAE measures at specific test frequencies.

  1. Lateral attenuation of aircraft sound levels over an acoustically hard water surface : Logan airport study

    DOT National Transportation Integrated Search

    2002-01-31

    Accurate modeling of the lateral attenuation of sound is essential for accurate prediction of aircraft noise. Lateral attenuation contains many aspects of sound generation and propagation, including ground effects (sometimes referred to ...

  2. Sources and Levels of Ambient Ocean Sound near the Antarctic Peninsula

    PubMed Central

    Dziak, Robert P.; Bohnenstiehl, DelWayne R.; Stafford, Kathleen M.; Matsumoto, Haruyoshi; Park, Minkyu; Lee, Won Sang; Fowler, Matt J.; Lau, Tai-Kwan; Haxel, Joseph H.; Mellinger, David K.

    2015-01-01

    Arrays of hydrophones were deployed within the Bransfield Strait and Scotia Sea (Antarctic Peninsula region) from 2005 to 2009 to record ambient ocean sound at frequencies of up to 125 and 500 Hz. Icequakes, which are broadband, short duration signals derived from fracturing of large free-floating icebergs, are a prominent feature of the ocean soundscape. Icequake activity peaks during austral summer and is minimum during winter, likely following freeze-thaw cycles. Iceberg grounding and rapid disintegration also releases significant acoustic energy, equivalent to large-scale geophysical events. Overall ambient sound levels can be as much as ~10–20 dB higher in the open, deep ocean of the Scotia Sea compared to the relatively shallow Bransfield Strait. Noise levels become lowest during the austral winter, as sea-ice cover suppresses wind and wave noise. Ambient noise levels are highest during austral spring and summer, as surface noise, ice cracking and biological activity intensifies. Vocalizations of blue (Balaenoptera musculus) and fin (B. physalus) whales also dominate the long-term spectra records in the 15–28 and 89 Hz bands. Blue whale call energy is a maximum during austral summer-fall in the Drake Passage and Bransfield Strait when ambient noise levels are a maximum and sea-ice cover is a minimum. Fin whale vocalizations were also most common during austral summer-early fall months in both the Bransfield Strait and Scotia Sea. The hydrophone data overall do not show sustained anthropogenic sources (ships and airguns), likely due to low coastal traffic and the typically rough weather and sea conditions of the Southern Ocean. PMID:25875205

  3. Sources and levels of ambient ocean sound near the antarctic peninsula

    DOE PAGES

    Dziak, Robert P.; Bohnenstiehl, DelWayne R.; Stafford, Kathleen M.; ...

    2015-04-14

    Arrays of hydrophones were deployed within the Bransfield Strait and Scotia Sea (Antarctic Peninsula region) from 2005 to 2009 to record ambient ocean sound at frequencies of up to 125 and 500 Hz. Icequakes, which are broadband, short duration signals derived from fracturing of large free-floating icebergs, are a prominent feature of the ocean soundscape. Icequake activity peaks during austral summer and is minimum during winter, likely following freeze-thaw cycles. Iceberg grounding and rapid disintegration also releases significant acoustic energy, equivalent to large-scale geophysical events. Overall ambient sound levels can be as much as ~10–20 dB higher in the open, deep ocean of the Scotia Sea compared to the relatively shallow Bransfield Strait. Noise levels become lowest during the austral winter, as sea-ice cover suppresses wind and wave noise. Ambient noise levels are highest during austral spring and summer, as surface noise, ice cracking and biological activity intensifies. Vocalizations of blue (Balaenoptera musculus) and fin (B. physalus) whales also dominate the long-term spectra records in the 15–28 and 89 Hz bands. Blue whale call energy is a maximum during austral summer-fall in the Drake Passage and Bransfield Strait when ambient noise levels are a maximum and sea-ice cover is a minimum. Fin whale vocalizations were also most common during austral summer-early fall months in both the Bransfield Strait and Scotia Sea. The hydrophone data overall do not show sustained anthropogenic sources (ships and airguns), likely due to low coastal traffic and the typically rough weather and sea conditions of the Southern Ocean.

  4. Sources and levels of ambient ocean sound near the Antarctic Peninsula.

    PubMed

    Dziak, Robert P; Bohnenstiehl, DelWayne R; Stafford, Kathleen M; Matsumoto, Haruyoshi; Park, Minkyu; Lee, Won Sang; Fowler, Matt J; Lau, Tai-Kwan; Haxel, Joseph H; Mellinger, David K

    2015-01-01

    Arrays of hydrophones were deployed within the Bransfield Strait and Scotia Sea (Antarctic Peninsula region) from 2005 to 2009 to record ambient ocean sound at frequencies of up to 125 and 500 Hz. Icequakes, which are broadband, short duration signals derived from fracturing of large free-floating icebergs, are a prominent feature of the ocean soundscape. Icequake activity peaks during austral summer and is minimum during winter, likely following freeze-thaw cycles. Iceberg grounding and rapid disintegration also releases significant acoustic energy, equivalent to large-scale geophysical events. Overall ambient sound levels can be as much as ~10-20 dB higher in the open, deep ocean of the Scotia Sea compared to the relatively shallow Bransfield Strait. Noise levels become lowest during the austral winter, as sea-ice cover suppresses wind and wave noise. Ambient noise levels are highest during austral spring and summer, as surface noise, ice cracking and biological activity intensifies. Vocalizations of blue (Balaenoptera musculus) and fin (B. physalus) whales also dominate the long-term spectra records in the 15-28 and 89 Hz bands. Blue whale call energy is a maximum during austral summer-fall in the Drake Passage and Bransfield Strait when ambient noise levels are a maximum and sea-ice cover is a minimum. Fin whale vocalizations were also most common during austral summer-early fall months in both the Bransfield Strait and Scotia Sea. The hydrophone data overall do not show sustained anthropogenic sources (ships and airguns), likely due to low coastal traffic and the typically rough weather and sea conditions of the Southern Ocean.

  5. Metal Sounds Stiffer than Drums for Ears, but Not Always for Hands: Low-Level Auditory Features Affect Multisensory Stiffness Perception More than High-Level Categorical Information

    PubMed Central

    Liu, Juan; Ando, Hiroshi

    2016-01-01

    Most real-world events stimulate multiple sensory modalities simultaneously. Usually, the stiffness of an object is perceived haptically. However, auditory signals also contain stiffness-related information, and people can form impressions of stiffness from the different impact sounds of metal, wood, or glass. To understand whether there is any interaction between auditory and haptic stiffness perception, and if so, whether the inferred material category is the most relevant auditory information, we conducted experiments using a force-feedback device and the modal synthesis method to present haptic stimuli and impact sound in accordance with participants’ actions, and to modulate low-level acoustic parameters, i.e., frequency and damping, without changing the inferred material categories of sound sources. We found that metal sounds consistently induced an impression of stiffer surfaces than did drum sounds in the audio-only condition, but participants haptically perceived surfaces with modulated metal sounds as significantly softer than the same surfaces with modulated drum sounds, which directly opposes the impression induced by these sounds alone. This result indicates that, although the inferred material category is strongly associated with audio-only stiffness perception, low-level acoustic parameters, especially damping, are more tightly integrated with haptic signals than the material category is. Frequency played an important role in both audio-only and audio-haptic conditions. Our study provides evidence that auditory information influences stiffness perception differently in unisensory and multisensory tasks. Furthermore, the data demonstrated that sounds with higher frequency and/or shorter decay time tended to be judged as stiffer, and contact sounds of stiff objects had no effect on the haptic perception of soft surfaces. We argue that the intrinsic physical relationship between object stiffness and acoustic parameters may be applied as prior
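
    The modal synthesis method mentioned above renders an impact as a sum of exponentially damped sinusoids, which is what allows frequency and damping to be varied independently of the nominal material category. The sketch below illustrates the idea; the mode frequencies, damping rates, and amplitudes are illustrative placeholders, not the study's stimulus parameters.

    ```python
    # Sketch of modal synthesis as used for the impact sounds above: an impact is
    # rendered as a sum of exponentially damped sinusoids, so frequency and damping
    # can be adjusted independently of the nominal material category.  The mode
    # frequencies, dampings, and amplitudes below are illustrative placeholders.
    import numpy as np

    def modal_impact(modes, fs=44100, duration=1.0):
        """modes: list of (frequency_hz, damping_per_s, amplitude) tuples."""
        t = np.arange(0, duration, 1 / fs)
        y = sum(a * np.exp(-d * t) * np.sin(2 * np.pi * f * t) for f, d, a in modes)
        return y / np.max(np.abs(y))

    # "metal-like": high, slowly decaying partials; "drum-like": low, fast-decaying ones
    metal_like = modal_impact([(820, 4, 1.0), (1840, 6, 0.6), (3150, 9, 0.4)])
    drum_like = modal_impact([(140, 35, 1.0), (330, 45, 0.5), (560, 60, 0.3)])
    ```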

  6. Sound pressure levels generated at risk volume steps of portable listening devices: types of smartphone and genres of music.

    PubMed

    Kim, Gibbeum; Han, Woojae

    2018-05-01

    The present study estimated the sound pressure levels of various music genres at the volume steps that contemporary smartphones deliver, because these levels put the listener at potential risk for hearing loss. Using six different smartphones (Galaxy S6, Galaxy Note 3, iPhone 5S, iPhone 6, LG G2, and LG G3), the sound pressure levels of three genres of K-pop music (dance-pop, hip-hop, and pop-ballad) and a Billboard pop chart of assorted genres were measured through an earbud, using a sound level meter and artificial mastoid, at the first risk volume step (the step at which the smartphone displays a risk warning) as well as at each consecutive higher volume. Among the six smartphones, the first risk volume step of the Galaxy S6 had the lowest output level (84.1 dBA) and that of the LG G2 the highest (92.4 dBA), a significant difference. As the volume step increased, so did the sound pressure levels. The iPhone 6 was loudest (113.1 dBA) at the maximum volume step. Of the music genres, dance-pop showed the highest output level (91.1 dBA) across all smartphones. Within the frequency range of 20-20,000 Hz, the sound pressure level peaked at 2000 Hz for all smartphones. The results showed that the sound pressure levels of either the first volume step or the maximum volume step were not the same for the different smartphone models and genres of music, which means that the risk volume warning and its output levels should be unified across devices for their users. In addition, the risk volume steps proposed by the latest smartphone models are high enough to cause noise-induced hearing loss if their users habitually listen to music at those levels.

  7. Do high sound pressure levels of crowing in roosters necessitate passive mechanisms for protection against self-vocalization?

    PubMed

    Claes, Raf; Muyshondt, Pieter G G; Dirckx, Joris J J; Aerts, Peter

    2018-02-01

    High sound pressure levels (>120 dB) cause damage or death of the hair cells of the inner ear, hence causing hearing loss. Vocalization differences are present between hens and roosters. Crowing in roosters is reported to produce sound pressure levels of 100 dB measured at a distance of 1 m. In this study we measured the sound pressure levels that exist at the entrance of the outer ear canal. We hypothesize that roosters may benefit from a passive protective mechanism while hens do not require such a mechanism. Audio recordings at the level of the entrance of the outer ear canal of crowing roosters, made in this study, indeed show that a protective mechanism is needed as sound pressure levels can reach amplitudes of 142.3 dB. Audio recordings made at varying distances from the crowing rooster show that at a distance of 0.5 m sound pressure levels already drop to 102 dB. Micro-CT scans of a rooster and chicken head show that in roosters the auditory canal closes when the beak is opened. In hens the diameter of the auditory canal only narrows but does not close completely. A morphological difference between the sexes in shape of a bursa-like slit which occurs in the outer ear canal causes the outer ear canal to close in roosters but not in hens. Copyright © 2017 Elsevier GmbH. All rights reserved.

  8. Acoustic characterization of a nonlinear vibroacoustic absorber at low frequencies and high sound levels

    NASA Astrophysics Data System (ADS)

    Chauvin, A.; Monteil, M.; Bellizzi, S.; Côte, R.; Herzog, Ph.; Pachebat, M.

    2018-03-01

    A nonlinear vibroacoustic absorber (Nonlinear Energy Sink: NES), involving a clamped thin membrane made of latex, is assessed in the acoustic domain. This NES is here considered as a one-port acoustic system, analyzed at low frequencies and for increasing excitation levels. This dynamic and frequency range requires a suitable experimental technique, which is presented first. It involves a specific impedance tube able to deal with samples of sufficient size and to reach high sound levels with a guaranteed linear response thanks to a specific acoustic source. The identification method presented here requires a single pressure measurement, and is calibrated from a set of known acoustic loads. The NES reflection coefficient is then estimated at increasing source levels, showing its strong level dependency. This is presented as a means to understand energy dissipation. The results of the experimental tests are first compared to a nonlinear viscoelastic model of the membrane absorber. In a second step, a family of one-degree-of-freedom models, treated as equivalent Helmholtz resonators, is identified from the measurements, allowing a parametric description of the NES behavior over a wide range of levels.

  9. Multi-level basis selection of wavelet packet decomposition tree for heart sound classification.

    PubMed

    Safara, Fatemeh; Doraisamy, Shyamala; Azman, Azreen; Jantan, Azrul; Abdullah Ramaiah, Asri Ranga

    2013-10-01

    Wavelet packet transform decomposes a signal into a set of orthonormal bases (nodes) and provides opportunities to select an appropriate set of these bases for feature extraction. In this paper, multi-level basis selection (MLBS) is proposed to preserve the most informative bases of a wavelet packet decomposition tree through removing less informative bases by applying three exclusion criteria: frequency range, noise frequency, and energy threshold. MLBS achieved an accuracy of 97.56% for classifying normal heart sound, aortic stenosis, mitral regurgitation, and aortic regurgitation. MLBS is a promising basis selection to be suggested for signals with a small range of frequencies. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
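
    A simplified sketch in the spirit of this approach: decompose a signal with a wavelet packet tree, then keep only the terminal nodes whose frequency band falls within a band of interest and whose share of the total energy exceeds a threshold. The wavelet, decomposition depth, band limits, and threshold below are assumptions rather than the published MLBS settings, and the noise-frequency criterion is omitted for brevity.

    ```python
    # Simplified sketch of wavelet-packet basis selection in the spirit of MLBS:
    # decompose a (toy) heart-sound segment, then keep only terminal nodes whose
    # frequency band lies in an assumed range of interest and whose energy share
    # exceeds a threshold.  Wavelet, depth, band limits, and threshold are
    # assumptions, not the published MLBS settings.
    import numpy as np
    import pywt

    fs = 2000.0                                        # sampling rate (Hz), assumed
    t = np.arange(0, 1.0, 1 / fs)
    signal = np.sin(2 * np.pi * 60 * t) + 0.3 * np.random.randn(t.size)  # toy signal

    level = 5
    wp = pywt.WaveletPacket(data=signal, wavelet="db4", mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")          # frequency-ordered terminal nodes
    bandwidth = (fs / 2) / len(nodes)

    energies = np.array([np.sum(np.square(n.data)) for n in nodes])
    energy_share = energies / energies.sum()

    selected = []
    for i, node in enumerate(nodes):
        f_lo, f_hi = i * bandwidth, (i + 1) * bandwidth
        in_range = f_hi > 20 and f_lo < 600            # assumed band of interest
        informative = energy_share[i] > 0.01           # assumed energy threshold
        if in_range and informative:
            selected.append((node.path, round(f_lo), round(f_hi), round(energy_share[i], 3)))

    print(selected)
    ```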

  10. Peak Sound Pressure Levels and Associated Auditory Risk from an H[subscript 2]-Air "Egg-Splosion"

    ERIC Educational Resources Information Center

    Dolhun, John J.

    2016-01-01

    The noise level from exploding chemical demonstrations and the effect they could have on audiences, especially young children, need attention. Auditory risk from H[subscript 2]-O[subscript 2] balloon explosions has been studied, but no studies have been done on H[subscript 2]-air "eggsplosions." The peak sound pressure level (SPL) was measured…

  11. Contralateral routing of signals disrupts monaural level and spectral cues to sound localisation on the horizontal plane.

    PubMed

    Pedley, Adam J; Kitterick, Pádraig T

    2017-09-01

    Contra-lateral routing of signals (CROS) devices re-route sound between the deaf and hearing ears of unilaterally-deaf individuals. This rerouting would be expected to disrupt access to monaural level cues that can support monaural localisation in the horizontal plane. However, such a detrimental effect has not been confirmed by clinical studies of CROS use. The present study aimed to exercise strict experimental control over the availability of monaural cues to localisation in the horizontal plane and the fitting of the CROS device to assess whether signal routing can impair the ability to locate sources of sound and, if so, whether CROS selectively disrupts monaural level or spectral cues to horizontal location, or both. Unilateral deafness and CROS device use were simulated in twelve normal hearing participants. Monaural recordings of broadband white noise presented from three spatial locations (-60°, 0°, and +60°) were made in the ear canal of a model listener using a probe microphone with and without a CROS device. The recordings were presented to participants via an insert earphone placed in their right ear. The recordings were processed to disrupt either monaural level or spectral cues to horizontal sound location by roving presentation level or the energy across adjacent frequency bands, respectively. Localisation ability was assessed using a three-alternative forced-choice spatial discrimination task. Participants localised above chance levels in all conditions. Spatial discrimination accuracy was poorer when participants only had access to monaural spectral cues compared to when monaural level cues were available. CROS use impaired localisation significantly regardless of whether level or spectral cues were available. For both cues, signal re-routing had a detrimental effect on the ability to localise sounds originating from the side of the deaf ear (-60°). CROS use also impaired the ability to use level cues to localise sounds originating from
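
    The two stimulus manipulations described above can be sketched simply: rove the overall presentation level across trials to make monaural level cues unreliable, and independently rove the energy of adjacent frequency bands to degrade monaural spectral cues. The rove range, number of bands, and perturbation depth in the sketch below are assumptions, not the study's parameters.

    ```python
    # Sketch of the two cue-disruption manipulations described above: (1) rove the
    # overall presentation level across trials so monaural level cues become
    # unreliable, and (2) rove the energy of adjacent frequency bands to degrade
    # monaural spectral cues.  The rove range, band count, and perturbation depth
    # are assumptions, not the study's parameters.
    import numpy as np

    rng = np.random.default_rng(1)

    def rove_level(x, rove_db=10.0):
        """Scale the whole stimulus by a random gain drawn from +/- rove_db/2."""
        gain_db = rng.uniform(-rove_db / 2, rove_db / 2)
        return x * 10 ** (gain_db / 20)

    def rove_spectrum(x, n_bands=16, rove_db=10.0):
        """Apply an independent random gain to each of n_bands FFT bands."""
        X = np.fft.rfft(x)
        edges = np.linspace(0, X.size, n_bands + 1).astype(int)
        for lo, hi in zip(edges[:-1], edges[1:]):
            X[lo:hi] *= 10 ** (rng.uniform(-rove_db / 2, rove_db / 2) / 20)
        return np.fft.irfft(X, n=x.size)

    noise = rng.standard_normal(48000)          # 1 s of broadband noise at 48 kHz
    level_roved = rove_level(noise)
    spectrum_roved = rove_spectrum(noise)
    ```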

  12. Response Growth With Sound Level in Auditory-Nerve Fibers After Noise-Induced Hearing Loss

    PubMed Central

    Heinz, Michael G.; Young, Eric D.

    2010-01-01

    People with sensorineural hearing loss are often constrained by a reduced acoustic dynamic range associated with loudness recruitment; however, the neural correlates of loudness and recruitment are still not well understood. The growth of auditory-nerve (AN) activity with sound level was compared in normal-hearing cats and in cats with a noise-induced hearing loss to test the hypothesis that AN-fiber rate-level functions are steeper in impaired ears. Stimuli included best-frequency and fixed-frequency tones, broadband noise, and a brief speech token. Three types of impaired responses were observed. 1) Fibers with rate-level functions that were similar across all stimuli typically had broad tuning, consistent with outer-hair-cell (OHC) damage. 2) Fibers with a wide dynamic range and shallow slope above threshold often retained sharp tuning, consistent with primarily inner-hair-cell (IHC) damage. 3) Fibers with very steep rate-level functions for all stimuli had thresholds above approximately 80 dB SPL and very broad tuning, consistent with severe IHC and OHC damage. Impaired rate-level slopes were on average shallower than normal for tones, and were steeper in only limited conditions. There was less variation in rate-level slopes across stimuli in impaired fibers, presumably attributable to the lack of suppression-induced reductions in slopes for complex stimuli relative to BF-tone slopes. Sloping saturation was observed less often in impaired fibers. These results illustrate that AN fibers do not provide a simple representation of the basilar-membrane I/O function and suggest that both OHC and IHC damage can affect AN response growth. PMID:14534289

  13. Pressure sound level measurements at an educational environment in Goiânia, Goiás, Brazil

    NASA Astrophysics Data System (ADS)

    Costa, J. J. L.; do Nascimento, E. O.; de Oliveira, L. N.; Caldas, L. V. E.

    2018-03-01

    In this work, 25 points located on the ground floor of the Federal Institute of Education, Science and Technology of Goiás (IFG), Campus Goiânia, were analyzed during the morning periods of two Saturdays. The sound pressure levels were measured in internal and external environments during routine activities, as part of environmental monitoring of the institution. The initial hypothesis was that an amusement park (Mutirama Park) was responsible for noise pollution at the institute, but the results showed sound pressure levels within the campus environment in accordance with the municipal legislation of Goiânia at all points.

  14. Sound levels, hearing habits and hazards of using portable cassette players

    NASA Astrophysics Data System (ADS)

    Hellström, P.-A.; Axelsson, A.

    1988-12-01

    The maximum output sound pressure level (SPL) from different types of portable cassette players (PCP's) and different headphones was analyzed by using KEMAR in one-third octave bands. The equivalent free-field dB(A) level (EqA-FFSL) was computed from the one-third octave bands corrected by the free-field to the eardrum transfer function. The dB(A) level varied between 104 dB for a low-cost PCP with supra-aural headphones (earphones with headbands and foam pads fitting against the pinna) and 126 dB for a high quality PCP with semi-aural headphones (small earphones without headbands to be used in the concha of the external ear). The cassette tapes used in this study were recorded with music, white noise, narrowband noise and pure tones. The equivalent and maximum SPLs were measured in the ear canal (1 mm from the eardrum) with the use of mini-microphones in 15 young subjects listening to pop music from PCP's at the highest level they considered comfortable. These SPL measurements corresponded to 112 dB(A) in free field. In a temporary threshold shift (TTS) study, ten teenagers (four girls and six boys) listened to pop music for 1 h with PCP's at a level they enjoyed. The mean TTS value was 5-10 dB for frequencies between 1 and 8 kHz. In one subject the maximum TTS was 35 dB at 5-6 kHz. In order to acquire information about listening habits among youngsters using PCP's, 154 seventh and eighth graders (age 14-15) were interviewed. They used PCP's much less than expected during most of the year, but an increase was reported during the summer holidays.
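
    The equivalent A-weighted level described above is obtained by applying the A-weighting corrections to band levels and summing on an energy basis. The short example below shows that computation for a handful of bands; the band levels are invented and the A-weighting values are rounded, so this is an illustration of the calculation rather than the study's data.

```python
import math

# Illustrative band levels (dB SPL) at selected centre frequencies, and the
# standard A-weighting corrections (dB, rounded) at those frequencies.
centres_hz =    [250,  500, 1000, 2000, 4000, 8000]
band_level_db = [ 90,   95,  100,  102,   98,   92]
a_weight_db =   [-8.6, -3.2,  0.0,  1.2,  1.0, -1.1]

# Energy sum of the A-weighted band levels gives the overall dB(A) value.
total_a = 10.0 * math.log10(sum(10.0 ** ((L + A) / 10.0)
                                for L, A in zip(band_level_db, a_weight_db)))
print(f"Overall level: {total_a:.1f} dB(A)")
```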

  15. Development and Testing of a High Level Axial Array Duct Sound Source for the NASA Flow Impedance Test Facility

    NASA Technical Reports Server (NTRS)

    Johnson, Marty E.; Fuller, Chris R.; Jones, Michael G. (Technical Monitor)

    2000-01-01

    In this report, both a frequency domain method for creating high-level harmonic excitation and a time domain inverse method for creating large pulses in a duct are developed. To create controllable, high-level sound, an axial array of six JBL-2485 compression drivers was used. The pressure downstream is considered as the input voltages to the sources filtered by the natural dynamics of the sources and the duct. It is shown that this dynamic behavior can be compensated for by filtering the inputs such that both time delays and phase changes are taken into account. The methods developed maximize the sound output while (i) keeping within the power constraints of the sources and (ii) maintaining a suitable level of reproduction accuracy. Harmonic excitation pressure levels of over 155 dB were created experimentally over a wide frequency range (1000-4000 Hz). For pulse excitation there is a tradeoff between accuracy of reproduction and sound level achieved. However, the accurate reproduction of a pulse with a maximum pressure level over 6500 Pa was achieved experimentally. It was also shown that the throat connecting the driver to the duct makes it difficult to inject sound just below the cut-on of each acoustic mode (pre cut-on loading effect).
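
    The compensation idea described above, filtering the drive signals so that the source and duct dynamics (including delays and phase shifts) are undone, can be sketched as a regularized frequency-domain inversion. The transfer function, regularization constant, and signals below are placeholders, not the system model identified in the report.

```python
import numpy as np

def precompensate(desired, impulse_response, reg=1e-3):
    """Drive signal whose output through `impulse_response` approximates
    `desired`, via regularized frequency-domain inversion."""
    n = len(desired) + len(impulse_response) - 1
    D = np.fft.rfft(desired, n)
    H = np.fft.rfft(impulse_response, n)
    # Regularization keeps the inverse bounded where the response is weak
    # (e.g. just below the cut-on of a duct mode).
    X = D * np.conj(H) / (np.abs(H) ** 2 + reg)
    return np.fft.irfft(X, n)

# Toy example: a delayed, smoothed "driver + throat + duct" response and a pulse target.
h = np.zeros(256); h[40] = 1.0                     # pure delay ...
h = np.convolve(h, np.ones(8) / 8.0)               # ... plus a little low-pass smoothing
target = np.zeros(1024); target[512] = 1.0         # desired pulse at the downstream point
drive = precompensate(target, h)
achieved = np.convolve(drive, h)[: len(target)]
print("max reproduction error:", np.max(np.abs(achieved - target)))
```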

  16. Encoding of speech sounds at auditory brainstem level in good and poor hearing aid performers.

    PubMed

    Shetty, Hemanth Narayan; Puttabasappa, Manjula

    Hearing aids are prescribed to alleviate loss of audibility. It has been reported that about 31% of hearing aid users reject their own hearing aid because of annoyance towards background noise. The reason for dissatisfaction can be located anywhere from the hearing aid microphone till the integrity of neurons along the auditory pathway. To measure spectra from the output of hearing aid at the ear canal level and frequency following response recorded at the auditory brainstem from individuals with hearing impairment. A total of sixty participants having moderate sensorineural hearing impairment with age range from 15 to 65 years were involved. Each participant was classified as either Good or Poor Hearing aid Performers based on acceptable noise level measure. Stimuli /da/ and /si/ were presented through loudspeaker at 65dB SPL. At the ear canal, the spectra were measured in the unaided and aided conditions. At auditory brainstem, frequency following response were recorded to the same stimuli from the participants. Spectrum measured in each condition at ear canal was same in good hearing aid performers and poor hearing aid performers. At brainstem level, better F 0 encoding; F 0 and F 1 energies were significantly higher in good hearing aid performers than in poor hearing aid performers. Though the hearing aid spectra were almost same between good hearing aid performers and poor hearing aid performers, subtle physiological variations exist at the auditory brainstem. The result of the present study suggests that neural encoding of speech sound at the brainstem level might be mediated distinctly in good hearing aid performers from that of poor hearing aid performers. Thus, it can be inferred that subtle physiological changes are evident at the auditory brainstem in a person who is willing to accept noise from those who are not willing to accept noise. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier

  17. Application of the Extreme Value Distribution to Estimate the Uncertainty of Peak Sound Pressure Levels at the Workplace.

    PubMed

    Lenzuni, Paolo

    2015-07-01

    The purpose of this article is to develop a method for the statistical inference of the maximum peak sound pressure level and of the associated uncertainty. Both quantities are requested by the EU directive 2003/10/EC for a complete and solid assessment of the noise exposure at the workplace. Based on the characteristics of the sound pressure waveform, it is hypothesized that the distribution of the measured peak sound pressure levels follows the extreme value distribution. The maximum peak level is estimated as the largest member of a finite population following this probability distribution. The associated uncertainty is also discussed, taking into account not only the contribution due to the incomplete sampling but also the contribution due to the finite precision of the instrumentation. The largest of the set of measured peak levels underestimates the maximum peak sound pressure level. The underestimate can be as large as 4 dB if the number of measurements is limited to 3-4, which is common practice in occupational noise assessment. The extended uncertainty is also quite large (~2.5 dB), with a weak dependence on the sampling details. Following the procedure outlined in this article, a reliable comparison between the peak sound pressure levels measured in a workplace and the EU directive action limits is possible. Non-compliance can occur even when the largest of the set of measured peak levels is several dB below such limits. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
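
    A minimal sketch of the inference step is shown below, assuming the measured peak levels follow a Gumbel (extreme value type I) distribution. The peak values, the assumed number of peak events N, and the use of the mean of the maximum as the point estimate are illustrative choices; the article's exact estimator and uncertainty budget may differ.

```python
import numpy as np
from scipy.stats import gumbel_r

# Illustrative peak levels (dB) from a small measurement campaign.
peaks_db = np.array([128.4, 130.1, 127.9, 131.2, 129.5])

loc, scale = gumbel_r.fit(peaks_db)          # fit the extreme value (Gumbel) distribution

# The maximum of N independent Gumbel(loc, scale) variables is itself
# Gumbel(loc + scale*ln N, scale); its mean adds scale times Euler's constant.
N = 200                                       # assumed number of peak events in the full shift
expected_max = loc + scale * np.log(N) + scale * np.euler_gamma
print(f"largest measured peak  : {peaks_db.max():.1f} dB")
print(f"estimated shift maximum: {expected_max:.1f} dB")
```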

  18. Possibilities of psychoacoustics to determine sound quality

    NASA Astrophysics Data System (ADS)

    Genuit, Klaus

    For some years, acoustic engineers have increasingly become aware of the importance of analyzing and minimizing noise problems not only with regard to the A-weighted sound pressure level, but also with regard to designing sound quality. It is relatively easy to determine the A-weighted SPL according to international standards. However, the objective evaluation needed to describe subjectively perceived sound quality, taking into account psychoacoustic parameters such as loudness, roughness, fluctuation strength, sharpness and so forth, is more difficult. On the one hand, the psychoacoustic measurement procedures known so far have not yet been standardized. On the other hand, they have only been tested in laboratories by means of listening tests under free-field conditions with one sound source and simple signals. Therefore, the results achieved cannot be transferred without difficulty to complex sound situations with several spatially distributed sound sources. Due to the directionality and selectivity of human hearing, individual sound events can be singled out among many. As early as the late seventies, a new binaural Artificial Head Measurement System was developed which met the requirements of the automobile industry in terms of measurement technology. The first industrial application of the Artificial Head Measurement System was in 1981. Since that time the system has been further developed, particularly through the cooperation between HEAD acoustics and Mercedes-Benz. In addition to a calibratable Artificial Head Measurement System which is compatible with standard measurement technologies and has transfer characteristics comparable to human hearing, a Binaural Analysis System is now also available. This system permits the analysis of binaural signals with physical and psychoacoustic procedures. Moreover, the signals to be analyzed can be simultaneously monitored through headphones and manipulated in the time and frequency domain so that those signal components responsible for noise

  19. Discovery of Sound in the Sea (DOSITS) Website Development

    DTIC Science & Technology

    2013-03-04

    Excerpted site topics (Science of Sound > Sounds in the Sea): How does marine life affect ocean sound levels? How will ocean acidification affect ocean sound levels? How does shipping affect ocean sound levels?

  20. Multichannel loudness compensation method based on segmented sound pressure level for digital hearing aids

    NASA Astrophysics Data System (ADS)

    Liang, Ruiyu; Xi, Ji; Bao, Yongqiang

    2017-07-01

    To improve the performance of gain compensation based on a three-segment sound pressure level (SPL) in hearing aids, an improved multichannel loudness compensation method based on an eight-segment SPL was proposed. First, a uniform cosine modulated filter bank was designed. Adjacent channels with low or gradual slopes were then adaptively merged to obtain the corresponding non-uniform cosine modulated filter bank according to the audiogram of the hearing-impaired person. Second, the input speech was decomposed into sub-band signals and the SPL of every sub-band signal was computed, while the audible SPL range from 0 dB SPL to 120 dB SPL was divided equally into eight segments. Based on these segments, a prescription formula was designed to compute a more detailed compensation gain according to the audiogram and the computed SPL. Finally, the enhanced signal was synthesized. Objective experiments showed that the signals decomposed by the cosine modulated filter bank have little distortion, and that the hearing aid speech perception index (HASPI) and hearing aid speech quality index (HASQI) increased by 0.083 and 0.082 on average, respectively. Subjective experiments showed the proposed algorithm can effectively improve speech recognition for six hearing-impaired persons.
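
    The gain stage described above can be illustrated with a simplified single-band sketch: compute the sub-band SPL, look it up in a table of eight equal segments spanning 0-120 dB SPL, and apply the corresponding gain. The calibration constant, segment gains, and frame length below are placeholders; the paper's cosine modulated filter bank and prescription formula are not reproduced here.

```python
import numpy as np

P_REF = 20e-6          # reference pressure, Pa
FULL_SCALE_PA = 1.0    # assumed pressure corresponding to a full-scale sample (illustrative)

def band_spl(band_signal):
    """SPL of one sub-band frame, assuming samples are calibrated to FULL_SCALE_PA."""
    rms = np.sqrt(np.mean(np.square(band_signal))) * FULL_SCALE_PA
    return 20.0 * np.log10(max(rms, 1e-12) / P_REF)

# Eight equal SPL segments over 0-120 dB, each with its own gain (dB) for this band.
segment_edges = np.arange(0.0, 121.0, 15.0)                             # 0, 15, ..., 120 dB SPL
segment_gains = np.array([30, 28, 24, 20, 15, 10, 5, 0], dtype=float)   # illustrative gains

def compensate(band_signal):
    spl = band_spl(band_signal)
    seg = min(int(np.searchsorted(segment_edges, spl, side="right")) - 1,
              len(segment_gains) - 1)
    gain_db = segment_gains[max(seg, 0)]
    return band_signal * 10.0 ** (gain_db / 20.0)

# Example: a quiet band frame receives a large gain, a loud one almost none.
quiet = 0.001 * np.random.randn(256)
loud = 0.3 * np.random.randn(256)
print(band_spl(quiet), band_spl(loud))
print(band_spl(compensate(quiet)), band_spl(compensate(loud)))
```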

  1. High levels of sound pressure: acoustic reflex thresholds and auditory complaints of workers with noise exposure.

    PubMed

    Duarte, Alexandre Scalli Mathias; Ng, Ronny Tah Yen; de Carvalho, Guilherme Machado; Guimarães, Alexandre Caixeta; Pinheiro, Laiza Araujo Mohana; Costa, Everardo Andrade da; Gusmão, Reinaldo Jordão

    2015-01-01

    The clinical evaluation of subjects with occupational noise exposure has been difficult due to the discrepancy between auditory complaints and auditory test results. This study aimed to evaluate the contralateral acoustic reflex thresholds of workers exposed to high levels of noise, and to compare these results to the subjects' auditory complaints. This clinical retrospective study evaluated 364 workers between 1998 and 2005; their contralateral acoustic reflexes were compared to auditory complaints, age, and noise exposure time by chi-squared, Fisher's, and Spearman's tests. The workers' age ranged from 18 to 50 years (mean=39.6), and noise exposure time from one to 38 years (mean=17.3). We found that 15.1% (55) of the workers had bilateral hearing loss, 38.5% (140) had bilateral tinnitus, 52.8% (192) had abnormal sensitivity to loud sounds, and 47.2% (172) had speech recognition impairment. The variables hearing loss, speech recognition impairment, tinnitus, age group, and noise exposure time did not show a relationship with acoustic reflex thresholds; however, all complaints demonstrated a statistically significant relationship with Metz recruitment at 3000 and 4000 Hz bilaterally. There was no significant relationship between auditory complaints and acoustic reflexes. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  2. The Effects of Linear Microphone Array Changes on Computed Sound Exposure Level Footprints

    NASA Technical Reports Server (NTRS)

    Mueller, Arnold W.; Wilson, Mark R.

    1997-01-01

    Airport land planning commissions often are faced with determining how much area around an airport is affected by the sound exposure levels (SELs) associated with helicopter operations. This paper presents a study of the effects that changing the size and composition of a microphone array has on the computed SEL contour (ground footprint) areas used by such commissions. Descent flight acoustic data measured by a fifteen-microphone array were reprocessed for five different combinations of microphones within this array. This resulted in data for six different arrays for which SEL contours were computed. The fifteen-microphone array was defined as the 'baseline' array since it contained the greatest amount of data. The computations used a newly developed technique, the Acoustic Re-propagation Technique (ART), which uses parts of the NASA noise prediction program ROTONET. After the areas of the SEL contours were calculated, the differences between the areas were determined. The area differences for the six arrays show that a five-microphone and a three-microphone array (with spacing typical of that required by the FAA FAR Part 36 noise certification procedure) compare well with the fifteen-microphone array. All data were obtained from a database resulting from a joint project conducted by NASA and U.S. Army researchers at Langley and Ames Research Centers. A brief description of the joint project test design, microphone array set-up, and data reduction methodology associated with the database is discussed.

  3. Study on osteogenesis promoted by low sound pressure level infrasound in vivo and some underlying mechanisms.

    PubMed

    Long, Hua; Zheng, Liheng; Gomes, Fernando Cardoso; Zhang, Jinhui; Mou, Xiang; Yuan, Hua

    2013-09-01

    To clarify the effects of low sound pressure level (LSPL) infrasound on local bone turnover and explore its underlying mechanisms, rats with femoral defects were stabilized with a single-side external fixator. After exposure to LSPL infrasound for 30 min twice daily for 6 weeks, the pertinent features of bone healing were assessed by radiography, peripheral quantitative computerized tomography (pQCT), histology and immunofluorescence assay. The infrasound group showed a more continuous and smoother process of fracture healing and modeling in radiographs and histomorphology. It also showed significantly higher average bone mineral content (BMC) and bone mineral density (BMD). Immunofluorescence showed increased expression of calcitonin gene related peptide (CGRP) and decreased neuropeptide Y (NPY) innervation in the local microenvironment. The results suggested an osteogenesis-promoting effect of LSPL infrasound in vivo. The neuro-osteogenic network in the local microenvironment was probably one target mediating infrasonic osteogenesis, which might provide a new strategy to accelerate bone healing and remodeling. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Sound levels in modern rodent housing rooms are an uncontrolled environmental variable with fluctuations mainly due to human activities

    PubMed Central

    Lauer, Amanda M.; May, Bradford J.; Hao, Ziwei Judy; Watson, Julie

    2009-01-01

    Noise in animal housing facilities is an environmental variable that can affect hearing, behavior and physiology in mice. The authors measured sound levels in two rodent housing rooms (room 1 and room 2) during several periods of 24 h. Room 1, which was subject to heavy personnel traffic, contained ventilated racks and static cages that housed large numbers of mice. Room 2 was accessed by only a few staff members and contained only static cages that housed fewer mice. In both rooms, background sound levels were about 80 dB, and transient noises caused sound levels to temporarily rise 30–40 dB above the baseline level; such peaks occurred frequently during work hours (8:30 AM to 4:30 PM) and infrequently during non-work hours. Noise peaks during work hours in room 1 occurred about two times as often as in room 2 (P = 0.01). Use of changing stations located in the rooms caused background noise to increase by about 10 dB. Loud noise and noise variability were attributed mainly to personnel activity. Attempts to reduce noise should concentrate on controlling sounds produced by in-room activities and experimenter traffic; this may reduce the variability of research outcomes and improve animal welfare. PMID:19384312
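
    The peak analysis described above can be illustrated with a short sketch that estimates a baseline level and counts excursions more than 30 dB above it per hour. The sampling interval, baseline estimate (10th percentile), threshold, and synthetic data are assumptions for illustration only, not the authors' procedure.

```python
import numpy as np

def peaks_per_hour(levels_db, interval_s=1.0, threshold_db=30.0):
    """Count excursions rising more than `threshold_db` above the baseline
    (taken here as the 10th percentile of the series) per hour of recording."""
    levels_db = np.asarray(levels_db, dtype=float)
    baseline = np.percentile(levels_db, 10)
    above = levels_db > baseline + threshold_db
    # Count rising edges so a sustained event is only counted once.
    n_events = int(np.sum(above[1:] & ~above[:-1]) + (1 if above[0] else 0))
    hours = len(levels_db) * interval_s / 3600.0
    return n_events / hours

# Synthetic 24 h of 1-s levels: ~80 dB baseline with occasional 30-40 dB transients.
rng = np.random.default_rng(0)
levels = 80 + rng.normal(0, 1, 24 * 3600)
for i in rng.choice(len(levels), 150, replace=False):
    levels[i:i + 3] += rng.uniform(30, 40)
print(f"{peaks_per_hour(levels):.1f} peaks per hour")
```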

  5. The Influence of Fundamental Frequency and Sound Pressure Level Range on Breathing Patterns in Female Classical Singing

    ERIC Educational Resources Information Center

    Collyer, Sally; Thorpe, C. William; Callaghan, Jean; Davis, Pamela J.

    2008-01-01

    Purpose: This study investigated the influence of fundamental frequency (F0) and sound pressure level (SPL) range on respiratory behavior in classical singing. Method: Five trained female singers performed an 8-s messa di voce (a crescendo and decrescendo on one F0) across their musical F0 range. Lung volume (LV) change was estimated, and…

  6. Exterior sound level measurements of over-snow vehicles at Yellowstone National Park.

    DOT National Transportation Integrated Search

    2008-09-30

    Sounds associated with over-snow vehicles, such as snowmobiles and snowcoaches, are an important management concern at Yellowstone and Grand Teton National Parks. The John A. Volpe National Transportation Systems Center's Environmental Measureme...

  7. A noisy spring: the impact of globally rising underwater sound levels on fish.

    PubMed

    Slabbekoorn, Hans; Bouton, Niels; van Opzeeland, Ilse; Coers, Aukje; ten Cate, Carel; Popper, Arthur N

    2010-07-01

    The underwater environment is filled with biotic and abiotic sounds, many of which can be important for the survival and reproduction of fish. Over the last century, human activities in and near the water have increasingly added artificial sounds to this environment. Very loud sounds of relatively short exposure, such as those produced during pile driving, can harm nearby fish. However, more moderate underwater noises of longer duration, such as those produced by vessels, could potentially impact much larger areas, and involve much larger numbers of fish. Here we call attention to the urgent need to study the role of sound in the lives of fish and to develop a better understanding of the ecological impact of anthropogenic noise. Copyright 2010 Elsevier Ltd. All rights reserved.

  8. Sound-level-dependent representation of frequency modulations in human auditory cortex: a low-noise fMRI study.

    PubMed

    Brechmann, André; Baumgart, Frank; Scheich, Henning

    2002-01-01

    Recognition of sound patterns must be largely independent of level and of masking or jamming background sounds. Auditory patterns of relevance in numerous environmental sounds, species-specific vocalizations and speech are frequency modulations (FM). Level-dependent activation of the human auditory cortex (AC) in response to a large set of upward and downward FM tones was studied with low-noise (48 dB) functional magnetic resonance imaging at 3 Tesla. Separate analysis in four territories of AC was performed in each individual brain using a combination of anatomical landmarks and spatial activation criteria for their distinction. Activation of territory T1b (including primary AC) showed the most robust level dependence over the large range of 48-102 dB in terms of activated volume and blood oxygen level dependent contrast (BOLD) signal intensity. The left nonprimary territory T2 also showed a good correlation of level with activated volume but, in contrast to T1b, not with BOLD signal intensity. These findings are compatible with level coding mechanisms observed in animal AC. A systematic increase of activation with level was not observed for T1a (anterior of Heschl's gyrus) and T3 (on the planum temporale). Thus these areas might not be specifically involved in processing of the overall intensity of FM. The rostral territory T1a of the left hemisphere exhibited highest activation when the FM sound level fell 12 dB below scanner noise. This supports the previously suggested special involvement of this territory in foreground-background decomposition tasks. Overall, AC of the left hemisphere showed a stronger level-dependence of signal intensity and activated volume than the right hemisphere. But any side differences of signal intensity at given levels were lateralized to right AC. This might point to an involvement of the right hemisphere in more specific aspects of FM processing than level coding.

  9. Methylation on the Circadian Gene BMAL1 Is Associated with the Effects of a Weight Loss Intervention on Serum Lipid Levels.

    PubMed

    Samblas, Mirian; Milagro, Fermin I; Gómez-Abellán, Purificación; Martínez, J Alfredo; Garaulet, Marta

    2016-06-01

    The circadian clock system has been linked to the onset and development of obesity and some accompanying comorbidities. Epigenetic mechanisms, such as DNA methylation, are putatively involved in the regulation of the circadian clock system. The aim of this study was to investigate the influence of a weight loss intervention based on an energy-controlled Mediterranean dietary pattern on the methylation levels of 3 clock genes, BMAL1, CLOCK, and NR1D1, and the association between the methylation levels and changes induced in the serum lipid profile with the weight loss treatment. The study sample enrolled 61 women (body mass index = 28.6 ± 3.4 kg/m²; age: 42.2 ± 11.4 years), who followed a nutritional program based on a Mediterranean dietary pattern. DNA was isolated from whole blood obtained at the beginning and end point. Methylation levels at different CpG sites of BMAL1, CLOCK, and NR1D1 were analyzed by Sequenom's MassArray. The energy-restricted intervention modified the methylation levels of different CpG sites in BMAL1 (CpGs 5, 6, 7, 9, 11, and 18) and NR1D1 (CpGs 1, 10, 17, 18, 19, and 22). Changes in cytosine methylation in the CpG 5 to 9 region of BMAL1 with the intervention positively correlated with the eveningness profile (p = 0.019). The baseline methylation of the CpG 5 to 9 region in BMAL1 positively correlated with energy (p = 0.047) and carbohydrate (p = 0.017) intake and negatively correlated with the effect of the weight loss intervention on total cholesterol (p = 0.032) and low-density lipoprotein cholesterol (p = 0.005). Similar significant and positive correlations were found between changes in methylation levels in the CpG 5 to 9 region of BMAL1 due to the intervention and changes in serum lipids (p < 0.05). This research describes apparently for the first time an association between changes in the methylation of the BMAL1 gene with the intervention and the effects of a weight loss intervention on blood lipid levels. © 2016 The Author(s).

  10. Effects of acupuncture on the heart rate variability, cortisol levels and behavioural response induced by thunder sound in beagles.

    PubMed

    Maccariello, Carolina Elisabetta Martins; Franzini de Souza, Carla Caroline; Morena, Laura; Dias, Daniel Penteado Martins; Medeiros, Magda Alves de

    2018-03-15

    Sound stimuli such as fireworks, firearms, and claps of thunder have been used as a stress reactivity model for dogs. Acupuncture has been widely used to treat and prevent physiological and behavioural disorders induced by stress. Our study aims to evaluate the effects of acupuncture on cardiac autonomic modulation (heart rate variability - HRV), behavioural (reactivity) and endocrine (cortisol levels) responses in dogs exposed to sounds of thunder. Twenty-four laboratory beagles (12 males and 12 females, 1-6 years old) with no history of phobia to thunder were subjected to a sound stimulus that consisted of a standardized recording of thunder over a 150 s period with a maximum intensity of 103-104 dB. Before the sound, the dogs underwent a 20-minute session of needle insertion at acupuncture points Yintang, GV20, HT7, PC6 and ST36 (ACUP), in non-points (NP) or left undisturbed (CTL). Cardiac intervals were recorded using a frequency meter (RS 800cx, Polar, Kempele, Finland) to evaluate the HRV, and the data were later analysed using CardioSeries v2.4.1 software. Acupuncture (ACUP) changed the sympathovagal balance with a shift towards parasympathetic modulation, reducing the prompt sound-induced increase in LF/HF (low frequency/high frequency) ratio and in the power of the LF band of the cardiac interval spectrum, and decreased the power of the HF band of the cardiac interval spectrum (p<0.05); however, there was no change in the heart rate. Acupuncture reduced the behavioural response induced by sounds of thunder (when all behavioural parameters were considered together) and the behaviours hiding, restlessness, bolting and running around (when the parameters were analysed separately) (p<0.05). There were no changes in cortisol levels due to the sound stimulus or acupuncture. Our results demonstrate that a session of acupuncture prior to sound stimulus can reduce cardiac autonomic and behavioural responses, without changing cortisol levels in beagles. Copyright © 2018
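
    For readers unfamiliar with the LF/HF analysis mentioned above, the sketch below shows one common way to compute it from a series of R-R intervals: resample the interval series evenly, estimate its power spectrum, and integrate low- and high-frequency bands. The band limits are the conventional human HRV bands and are used here purely for illustration; canine analyses, and the CardioSeries software used in the study, may use different settings.

```python
import numpy as np
from scipy.signal import welch

def band_power(f, pxx, band):
    """Integrate the PSD over a frequency band (simple rectangle rule)."""
    mask = (f >= band[0]) & (f < band[1])
    return np.sum(pxx[mask]) * (f[1] - f[0])

def lf_hf_ratio(rr_ms, fs_resample=4.0, lf=(0.04, 0.15), hf=(0.15, 0.4)):
    """LF/HF ratio from R-R intervals in ms; band limits are illustrative."""
    rr_ms = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr_ms) / 1000.0                      # beat times in seconds
    t_even = np.arange(t[0], t[-1], 1.0 / fs_resample)
    rr_even = np.interp(t_even, t, rr_ms)              # evenly resampled tachogram
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs_resample, nperseg=256)
    return band_power(f, pxx, lf) / band_power(f, pxx, hf)

# Example with a synthetic tachogram (mean 600 ms, mild respiratory-like modulation).
rng = np.random.default_rng(1)
n = 600
rr = 600 + 20 * np.sin(2 * np.pi * 0.25 * np.arange(n) * 0.6) + rng.normal(0, 5, n)
print(f"LF/HF = {lf_hf_ratio(rr):.2f}")
```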

  11. Blind people are more sensitive than sighted people to binaural sound-location cues, particularly inter-aural level differences.

    PubMed

    Nilsson, Mats E; Schenkman, Bo N

    2016-02-01

    Blind people use auditory information to locate sound sources and sound-reflecting objects (echolocation). Sound source localization benefits from the hearing system's ability to suppress distracting sound reflections, whereas echolocation would benefit from "unsuppressing" these reflections. To clarify how these potentially conflicting aspects of spatial hearing interact in blind versus sighted listeners, we measured discrimination thresholds for two binaural location cues: inter-aural level differences (ILDs) and inter-aural time differences (ITDs). The ILDs or ITDs were present in single clicks, in the leading component of click pairs, or in the lagging component of click pairs, exploiting processes related to both sound source localization and echolocation. We tested 23 blind (mean age = 54 y), 23 sighted-age-matched (mean age = 54 y), and 42 sighted-young (mean age = 26 y) listeners. The results suggested greater ILD sensitivity for blind than for sighted listeners. The blind group's superiority was particularly evident for ILD-lag-click discrimination, suggesting not only enhanced ILD sensitivity in general but also increased ability to unsuppress lagging clicks. This may be related to the blind person's experience of localizing reflected sounds, for which ILDs may be more efficient than ITDs. On the ITD-discrimination tasks, the blind listeners performed better than the sighted age-matched listeners, but not better than the sighted young listeners. ITD sensitivity declines with age, and the equal performance of the blind listeners compared to a group of substantially younger listeners is consistent with the notion that blind people's experience may offset age-related decline in ITD sensitivity. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  12. Noise exposure in movie theaters: a preliminary study of sound levels during the showing of 25 films.

    PubMed

    Warszawa, Anna; Sataloff, Robert T

    2010-09-01

    The harmful effects of noise exposure during leisure-time activities are beginning to receive some scrutiny. We conducted a preliminary study to investigate the noise levels during the showings of 25 different films. During each screening, various sound measurements were made with a dosimeter. The movies were classified on the basis of both their Motion Picture Association of America (MPAA) rating and their genre, and the size of the theater and the size of the audience were taken into consideration in the final analysis. Our findings suggest that the sound levels of many movies might be harmful to hearing, although we can draw no definitive conclusions. We did not discern any relationship between noise levels and either MPAA rating or genre. Further studies are recommended.

  13. A Mixed-Methods Trial of Broad Band Noise and Nature Sounds for Tinnitus Therapy: Group and Individual Responses Modeled under the Adaptation Level Theory of Tinnitus.

    PubMed

    Durai, Mithila; Searchfield, Grant D

    2017-01-01

    Objectives: A randomized cross-over trial in 18 participants tested the hypothesis that nature sounds, with unpredictable temporal characteristics and high valence would yield greater improvement in tinnitus than constant, emotionally neutral broadband noise. Study Design: The primary outcome measure was the Tinnitus Functional Index (TFI). Secondary measures were: loudness and annoyance ratings, loudness level matches, minimum masking levels, positive and negative emotionality, attention reaction and discrimination time, anxiety, depression and stress. Each sound was administered using MP3 players with earbuds for 8 continuous weeks, with a 3 week wash-out period before crossing over to the other treatment sound. Measurements were undertaken for each arm at sound fitting, 4 and 8 weeks after administration. Qualitative interviews were conducted at each of these appointments. Results: From a baseline TFI score of 41.3, sound therapy resulted in TFI scores at 8 weeks of 35.6; broadband noise resulted in significantly greater reduction (8.2 points) after 8 weeks of sound therapy use than nature sounds (3.2 points). The positive effect of sound on tinnitus was supported by secondary outcome measures of tinnitus, emotion, attention, and psychological state, but not interviews. Tinnitus loudness level match was higher for BBN at 8 weeks; while there was little change in loudness level matches for nature sounds. There was no change in minimum masking levels following sound therapy administration. Self-reported preference for one sound over another did not correlate with changes in tinnitus. Conclusions: Modeled under an adaptation level theory framework of tinnitus perception, the results indicate that the introduction of broadband noise shifts internal adaptation level weighting away from the tinnitus signal, reducing tinnitus magnitude. Nature sounds may modify the affective components of tinnitus via a secondary, residual pathway, but this appears to be less important

  14. A Mixed-Methods Trial of Broad Band Noise and Nature Sounds for Tinnitus Therapy: Group and Individual Responses Modeled under the Adaptation Level Theory of Tinnitus

    PubMed Central

    Durai, Mithila; Searchfield, Grant D.

    2017-01-01

    Objectives: A randomized cross-over trial in 18 participants tested the hypothesis that nature sounds, with unpredictable temporal characteristics and high valence would yield greater improvement in tinnitus than constant, emotionally neutral broadband noise. Study Design: The primary outcome measure was the Tinnitus Functional Index (TFI). Secondary measures were: loudness and annoyance ratings, loudness level matches, minimum masking levels, positive and negative emotionality, attention reaction and discrimination time, anxiety, depression and stress. Each sound was administered using MP3 players with earbuds for 8 continuous weeks, with a 3 week wash-out period before crossing over to the other treatment sound. Measurements were undertaken for each arm at sound fitting, 4 and 8 weeks after administration. Qualitative interviews were conducted at each of these appointments. Results: From a baseline TFI score of 41.3, sound therapy resulted in TFI scores at 8 weeks of 35.6; broadband noise resulted in significantly greater reduction (8.2 points) after 8 weeks of sound therapy use than nature sounds (3.2 points). The positive effect of sound on tinnitus was supported by secondary outcome measures of tinnitus, emotion, attention, and psychological state, but not interviews. Tinnitus loudness level match was higher for BBN at 8 weeks; while there was little change in loudness level matches for nature sounds. There was no change in minimum masking levels following sound therapy administration. Self-reported preference for one sound over another did not correlate with changes in tinnitus. Conclusions: Modeled under an adaptation level theory framework of tinnitus perception, the results indicate that the introduction of broadband noise shifts internal adaptation level weighting away from the tinnitus signal, reducing tinnitus magnitude. Nature sounds may modify the affective components of tinnitus via a secondary, residual pathway, but this appears to be less important

  15. Transfer of knowledge from sound quality measurement to noise impact evaluation

    NASA Astrophysics Data System (ADS)

    Genuit, Klaus

    2004-05-01

    It is well known that the measurement and analysis of sound quality requires a complex procedure that considers the physical, psychoacoustical and psychological aspects of sound. Sound quality cannot be described only by a simple value based on A-weighted sound pressure level measurements. The A-weighted sound pressure level is sufficient to predict the probability that the human ear could be damaged by sound, but it is not the correct descriptor for the annoyance of a complex sound situation given by several different sound events at different, and especially moving, positions (soundscape). On the one side, consideration of the spectral distribution and the temporal pattern (psychoacoustics) is required; on the other side, the subjective attitude towards the sound situation and the expectation and experience of the people (psychology) have to be included in the complete noise impact evaluation. This paper describes applications of the newest methods of sound quality measurement, well established among car manufacturers, based on artificial head recordings and signal processing comparable to human hearing, applied in noisy environments such as community/traffic noise.

  16. Observing and Producing Sounds, Elementary School Science, Level Four, Teaching Manual.

    ERIC Educational Resources Information Center

    Hale, Helen E.

    This pilot teaching unit is one of a series developed for use in elementary school science programs. This unit is designed to help children discover specific concepts which relate to sound, such as volume, pitch, and echo. The student activities employ important scientific processes, such as observation, communication, inference, classification,…

  17. Conceptual Level of Understanding about Sound Concept: Sample of Fifth Grade Students

    ERIC Educational Resources Information Center

    Bostan Sarioglan, Ayberk

    2016-01-01

    In this study, students' conceptual change processes related to the sound concept were examined. The study group comprised 325 fifth-grade middle school students. Three multiple-choice questions were used as the data collection tool. In the data analysis process, "scientific response", "scientifically unacceptable response"…

  18. Narrow sound pressure level tuning in the auditory cortex of the bats Molossus molossus and Macrotus waterhousii.

    PubMed

    Macías, Silvio; Hechavarría, Julio C; Cobo, Ariadna; Mora, Emanuel C

    2014-03-01

    In the auditory system, tuning to sound level appears in the form of non-monotonic response-level functions that depict the response of a neuron to changing sound levels. Neurons with non-monotonic response-level functions respond best to a particular sound pressure level (defined as "best level" or level evoking the maximum response). We performed a comparative study on the location and basic functional organization of the auditory cortex in the gleaning bat, Macrotus waterhousii, and the aerial-hawking bat, Molossus molossus. Here, we describe the response-level function of cortical units in these two species. In the auditory cortices of M. waterhousii and M. molossus, the characteristic frequency of the units increased from caudal to rostral. In M. waterhousii, there was an even distribution of characteristic frequencies while in M. molossus there was an overrepresentation of frequencies present within echolocation pulses. In both species, most of the units showed best levels in a narrow range, without an evident topography in the amplitopic organization, as described in other species. During flight, bats decrease the intensity of their emitted pulses when they approach a prey item or an obstacle resulting in maintenance of perceived echo intensity. Narrow level tuning likely contributes to the extraction of echo amplitudes facilitating echo-intensity compensation. For aerial-hawking bats, like M. molossus, receiving echoes within the optimal sensitivity range can help the bats to sustain consistent analysis of successive echoes without distortions of perception caused by changes in amplitude. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Investigation of the Sound Pressure Level (SPL) of earphones during music listening with the use of physical ear canal models

    NASA Astrophysics Data System (ADS)

    Aying, K. P.; Otadoy, R. E.; Violanda, R.

    2015-06-01

    This study investigates the sound pressure level (SPL) of insert-type earphones that are commonly used for music listening by the general populace. The SPLs from the earphones of different respondents were measured by plugging each earphone into a physical ear canal model. Durations of earphone use for music listening were also gathered through short interviews. Results show that 21% of the respondents exceeded the standard loudness/duration relation recommended by the World Health Organization (WHO).

  20. The clarinet: how blowing pressure, lip force, lip position and reed "hardness" affect pitch, sound level, and spectrum.

    PubMed

    Almeida, Andre; George, David; Smith, John; Wolfe, Joe

    2013-09-01

    Using an automated clarinet playing system, the frequency f, sound level L, and spectral characteristics are measured as functions of blowing pressure P and the force F applied by the mechanical lip at different places on the reed. The playing regime on the (P,F) plane lies below an extinction line F(P) with a negative slope of a few square centimeters and above a pressure threshold with a more negative slope. Lower values of F and P can produce squeaks. Over much of the playing regime, lines of equal frequency have negative slope. This is qualitatively consistent with passive reed behavior: Increasing F or P gradually closes the reed, reducing its equivalent acoustic compliance, which increases the frequency of the peaks of the parallel impedance of bore and reed. High P and low F produce the highest sound levels and stronger higher harmonics. At low P, sound level can be increased at constant frequency by increasing P while simultaneously decreasing F. At high P, where lines of equal f and of equal L are nearly parallel, this compensation is less effective. Applying F further from the mouthpiece tip moves the playing regime to higher F and P, as does a stiffer reed.

  1. Comparison of measured and calculated sound pressure levels around a large horizontal axis wind turbine generator

    NASA Technical Reports Server (NTRS)

    Shepherd, Kevin P.; Willshire, William L., Jr.; Hubbard, Harvey H.

    1989-01-01

    Results are reported from a large number of simultaneous acoustic measurements around a large horizontal axis downwind configuration wind turbine generator. In addition, comparisons are made between measurements and calculations of both the discrete frequency rotational harmonics and the broad band noise components. Sound pressure time histories and noise radiation patterns as well as narrow band and broadband noise spectra are presented for a range of operating conditions. The data are useful for purposes of environmental impact assessment.

  2. A Trainable Hearing Aid Algorithm Reflecting Individual Preferences for Degree of Noise-Suppression, Input Sound Level, and Listening Situation.

    PubMed

    Yoon, Sung Hoon; Nam, Kyoung Won; Yook, Sunhyun; Cho, Baek Hwan; Jang, Dong Pyo; Hong, Sung Hwa; Kim, In Young

    2017-03-01

    In an effort to improve hearing aid users' satisfaction, recent studies on trainable hearing aids have attempted to incorporate one or two environmental factors into training. However, it would be more beneficial to train the device based on the owner's personal preferences across a wider range of environmental acoustic conditions. Our study aimed at developing a trainable hearing aid algorithm that can reflect the user's individual preferences under more extensive environmental acoustic conditions (ambient sound level, listening situation, and degree of noise suppression) and evaluated the perceptual benefit of the proposed algorithm. Ten normal hearing subjects participated in this study. Each subject trained the algorithm to their personal preference, and the trained data were used to record test sounds in three different settings, which were then used to evaluate the perceptual benefit of the proposed algorithm with the Comparison Mean Opinion Score test. Statistical analysis revealed that of the 10 subjects, four showed significant differences in amplification constant settings between the noise-only and speech-in-noise situations (P < 0.05), and one subject also showed a significant difference between the speech-only and speech-in-noise situations (P < 0.05). Additionally, every subject preferred different β settings for beamforming at all input sound levels. The positive findings from this study suggest that the proposed algorithm has the potential to improve hearing aid users' personal satisfaction under various ambient situations.

  3. The limits of applicability of the sound exposure level (SEL) metric to temporary threshold shifts (TTS) in beluga whales, Delphinapterus leucas.

    PubMed

    Popov, Vladimir V; Supin, Alexander Ya; Rozhnov, Viatcheslav V; Nechaev, Dmitry I; Sysueva, Evgenia V

    2014-05-15

    The influence of fatiguing sound level and duration on post-exposure temporary threshold shift (TTS) was investigated in two beluga whales (Delphinapterus leucas). The fatiguing sound was half-octave noise with a center frequency of 22.5 kHz. TTS was measured at a test frequency of 32 kHz. Thresholds were measured by recording rhythmic evoked potentials (the envelope following response) to a test series of short (eight cycles) tone pips with a pip rate of 1000 s(-1). TTS increased approximately proportionally to the dB measure of both sound pressure (sound pressure level, SPL) and duration of the fatiguing noise, as a product of these two variables. In particular, when the noise parameters varied in a manner that maintained the product of squared sound pressure and time (sound exposure level, SEL, which is equivalent to the overall noise energy) at a constant level, TTS was not constant. Keeping SEL constant, the highest TTS appeared at an intermediate ratio of SPL to sound duration and decreased at both higher and lower ratios. Multiplication (SPL multiplied by log duration) better described the experimental data than an equal-energy (equal SEL) model. The use of SEL as a sole universal metric may result in an implausible assessment of the impact of a fatiguing sound on hearing thresholds in odontocetes, including under-evaluation of potential risks. © 2014. Published by The Company of Biologists Ltd.
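
    The SEL metric discussed above folds level and duration into a single energy-equivalent number referenced to 1 s, so different SPL/duration combinations can share the same SEL. The snippet below simply evaluates that relation for a few illustrative combinations: a 3 dB increase in SPL for every halving of duration keeps SEL constant, which is exactly the equal-energy assumption the study tests.

```python
import math

def sel_db(spl_db, duration_s):
    """Sound exposure level: SPL plus 10*log10 of duration re 1 s."""
    return spl_db + 10.0 * math.log10(duration_s)

# Equal-energy combinations: halving duration while raising SPL by 3 dB keeps SEL constant.
for spl, dur in [(160, 60), (163, 30), (166, 15)]:
    print(f"SPL {spl} dB, {dur:>2} s  ->  SEL {sel_db(spl, dur):.1f} dB")
```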

  4. Underwater Sound Levels at a Wave Energy Device Testing Facility in Falmouth Bay, UK.

    PubMed

    Garrett, Joanne K; Witt, Matthew J; Johanning, Lars

    2016-01-01

    Passive acoustic monitoring devices were deployed at FaBTest, a marine renewable energy device testing facility in Falmouth Bay, UK, during trials of a wave energy device. The area supports considerable commercial shipping and recreational boating along with diverse marine fauna. Noise monitoring occurred during (1) a baseline period, (2) installation activity, (3) the device in situ with inactive power status, and (4) the device in situ with active power status. This paper discusses the preliminary findings of the sound recordings made at FaBTest during these different activity periods of the wave energy device trial.

  5. Noise and low-frequency sound levels due to aerial fireworks and prediction of the occupational exposure of pyrotechnicians to noise

    PubMed Central

    Tanaka, Tagayasu; Inaba, Ryoichi; Aoyama, Atsuhito

    2016-01-01

    Objectives: This study investigated the actual situation of noise and low-frequency sounds in firework events and their impact on pyrotechnicians. Methods: Data on firework noise and low-frequency sounds were obtained at a point located approximately 100 m away from the launch site of a firework display held in "A" City in 2013. We obtained the data by continuously measuring and analyzing the equivalent continuous sound level (Leq) and the one-third octave band of the noise and low-frequency sounds emanating from the major firework detonations, and predicted sound levels at the original launch site. Results: Sound levels of 100-115 dB and low-frequency sounds of 100-125 dB were observed at night. The maximum and mean Leq values were 97 and 95 dB, respectively. The launching noise level predicted from the sounds (85 dB) at the noise measurement point was 133 dB. Occupational exposure to noise for pyrotechnicians at the remote operation point (located 20-30 m away from the launch site) was estimated to be below 100 dB. Conclusions: Pyrotechnicians are exposed to very loud noise (>100 dB) at the launch point. We believe that it is necessary to implement measures such as fixing earplugs or earmuffs, posting a warning at the workplace, and executing a remote launching operation to prevent hearing loss caused by occupational exposure of pyrotechnicians to noise. It is predicted that both sound levels and low-frequency sounds would be reduced by approximately 35 dB at the remote operation site. PMID:27725489
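
    The roughly 35 dB difference between measurement points quoted above is of the order expected from simple geometric spreading. The snippet below applies the standard spherical-spreading correction, 20·log10(r2/r1), as an illustration; the distances are assumed values, and the study's actual prediction presumably includes terms (directivity, ground and air absorption) that are not modelled here.

```python
import math

def level_at_distance(level_db, r_source_m, r_target_m):
    """Translate a level between distances assuming spherical spreading only."""
    return level_db - 20.0 * math.log10(r_target_m / r_source_m)

# Measured ~85 dB at 100 m; estimate levels closer to the launch point and at an
# assumed 25 m remote operation point (both distances are illustrative).
print(f"{level_at_distance(85, 100, 1):.0f} dB near the launch point (1 m)")
print(f"{level_at_distance(85, 100, 25):.0f} dB at a 25 m remote operation point")
```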

  6. Noise and low-frequency sound levels due to aerial fireworks and prediction of the occupational exposure of pyrotechnicians to noise.

    PubMed

    Tanaka, Tagayasu; Inaba, Ryoichi; Aoyama, Atsuhito

    2016-11-29

    This study investigated the actual situation of noise and low-frequency sounds in firework events and their impact on pyrotechnicians. Data on firework noise and low-frequency sounds were obtained at a point located approximately 100 m away from the launch site of a firework display held in "A" City in 2013. We obtained the data by continuously measuring and analyzing the equivalent continuous sound level (Leq) and the one-third octave band of the noise and low-frequency sounds emanating from the major firework detonations, and predicted sound levels at the original launch site. Sound levels of 100-115 dB and low-frequency sounds of 100-125 dB were observed at night. The maximum and mean Leq values were 97 and 95 dB, respectively. The launching noise level predicted from the sounds (85 dB) at the noise measurement point was 133 dB. Occupational exposure to noise for pyrotechnicians at the remote operation point (located 20-30 m away from the launch site) was estimated to be below 100 dB. Pyrotechnicians are exposed to very loud noise (>100 dB) at the launch point. We believe that it is necessary to implement measures such as fixing earplugs or earmuffs, posting a warning at the workplace, and executing a remote launching operation to prevent hearing loss caused by occupational exposure of pyrotechnicians to noise. It is predicted that both sound levels and low-frequency sounds would be reduced by approximately 35 dB at the remote operation site.

  7. Noise induced hearing loss in dance music disc jockeys and an examination of sound levels in nightclubs.

    PubMed

    Bray, Adam; Szymański, Marcin; Mills, Robert

    2004-02-01

    Noise exposure, hearing loss and associated otological symptoms have been studied in a group of 23 disc jockeys using a questionnaire and pure tone audiometry. The level of noise exposure in the venues where they work has also been studied using Ametek Mk-3 audio dosimeters. Three members of the study group showed clear evidence of noise-induced hearing loss on audiometry, 70 per cent reported temporary threshold shift after sessions, and 74 per cent reported tinnitus. Sound levels of up to 108 dB(A) were recorded in the nightclubs. The average level for a typical session was 96 dB(A), which is above the level at which the provision of ear protection is mandatory for employers in industry. It can be concluded that DJs are at substantial risk of developing noise-induced hearing loss and that noise exposure in nightclubs frequently exceeds safe levels.
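
    To put the 96 dB(A) session average above in context, such exposures are often judged with an equal-energy (3 dB exchange rate) rule against an 8-hour criterion level. The snippet below performs that calculation with an assumed 85 dB(A) criterion; it is an illustration of the rule, not the study's analysis.

```python
def allowed_hours(leq_db, criterion_db=85.0, criterion_hours=8.0, exchange_db=3.0):
    """Permissible exposure time under an equal-energy (3 dB exchange) rule."""
    return criterion_hours / (2.0 ** ((leq_db - criterion_db) / exchange_db))

print(f"{allowed_hours(96):.2f} h allowed at 96 dB(A)")   # about 0.6 h, far less than a DJ session
```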

  8. So small, so loud: extremely high sound pressure level from a pygmy aquatic insect (Corixidae, Micronectinae).

    PubMed

    Sueur, Jérôme; Mackie, David; Windmill, James F C

    2011-01-01

    To communicate at long range, animals have to produce intense but intelligible signals. This task might be difficult to achieve due to mechanical constraints, in particular relating to body size. Whilst the acoustic behaviour of large marine and terrestrial animals has been thoroughly studied, very little is known about the sound produced by small arthropods living in freshwater habitats. Here we analyse for the first time the calling song produced by the male of a small insect, the water boatman Micronecta scholtzi. The song is made of three distinct parts differing in their temporal and amplitude parameters, but not in their frequency content. Sound is produced at 78.9 (63.6-82.2) dB SPL rms re 2×10⁻⁵ Pa with a peak at 99.2 (85.7-104.6) dB SPL re 2×10⁻⁵ Pa, estimated at a distance of one metre. This energy output is significant considering the small size of the insect. When scaled to body length and compared to 227 other acoustic species, the acoustic energy produced by M. scholtzi appears as an extreme value, outperforming marine and terrestrial mammal vocalisations. Such an extreme display may be interpreted as an exaggerated secondary sexual trait resulting from a runaway sexual selection without predation pressure.

  9. So Small, So Loud: Extremely High Sound Pressure Level from a Pygmy Aquatic Insect (Corixidae, Micronectinae)

    PubMed Central

    Sueur, Jérôme; Mackie, David; Windmill, James F. C.

    2011-01-01

    To communicate at long range, animals have to produce intense but intelligible signals. This task might be difficult to achieve due to mechanical constraints, in particular relating to body size. Whilst the acoustic behaviour of large marine and terrestrial animals has been thoroughly studied, very little is known about the sound produced by small arthropods living in freshwater habitats. Here we analyse for the first time the calling song produced by the male of a small insect, the water boatman Micronecta scholtzi. The song is made of three distinct parts differing in their temporal and amplitude parameters, but not in their frequency content. Sound is produced at 78.9 (63.6–82.2) dB SPL rms re 2×10⁻⁵ Pa with a peak at 99.2 (85.7–104.6) dB SPL re 2×10⁻⁵ Pa, estimated at a distance of one metre. This energy output is significant considering the small size of the insect. When scaled to body length and compared to 227 other acoustic species, the acoustic energy produced by M. scholtzi appears as an extreme value, outperforming marine and terrestrial mammal vocalisations. Such an extreme display may be interpreted as an exaggerated secondary sexual trait resulting from a runaway sexual selection without predation pressure. PMID:21698252

  10. Safety limit warning levels for the avoidance of excessive sound amplification to protect against further hearing loss.

    PubMed

    Johnson, Earl E

    2017-11-01

    To determine safe output sound pressure levels (SPL) for sound amplification devices to preserve hearing sensitivity after usage. A mathematical model consisting of the Modified Power Law (MPL) (Humes & Jesteadt, 1991) combined with equations for predicting temporary threshold shift (TTS) and subsequent permanent threshold shift (PTS) (Macrae, 1994b) was used to determine safe output SPL. The study involves no new human subject measurements of loudness tolerance or threshold shifts. PTS was determined by the MPL model for 234 audiograms and the SPL output recommended by four different validated prescription recommendations for hearing aids. PTS can, on rare occasion, occur as a result of the SPL delivered by hearing aids at modern-day prescription recommendations. The trading relationship between safe output SPL, decibel hearing level (dB HL) threshold, and PTS was captured with algebraic expressions. Better hearing thresholds lowered the safe output SPL and higher thresholds raised the safe output SPL. Safe output SPL can thus take the magnitude of unaided hearing loss into account. For devices not set to prescriptive levels, limiting the output SPL below the safe levels identified should protect against threshold worsening as a result of long-term usage.

  11. Geluidsexpositie bij Gebruik van Otoplastieken met Communicatie (Sound Exposure Level of F-16 Crew Chiefs Using Custom Molded Communications Earplugs)

    DTIC Science & Technology

    2008-10-01

    Date: October 2008. Authors: dr. ir. M.M.J. Houben and J.A. Verhave. Report classification: unclassified. TNO report TNO-DV 2008 A395. Summary: Sound exposure level... a method was developed to determine the sound exposure with CEPs (TNO project 032.13072, report TNO-DV 2008 A054) [1]. In theory, the total...

  12. Short- and long-term monitoring of underwater sound levels in the Hudson River (New York, USA).

    PubMed

    Martin, S Bruce; Popper, Arthur N

    2016-04-01

    There is a growing body of research on natural and man-made sounds that create aquatic soundscapes. Less is known about the soundscapes of shallow waters, such as in harbors, rivers, and lakes. Knowledge of soundscapes is needed as a baseline against which to determine the changes in noise levels resulting from human activities. To provide baseline data for the Hudson River at the site of the Tappan Zee Bridge, 12 acoustic data loggers were deployed for a 24-h period at ranges of 0-3000 m from the bridge, and four of the data loggers were re-deployed for three months of continuous recording. Results demonstrate that this region of the river is relatively quiet compared to open ocean conditions and other large river systems. Moreover, the soundscape had temporal and spatial diversity. The temporal patterns of underwater noise from the bridge change with the cadence of human activity. Bridge noise (e.g., road traffic) was only detected within 300 m; farther from the bridge, boating activity increased sound levels during the day, and especially on the weekend. Results also suggest that recording near the river bottom produced lower pseudo-noise levels than previous studies that recorded in the river water column.

  13. Statistical Analysis for Subjective and Objective Evaluations of Dental Drill Sounds

    PubMed Central

    Yamada, Tomomi; Kuwano, Sonoko; Ebisu, Shigeyuki; Hayashi, Mikako

    2016-01-01

    The sound produced by a dental air turbine handpiece (dental drill) can markedly influence the sound environment in a dental clinic. Indeed, many patients report that the sound of a dental drill elicits an unpleasant feeling. Although several manufacturers have attempted to reduce the sound pressure levels produced by dental drills during idling based on ISO 14457, the sound emitted by such drills under active drilling conditions may negatively influence the dental clinic sound environment. The physical metrics related to the unpleasant impressions associated with dental drill sounds have not been determined. In the present study, psychological measurements of dental drill sounds were conducted with the aim of facilitating improvement of the sound environment at dental clinics. Specifically, we examined the impressions elicited by the sounds of 12 types of dental drills in idling and drilling conditions using a semantic differential. The analysis revealed that the impressions of dental drill sounds varied considerably between idling and drilling conditions and among the examined drills. This finding suggests that measuring the sound of a dental drill in idling conditions alone may be insufficient for evaluating the effects of the sound. We related the results of the psychological evaluations to those of measurements of the physical metrics of equivalent continuous A-weighted sound pressure levels (LAeq) and sharpness. Factor analysis indicated that impressions of the dental drill sounds consisted of two factors: “metallic and unpleasant” and “powerful”. LAeq had a strong relationship with “powerful impression”, calculated sharpness was positively related to “metallic impression”, and “unpleasant impression” was predicted by the combination of both LAeq and calculated sharpness. The present analyses indicate that, in addition to a reduction in sound pressure level, refining the frequency components of dental drill sounds is important for creating

  14. Statistical Analysis for Subjective and Objective Evaluations of Dental Drill Sounds.

    PubMed

    Yamada, Tomomi; Kuwano, Sonoko; Ebisu, Shigeyuki; Hayashi, Mikako

    2016-01-01

    The sound produced by a dental air turbine handpiece (dental drill) can markedly influence the sound environment in a dental clinic. Indeed, many patients report that the sound of a dental drill elicits an unpleasant feeling. Although several manufacturers have attempted to reduce the sound pressure levels produced by dental drills during idling based on ISO 14457, the sound emitted by such drills under active drilling conditions may negatively influence the dental clinic sound environment. The physical metrics related to the unpleasant impressions associated with dental drill sounds have not been determined. In the present study, psychological measurements of dental drill sounds were conducted with the aim of facilitating improvement of the sound environment at dental clinics. Specifically, we examined the impressions elicited by the sounds of 12 types of dental drills in idling and drilling conditions using a semantic differential. The analysis revealed that the impressions of dental drill sounds varied considerably between idling and drilling conditions and among the examined drills. This finding suggests that measuring the sound of a dental drill in idling conditions alone may be insufficient for evaluating the effects of the sound. We related the results of the psychological evaluations to those of measurements of the physical metrics of equivalent continuous A-weighted sound pressure levels (LAeq) and sharpness. Factor analysis indicated that impressions of the dental drill sounds consisted of two factors: "metallic and unpleasant" and "powerful". LAeq had a strong relationship with "powerful impression", calculated sharpness was positively related to "metallic impression", and "unpleasant impression" was predicted by the combination of both LAeq and calculated sharpness. The present analyses indicate that, in addition to a reduction in sound pressure level, refining the frequency components of dental drill sounds is important for creating a comfortable sound
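
    The abstract relates listener impressions to the equivalent continuous A-weighted sound pressure level (LAeq). As a hedged illustration of how an overall A-weighted level is assembled, the sketch below combines octave-band levels with the standard A-weighting corrections by energetic summation; the band levels used are hypothetical and are not measurements from the study.

```python
import math

# Standard A-weighting corrections (dB) at octave-band centre frequencies (IEC 61672).
A_WEIGHT = {31.5: -39.4, 63: -26.2, 125: -16.1, 250: -8.6,
            500: -3.2, 1000: 0.0, 2000: 1.2, 4000: 1.0, 8000: -1.1}

def overall_dba(band_levels_db: dict) -> float:
    """Energetically sum octave-band levels (dB) into one A-weighted level (dBA)."""
    total = sum(10.0 ** ((lvl + A_WEIGHT[f]) / 10.0) for f, lvl in band_levels_db.items())
    return 10.0 * math.log10(total)

# Hypothetical octave-band levels for a drill-like source (illustration only).
bands = {500: 62.0, 1000: 65.0, 2000: 68.0, 4000: 72.0, 8000: 70.0}
print(round(overall_dba(bands), 1))  # about 76 dBA for these made-up values
```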

  15. Evaluating signal-to-noise ratios, loudness, and related measures as indicators of airborne sound insulation.

    PubMed

    Park, H K; Bradley, J S

    2009-09-01

    Subjective ratings of the audibility, annoyance, and loudness of music and speech sounds transmitted through 20 different simulated walls were used to identify better single number ratings of airborne sound insulation. The first part of this research considered standard measures such as the sound transmission class, the weighted sound reduction index (R(w)), and variations of these measures [H. K. Park and J. S. Bradley, J. Acoust. Soc. Am. 126, 208-219 (2009)]. This paper considers a number of other measures including signal-to-noise ratios related to the intelligibility of speech and measures related to the loudness of sounds. An exploration of the importance of the included frequencies showed that the optimum ranges of included frequencies were different for speech and music sounds. Measures related to speech intelligibility were useful indicators of responses to speech sounds but were not as successful for music sounds. A-weighted level differences, signal-to-noise ratios and an A-weighted sound transmission loss measure were good predictors of responses when the included frequencies were optimized for each type of sound. The addition of new spectrum adaptation terms to R(w) values was found to be the most practical approach for achieving more accurate predictions of subjective ratings of transmitted speech and music sounds.

  16. Aircraft noise-induced awakenings are more reasonably predicted from relative than from absolute sound exposure levels.

    PubMed

    Fidell, Sanford; Tabachnick, Barbara; Mestre, Vincent; Fidell, Linda

    2013-11-01

    Assessment of aircraft noise-induced sleep disturbance is problematic for several reasons. Current assessment methods are based on sparse evidence and limited understandings; predictions of awakening prevalence rates based on indoor absolute sound exposure levels (SELs) fail to account for appreciable amounts of variance in dosage-response relationships and are not freely generalizable from airport to airport; and predicted awakening rates do not differ significantly from zero over a wide range of SELs. Even in conjunction with additional predictors, such as time of night and assumed individual differences in "sensitivity to awakening," nominally SEL-based predictions of awakening rates remain of limited utility and are easily misapplied and misinterpreted. Probabilities of awakening are more closely related to SELs scaled in units of standard deviates of local distributions of aircraft SELs, than to absolute sound levels. Self-selection of residential populations for tolerance of nighttime noise and habituation to airport noise environments offer more parsimonious and useful explanations for differences in awakening rates at disparate airports than assumed individual differences in sensitivity to awakening.
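
    The abstract argues that awakening is better predicted from SELs expressed as standard deviates of the local distribution of aircraft SELs than from absolute levels. A minimal sketch of that rescaling, using hypothetical single-event SELs at one residence, might look like this:

```python
import statistics

def relative_sels(event_sels_db: list[float]) -> list[float]:
    """Rescale absolute single-event SELs (dB) into standard deviates (z-scores)
    of the local distribution of aircraft SELs."""
    mu = statistics.mean(event_sels_db)
    sigma = statistics.stdev(event_sels_db)
    return [(sel - mu) / sigma for sel in event_sels_db]

# Hypothetical indoor single-event SELs (dB) measured at one residence near an airport.
sels = [68.0, 72.5, 75.0, 77.5, 80.0, 83.0]
print([round(z, 2) for z in relative_sels(sels)])
```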

  17. Sound-localization experiments with barn owls in virtual space: influence of broadband interaural level difference on head-turning behavior.

    PubMed

    Poganiatz, I; Wagner, H

    2001-04-01

    Interaural level differences play an important role for elevational sound localization in barn owls. The changes of this cue with sound location are complex and frequency dependent. We exploited the opportunities offered by the virtual space technique to investigate the behavioral relevance of the overall interaural level difference by fixing this parameter in virtual stimuli to a constant value or introducing additional broadband level differences to normal virtual stimuli. Frequency-specific monaural cues in the stimuli were not manipulated. We observed an influence of the broadband interaural level differences on elevational, but not on azimuthal sound localization. Since results obtained with our manipulations explained only part of the variance in elevational turning angle, we conclude that frequency-specific cues are also important. The behavioral consequences of changes of the overall interaural level difference in a virtual sound depended on the combined interaural time difference contained in the stimulus, indicating an indirect influence of temporal cues on elevational sound localization as well. Thus, elevational sound localization is influenced by a combination of many spatial cues including frequency-dependent and temporal features.

  18. Comparative analysis of performance in reading and writing of children exposed and not exposed to high sound pressure levels.

    PubMed

    Santos, Juliana Feitosa dos; Souza, Ana Paula Ramos de; Seligman, Lilian

    2013-01-01

    To analyze the possible relationships between high sound pressure levels in the classroom and performance in the use of lexical and phonological routes in reading and writing. This was a quantitative and exploratory study. The following measures were carried out: acoustic measurement using a dosimeter, visual inspection of the external auditory canal, tonal audiometry thresholds, speech recognition tests and acoustic immittance, and an instrument for the evaluation of reading and writing of isolated words. The non-parametric χ² test and Fisher's exact test were used for data analysis. The results of acoustic measurements in 4 schools in Santa Maria divided the sample of 87 children in the third and fourth years of primary school, aged 8 to 10 years, into 2 groups: the 1st group was exposed to sound levels higher than 80 dB(A) (Study group) and the 2nd group to levels lower than 80 dB(A) (Control group). A higher prevalence of correct answers in the reading and writing of nonwords, the reading of irregular words, and the frequency effect was observed. A predominance of correct answers in the writing of irregular words was observed in the Control group. For the Study group, a higher number of neologism-type errors in reading and writing was observed, especially regarding the writing of nonwords and the extension effect, and fewer errors of the lexicalization and verbal paragraphia types were observed in writing. In assessing reading and writing skills, the children in the Study group, who were exposed to high noise levels, had poorer performance in the use of lexical and phonological routes, both in reading and in writing.

  19. Breath sounds

    MedlinePlus

    The lung sounds are best heard with a stethoscope. This is called auscultation. Normal lung sounds occur ... the bottom of the rib cage. Using a stethoscope, the doctor may hear normal breathing sounds, decreased ...

  20. The effect of spatial distribution on the annoyance caused by simultaneous sounds

    NASA Astrophysics Data System (ADS)

    Vos, Joos; Bronkhorst, Adelbert W.; Fedtke, Thomas

    2004-05-01

    A considerable part of the population is exposed to simultaneous and/or successive environmental sounds from different sources. In many cases, these sources are different with respect to their locations also. In a laboratory study, it was investigated whether the annoyance caused by the multiple sounds is affected by the spatial distribution of the sources. There were four independent variables: (1) sound category (stationary or moving), (2) sound type (stationary: lawn-mower, leaf-blower, and chain saw; moving: road traffic, railway, and motorbike), (3) spatial location (left, right, and combinations), and (4) A-weighted sound exposure level (ASEL of single sources equal to 50, 60, or 70 dB). In addition to the individual sounds in isolation, various combinations of two or three different sources within each sound category and sound level were presented for rating. The annoyance was mainly determined by sound level and sound source type. In most cases there were neither significant main effects of spatial distribution nor significant interaction effects between spatial distribution and the other variables. It was concluded that for rating the spatially distributed sounds investigated, the noise dose can simply be determined by a summation of the levels for the left and right channels. [Work supported by CEU.]
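
    The conclusion that the noise dose "can simply be determined by a summation of the levels for the left and right channels" refers to energetic (power) summation of decibel levels. A small worked example of that summation, with illustrative levels only:

```python
import math

def combine_levels(levels_db: list[float]) -> float:
    """Energetic (power) summation of sound levels in dB:
    L_total = 10*log10(sum(10^(L_i/10)))."""
    return 10.0 * math.log10(sum(10.0 ** (lvl / 10.0) for lvl in levels_db))

# Two equally loud channels add 3 dB; a 70 dB and a 60 dB exposure combine to ~70.4 dB.
print(round(combine_levels([70.0, 70.0]), 1))  # 73.0
print(round(combine_levels([70.0, 60.0]), 1))  # 70.4
```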

  1. Smartphone-based noise mapping: Integrating sound level meter app data into the strategic noise mapping process.

    PubMed

    Murphy, Enda; King, Eoin A

    2016-08-15

    The strategic noise mapping process of the EU has now been ongoing for more than ten years. However, despite the fact that a significant volume of research has been conducted on the process and related issues there has been little change or innovation in how relevant authorities and policymakers are conducting the process since its inception. This paper reports on research undertaken to assess the possibility for smartphone-based noise mapping data to be integrated into the traditional strategic noise mapping process. We compare maps generated using the traditional approach with those generated using smartphone-based measurement data. The advantage of the latter approach is that it has the potential to remove the need for exhaustive input data into the source calculation model for noise prediction. In addition, the study also tests the accuracy of smartphone-based measurements against simultaneous measurements taken using traditional sound level meters in the field. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. In Situ Mortality Experiments with Juvenile Sea Bass (Dicentrarchus labrax) in Relation to Impulsive Sound Levels Caused by Pile Driving of Windmill Foundations

    PubMed Central

    Debusschere, Elisabeth; De Coensel, Bert; Bajek, Aline; Botteldooren, Dick; Hostens, Kris; Vanaverbeke, Jan; Vandendriessche, Sofie; Van Ginderdeuren, Karl; Vincx, Magda; Degraer, Steven

    2014-01-01

    Impact assessments of offshore wind farm installations and operations on the marine fauna are performed in many countries. Yet, only limited quantitative data on the physiological impact of impulsive sounds on (juvenile) fishes during pile driving of offshore wind farm foundations are available. Our current knowledge on fish injury and mortality due to pile driving is mainly based on laboratory experiments, in which high-intensity pile driving sounds are generated inside acoustic chambers. To validate these lab results, an in situ field experiment was carried out on board of a pile driving vessel. Juvenile European sea bass (Dicentrarchus labrax) of 68 and 115 days post hatching were exposed to pile-driving sounds as close as 45 m from the actual pile driving activity. Fish were exposed to strikes with a sound exposure level between 181 and 188 dB re 1 µPa2.s. The number of strikes ranged from 1739 to 3067, resulting in a cumulative sound exposure level between 215 and 222 dB re 1 µPa2.s. Control treatments consisted of fish not exposed to pile driving sounds. No differences in immediate mortality were found between exposed and control fish groups. Also no differences were noted in the delayed mortality up to 14 days after exposure between both groups. Our in situ experiments largely confirm the mortality results of the lab experiments found in other studies. PMID:25275508
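
    The cumulative sound exposure level reported above follows from the single-strike SEL and the number of strikes. Assuming, purely for illustration, that all strikes share the same single-strike SEL (in reality strike levels vary, so this is only an approximation), the arithmetic can be sketched as:

```python
import math

def cumulative_sel(single_strike_sel_db: float, n_strikes: int) -> float:
    """Cumulative SEL under the common assumption of N identical strikes:
    SEL_cum = SEL_single + 10*log10(N)."""
    return single_strike_sel_db + 10.0 * math.log10(n_strikes)

# Roughly reproduces the reported 215-222 dB re 1 uPa^2.s range:
print(round(cumulative_sel(181.0, 1739), 1))  # ~213.4
print(round(cumulative_sel(188.0, 3067), 1))  # ~222.9
```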

  3. Validation of the Predicted Circumferential and Radial Mode Sound Power Levels in the Inlet and Exhaust Ducts of a Fan Ingesting Distorted Inflow

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle

    2012-01-01

    Fan inflow distortion tone noise has been studied computationally and experimentally. Data from two experiments in the NASA Glenn Advanced Noise Control Fan rig have been used to validate acoustic predictions. The inflow to the fan was distorted by cylindrical rods inserted radially into the inlet duct one rotor chord length upstream of the fan. The rods were arranged in both symmetric and asymmetric circumferential patterns. In-duct and farfield sound pressure level measurements were recorded. It was discovered that for positive circumferential modes, measured circumferential mode sound power levels in the exhaust duct were greater than those in the inlet duct and for negative circumferential modes, measured total circumferential mode sound power levels in the exhaust were less than those in the inlet. Predicted trends in overall sound power level were proven to be useful in identifying circumferentially asymmetric distortion patterns that reduce overall inlet distortion tone noise, as compared to symmetric arrangements of rods. Detailed comparisons between the measured and predicted radial mode sound power in the inlet and exhaust duct indicate limitations of the theory.

  4. The effects of age, physical activity level, and body anthropometry on calcaneal speed of sound value in men.

    PubMed

    Chin, Kok-Yong; Soelaiman, Ima-Nirwana; Mohamed, Isa Naina; Ibrahim, Suraya; Wan Ngah, Wan Zurinah

    2012-01-01

    The influences of age, physical activity, and body anthropometry on calcaneal speed of sound are different among young adults, middle-aged, and elderly men. Quantitative ultrasound assessment of bone health status is much needed for developing countries in the screening of osteoporosis, but further studies on the factors that influence the quantitative ultrasound indices are required. The present study examined the influence of age, lifestyle factors, and body anthropometry on calcaneal speed of sound (SOS) in a group of Malaysian men of diverse age range. A cross-sectional study was conducted, and data from 687 eligible males were used for analysis. They answered a detailed questionnaire on their physical activity status, and their anthropometric measurements were taken. Their calcaneal SOS values were evaluated using the CM-200 sonometer (Furuno, Nishinomiya City, Japan). Subjects with higher body mass index (BMI) had higher calcaneal SOS values, although a significant difference was only found in the elderly subjects (p < 0.05). Sedentary subjects had lower calcaneal SOS values than physically active subjects, but a significant difference was only found in the middle-aged subjects (p < 0.05). Calcaneal SOS was significantly (p < 0.05) correlated with age in young men; height, BMI, and physical activity score in middle-aged men; height and physical activity score in elderly men; and age and physical activity score for overall subjects. In a multivariate regression model, significant (p < 0.05) predictors for calcaneal SOS included age for young men; physical activity, BMI, body fat percentage, and height for middle-aged men; height for elderly men; and age, height, physical activity, weight, and body fat percentage for overall subjects. Age, body anthropometry, and physical activity level have significant effects on the calcaneal SOS value in men.

  5. The Effect of Microphone Placement on Interaural Level Differences and Sound Localization Across the Horizontal Plane in Bilateral Cochlear Implant Users.

    PubMed

    Jones, Heath G; Kan, Alan; Litovsky, Ruth Y

    2016-01-01

    This study examined the effect of microphone placement on the interaural level differences (ILDs) available to bilateral cochlear implant (BiCI) users, and the subsequent effects on horizontal-plane sound localization. Virtual acoustic stimuli for sound localization testing were created individually for eight BiCI users by making acoustic transfer function measurements for microphones placed in the ear (ITE), behind the ear (BTE), and on the shoulders (SHD). The ILDs across source locations were calculated for each placement to analyze their effect on sound localization performance. Sound localization was tested using a repeated-measures, within-participant design for the three microphone placements. The ITE microphone placement provided significantly larger ILDs compared to BTE and SHD placements, which correlated with overall localization errors. However, differences in localization errors across the microphone conditions were small. The BTE microphones worn by many BiCI users in everyday life do not capture the full range of acoustic ILDs available, and also reduce the change in cue magnitudes for sound sources across the horizontal plane. Acute testing with an ITE placement reduced sound localization errors along the horizontal plane compared to the other placements in some patients. Larger improvements may be observed if patients had more experience with the new ILD cues provided by an ITE placement.

  6. Offshore exposure experiments on cuttlefish indicate received sound pressure and particle motion levels associated with acoustic trauma

    PubMed Central

    Solé, Marta; Sigray, Peter; Lenoir, Marc; van der Schaar, Mike; Lalander, Emilia; André, Michel

    2017-01-01

    Recent findings on cephalopods in laboratory conditions showed that exposure to artificial noise had a direct consequence on the statocyst, sensory organs, which are responsible for their equilibrium and movements in the water column. The question remained about the contribution of the consequent near-field particle motion influence from the tank walls, to the triggering of the trauma. Offshore noise controlled exposure experiments (CEE) on common cuttlefish (Sepia officinalis) were conducted at three different depths and distances from the source, and particle motion and sound pressure measurements were performed at each location. Scanning electron microscopy (SEM) revealed injuries in the statocysts, whose severity was quantified and found to be proportional to the distance to the transducer. These findings are the first evidence of cephalopods' sensitivity to anthropogenic noise sources in their natural habitat. From the measured received power spectrum of the sweep, it was possible to determine that the animals were exposed at levels ranging from 139 to 142 dB re 1 μPa2 and from 139 to 141 dB re 1 μPa2, at 1/3 octave bands centred at 315 Hz and 400 Hz, respectively. These results could therefore be considered a coherent threshold estimation of noise levels that can trigger acoustic trauma in cephalopods. PMID:28378762

  7. Offshore exposure experiments on cuttlefish indicate received sound pressure and particle motion levels associated with acoustic trauma

    NASA Astrophysics Data System (ADS)

    Solé, Marta; Sigray, Peter; Lenoir, Marc; van der Schaar, Mike; Lalander, Emilia; André, Michel

    2017-04-01

    Recent findings on cephalopods in laboratory conditions showed that exposure to artificial noise had a direct consequence on the statocyst, sensory organs, which are responsible for their equilibrium and movements in the water column. The question remained about the contribution of the consequent near-field particle motion influence from the tank walls, to the triggering of the trauma. Offshore noise controlled exposure experiments (CEE) on common cuttlefish (Sepia officinalis) were conducted at three different depths and distances from the source, and particle motion and sound pressure measurements were performed at each location. Scanning electron microscopy (SEM) revealed injuries in the statocysts, whose severity was quantified and found to be proportional to the distance to the transducer. These findings are the first evidence of cephalopods' sensitivity to anthropogenic noise sources in their natural habitat. From the measured received power spectrum of the sweep, it was possible to determine that the animals were exposed at levels ranging from 139 to 142 dB re 1 μPa2 and from 139 to 141 dB re 1 μPa2, at 1/3 octave bands centred at 315 Hz and 400 Hz, respectively. These results could therefore be considered a coherent threshold estimation of noise levels that can trigger acoustic trauma in cephalopods.

  8. Effects of Listening to Music versus Environmental Sounds in Passive and Active Situations on Levels of Pain and Fatigue in Fibromyalgia.

    PubMed

    Mercadíe, Lolita; Mick, Gérard; Guétin, Stéphane; Bigand, Emmanuel

    2015-10-01

    In fibromyalgia, pain symptoms such as hyperalgesia and allodynia are associated with fatigue. Mechanisms underlying such symptoms can be modulated by listening to pleasant music. We expected that listening to music, because of its emotional impact, would have a greater modulating effect on the perception of pain and fatigue in patients with fibromyalgia than listening to nonmusical sounds. To investigate this hypothesis, we carried out a 4-week study in which patients with fibromyalgia listened to either preselected musical pieces or environmental sounds when they experienced pain in active (while carrying out a physical activity) or passive (at rest) situations. Concomitant changes of pain and fatigue levels were evaluated. When patients listened to music or environmental sounds at rest, pain and fatigue levels were significantly reduced after 20 minutes of listening, with no difference of effect magnitude between the two stimuli. This improvement persisted 10 minutes after the end of the listening session. In active situations, pain did not increase in presence of the two stimuli. Contrary to our expectations, music and environmental sounds produced a similar relieving effect on pain and fatigue, with no benefit gained by listening to pleasant music over environmental sounds. Copyright © 2015 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.

  9. The sound intensity and characteristics of variable-pitch pulse oximeters.

    PubMed

    Yamanaka, Hiroo; Haruna, Junichi; Mashimo, Takashi; Akita, Takeshi; Kinouchi, Keiko

    2008-06-01

    Various studies worldwide have found that sound levels in hospitals significantly exceed the World Health Organization (WHO) guidelines, and that this noise is associated with audible signals from various medical devices. The pulse oximeter is now widely used in health care; however the health effects associated with the noise from this equipment remain largely unclarified. Here, we analyzed the sounds of variable-pitch pulse oximeters, and discussed the possible associated risk of sleep disturbance, annoyance, and hearing loss. The Nellcor N 595 and Masimo SET Radical pulse oximeters were measured for equivalent continuous A-weighted sound pressure levels (L(Aeq)), loudness levels, and loudness. Pulse beep pitches were also identified using Fast Fourier Transform (FFT) analysis and compared with musical pitches as controls. Almost all alarm sounds and pulse beeps from the instruments tested exceeded 30 dBA, a level that may induce sleep disturbance and annoyance. Several alarm sounds emitted by the pulse oximeters exceeded 70 dBA, which is known to induce hearing loss. The loudness of the alarm sound of each pulse oximeter did not change in proportion to the sound volume level. The pitch of each pulse beep did not correspond to musical pitch levels. The results indicate that sounds from pulse oximeters pose a potential risk of not only sleep disturbance and annoyance but also hearing loss, and that these sounds are unnatural for human auditory perception.
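
    The abstract describes identifying pulse-beep pitches by FFT analysis and comparing them with musical pitches. A hedged sketch of that kind of analysis is shown below; the 947 Hz test tone, sampling rate, and note-mapping convention (equal temperament, A4 = 440 Hz) are illustrative assumptions, not values from the study.

```python
import numpy as np

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def dominant_frequency(signal: np.ndarray, fs: float) -> float:
    """Return the frequency (Hz) of the largest FFT magnitude peak."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return float(freqs[np.argmax(spectrum)])

def nearest_note(freq_hz: float) -> tuple[str, float]:
    """Map a frequency to the nearest equal-tempered note (A4 = 440 Hz)
    and the deviation from that note in cents."""
    midi = 69 + 12 * np.log2(freq_hz / 440.0)
    nearest = int(round(midi))
    cents = 100.0 * (midi - nearest)
    name = NOTE_NAMES[nearest % 12] + str(nearest // 12 - 1)
    return name, cents

if __name__ == "__main__":
    fs = 44100.0
    t = np.arange(0, 0.5, 1.0 / fs)
    beep = np.sin(2 * np.pi * 947.0 * t)  # synthetic pulse beep near 947 Hz
    f0 = dominant_frequency(beep, fs)
    print(f0, nearest_note(f0))           # falls between A#5 and B5, i.e. off musical pitch
```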

  10. Effect of gentamicin and levels of ambient sound on hearing screening outcomes in the neonatal intensive care unit: A pilot study.

    PubMed

    Garinis, Angela C; Liao, Selena; Cross, Campbell P; Galati, Johnathan; Middaugh, Jessica L; Mace, Jess C; Wood, Anna-Marie; McEvoy, Lindsey; Moneta, Lauren; Lubianski, Troy; Coopersmith, Noe; Vigo, Nicholas; Hart, Christopher; Riddle, Artur; Ettinger, Olivia; Nold, Casey; Durham, Heather; MacArthur, Carol; McEvoy, Cynthia; Steyger, Peter S

    2017-06-01

    Hearing loss rates in infants admitted to neonatal intensive care units (NICU) run at 2-15%, compared to 0.3% in full-term births. The etiology of this difference remains poorly understood. We examined whether the level of ambient sound and/or cumulative gentamicin (an aminoglycoside) exposure affect NICU hearing screening results, as either exposure can cause acquired, permanent hearing loss. We hypothesized that higher levels of ambient sound in the NICU, and/or gentamicin dosing, increase the risk of referral on the distortion product otoacoustic emission (DPOAE) assessments and/or automated auditory brainstem response (AABR) screens. This was a prospective pilot outcomes study of 82 infants (<37 weeks gestational age) admitted to the NICU at Oregon Health & Science University. An ER-200D sound pressure level dosimeter was used to collect daily sound exposure in the NICU for each neonate. Gentamicin dosing was also calculated for each infant, including the total daily dose based on body mass (mg/kg/day), as well as the total number of treatment days. DPOAE and AABR assessments were conducted prior to discharge to evaluate hearing status. Exclusion criteria included congenital infections associated with hearing loss, and congenital craniofacial or otologic abnormalities. The mean level of ambient sound was 62.9 dBA (range 51.8-70.6 dBA), greatly exceeding American Academy of Pediatrics (AAP) recommendation of <45.0 dBA. More than 80% of subjects received gentamicin treatment. The referral rate for (i) AABRs, (frequency range: ∼1000-4000 Hz), was 5%; (ii) DPOAEs with a broad F2 frequency range (2063-10031 Hz) was 39%; (iii) DPOAEs with a low-frequency F2 range (<4172 Hz) was 29%, and (iv) DPOAEs with a high-frequency F2 range (>4172 Hz) was 44%. DPOAE referrals were significantly greater for infants receiving >2 days of gentamicin dosing compared to fewer doses (p = 0.004). The effect of sound exposure and gentamicin treatment on hearing could not be

  11. Effect of Gentamicin and Levels of Ambient Sound on Hearing Screening Outcomes in the Neonatal Intensive Care Unit: A Pilot Study

    PubMed Central

    Garinis, Angela C.; Liao, Selena; Cross, Campbell P.; Galati, Johnathan; Middaugh, Jessica L.; Mace, Jess C.; Wood, Anna-Marie; McEvoy, Lindsey; Moneta, Lauren; Lubianski, Troy; Coopersmith, Noe; Vigo, Nicholas; Hart, Christopher; Riddle, Artur; Ettinger, Olivia; Nold, Casey; Durham, Heather; MacArthur, Carol; McEvoy, Cynthia; Steyger, Peter S.

    2017-01-01

    Objective Hearing loss rates in infants admitted to neonatal intensive care units (NICU) run at 2–15%, compared to 0.3% in full-term births. The etiology of this difference remains poorly understood. We examined whether the level of ambient sound and/or cumulative gentamicin (an aminoglycoside) exposure affect NICU hearing screening results, as either exposure can cause acquired, permanent hearing loss. We hypothesized that higher levels of ambient sound in the NICU, and/or gentamicin dosing, increase the risk of referral on the distortion product otoacoustic emission (DPOAE) assessments and/or automated auditory brainstem response (AABR) screens. Methods This was a prospective pilot outcomes study of 82 infants (<37 weeks gestational age) admitted to the NICU at Oregon Health & Science University. An ER-200D sound pressure level dosimeter was used to collect daily sound exposure in the NICU for each neonate. Gentamicin dosing was also calculated for each infant, including the total daily dose based on body mass (mg/kg/day), as well as the total number of treatment days. DPOAE and AABR assessments were conducted prior to discharge to evaluate hearing status. Exclusion criteria included congenital infections associated with hearing loss, and congenital craniofacial or otologic abnormalities. Results The mean level of ambient sound was 62.9 dBA (range 51.8–70.6 dBA), greatly exceeding American Academy of Pediatrics (AAP) recommendation of <45.0 dBA. More than 80% of subjects received gentamicin treatment. The referral rate for (i) AABRs, (frequency range: ~1000–4000 Hz), was 5%; (ii) DPOAEs with a broad F2 frequency range (2063–10031 Hz) was 39%; (iii) DPOAEs with a low-frequency F2 range (<4172 Hz) was 29%, and (iv) DPOAEs with a high-frequency F2 range (>4172 Hz) was 44%. DPOAE referrals were significantly greater for infants receiving >2 days of gentamicin dosing compared to fewer doses (p= 0.004). The effect of sound exposure and gentamicin treatment on

  12. On the efficacy of spatial sampling using manual scanning paths to determine the spatial average sound pressure level in rooms.

    PubMed

    Hopkins, Carl

    2011-05-01

    In architectural acoustics, noise control and environmental noise, there are often steady-state signals for which it is necessary to measure the spatial average, sound pressure level inside rooms. This requires using fixed microphone positions, mechanical scanning devices, or manual scanning. In comparison with mechanical scanning devices, the human body allows manual scanning to trace out complex geometrical paths in three-dimensional space. To determine the efficacy of manual scanning paths in terms of an equivalent number of uncorrelated samples, an analytical approach is solved numerically. The benchmark used to assess these paths is a minimum of five uncorrelated fixed microphone positions at frequencies above 200 Hz. For paths involving an operator walking across the room, potential problems exist with walking noise and non-uniform scanning speeds. Hence, paths are considered based on a fixed standing position or rotation of the body about a fixed point. In empty rooms, it is shown that a circle, helix, or cylindrical-type path satisfy the benchmark requirement with the latter two paths being highly efficient at generating large number of uncorrelated samples. In furnished rooms where there is limited space for the operator to move, an efficient path comprises three semicircles with 45°-60° separations.
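
    The paper's benchmark is an equivalent number of uncorrelated samples. A rough numerical sketch of that idea is given below; it assumes the classic diffuse-field spatial correlation of squared pressure, sinc²(kr), and a hypothetical circular scan path, and is not the paper's exact formulation.

```python
import numpy as np

def equivalent_uncorrelated_samples(points: np.ndarray, freq_hz: float, c: float = 343.0) -> float:
    """Estimate the equivalent number of uncorrelated samples when averaging
    squared pressure over the given points, assuming a diffuse-field
    correlation of sinc^2(k*r) between points separated by distance r."""
    k = 2 * np.pi * freq_hz / c
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    rho = np.sinc(k * d / np.pi) ** 2   # np.sinc(x) = sin(pi*x)/(pi*x)
    m = len(points)
    return m ** 2 / rho.sum()

# Hypothetical circular scan path: radius 0.7 m, sampled densely.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([0.7 * np.cos(theta), 0.7 * np.sin(theta), np.zeros_like(theta)], axis=1)
for f in (200.0, 500.0, 1000.0):
    print(f, round(equivalent_uncorrelated_samples(circle, f), 1))
```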

  13. Intentional changes in sound pressure level and rate: their impact on measures of respiration, phonation, and articulation.

    PubMed

    Dromey, C; Ramig, L O

    1998-10-01

    The purpose of the study was to compare the effects of changing sound pressure level (SPL) and rate on respiratory, phonatory, and articulatory behavior during sentence production. Ten subjects, 5 men and 5 women, repeated the sentence, "I sell a sapapple again," under 5 SPL and 5 rate conditions. From a multi-channel recording, measures were made of lung volume (LV), SPL, fundamental frequency (F0), semitone standard deviation (STSD), and upper and lower lip displacements and peak velocities. Loud speech led to increases in LV initiation, LV termination, F0, STSD, and articulatory displacements and peak velocities for both lips. Token-to-token variability in these articulatory measures generally decreased as SPL increased, whereas rate increases were associated with increased lip movement variability. LV excursion decreased as rate increased. F0 for the men and STSD for both genders increased with rate. Lower lip displacements became smaller for faster speech. The interspeaker differences in velocity change as a function of rate contrasted with the more consistent velocity performance across speakers for changes in SPL. Because SPL and rate change are targeted in therapy for dysarthria, the present data suggest directions for future research with disordered speakers.

  14. Contributions of Morphological Awareness Skills to Word-Level Reading and Spelling in First-Grade Children with and without Speech Sound Disorder

    ERIC Educational Resources Information Center

    Apel, Kenn; Lawrence, Jessika

    2011-01-01

    Purpose: In this study, the authors compared the morphological awareness abilities of children with speech sound disorder (SSD) and children with typical speech skills and examined how morphological awareness ability predicted word-level reading and spelling performance above other known contributors to literacy development. Method: Eighty-eight…

  15. Integrated Advanced Microwave Sounding Unit-A (AMSU-A) METOP Stress Analysis Report (Qual Level Random Vibration) A1 Module

    NASA Technical Reports Server (NTRS)

    Mehitretter, R.

    1996-01-01

    Stress analysis of the primary structure of the Meteorological Satellites Project (METSAT) Advanced Microwave Sounding Unit-A (AMSU-A) A1 Module, performed using the Meteorological Operational (METOP) Qualification Level 9.66 grms Random Vibration PSD Spectrum, is presented. The random vibration structural margins of safety and natural frequency predictions are summarized.

  16. Respiratory Muscle Strength, Sound Pressure Level, and Vocal Acoustic Parameters and Waist Circumference of Children With Different Nutritional Status.

    PubMed

    Pascotini, Fernanda dos Santos; Ribeiro, Vanessa Veis; Christmann, Mara Keli; Tomasi, Lidia Lis; Dellazzana, Amanda Alves; Haeffner, Leris Salete Bonfanti; Cielo, Carla Aparecida

    2016-01-01

    Relate respiratory muscle strength (RMS), sound pressure (SP) level, and vocal acoustic parameters to the abdominal circumference (AC) and nutritional status of children. This is a cross-sectional study. Eighty-two school children aged between 8 and 10 years, grouped by nutritional states (eutrophic, overweight, or obese) and AC percentile (≤25, 25-75, and ≥75), were included in the study. Evaluations of maximal inspiratory pressure (IPmax) and maximal expiratory pressure (EPmax) were conducted using the manometer and SP and acoustic parameters through the Multi-Dimensional Voice Program Advanced (KayPENTAX, Montvale, New Jersey). There were significant differences (P < 0.05) in the EPmax of children with AC between the 25th and 75th percentiles (72.4) and those less than or equal to the 25th percentile (61.9) and in the SP of those greater than or equal to the 75th percentile (73.4) and less than or equal to the 25th percentile (66.6). The IPmax, EPmax, SP levels, and acoustic variables were not different in relation to the nutritional states of the children. There was a strong and positive correlation between the coefficient of amplitude perturbations (shimmer), the harmonics-to-noise ratio and the variation of the fundamental frequency, respectively, 0.79 and 0.71. RMS and acoustic voice characteristics in children do not appear to be influenced by nutritional states, and respiratory pressure does not interfere with acoustic voice characteristics. However, localized fat, represented by the AC, alters the EPmax and the SP, each of which increases as the AC increases. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  17. Body weight maintenance and levels of mutans streptococci and lactobacilli in a group of Swedish women seven years after completion of a Weight Watchers' diet.

    PubMed

    Köhler, Birgitta; Andreén, Ingrid

    2011-01-01

    The long-term effect of the WW programme on weight and oral cariogenic bacteria was evaluated after 7 yr. All WW who completed the 8-wk dietary regimen in an earlier study (n=33) and the persons in the reference group (REF) (n=27) were invited to participate. The salivary secretion rate, numbers of mutans streptococci (MS) and lactobacilli (lbc) were determined. The WW were weighed. Sustaining a 5% weight loss from the initial weight was regarded as successful weight maintenance. An interview according to a standardised questionnaire was conducted on medication, the intake of antimicrobial agents, dietary changes and experience of dental caries during the last 7 yr. 25 WW and 21 REF qualified to participate. On a group basis, weight, salivary MS and lbc displayed pre-diet levels after 7 yr. 15 of the WW (60%) were below their initial weight. Successful weight maintenance was achieved by 32%. Reported changes in the intake of fat-rich products differed significantly between the WW and the REF. Nine WW reported fewer carious lesions after joining the WW. Ninety per cent of REF did not regard caries as a problem. Comparisons of pre- and post-diet data and 7 yr data indicated short-term compliance and varying outcome in terms of long-term compliance. No association was found between salivary levels of bacteria and long-term weight maintenance on a group basis. However, further well-designed longitudinal studies are required to confirm whether salivary MS could be used on an individual basis to validate reported sucrose intake in a dietary regimen.

  18. Assessment and evaluation of noise controls on roof bolting equipment and a method for predicting sound pressure levels in underground coal mining

    NASA Astrophysics Data System (ADS)

    Matetic, Rudy J.

    Over-exposure to noise remains a widespread and serious health hazard in the U.S. mining industries despite 25 years of regulation. Every day, 80% of the nation's miners go to work in an environment where the time weighted average (TWA) noise level exceeds 85 dBA, and more than 25% of the miners are exposed to a TWA noise level that exceeds 90 dBA, the permissible exposure limit (PEL). Additionally, MSHA coal noise sample data collected from 2000 to 2002 show that 65% of the equipment whose operators exceeded 100% noise dosage comprise only seven different types of machines: auger miners, bulldozers, continuous miners, front end loaders, roof bolters, shuttle cars (electric), and trucks. In addition, the MSHA data indicate that the roof bolter is third among all equipment, and second among equipment in underground coal, whose operators exceed 100% dosage. A research program was implemented to: (1) determine, characterize, and measure the sound power levels radiated by a roof bolting machine under differing drilling configurations (thrust, rotational speed, penetration rate, etc.) and utilizing differing types of drilling methods in high compressive strength rock media (>20,000 psi); the research approach characterized the sound power level results from laboratory testing and provided the mining industry with empirical data on utilizing differing noise control technologies (drilling configurations and types of drilling methods) to reduce sound power level emissions of a roof bolting machine; (2) distinguish and correlate the empirical data into one statistically valid equation, which provided the mining industry with a tool to predict the overall sound power level of a roof bolting machine given any type of drilling configuration and drilling method utilized in industry; and (3) provide the mining industry with several approaches to predict or determine sound pressure levels in an underground coal mine utilizing laboratory test results from a roof bolting
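
    The dissertation's own empirical prediction equation is not reproduced in the abstract. As a generic, textbook-style illustration of how a laboratory sound power level can be turned into a sound pressure level estimate in a semi-reverberant space, one might sketch the following (all numbers hypothetical, not from this work):

```python
import math

def room_constant(surface_area_m2: float, avg_absorption: float) -> float:
    """Room constant R = S*alpha / (1 - alpha)."""
    return surface_area_m2 * avg_absorption / (1.0 - avg_absorption)

def spl_from_sound_power(lw_db: float, r_m: float, q: float, room_const_m2: float) -> float:
    """Textbook semi-reverberant estimate:
    Lp = Lw + 10*log10( Q/(4*pi*r^2) + 4/R )."""
    return lw_db + 10.0 * math.log10(q / (4.0 * math.pi * r_m ** 2) + 4.0 / room_const_m2)

# Hypothetical case: Lw = 105 dB, operator at 1.5 m, directivity Q = 2,
# 400 m^2 of mine-entry surface with average absorption coefficient 0.15.
R = room_constant(400.0, 0.15)
print(round(spl_from_sound_power(105.0, 1.5, 2.0, R), 1))  # roughly 96 dB
```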

  19. Differential effects of suppressors on hazardous sound pressure levels generated by AR-15 rifles: Considerations for recreational shooters, law enforcement, and the military.

    PubMed

    Lobarinas, Edward; Scott, Ryan; Spankovich, Christopher; Le Prell, Colleen G

    2016-01-01

    Firearm discharges produce hazardous levels of impulse noise that can lead to permanent hearing loss. In the present study, we evaluated the effects of suppression, ammunition, and barrel length on AR-15 rifles. Sound levels were measured left/right of a user's head, and 1-m left of the muzzle, per MIL-STD-1474-D, under both unsuppressed and suppressed conditions. Nine commercially available AR-15 rifles and 14 suppressors were used. Suppressors significantly decreased peak dB SPL at the 1-m location and the left ear location. However, under most rifle/ammunition conditions, levels remained above 140 dB peak SPL near a user's right ear. In a subset of conditions, subsonic ammunition produced values near or below 140 dB peak SPL. Overall suppression ranged from 7-32 dB across conditions. These data indicate that (1) suppressors reduce discharge levels to 140 dB peak SPL or below in only a subset of AR-15 conditions, (2) shorter barrel length and use of muzzle brake devices can substantially increase exposure level for the user, and (3) there are significant left/right ear sound pressure differences under suppressed conditions as a function of the AR-15 direct impingement design that must be considered during sound measurements to fully evaluate overall efficacy.

  20. Equivalent threshold sound pressure levels (ETSPL) for Sennheiser HDA 280 supra-aural audiometric earphones in the frequency range 125 Hz to 8000 Hz.

    PubMed

    Poulsen, Torben; Oakley, Sebastian

    2009-05-01

    Hearing threshold sound pressure levels were measured for the Sennheiser HDA 280 audiometric earphone. Hearing thresholds were measured for 25 normal-hearing test subjects at the 11 audiometric test frequencies from 125 Hz to 8000 Hz. Sennheiser HDA 280 is a supra-aural earphone that may be seen as a substitute for the classical Telephonics TDH 39. The results are given as the equivalent threshold sound pressure level (ETSPL) measured in an acoustic coupler specified in IEC 60318-3. The results are in good agreement with an independent investigation from PTB, Braunschweig, Germany. From acoustic laboratory measurements ETSPL values are calculated for the ear simulator specified in IEC 60318-1. Fitting of earphone and coupler is discussed. The data may be used for a future update of the RETSPL standard for supra-aural audiometric earphones, ISO 389-1.

  1. Sound Absorbers

    NASA Astrophysics Data System (ADS)

    Fuchs, H. V.; Möser, M.

    Sound absorption denotes the transformation of sound energy into heat. It is, for instance, employed to design the acoustics of rooms: the noise emitted by machinery and plants must be reduced before it reaches a workplace, and auditoria such as lecture rooms or concert halls require a certain reverberation time. Such design goals are realised by installing absorbing components at the walls with well-defined absorption characteristics, which are adjusted to the corresponding demands. Sound absorbers also play an important role in acoustic enclosures (capsules), ducts and screens to prevent sound immission from noise-intensive environments into the neighbourhood.
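
    For the reverberation-time design goal mentioned above, the classical Sabine relation T60 = 0.161·V/A (V the room volume in m³, A the total absorption area in m²) gives a first estimate. A minimal sketch with hypothetical room data:

```python
def sabine_rt60(volume_m3: float, surface_absorptions: list[tuple[float, float]]) -> float:
    """Sabine estimate of reverberation time: T60 = 0.161 * V / A,
    where A is the total absorption area, sum of S_i * alpha_i."""
    a_total = sum(s * alpha for s, alpha in surface_absorptions)
    return 0.161 * volume_m3 / a_total

# Hypothetical lecture room: 600 m^3; (area m^2, absorption coefficient) per surface type.
surfaces = [(200.0, 0.05), (120.0, 0.04), (120.0, 0.30), (60.0, 0.85)]
print(round(sabine_rt60(600.0, surfaces), 2))  # about 0.95 s for these made-up values
```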

  2. Annoyance caused by the sounds of a magnetic levitation train.

    PubMed

    Vos, Joos

    2004-04-01

    In a laboratory study, the annoyance caused by the passby sounds from a magnetic levitation (maglev) train was investigated. The listeners were presented with various sound fragments. The task of the listeners was to respond after each presentation to the question: "How annoying would you find the sound in the preceding period if you were exposed to it at home on a regular basis?" The independent variables were (a) the driving speed of the maglev train (varying from 100 to 400 km/h), (b) the outdoor A-weighted sound exposure level (ASEL) of the passbys (varying from 65 to 90 dB), and (c) the simulated outdoor-to-indoor reduction in sound level (windows open or windows closed). As references to the passby sounds from the maglev train (type Transrapid 08), sounds from road traffic (passenger cars and trucks) and more conventional railway (intercity trains) were included for rating also. Four important results were obtained. Provided that the outdoor ASELs were the same, (1) the annoyance was independent of the driving speed of the maglev train, (2) the annoyance caused by the maglev train was considerably higher than that caused by the intercity train, (3) the annoyance caused by the maglev train was hardly different from that caused by road traffic, and (4) the results (1)-(3) held true both for open or closed windows. On the basis of the present results, it might be expected that the sounds are equally annoying if the ASELs of the maglev-train passbys are at least 5 dB lower than those of the intercity train passbys. Consequently, the results of the present experiment do not support application of a railway bonus to the maglev-train sounds.

  3. Annoyance caused by the sounds of a magnetic levitation train

    NASA Astrophysics Data System (ADS)

    Vos, Joos

    2004-04-01

    In a laboratory study, the annoyance caused by the passby sounds from a magnetic levitation (maglev) train was investigated. The listeners were presented with various sound fragments. The task of the listeners was to respond after each presentation to the question: ``How annoying would you find the sound in the preceding period if you were exposed to it at home on a regular basis?'' The independent variables were (a) the driving speed of the maglev train (varying from 100 to 400 km/h), (b) the outdoor A-weighted sound exposure level (ASEL) of the passbys (varying from 65 to 90 dB), and (c) the simulated outdoor-to-indoor reduction in sound level (windows open or windows closed). As references to the passby sounds from the maglev train (type Transrapid 08), sounds from road traffic (passenger cars and trucks) and more conventional railway (intercity trains) were included for rating also. Four important results were obtained. Provided that the outdoor ASELs were the same, (1) the annoyance was independent of the driving speed of the maglev train, (2) the annoyance caused by the maglev train was considerably higher than that caused by the intercity train, (3) the annoyance caused by the maglev train was hardly different from that caused by road traffic, and (4) the results (1)-(3) held true both for open or closed windows. On the basis of the present results, it might be expected that the sounds are equally annoying if the ASELs of the maglev-train passbys are at least 5 dB lower than those of the intercity train passbys. Consequently, the results of the present experiment do not support application of a railway bonus to the maglev-train sounds.

  4. Repeated Measurement of Absolute and Relative Judgments of Loudness: Clinical Relevance for Prescriptive Fitting of Aided Target Gains for soft, Comfortable, and Loud, But Ok Sound Levels.

    PubMed

    Formby, Craig; Payne, JoAnne; Yang, Xin; Wu, Delphanie; Parton, Jason M

    2017-02-01

    This study was undertaken with the purpose of streamlining clinical measures of loudness growth to facilitate and enhance prescriptive fitting of nonlinear hearing aids. Repeated measures of loudness at 500 and 3,000 Hz were obtained bilaterally at monthly intervals over a 6-month period from three groups of young adult listeners. All volunteers had normal audiometric hearing sensitivity and middle ear function, and all denied problems related to sound tolerance. Group 1 performed judgments of soft and loud, but OK for presentation of ascending sound levels. We defined these judgments operationally as absolute judgments of loudness. Group 2 initially performed loudness judgments across a continuum of seven loudness categories ranging from judgments of very soft to uncomfortably loud for presentation of ascending sound levels per the Contour Test of Loudness; we defined these judgments as relative judgments of loudness. In the same session, they then performed the absolute judgments for soft and loud, but OK sound levels. Group 3 performed the same set of loudness judgments as did group 2, but the task order was reversed such that they performed the absolute judgments initially within each test session followed by the relative judgments. The key findings from this study were as follows: (1) Within group, the absolute and relative tasks yielded clinically similar judgments for soft and for loud, but OK sound levels. These judgments were largely independent of task order, ear, frequency, or trial order within a given session. (2) Loudness judgments increased, on average, by ∼3 dB between the first and last test session, which is consistent with the commonly reported acclimatization effect reported for incremental changes in loudness discomfort levels as a consequence of chronic bilateral hearing aid use. (3) Measured and predicted comfortable judgments of loudness were in good agreement for the individual listener and for groups of listeners. These comfortable

  5. Repeated Measurement of Absolute and Relative Judgments of Loudness: Clinical Relevance for Prescriptive Fitting of Aided Target Gains for soft, Comfortable, and Loud, But Ok Sound Levels

    PubMed Central

    Formby, Craig; Payne, JoAnne; Yang, Xin; Wu, Delphanie; Parton, Jason M.

    2017-01-01

    This study was undertaken with the purpose of streamlining clinical measures of loudness growth to facilitate and enhance prescriptive fitting of nonlinear hearing aids. Repeated measures of loudness at 500 and 3,000 Hz were obtained bilaterally at monthly intervals over a 6-month period from three groups of young adult listeners. All volunteers had normal audiometric hearing sensitivity and middle ear function, and all denied problems related to sound tolerance. Group 1 performed judgments of soft and loud, but OK for presentation of ascending sound levels. We defined these judgments operationally as absolute judgments of loudness. Group 2 initially performed loudness judgments across a continuum of seven loudness categories ranging from judgments of very soft to uncomfortably loud for presentation of ascending sound levels per the Contour Test of Loudness; we defined these judgments as relative judgments of loudness. In the same session, they then performed the absolute judgments for soft and loud, but OK sound levels. Group 3 performed the same set of loudness judgments as did group 2, but the task order was reversed such that they performed the absolute judgments initially within each test session followed by the relative judgments. The key findings from this study were as follows: (1) Within group, the absolute and relative tasks yielded clinically similar judgments for soft and for loud, but OK sound levels. These judgments were largely independent of task order, ear, frequency, or trial order within a given session. (2) Loudness judgments increased, on average, by ∼3 dB between the first and last test session, which is consistent with the commonly reported acclimatization effect reported for incremental changes in loudness discomfort levels as a consequence of chronic bilateral hearing aid use. (3) Measured and predicted comfortable judgments of loudness were in good agreement for the individual listener and for groups of listeners. These comfortable

  6. More than 100 Years of Background-Level Sedimentary Metals, Nisqually River Delta, South Puget Sound, Washington

    USGS Publications Warehouse

    Takesue, Renee K.; Swarzenski, Peter W.

    2011-01-01

    The Nisqually River Delta is located about 25 km south of the Tacoma Narrows in the southern reach of Puget Sound. Delta evolution is controlled by sedimentation from the Nisqually River and erosion by strong tidal currents that may reach 0.95 m/s in the Nisqually Reach. The Nisqually River flows 116 km from the Cascade Range, including the slopes of Mount Rainier, through glacially carved valleys to Puget Sound. Extensive tidal flats on the delta consist of late-Holocene silty and sandy strata from normal river streamflow and seasonal floods and possibly from distal sediment-rich debris flows associated with volcanic and seismic events. In the early 1900s, dikes and levees were constructed around Nisqually Delta salt marshes, and the reclaimed land was used for agriculture and pasture. In 1974, U.S. Fish and Wildlife Service established the Nisqually National Wildlife Refuge on the reclaimed land to protect migratory birds; its creation has prevented further human alteration of the Delta and estuary. In October 2009, original dikes and levees were removed to restore tidal exchange to almost 3 km2 of man-made freshwater marsh on the Nisqually Delta.

  7. Abdominal sounds

    MedlinePlus

    ... intestines, or strangulation of the bowel and death (necrosis) of the bowel tissue. Very high-pitched bowel ... missing bowel sounds may be caused by: Blocked blood vessels prevent the intestines from getting proper blood flow. ...

  8. Effect of sea level changes on the Quaternary emergent reef limestone near Sharm Abhur as revealed from geoelectrical soundings

    NASA Astrophysics Data System (ADS)

    El-Abd, Yakout; Awad, Morad

    The present paper deals with the study of shallow geological formations existing near Sharm Abhur at the Red Sea coast of Saudi Arabia. Fourteen vertical resistivity soundings (VES) have been conducted at different locations on both sides of the Sharm. Analysis of the data, together with the information obtained from three boreholes drilled in the area, was used to construct pseudo-, as well as true, resistivity sections. These were taken along and across the Sharm trend. The results show that the coralline limestone formation in the coastal plain near Sharm Abhur is subject to subsurface erosion and sea water invasion resulting in different layers of secondary product or diagenetically altered coralline limestone. Seepage of saline water through the Sharm basin was recognized. The basal layer feature along the sections is conformable with the general slope of the Sharm bottom.

  9. Integrated Advanced Microwave Sounding Unit-A (AMSU-A). Engineering Test Report: AMSU-A1 METSAT Instrument (S/N 105) Qualification, Level Vibration Tests of December 1998 (S/O 605445, OC-419)

    NASA Technical Reports Server (NTRS)

    Heffner, R. J.

    1998-01-01

    This is the Engineering Test Report, AMSU-A1 METSAT Instrument (S/N 105) Qualification Level Vibration Tests of December 1998 (S/O 605445, OC-419), for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A).

  10. Sound Guard

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Lubrication technology originally developed for a series of NASA satellites has produced a commercial product for protecting the sound fidelity of phonograph records. Called Sound Guard, the preservative is a spray-on fluid that deposits a microscopically thin protective coating which reduces friction and prevents the hard diamond stylus from wearing away the softer vinyl material of the disc. It is marketed by the Consumer Products Division of Ball Corporation, Muncie, Indiana. The lubricant technology on which Sound Guard is based originated with NASA's Orbiting Solar Observatory (OSO), an Earth-orbiting satellite designed and built by Ball Brothers Research Corporation, Boulder, Colorado, also a division of Ball Corporation. Ball Brothers engineers found a problem early in the OSO program: known lubricants were unsuitable for use on satellite moving parts that would be exposed to the vacuum of space for several months. So the company conducted research on the properties of materials needed for long life in space and developed new lubricants. They worked successfully on seven OSO flights and attracted considerable attention among other aerospace contractors. Ball Brothers now supplies its "Vac Kote" lubricants and coatings to both aerospace and non-aerospace industries and the company has produced several hundred variations of the original technology. Ball Corporation expanded its product line to include consumer products, of which Sound Guard is one of the most recent. In addition to protecting record grooves, Sound Guard's anti-static quality also retards particle accumulation on the stylus. During a comparison study by a leading U.S. electronics laboratory, a record not treated by Sound Guard had to be cleaned after 50 plays and the stylus had collected a considerable number of small vinyl particles. The Sound Guard-treated disc was still clean after 100 plays, as was its stylus.

  11. Effects of nocturnal railway noise on sleep fragmentation in young and middle-aged subjects as a function of type of train and sound level.

    PubMed

    Saremi, Mahnaz; Grenèche, Jérôme; Bonnefond, Anne; Rohmer, Odile; Eschenlauer, Arnaud; Tassi, Patricia

    2008-12-01

    Due to the indisputable effects of noise on sleep structure, especially in terms of sleep fragmentation, the expected development of railway transportation in the next few years might represent a potential risk factor for people living alongside the rail tracks. The aim of this study was to compare the effects of different types of train (freight, automotive, passenger) on arousal from sleep and to determine any differential impact as a function of sound level and age. Twenty young (16 women, 4 men; 25.8 years+/-2.6) and 18 middle-aged (15 women, 3 men; 52.2 years+/-2.5) healthy subjects participated in three whole-night polysomnographic recordings including one control night (35 dBA), and two noisy nights with equivalent noise levels of 40 or 50 dB(A), respectively. Arousal responsiveness increased with sound level. It was the highest in S2 and the lowest in REM sleep. Micro-arousals (3-10 s) occurred at a rate of 25-30%, irrespective of the type of train. Awakenings (>10 s) were produced more frequently by freight trains than by automotive and passenger trains. Normal age-related changes in sleep were observed, but they were not aggravated by railway noise, thus questioning whether older persons are less sensitive to noise during sleep. This evidence led to the conclusion that microscopic detection of sleep fragmentation may provide advantageous information on sleep disturbances caused by environmental noises.

  12. Practical ranges of loudness levels of various types of environmental noise, including traffic noise, aircraft noise, and industrial noise.

    PubMed

    Salomons, Erik M; Janssen, Sabine A

    2011-06-01

    In environmental noise control one commonly employs the A-weighted sound level as an approximate measure of the effect of noise on people. A measure that is more closely related to direct human perception of noise is the loudness level. At constant A-weighted sound level, the loudness level of a noise signal varies considerably with the shape of the frequency spectrum of the noise signal. In particular the bandwidth of the spectrum has a large effect on the loudness level, due to the effect of critical bands in the human hearing system. The low-frequency content of the spectrum also has an effect on the loudness level. In this note the relation between loudness level and A-weighted sound level is analyzed for various environmental noise spectra, including spectra of traffic noise, aircraft noise, and industrial noise. From loudness levels calculated for these environmental noise spectra, diagrams are constructed that show the relation between loudness level, A-weighted sound level, and shape of the spectrum. The diagrams show that the upper limits of the loudness level for broadband environmental noise spectra are about 20 to 40 phon higher than the lower limits for narrowband spectra, which correspond to the loudness levels of pure tones. The diagrams are useful for assessing limitations and potential improvements of environmental noise control methods and policy based on A-weighted sound levels.
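
    The note above contrasts loudness levels with A-weighted levels. As a minimal illustration of the simpler half of that comparison, the sketch below (Python, assuming the standard octave-band A-weighting corrections) combines unweighted octave-band levels into a single A-weighted level; the example spectrum is purely illustrative, and computing the loudness level in phon (e.g., per a loudness standard such as ISO 532) is a considerably more involved procedure that is not shown here.

      import math

      # Standard A-weighting corrections (dB) at octave-band centre frequencies.
      A_WEIGHT = {31.5: -39.4, 63: -26.2, 125: -16.1, 250: -8.6, 500: -3.2,
                  1000: 0.0, 2000: 1.2, 4000: 1.0, 8000: -1.1}

      def a_weighted_level(band_levels_db):
          """Combine unweighted octave-band levels into one A-weighted level (dBA)."""
          total = sum(10 ** ((band_levels_db[f] + A_WEIGHT[f]) / 10.0)
                      for f in band_levels_db)
          return 10.0 * math.log10(total)

      # Hypothetical broadband spectrum; the numbers are illustrative only.
      spectrum = {63: 70, 125: 68, 250: 65, 500: 63, 1000: 60, 2000: 57, 4000: 52}
      print(round(a_weighted_level(spectrum), 1), "dBA")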

  13. The effect of speaking rate on serial-order sound-level errors in normal healthy controls and persons with aphasia.

    PubMed

    Fossett, Tepanta R D; McNeil, Malcolm R; Pratt, Sheila R; Tompkins, Connie A; Shuster, Linda I

    Although many speech errors can be generated at either a linguistic or motoric level of production, phonetically well-formed sound-level serial-order errors are generally assumed to result from disruption of phonologic encoding (PE) processes. An influential model of PE (Dell, 1986; Dell, Burger & Svec, 1997) predicts that speaking rate should affect the relative proportion of these serial-order sound errors (anticipations, perseverations, exchanges). These predictions have been extended to, and have special relevance for, persons with aphasia (PWA) because of the increased frequency with which speech errors occur and because their localization within the functional linguistic architecture may help in diagnosis and treatment. Supporting evidence regarding the effect of speaking rate on phonological encoding has been provided by studies using young normal language (NL) speakers and computer simulations. Limited data exist for older NL users and no group data exist for PWA. This study tested the phonologic encoding properties of Dell's model of speech production (Dell, 1986; Dell et al., 1997), which predicts that increasing speaking rate affects the relative proportion of serial-order sound errors (i.e., anticipations, perseverations, and exchanges). The effects of speech rate on the error ratios of anticipation/exchange (AE), anticipation/perseveration (AP) and vocal reaction time (VRT) were examined in 16 normal healthy controls (NHC) and 16 PWA without concomitant motor speech disorders. The participants were recorded performing a phonologically challenging (tongue twister) speech production task at their typical and two faster speaking rates. A significant effect of increased rate was obtained for the AP but not the AE ratio. Significant effects of group and rate were obtained for VRT. Although the significant effect of rate for the AP ratio provided evidence that changes in speaking rate did affect PE, the results failed to support the model-derived predictions

  14. Sound Solutions

    ERIC Educational Resources Information Center

    Starkman, Neal

    2007-01-01

    Poor classroom acoustics are impairing students' hearing and their ability to learn. However, technology has come up with a solution: tools that focus voices in a way that minimizes intrusive ambient noise and gets to the intended receiver--not merely amplifying the sound, but also clarifying and directing it. One provider of classroom audio…

  15. Soft-talker: a sound level monitor for the hard-of-hearing using an improved tactile transducer.

    PubMed

    Walker, J R; Fenn, G; Smith, B Z

    1987-04-01

    We describe a small wearable device which enables deaf people to monitor the volume of their voices; it consists of a microphone, amplifier, signal rectifier, smoothing and a level detector connected to a wrist-worn vibrator, and provides vibrotactile feedback of voice level.

  16. Measuring Sound-Processor Threshold Levels for Pediatric Cochlear Implant Recipients Using Conditioned Play Audiometry via Telepractice

    ERIC Educational Resources Information Center

    Goehring, Jenny L.; Hughes, Michelle L.

    2017-01-01

    Purpose: This study evaluated the use of telepractice for measuring cochlear implant (CI) behavioral threshold (T) levels in children using conditioned play audiometry (CPA). The goals were to determine whether (a) T levels measured via telepractice were not significantly different from those obtained in person, (b) response probability differed…

  17. Sounding the warning bells: the need for a systems approach to understanding behaviour at rail level crossings.

    PubMed

    Read, Gemma J M; Salmon, Paul M; Lenné, Michael G

    2013-09-01

    Collisions at rail level crossings are an international safety concern and have been the subject of considerable research effort. Modern human factors practice advocates a systems approach to investigating safety issues in complex systems. This paper describes the results of a structured review of the level crossing literature to determine the extent to which a systems approach has been applied. The measures used to determine if previous research was underpinned by a systems approach were: the type of analysis method utilised, the number of component relationships considered, the number of user groups considered, the number of system levels considered and the type of model described in the research. None of the research reviewed was found to be consistent with a systems approach. It is recommended that further research utilise a systems approach to the study of the level crossing system to enable the identification of effective design improvements. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  18. A comparison with theory of peak to peak sound level for a model helicopter rotor generating blade slap at low tip speeds

    NASA Technical Reports Server (NTRS)

    Fontana, R. R.; Hubbard, J. E., Jr.

    1983-01-01

    Mini-tuft and smoke flow visualization techniques have been developed for the investigation of model helicopter rotor blade vortex interaction noise at low tip speeds. These techniques allow the parameters required for calculation of the blade vortex interaction noise using the Widnall/Wolf model to be determined. The measured acoustics are compared with the predicted acoustics for each test condition. Under the conditions tested it is determined that the dominating acoustic pulse results from the interaction of the blade with a vortex 1-1/4 revolutions old at an interaction angle of less than 8 deg. The Widnall/Wolf model predicts the peak sound pressure level within 3 dB for blade vortex separation distances greater than 1 semichord, but it generally overpredicts the peak SPL by more than 10 dB for blade vortex separation distances of less than 1/4 semichord.

  19. Sound Standards for Schools "Unsound."

    ERIC Educational Resources Information Center

    Davis, Don

    2002-01-01

    Criticizes new classroom sound standard proposed by the American National Standards Institute that sets maximum background sound level at 35 decibels (described as "a whisper at 2 meters"). Argues that new standard is too costly for schools to implement, is not recommended by the medical community, and cannot be achieved by construction…

  20. The Effects on Learning from a Motion Picture Film of Selective Changes in Sound Track Loudness Level. Final Report.

    ERIC Educational Resources Information Center

    MOAKLEY, FRANCIS X.

    Effects of periodic variations in an instructional film's normal loudness level for relevant and irrelevant film sequences were measured by a multiple choice test. Rigorous pilot studies, random grouping of seventh graders for treatments, and ratings of relevant and irrelevant portions of the film by an unspecified number of judges preceded the…

  1. Measuring Sound-Processor Threshold Levels for Pediatric Cochlear Implant Recipients Using Conditioned Play Audiometry via Telepractice

    PubMed Central

    Goehring, Jenny L.

    2017-01-01

    Purpose This study evaluated the use of telepractice for measuring cochlear implant (CI) behavioral threshold (T) levels in children using conditioned play audiometry (CPA). The goals were to determine whether (a) T levels measured via telepractice were not significantly different from those obtained in person, (b) response probability differed between remote and in-person conditions, and (c) the remote visit required more time than the in-person condition. Method An ABBA design (A, in-person; B, remote) was split across 2 visits. Nineteen children aged 2.6–7.1 years participated. T levels were measured using CPA for 3 electrodes per session. A “hit” rate was calculated to determine whether the likelihood of obtaining responses differed between conditions. Test time was compared across conditions. A questionnaire was administered to assess parent/caregiver attitudes about telepractice. Results Results indicated no significant difference in T levels between conditions. Hit rates were not significantly different between in-person and remote conditions (98% vs. 97%, respectively). Test time was similar between conditions. Questionnaire results revealed that 100% of caregivers would use telepractice for CI appointments either some or all of the time. Conclusion Telepractice is a viable option for routine pediatric programming appointments for children using CPA to set behavioral thresholds. PMID:28257529

  2. Interpolated Sounding and Gridded Sounding Value-Added Products

    SciTech Connect

    Toto, T.; Jensen, M.

    Standard Atmospheric Radiation Measurement (ARM) Climate Research Facility sounding files provide atmospheric state data in one dimension of increasing time and height per sonde launch. Many applications require a quick estimate of the atmospheric state at higher time resolution. The INTERPOLATEDSONDE (i.e., Interpolated Sounding) Value-Added Product (VAP) transforms sounding data into continuous daily files on a fixed time-height grid, at 1-minute time resolution, on 332 levels, from the surface up to a limit of approximately 40 km. The grid extends that high so the full height of soundings can be captured; however, most soundings terminate at an altitude between 25 and 30 km, above which no data is provided. Between soundings, the VAP linearly interpolates atmospheric state variables in time for each height level. In addition, INTERPOLATEDSONDE provides relative humidity scaled to microwave radiometer (MWR) observations. The INTERPOLATEDSONDE VAP, a continuous time-height grid of relative humidity-corrected sounding data, is intended to provide input to higher-order products, such as the Merged Soundings (MERGESONDE; Troyan 2012) VAP, which extends INTERPOLATEDSONDE by incorporating model data. The INTERPOLATEDSONDE VAP also is used to correct gaseous attenuation of radar reflectivity in products such as the KAZRCOR VAP.
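
    As a rough sketch of the interpolation step described above (not the ARM production code, and omitting the microwave-radiometer scaling of relative humidity), the following Python fragment shows how one atmospheric state variable, already gridded onto fixed height levels, could be linearly interpolated in time onto a regular output grid.

      import numpy as np

      def interpolate_soundings(sonde_times, sonde_profiles, grid_times):
          """Linearly interpolate sounding profiles in time at each height level.

          sonde_times    : 1-D array of launch times (e.g., seconds since midnight)
          sonde_profiles : 2-D array, shape (n_sondes, n_levels), one variable
                           (e.g., temperature) already gridded onto fixed heights
          grid_times     : 1-D array of output times (e.g., 1-minute resolution)
          """
          n_levels = sonde_profiles.shape[1]
          out = np.empty((grid_times.size, n_levels))
          for k in range(n_levels):              # interpolate level by level
              out[:, k] = np.interp(grid_times, sonde_times, sonde_profiles[:, k])
          return out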

  3. Monaural sound localization revisited.

    PubMed

    Wightman, F L; Kistler, D J

    1997-02-01

    Research reported during the past few decades has revealed the importance for human sound localization of the so-called "monaural spectral cues." These cues are the result of the direction-dependent filtering of incoming sound waves accomplished by the pinnae. One point of view about how these cues are extracted places great emphasis on the spectrum of the received sound at each ear individually. This leads to the suggestion that an effective way of studying the influence of these cues is to measure the ability of listeners to localize sounds when one of their ears is plugged. Numerous studies have appeared using this monaural localization paradigm. Three experiments are described here which are intended to clarify the results of the previous monaural localization studies and provide new data on how monaural spectral cues might be processed. Virtual sound sources are used in the experiments in order to manipulate and control the stimuli independently at the two ears. Two of the experiments deal with the consequences of the incomplete monauralization that may have contaminated previous work. The results suggest that even very low sound levels in the occluded ear provide access to interaural localization cues. The presence of these cues complicates the interpretation of the results of nominally monaural localization studies. The third experiment concerns the role of prior knowledge of the source spectrum, which is required if monaural cues are to be useful. The results of this last experiment demonstrate that extraction of monaural spectral cues can be severely disrupted by trial-to-trial fluctuations in the source spectrum. The general conclusion of the experiments is that, while monaural spectral cues are important, the monaural localization paradigm may not be the most appropriate way to study their role.

  4. Monaural Sound Localization Revisited

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Kistler, Doris J.

    1997-01-01

    Research reported during the past few decades has revealed the importance for human sound localization of the so-called 'monaural spectral cues.' These cues are the result of the direction-dependent filtering of incoming sound waves accomplished by the pinnae. One point of view about how these cues are extracted places great emphasis on the spectrum of the received sound at each ear individually. This leads to the suggestion that an effective way of studying the influence of these cues is to measure the ability of listeners to localize sounds when one of their ears is plugged. Numerous studies have appeared using this monaural localization paradigm. Three experiments are described here which are intended to clarify the results of the previous monaural localization studies and provide new data on how monaural spectral cues might be processed. Virtual sound sources are used in the experiments in order to manipulate and control the stimuli independently at the two ears. Two of the experiments deal with the consequences of the incomplete monauralization that may have contaminated previous work. The results suggest that even very low sound levels in the occluded ear provide access to interaural localization cues. The presence of these cues complicates the interpretation of the results of nominally monaural localization studies. The third experiment concerns the role of prior knowledge of the source spectrum, which is required if monaural cues are to be useful. The results of this last experiment demonstrate that extraction of monaural spectral cues can be severely disrupted by trial-to-trial fluctuations in the source spectrum. The general conclusion of the experiments is that, while monaural spectral cues are important, the monaural localization paradigm may not be the most appropriate way to study their role.

  5. Radiometric sounding system

    SciTech Connect

    Whiteman, C.D.; Anderson, G.A.; Alzheimer, J.M.

    1995-04-01

    Vertical profiles of solar and terrestrial radiative fluxes are key research needs for global climate change research. These fluxes are expected to change as radiatively active trace gases are emitted to the earth's atmosphere as a consequence of energy production and industrial and other human activities. Models suggest that changes in the concentration of such gases will lead to radiative flux divergences that will produce global warming of the earth's atmosphere. Direct measurements of the vertical variation of solar and terrestrial radiative fluxes that lead to these flux divergences have been largely unavailable because of the expense of making such measurements from airplanes. These measurements are needed to improve existing atmospheric radiative transfer models, especially under the cloudy conditions where the models have not been adequately tested. A tethered-balloon-borne Radiometric Sounding System has been developed at Pacific Northwest Laboratory to provide an inexpensive means of making routine vertical soundings of radiative fluxes in the earth's atmospheric boundary layer to altitudes up to 1500 m above ground level. Such vertical soundings would supplement measurements being made from aircraft and towers. The key technical challenge in the design of the Radiometric Sounding System is to develop a means of keeping the radiometers horizontal while the balloon ascends and descends in a turbulent atmospheric environment. This problem has been addressed by stabilizing a triangular radiometer-carrying platform that is carried on the tetherline of a balloon sounding system. The platform, carried 30 m or more below the balloon to reduce the balloon's effect on the radiometric measurements, is leveled by two automatic control loops that activate motors, gears and pulleys when the platform is off-level. The sensitivity of the automatic control loops to oscillatory motions of various frequencies and amplitudes can be adjusted using filters.

  6. Validation of Sea levels from coastal altimetry waveform retracking expert system: a case study around the Prince William Sound in Alaska

    NASA Astrophysics Data System (ADS)

    Idris, N. H.; Deng, X.; Idris, N. H.

    2017-05-01

    This paper presents the validation of the Coastal Altimetry Waveform Retracking Expert System (CAWRES), a novel method to optimize Jason satellite altimetric sea levels from multiple retracking solutions. The validation is conducted over the region of Prince William Sound in Alaska, USA, where altimetric waveforms are perturbed by emerged land and sea states. Validation is performed in two ways. First, comparison with existing retrackers (i.e. MLE4 and Ice) from the Sensor Geophysical Data Records (SGDR), and second, comparison with in-situ tide gauge data. From the first validation assessment, in general, CAWRES outperforms the MLE4 and Ice retrackers. In 4 out of 6 cases, the improvement percentage is higher (and the standard deviation of differences lower) than for the SGDR retrackers. CAWRES also presents the best performance in producing valid observations, and has the lowest noise when compared to the SGDR retrackers. From the second assessment with the tide gauge, CAWRES retracked sea level anomalies (SLAs) are consistent with those of the tide gauge. The accuracy of CAWRES retracked SLAs is slightly better than that of the MLE4. However, the performance of the Ice retracker is better than those of CAWRES and MLE4, suggesting the empirically based retracker is more effective. The results demonstrate that CAWRES has the potential to be applied to coastal regions elsewhere.

  7. Average ambulatory measures of sound pressure level, fundamental frequency, and vocal dose do not differ between adult females with phonotraumatic lesions and matched control subjects

    PubMed Central

    Van Stan, Jarrad H.; Mehta, Daryush D.; Zeitels, Steven M.; Burns, James A.; Barbu, Anca M.; Hillman, Robert E.

    2015-01-01

    Objectives Clinical management of phonotraumatic vocal fold lesions (nodules, polyps) is based largely on assumptions that abnormalities in habitual levels of sound pressure level (SPL), fundamental frequency (f0), and/or amount of voice use play a major role in lesion development and chronic persistence. This study used ambulatory voice monitoring to evaluate if significant differences in voice use exist between patients with phonotraumatic lesions and normal matched controls. Methods Subjects were 70 adult females: 35 with vocal fold nodules or polyps and 35 age-, sex-, and occupation-matched normal individuals. Weeklong summary statistics of voice use were computed from anterior neck surface acceleration recorded using a smartphone-based ambulatory voice monitor. Results Paired t-tests and Kolmogorov-Smirnov tests resulted in no statistically significant differences between patients and matched controls regarding average measures of SPL, f0, vocal dose measures, and voicing/voice rest periods. Paired t-tests comparing f0 variability between the groups resulted in statistically significant differences with moderate effect sizes. Conclusions Individuals with phonotraumatic lesions did not exhibit differences in average ambulatory measures of vocal behavior when compared with matched controls. More refined characterizations of underlying phonatory mechanisms and other potentially contributing causes are warranted to better understand risk factors associated with phonotraumatic lesions. PMID:26024911

  8. Active localization of virtual sounds

    NASA Technical Reports Server (NTRS)

    Loomis, Jack M.; Hebert, C.; Cicinelli, J. G.

    1991-01-01

    We describe a virtual sound display built around a 12 MHz 80286 microcomputer and special purpose analog hardware. The display implements most of the primary cues for sound localization in the ear-level plane. Static information about direction is conveyed by interaural time differences and, for frequencies above 1800 Hz, by head sound shadow (interaural intensity differences) and pinna sound shadow. Static information about distance is conveyed by variation in sound pressure (first power law) for all frequencies, by additional attenuation in the higher frequencies (simulating atmospheric absorption), and by the proportion of direct to reverberant sound. When the user actively locomotes, the changing angular position of the source occasioned by head rotations provides further information about direction and the changing angular velocity produced by head translations (motion parallax) provides further information about distance. Judging both from informal observations by users and from objective data obtained in an experiment on homing to virtual and real sounds, we conclude that simple displays such as this are effective in creating the perception of external sounds to which subjects can home with accuracy and ease.
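
    A minimal sketch of two of the static cues mentioned above: an interaural time difference from the Woodworth spherical-head approximation and a first-power-law (1/r) distance attenuation with an optional extra high-frequency loss standing in for atmospheric absorption. The head radius and speed of sound are assumed constants; the original display's analog hardware, pinna filtering, and reverberation cues are not modeled here.

      import math

      HEAD_RADIUS_M = 0.0875      # assumed average head radius
      SPEED_OF_SOUND = 343.0      # m/s in air at roughly 20 degrees C

      def itd_seconds(azimuth_deg):
          """Woodworth-style interaural time difference for a distant source.
          azimuth_deg is the angle from straight ahead, 0..90 degrees."""
          theta = math.radians(azimuth_deg)
          return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

      def distance_gain_db(distance_m, ref_m=1.0, hf_absorption_db_per_m=0.0):
          """Level change relative to ref_m: 1/r spreading plus an optional extra
          per-metre attenuation standing in for high-frequency absorption."""
          return (-20.0 * math.log10(distance_m / ref_m)
                  - hf_absorption_db_per_m * (distance_m - ref_m))

      print(round(itd_seconds(90) * 1e6), "microseconds at 90 degrees")    # about 660
      print(round(distance_gain_db(4.0), 1), "dB at 4 m relative to 1 m")  # about -12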

  9. Determination of sound types and source levels of airborne vocalizations by California sea lions, Zalophus californianus, in rehabilitation at the Marine Mammal Center in Sausalito, California

    NASA Astrophysics Data System (ADS)

    Schwalm, Afton Leigh

    California sea lions (Zalophus californianus) are a highly popular and easily recognized marine mammal in zoos, aquariums, and circuses, and are often seen by ocean visitors. They are highly vocal and gregarious on land. Surprisingly, little research has been performed on the vocalization types, source levels, acoustic properties, and functions of airborne sounds used by California sea lions. This research on airborne vocalizations of California sea lions will advance the understanding of this aspect of California sea lion communication, as well as examine the relationship between health condition and acoustic behavior. Using a Phillips digital recorder with attached microphone and a calibrated RadioShack sound pressure level meter, acoustical data were recorded opportunistically on California sea lions during rehabilitation at The Marine Mammal Center in Sausalito, CA. Vocalizations were analyzed using frequency, time, and amplitude variables with Raven Pro: Interactive Sound Analysis Software Version 1.4 (The Cornell Lab of Ornithology, Ithaca, NY). Five frequency, three time, and four amplitude variables were analyzed for each vocalization. Differences in frequency, time, and amplitude variables were not significant by sex. The older California sea lion group produced vocalizations that were significantly lower in four frequency variables, significantly longer in two time variables, significantly higher in calibrated maximum and minimum amplitude variables, and significantly lower in frequency at maximum and minimum amplitude compared with pups. Six call types were identified: bark, goat, growl/grumble, bark/grumble, bark/growl, and grumble/moan. The growl/grumble call was higher in dominant beginning, ending, and minimum frequency, as well as in the frequency at maximum amplitude, compared with the bark, goat, and bark/grumble calls in the first versus last vocalization sample. The goat call was significantly higher in first harmonic interval than any other call type

  10. Method of sound synthesis

    DOEpatents

    Miner, Nadine E.; Caudell, Thomas P.

    2004-06-08

    A sound synthesis method for modeling and synthesizing dynamic, parameterized sounds. The sound synthesis method yields perceptually convincing sounds and provides flexibility through model parameterization. By manipulating model parameters, a variety of related, but perceptually different sounds can be generated. The result is subtle changes in sounds, in addition to synthesis of a variety of sounds, all from a small set of models. The sound models can change dynamically according to changes in the simulation environment. The method is applicable to both stochastic (impulse-based) and non-stochastic (pitched) sounds.

  11. Prospective cohort study on noise levels in a pediatric cardiac intensive care unit.

    PubMed

    Garcia Guerra, Gonzalo; Joffe, Ari R; Sheppard, Cathy; Pugh, Jodie; Moez, Elham Khodayari; Dinu, Irina A; Jou, Hsing; Hartling, Lisa; Vohra, Sunita

    2018-04-01

    To describe noise levels in a pediatric cardiac intensive care unit, and to determine the relationship between sound levels and patient sedation requirements. Prospective observational study at a pediatric cardiac intensive care unit (PCICU). Sound levels were measured continuously in slow A-weighted decibels, dB(A), with a SoundEarPro® sound level meter during a 4-week period. Sedation requirement was assessed using the number of intermittent (PRN) doses given per hour. Analysis was conducted with autoregressive moving average models and the Granger test for causality. Thirty-nine children were included in the study. The average (SD) sound level in the open area was 59.4 (2.5) dB(A) with a statistically significant but clinically unimportant difference between day/night hours (60.1 vs. 58.6; p-value < 0.001). There was no significant difference between sound levels in the open area/single room (59.4 vs. 60.8, p-value = 0.108). Peak noise levels were > 90 dB. There was a significant association between average (p-value = 0.030) and peak sound levels (p-value = 0.006), and the number of sedation PRNs. Sound levels were above the recommended values with no differences between day/night or open area/single room. High sound levels were significantly associated with sedation requirements. Copyright © 2017 Elsevier Inc. All rights reserved.
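
    For readers unfamiliar with how periodic dB(A) samples are reduced to a single figure, the sketch below shows the standard energy-average (Leq) calculation and a crude threshold-based peak count; the 90 dB threshold and the peak definition are assumptions for illustration, since the abstract does not state the study's exact peak criterion.

      import math

      def leq_db(samples_dba):
          """Equivalent continuous sound level from equally spaced dB(A) samples."""
          mean_energy = sum(10 ** (level / 10.0) for level in samples_dba) / len(samples_dba)
          return 10.0 * math.log10(mean_energy)

      def peaks_per_hour(samples_dba, sample_interval_s, threshold_db=90.0):
          """Rate of samples at or above a threshold, scaled to one hour."""
          exceed = sum(1 for level in samples_dba if level >= threshold_db)
          hours = len(samples_dba) * sample_interval_s / 3600.0
          return exceed / hours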

  12. Integrated Advanced Microwave Sounding Unit-A (AMSU-A). Engineering Test Report: AMSU-A2 METSAT Instrument (S/N 108) Acceptance Level Vibration Tests of Dec 1999/Jan 2000 (S/O 784077, OC-454)

    NASA Technical Reports Server (NTRS)

    Heffner, R.

    2000-01-01

    This is the Engineering Test Report, AMSU-A2 METSAT Instrument (S/N 108) Acceptance Level Vibration Test of Dec 1999/Jan 2000 (S/O 784077, OC-454), for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A).

  13. [Respiratory sounds].

    PubMed

    Marquet, P

    1995-01-01

    After having invented the stethoscope, Laennec published his treatise on auscultation in 1819, describing the acoustic events generated by ventilation and linking them with anatomopathological findings. The weak points of his semiology lay in its subjective and interpretative character, expressed by an imprecise and picturesque nomenclature. Technical studies of breath sounds began in the middle of the twentieth century, and this enabled the American Thoracic Society to elaborate a new classification of adventitious noises based on a few physical characteristics. This terminology replaced that of Laennec or his translators (except in France). The waveforms of the different normal and adventitious noises have been well described. However, only the study of the time evolution of their tone (frequency-amplitude-time relationship) will enable a complete analysis of these phenomena. This approach has been undertaken by a few teams but much remains to be done, in particular in relation to discontinuous noises (crackles). Technology development raises hope for the design, in the near future, of automatic processes for respiratory noise detection and classification. Systematic research into the production mechanisms and sites of these noises has progressed equally. It should, in time, reinforce their semiological value and give auscultation, whether by stethoscope or instrument, an increased diagnostic power and the status of a respiratory function test.

  14. Sound field separation with sound pressure and particle velocity measurements.

    PubMed

    Fernandez-Grande, Efren; Jacobsen, Finn; Leclère, Quentin

    2012-12-01

    In conventional near-field acoustic holography (NAH) it is not possible to distinguish between sound from the two sides of the array, thus, it is a requirement that all the sources are confined to only one side and radiate into a free field. When this requirement cannot be fulfilled, sound field separation techniques make it possible to distinguish between outgoing and incoming waves from the two sides, and thus NAH can be applied. In this paper, a separation method based on the measurement of the particle velocity in two layers and another method based on the measurement of the pressure and the velocity in a single layer are proposed. The two methods use an equivalent source formulation with separate transfer matrices for the outgoing and incoming waves, so that the sound from the two sides of the array can be modeled independently. A weighting scheme is proposed to account for the distance between the equivalent sources and measurement surfaces and for the difference in magnitude between pressure and velocity. Experimental and numerical studies have been conducted to examine the methods. The double layer velocity method seems to be more robust to noise and flanking sound than the combined pressure-velocity method, although it requires an additional measurement surface. On the whole, the separation methods can be useful when the disturbance of the incoming field is significant. Otherwise the direct reconstruction is more accurate and straightforward.
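
    The following is a minimal sketch, under assumed notation, of the kind of equivalent-source separation step the paper describes: transfer matrices for outgoing and incoming waves are stacked, per-row weights account for the source-to-array distances and the magnitude difference between pressure and velocity, and the source strengths are found by weighted least squares. How the matrices and weights are built, and the authors' exact weighting scheme, are not reproduced here.

      import numpy as np

      def separate_fields(H_out, H_in, measurements, weights):
          """Split a measured field into outgoing and incoming contributions.

          H_out, H_in  : transfer matrices from outgoing/incoming equivalent
                         sources to the measurement points (pressure, velocity)
          measurements : stacked measured quantities at those points
          weights      : per-row weights (distance and pressure/velocity scaling)
          """
          H = np.hstack([H_out, H_in])
          W = np.diag(weights)
          # Weighted least-squares estimate of the equivalent source strengths
          q, *_ = np.linalg.lstsq(W @ H, W @ measurements, rcond=None)
          n_out = H_out.shape[1]
          return H_out @ q[:n_out], H_in @ q[n_out:]   # outgoing, incoming fields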

  15. Comparisons of auditory brainstem response and sound level tolerance in tinnitus ears and non-tinnitus ears in unilateral tinnitus patients with normal audiograms.

    PubMed

    Shim, Hyun Joon; An, Yong-Hwi; Kim, Dong Hyun; Yoon, Ji Eun; Yoon, Ji Hyang

    2017-01-01

    Recently, "hidden hearing loss" with cochlear synaptopathy has been suggested as a potential pathophysiology of tinnitus in individuals with a normal hearing threshold. Several studies have demonstrated that subjects with tinnitus and normal audiograms show significantly reduced auditory brainstem response (ABR) wave I amplitudes compared with control subjects, but normal wave V amplitudes, suggesting increased central auditory gain. We aimed to reconfirm the "hidden hearing loss" theory through a within-subject comparison of wave I and wave V amplitudes and uncomfortable loudness level (UCL), which might be decreased with increased central gain, in tinnitus ears (TEs) and non-tinnitus ears (NTEs). Human subjects included 43 unilateral tinnitus patients (19 males, 24 females) with normal and symmetric hearing thresholds and 18 control subjects with normal audiograms. The amplitudes of wave I and V from the peak to the following trough were measured twice at 90 dB nHL and we separately assessed UCLs at 500 Hz and 3000 Hz pure tones in each TE and NTE. The within-subject comparison between TEs and NTEs showed no significant differences in wave I and wave V amplitude, or wave V/I ratio in both the male and female groups. Individual data revealed increased V/I amplitude ratios > mean + 2 SD in 3 TEs, but not in any control ears. We found no significant differences in UCL at 500 Hz or 3000 Hz between the TEs and NTEs, but the UCLs of both TEs and NTEs were lower than those of the control ears. Our ABR data do not represent meaningful evidence supporting the hypothesis of cochlear synaptopathy with increased central gain in tinnitus subjects with normal audiograms. However, reduced sound level tolerance in both TEs and NTEs might reflect increased central gain consequent on hidden synaptopathy that was subsequently balanced between the ears by lateral olivocochlear efferents.

  16. Comparisons of auditory brainstem response and sound level tolerance in tinnitus ears and non-tinnitus ears in unilateral tinnitus patients with normal audiograms

    PubMed Central

    An, Yong-Hwi; Kim, Dong Hyun; Yoon, Ji Eun; Yoon, Ji Hyang

    2017-01-01

    Objective Recently, “hidden hearing loss” with cochlear synaptopathy has been suggested as a potential pathophysiology of tinnitus in individuals with a normal hearing threshold. Several studies have demonstrated that subjects with tinnitus and normal audiograms show significantly reduced auditory brainstem response (ABR) wave I amplitudes compared with control subjects, but normal wave V amplitudes, suggesting increased central auditory gain. We aimed to reconfirm the “hidden hearing loss” theory through a within-subject comparison of wave I and wave V amplitudes and uncomfortable loudness level (UCL), which might be decreased with increased central gain, in tinnitus ears (TEs) and non-tinnitus ears (NTEs). Subjects and methods Human subjects included 43 unilateral tinnitus patients (19 males, 24 females) with normal and symmetric hearing thresholds and 18 control subjects with normal audiograms. The amplitudes of wave I and V from the peak to the following trough were measured twice at 90 dB nHL and we separately assessed UCLs at 500 Hz and 3000 Hz pure tones in each TE and NTE. Results The within-subject comparison between TEs and NTEs showed no significant differences in wave I and wave V amplitude, or wave V/I ratio in both the male and female groups. Individual data revealed increased V/I amplitude ratios > mean + 2 SD in 3 TEs, but not in any control ears. We found no significant differences in UCL at 500 Hz or 3000 Hz between the TEs and NTEs, but the UCLs of both TEs and NTEs were lower than those of the control ears. Conclusions Our ABR data do not represent meaningful evidence supporting the hypothesis of cochlear synaptopathy with increased central gain in tinnitus subjects with normal audiograms. However, reduced sound level tolerance in both TEs and NTEs might reflect increased central gain consequent on hidden synaptopathy that was subsequently balanced between the ears by lateral olivocochlear efferents. PMID:29253030

  17. Evaluating The Relation of Trace Fracture Inclination and Sound Pressure Level and Time-of-flight QUS Parameters Using Computational Simulation

    NASA Astrophysics Data System (ADS)

    Rosa, P. T.; Fontes-Pereira, A. J.; Matusin, D. P.; von Krüger, M. A.; Pereira, W. C. A.

    Bone healing is a complex process that starts after the occurrence of a fracture to restore the bone to optimal condition. The gold standards for bone status evaluation are dual-energy X-ray absorptiometry and computerized tomography. Ultrasound-based technologies have some advantages as compared to X-ray technologies: nonionizing radiation, portability and lower cost, among others. Quantitative ultrasound (QUS) has been proposed in the literature as a new tool to follow up the fracture healing process. QUS relates the ultrasound propagation with the bone tissue condition (normal or pathological), so a change in wave propagation may indicate a variation in tissue properties. The most used QUS parameters are time-of-flight (TOF) and sound pressure level (SPL) of the first arriving signal (FAS). In this work, the FAS is the well known lateral wave. The aim of this work is to evaluate the relation of the TOF and SPL of the FAS and the fracture inclination trace in two stages of bone healing using computational simulations. Four fracture geometries were used: normal and oblique with 30, 45 and 60 degrees. The TOF average values were 63.23 μs, 63.14 μs, 63.03 μs and 62.94 μs for normal, 30, 45 and 60 degrees, respectively, and average SPL values were -3.83 dB, -4.32 dB, -4.78 dB and -6.19 dB for normal, 30, 45 and 60 degrees, respectively. The results show an inverse pattern between the amplitude and time-of-flight. These values appear to be sensitive to the fracture inclination trace and, in the future, can be used to characterize it.
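
    As a hedged illustration of the two QUS parameters used above, the sketch below extracts a time-of-flight and a sound pressure level for the first arriving signal from a simulated A-scan; the threshold-crossing TOF definition, the analysis window length, and the reference pressure are assumptions for illustration, not the authors' exact processing.

      import numpy as np

      def fas_parameters(signal, fs, threshold_ratio=0.1, window_s=5e-6, p_ref=1.0):
          """Time-of-flight (s) and SPL (dB) of the first arriving signal.

          TOF is taken as the first sample whose magnitude exceeds a fraction of
          the global maximum; SPL is the peak level within a short window after
          that crossing.
          """
          signal = np.asarray(signal, dtype=float)
          threshold = threshold_ratio * np.max(np.abs(signal))
          first = int(np.argmax(np.abs(signal) > threshold))   # first crossing index
          tof = first / fs
          window = signal[first:first + max(1, int(window_s * fs))]
          spl = 20.0 * np.log10(np.max(np.abs(window)) / p_ref)
          return tof, spl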

  18. AVE/VAS 4: 25-mb sounding data

    NASA Technical Reports Server (NTRS)

    Sienkiewicz, M. E.

    1983-01-01

    The rawinsonde sounding program is described, and tabulated data at 25-mb intervals for the 24 stations and 14 special stations participating in the experiment are presented. Soundings were taken at 3-hr intervals. An additional sounding was taken at the normal synoptic observation time. Some soundings were computed from raw ordinate data, while others were interpolated from significant level data.

  19. Making Sound Connections

    ERIC Educational Resources Information Center

    Deal, Walter F., III

    2007-01-01

    Sound provides and offers amazing insights into the world. Sound waves may be defined as mechanical energy that moves through air or other medium as a longitudinal wave and consists of pressure fluctuations. Humans and animals alike use sound as a means of communication and a tool for survival. Mammals, such as bats, use ultrasonic sound waves to…

  20. The Sound of Science

    ERIC Educational Resources Information Center

    Merwade, Venkatesh; Eichinger, David; Harriger, Bradley; Doherty, Erin; Habben, Ryan

    2014-01-01

    While the science of sound can be taught by explaining the concept of sound waves and vibrations, the authors of this article focused their efforts on creating a more engaging way to teach the science of sound--through engineering design. In this article they share the experience of teaching sound to third graders through an engineering challenge…

  1. Sounds Exaggerate Visual Shape

    ERIC Educational Resources Information Center

    Sweeny, Timothy D.; Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes…

  2. Improvement of intensive care unit sound environment and analyses of consequences on sleep: an experimental study.

    PubMed

    Persson Waye, Kerstin; Elmenhorst, Eva-Maria; Croy, Ilona; Pedersen, Eja

    2013-12-01

    Uninterrupted sleep is of vital importance for restoration and regaining health. In intensive care units (ICUs), where recovery and healing are crucial, patients' sleep is often fragmented and disturbed due to noise from the patients' own activities, other patients, and alarms. The aim of our study was to explore if sleep could be improved by modifying the sound environment in a way that is practically feasible in ICUs. We studied the effects of originally recorded ICU noise and peak-reduced ICU noise on sleep in healthy male participants. Sleep was registered with polysomnography (PSG) during four nights: one adaptation night, one reference (REF) night, and two exposure nights with similar equivalent sound levels (47 dB LAeq) but different maximum sound levels (56- vs 64-dB LAFmax). The participants answered questionnaires and saliva cortisol was sampled in the morning. During ICU exposure nights, sleep was more fragmented with less slow-wave sleep (SWS), more arousals, and more time awake. The effects of reduced maximum sound level were minor. The subjective data supported the polysomnographic findings, though cortisol levels were not significantly affected by the exposure conditions. Noise in ICUs impairs sleep, and the reduction of maximal A-weighted levels from 64 to 56 dB is not enough to produce a clear improvement in sleep quality. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Statistics of natural binaural sounds.

    PubMed

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify a position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. Distributions of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We have found that certain properties such as the spread of IPD distributions as well as the overall shape of ILD distributions do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and IPDs often attain much higher values than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of learned basis functions indicate that in natural conditions soundwaves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
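
    A bare-bones illustration of the binaural cues whose statistics are analyzed above: frame-wise interaural level and phase differences computed per FFT bin from a two-channel recording. The STFT front end here is a simplification; the study itself works with narrowly tuned frequency channels rather than raw FFT bins.

      import numpy as np

      def binaural_cues(left, right, nfft=1024):
          """Return frame-by-bin ILD (dB, left re right) and IPD (radians)."""
          left, right = np.asarray(left, float), np.asarray(right, float)
          hop, win = nfft // 2, np.hanning(nfft)
          n_frames = (len(left) - nfft) // hop + 1
          ild, ipd = [], []
          for i in range(n_frames):
              seg = slice(i * hop, i * hop + nfft)
              L, R = np.fft.rfft(win * left[seg]), np.fft.rfft(win * right[seg])
              ild.append(20 * np.log10((np.abs(L) + 1e-12) / (np.abs(R) + 1e-12)))
              ipd.append(np.angle(L * np.conj(R)))
          return np.array(ild), np.array(ipd)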

  4. Statistics of Natural Binaural Sounds

    PubMed Central

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify a position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. Distributions of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We have found that certain properties such as the spread of IPD distributions as well as the overall shape of ILD distributions do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and IPDs often attain much higher values than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of learned basis functions indicate that in natural conditions soundwaves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction. PMID:25285658

  5. Early sound symbolism for vowel sounds.

    PubMed

    Spector, Ferrinne; Maurer, Daphne

    2013-01-01

    Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound-shape mapping. In this study, we investigated the influence of vowels on sound-shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded-jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape.

  6. Sound wave transmission (image)

    MedlinePlus

    When sound waves reach the ear, they are translated into nerve impulses. These impulses then travel to the brain, where they are interpreted as sound. The hearing mechanisms within the inner ear can ...

  7. Study of Noise-Certification Standards for Aircraft Engines. Volume 2. Procedures for Measuring Far Field Sound Pressure Levels around an Outdoor Jet-Engine Test Stand.

    DTIC Science & Technology

    1983-06-01

    ... separate exhaust nozzles for discharge of fan and turbine exhaust flows (e.g., JT15D, TFE731, ALF-502, CF34, JT3D, CFM56, RB.211, CF6, JT9D, and PW2037) ... minimum radial distance from the effective source of sound at 40 Hz should then be approximately 69 m. At 60 Hz, the minimum radial distance should be ...

  8. Sound Stories for General Music

    ERIC Educational Resources Information Center

    Cardany, Audrey Berger

    2013-01-01

    Language and music literacy share a similar process of understanding that progresses from sensory experience to symbolic representation. The author identifies Bruner’s modes of understanding as they relate to using narrative in the music classroom to enhance music reading at iconic and symbolic levels. Two sound stories are included for…

  9. 33 CFR 67.10-40 - Sound signals authorized for use prior to January 1, 1973.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., and 67.10-10, if the sound signal has a minimum sound pressure level as specified in Table A of ... (33 CFR, Navigation and Navigable Waters; General Requirements for Sound Signals, § 67.10-40, Sound signals authorized for use prior to January 1, 1973.)

  10. Priming Gestures with Sounds

    PubMed Central

    Lemaitre, Guillaume; Heller, Laurie M.; Navolio, Nicole; Zúñiga-Peñaranda, Nicolas

    2015-01-01

    We report a series of experiments about a little-studied type of compatibility effect between a stimulus and a response: the priming of manual gestures via sounds associated with these gestures. The goal was to investigate the plasticity of the gesture-sound associations mediating this type of priming. Five experiments used a primed choice-reaction task. Participants were cued by a stimulus to perform response gestures that produced response sounds; those sounds were also used as primes before the response cues. We compared arbitrary associations between gestures and sounds (key lifts and pure tones) created during the experiment (i.e. no pre-existing knowledge) with ecological associations corresponding to the structure of the world (tapping gestures and sounds, scraping gestures and sounds) learned through the entire life of the participant (thus existing prior to the experiment). Two results were found. First, the priming effect exists for ecological as well as arbitrary associations between gestures and sounds. Second, the priming effect is greatly reduced for ecologically existing associations and is eliminated for arbitrary associations when the response gesture stops producing the associated sounds. These results provide evidence that auditory-motor priming is mainly created by rapid learning of the association between sounds and the gestures that produce them. Auditory-motor priming is therefore mediated by short-term associations between gestures and sounds that can be readily reconfigured regardless of prior knowledge. PMID:26544884

  11. Brief report: sound output of infant humidifiers.

    PubMed

    Royer, Allison K; Wilson, Paul F; Royer, Mark C; Miyamoto, Richard T

    2015-06-01

    The sound pressure levels (SPLs) of common infant humidifiers were determined to identify the likely sound exposure to infants and young children. This primary investigative research study was completed at a tertiary-level academic medical center otolaryngology and audiology laboratory. Five commercially available humidifiers were obtained from brick-and-mortar infant supply stores. Sound levels were measured at 20-, 100-, and 150-cm distances at all available humidifier settings. Two of 5 (40%) humidifiers tested had SPL readings greater than the recommended hospital infant nursery levels (50 dB) at distances up to 100 cm. In this preliminary study, it was demonstrated that humidifiers marketed for infant nurseries may produce appreciably high decibel levels. Further characterization of the effect of humidifier design on SPLs and further elucidation of ambient sound levels associated with hearing risk are necessary before definitive conclusions and recommendations can be made. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
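
    For a first-order sense of how such readings scale with distance, point-source spherical spreading gives a 20 log10 rule, sketched below; the reference value is hypothetical, and real nursery conditions (reflections, near-field effects, source directivity) will deviate from this idealization.

      import math

      def spl_at_distance(spl_ref_db, r_ref_cm, r_cm):
          """Estimate SPL at a new distance assuming free-field 1/r spreading."""
          return spl_ref_db - 20.0 * math.log10(r_cm / r_ref_cm)

      # Hypothetical example: 55 dB measured at 20 cm, extrapolated to 100 cm.
      print(round(spl_at_distance(55.0, 20, 100), 1), "dB at 100 cm")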

  12. Tuning the cognitive environment: Sound masking with 'natural' sounds in open-plan offices

    NASA Astrophysics Data System (ADS)

    DeLoach, Alana

    With the gain in popularity of open-plan office design and the engineering efforts to achieve acoustical comfort for building occupants, a majority of workers still report dissatisfaction in their workplace environment. Office acoustics influence organizational effectiveness, efficiency, and satisfaction through meeting appropriate requirements for speech privacy and ambient sound levels. Implementing a sound masking system is one tried-and-true method of achieving privacy goals. Although each sound masking system is tuned for its specific environment, the signal (random steady-state electronic noise) has remained the same for decades. This research work explores how 'natural' sounds may be used as an alternative to this standard masking signal employed so ubiquitously in sound masking systems in the contemporary office environment. As an unobtrusive background sound, possessing the appropriate spectral characteristics, this proposed use of 'natural' sounds for masking challenges the convention that masking sounds should be as meaningless as possible. Through the pilot study presented in this work, we hypothesize that 'natural' sounds as sound maskers will be as effective at masking distracting background noise as the conventional masking sound, will enhance cognitive functioning, and increase participant (worker) satisfaction.

  13. Analysis of environmental sounds

    NASA Astrophysics Data System (ADS)

    Lee, Keansub

    consumer videos in conjunction with user studies. We model the soundtrack of each video, regardless of its original duration, as a fixed-sized clip-level summary feature. For each concept, an SVM-based classifier is trained according to three distance measures (Kullback-Leibler, Bhattacharyya, and Mahalanobis distance). Detecting the time of occurrence of a local object (for instance, a cheering sound) embedded in a longer soundtrack is useful and important for applications such as search and retrieval in consumer video archives. We finally present a Markov-model based clustering algorithm able to identify and segment consistent sets of temporal frames into regions associated with different ground-truth labels, and at the same time to exclude a set of uninformative frames shared in common from all clips. The labels are provided at the clip level, so this refinement of the time axis represents a variant of Multiple-Instance Learning (MIL). Quantitative evaluation shows that the performance of our proposed approaches tested on the 60h personal audio archives or 1900 YouTube video clips is significantly better than existing algorithms for detecting these useful concepts in real-world personal audio recordings.
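
    The clip-level distance measures named above can be illustrated with the Bhattacharyya distance between two Gaussian summaries (mean vector plus covariance of the frame features). This is a minimal sketch under that assumption, with randomly generated stand-ins for MFCC-style features rather than the thesis's actual clip models.

```python
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    _, logdet = np.linalg.slogdet(cov)
    _, logdet1 = np.linalg.slogdet(cov1)
    _, logdet2 = np.linalg.slogdet(cov2)
    term2 = 0.5 * (logdet - 0.5 * (logdet1 + logdet2))
    return term1 + term2

# Hypothetical clip-level summaries (mean vector + covariance of frame features).
rng = np.random.default_rng(0)
feats_a = rng.normal(0.0, 1.0, size=(200, 13))   # frames x feature dims, clip A
feats_b = rng.normal(0.5, 1.2, size=(200, 13))   # clip B
d = bhattacharyya_gaussian(feats_a.mean(0), np.cov(feats_a.T),
                           feats_b.mean(0), np.cov(feats_b.T))
print(f"Bhattacharyya distance between clip summaries: {d:.3f}")
```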

  14. Meteorological effects on long-range outdoor sound propagation

    NASA Technical Reports Server (NTRS)

    Klug, Helmut

    1990-01-01

    Measurements of sound propagation over distances up to 1000 m were carried out with an impulse sound source offering reproducible, short time signals. Temperature and wind speed at several heights were monitored simultaneously; the meteorological data are used to determine the sound speed gradients according to the Monin-Obukhov similarity theory. The sound speed profile is compared to a corresponding prediction, gained through the measured travel time difference between direct and ground reflected pulse (which depends on the sound speed gradient). Positive sound speed gradients cause bending of the sound rays towards the ground yielding enhanced sound pressure levels. The measured meteorological effects on sound propagation are discussed and illustrated by ray tracing methods.
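
    As a rough illustration of how a positive effective-sound-speed gradient arises downwind, the sketch below combines a temperature-dependent sound speed with a neutral-stability logarithmic wind profile; it is a simplification of the Monin-Obukhov treatment used in the study, and all parameter values (friction velocity, roughness length, lapse rate) are hypothetical.

```python
import numpy as np

KAPPA = 0.4  # von Karman constant

def sound_speed(temp_c):
    """Adiabatic sound speed in air (m/s) from temperature in deg C."""
    return 20.05 * np.sqrt(temp_c + 273.15)

def log_wind_profile(z, u_star=0.4, z0=0.05):
    """Neutral-stability logarithmic wind profile (m/s)."""
    return (u_star / KAPPA) * np.log(z / z0)

z = np.linspace(0.5, 50.0, 100)            # heights above ground (m)
temp = 15.0 - 0.0065 * z                   # simple linear temperature lapse
c_eff_downwind = sound_speed(temp) + log_wind_profile(z)
c_eff_upwind = sound_speed(temp) - log_wind_profile(z)

# A positive vertical gradient of effective sound speed bends rays downward
# (enhanced levels near the ground); a negative gradient bends them upward
# (shadow zones).
print(f"mean d(c_eff)/dz downwind: {np.gradient(c_eff_downwind, z).mean():+.3f} 1/s")
print(f"mean d(c_eff)/dz upwind:   {np.gradient(c_eff_upwind, z).mean():+.3f} 1/s")
```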

  15. Simulation of Sound Waves Using the Lattice Boltzmann Method for Fluid Flow: Benchmark Cases for Outdoor Sound Propagation

    PubMed Central

    Salomons, Erik M.; Lohman, Walter J. A.; Zhou, Han

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing. PMID:26789631
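
    The excess sound level defined above is simply the computed level minus the free-field level. A minimal sketch, assuming a monopole source with 1/r spreading as the free-field reference and using made-up receiver pressures in place of LBM output:

```python
import numpy as np

P_REF = 2e-5  # reference pressure, 20 micropascals

def spl(p_rms):
    """Sound pressure level in dB re 20 uPa."""
    return 20.0 * np.log10(p_rms / P_REF)

def excess_level(p_rms, r, p_rms_1m):
    """Excess sound level: SPL minus the free-field SPL of a monopole
    whose rms pressure is p_rms_1m at 1 m (1/r spreading)."""
    free_field = spl(p_rms_1m / r)
    return spl(p_rms) - free_field

# Hypothetical receiver pressure (e.g. a stand-in for simulation output) at 100 m:
r = 100.0
p_simulated = 1.5e-3      # Pa rms, including ground/barrier effects
p_source_1m = 0.1         # Pa rms at 1 m in free field
print(f"excess sound level at {r:.0f} m: "
      f"{excess_level(p_simulated, r, p_source_1m):+.1f} dB")
```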

  17. Radiated BPF sound measurement of centrifugal compressor

    NASA Astrophysics Data System (ADS)

    Ohuchida, S.; Tanaka, K.

    2013-12-01

    A technique to measure the radiated BPF sound from an automotive turbocharger compressor impeller is proposed in this paper. Where there is high-level background noise in the measurement environment, it is difficult to discriminate the target component from the background. Because the BPF sound in this study was measured in a room with such conditions, no discrete BPF peak was initially found in the sound spectrum. Taking its directionality into consideration, a microphone covered with a parabolic cone was selected, and with this technique the discrete BPF peak was clearly observed. Since the level of the measured sound was amplified by the area-integration effect of the cone, a correction was needed to obtain the real level. To do so, sound measurements with and without the parabolic cone were conducted for a fixed source, and their level differences were used as correction factors. Consideration is given to the sound propagation mechanism using the measured BPF as well as the result of a simple model experiment. The present method is generally applicable to sound measurements conducted with a high level of background noise.
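
    A minimal sketch of how such correction factors might be applied, assuming they are tabulated per frequency band from the fixed-source measurements with and without the cone; all numbers are hypothetical.

```python
# Hypothetical parabolic-cone corrections per 1/3-octave band:
# correction[f] = L_with_cone[f] - L_without_cone[f], measured for a fixed source.
correction_db = {4000: 14.2, 5000: 15.0, 6300: 15.7, 8000: 16.1}

def corrected_level(measured_db: float, band_hz: int) -> float:
    """Remove the cone's area-integration gain to estimate the true level."""
    return measured_db - correction_db[band_hz]

measured_bpf_db = 78.5     # level of the BPF peak observed through the cone
print(f"estimated true BPF level: {corrected_level(measured_bpf_db, 6300):.1f} dB")
```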

  18. Sounds like Team Spirit

    NASA Technical Reports Server (NTRS)

    Hoffman, Edward

    2002-01-01

    trying to improve on what they've done before. Second, success in any endeavor stems from people who know how to interpret a composition to sound beautiful when played in a different style. For Knowledge Sharing to work, it must be adapted, reinterpreted, shaped and played with at the centers. In this regard, we've been blessed with another crazy, passionate, inspired artist named Claire Smith. Claire has turned Ames Research Center in California into APPL-west. She is so good and committed to what she does that I just refer people to her whenever they have questions about implementing project management development at the field level. Finally, any great effort requires talented people working behind the scenes, the people who formulate a business approach and know how to manage the money so that the music gets heard. I have known many brilliant and creative people with a ton of ideas that never take off due to an inability to work the business. Again, the Knowledge Sharing team has been fortunate to have competent and passionate people, specifically Tony Maturo and his procurement team at Goddard Space Flight Center, to make sure the process is in place to support the effort. This kind of support is every bit as crucial as the activity itself, and the efforts and creativity that go into successful procurement and contracting is a vital ingredient of this successful team.

  19. Visualizing Sound: Demonstrations to Teach Acoustic Concepts

    NASA Astrophysics Data System (ADS)

    Rennoll, Valerie

    Interference, a phenomenon in which two sound waves superpose to form a resultant wave of greater or lower amplitude, is a key concept when learning about the physics of sound waves. Typical interference demonstrations involve students listening for changes in sound level as they move throughout a room. Here, new tools are developed to teach this concept that provide a visual component, allowing individuals to see changes in sound level on a light display. This is accomplished using a microcontroller that analyzes sound levels collected by a microphone and displays the sound level in real-time on an LED strip. The light display is placed on a sliding rail between two speakers to show the interference occurring between two sound waves. When a long-exposure photograph is taken of the light display being slid from one end of the rail to the other, a wave of the interference pattern can be captured. By providing a visual component, these tools will help students and the general public to better understand interference, a key concept in acoustics.
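
    The interference pattern such a display visualizes can be sketched by summing the complex pressures of two coherent, in-phase point sources along the rail; the speaker spacing, frequency, and crude text "LED strip" below are illustrative assumptions, not the apparatus described above.

```python
import numpy as np

def interference_spl(x, speaker_sep=2.0, freq=1000.0, c=343.0):
    """Relative SPL (dB) along a rail between two coherent, in-phase sources.

    x: positions along the rail (m), measured from its midpoint.
    speaker_sep: separation between the two speakers (m).
    """
    k = 2.0 * np.pi * freq / c
    r1 = np.abs(x + speaker_sep / 2.0)        # distance to speaker 1
    r2 = np.abs(speaker_sep / 2.0 - x)        # distance to speaker 2
    # Complex pressure sum of two monopoles (unit source strength).
    p = np.exp(1j * k * r1) / np.maximum(r1, 1e-3) \
        + np.exp(1j * k * r2) / np.maximum(r2, 1e-3)
    return 20.0 * np.log10(np.abs(p) / np.abs(p).max())

positions = np.linspace(-0.8, 0.8, 33)        # rail between the speakers
levels = interference_spl(positions)
for x, level in zip(positions[::4], levels[::4]):
    bar = "#" * max(0, int(40 + level))       # crude text "LED strip" display
    print(f"{x:+.2f} m {level:6.1f} dB {bar}")
```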

  20. Decadal trends in Indian Ocean ambient sound.

    PubMed

    Miksis-Olds, Jennifer L; Bradley, David L; Niu, Xiaoyue Maggie

    2013-11-01

    The increase of ocean noise documented in the North Pacific has sparked concern on whether the observed increases are a global or regional phenomenon. This work provides evidence of low frequency sound increases in the Indian Ocean. A decade (2002-2012) of recordings made off the island of Diego Garcia, UK in the Indian Ocean was parsed into time series according to frequency band and sound level. Quarterly sound level comparisons between the first and last years were also performed. The combination of time series and temporal comparison analyses over multiple measurement parameters produced results beyond those obtainable from a single parameter analysis. The ocean sound floor has increased over the past decade in the Indian Ocean. Increases were most prominent in recordings made south of Diego Garcia in the 85-105 Hz band. The highest sound level trends differed between the two sides of the island; the highest sound levels decreased in the north and increased in the south. Rate, direction, and magnitude of changes among the multiple parameters supported interpretation of source functions driving the trends. The observed sound floor increases are consistent with concurrent increases in shipping, wind speed, wave height, and blue whale abundance in the Indian Ocean.

  1. Early sound symbolism for vowel sounds

    PubMed Central

    Spector, Ferrinne; Maurer, Daphne

    2013-01-01

    Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound–shape mapping. In this study, we investigated the influence of vowels on sound–shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded–jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape. PMID:24349684

  2. Sound Exposure of Healthcare Professionals Working with a University Marching Band.

    PubMed

    Russell, Jeffrey A; Yamaguchi, Moegi

    2018-01-01

    Music-induced hearing disorders are known to result from exposure to excessive levels of music of different genres. Marching band music, with its heavy emphasis on brass and percussion, is one type that is a likely contributor to music-induced hearing disorders, although specific data on sound pressure levels of marching bands have not been widely studied. Furthermore, if marching band music does lead to music-induced hearing disorders, the musicians may not be the only individuals at risk. Support personnel such as directors, equipment managers, and performing arts healthcare providers may also be exposed to potentially damaging sound pressures. Thus, we sought to explore to what degree healthcare providers receive sound dosages above recommended limits during their work with a marching band. The purpose of this study was to determine the sound exposure of healthcare professionals (specifically, athletic trainers [ATs]) who provide on-site care to a large, well-known university marching band. We hypothesized that sound pressure levels to which these individuals were exposed would exceed the National Institute for Occupational Safety and Health (NIOSH) daily percentage allowance. Descriptive observational study. Eight ATs working with a well-known American university marching band volunteered to wear noise dosimeters. During the marching band season, ATs wore an Etymotic ER-200D dosimeter whenever working with the band at outdoor rehearsals, indoor field house rehearsals, and outdoor performances. The dosimeters recorded dose percent exposure, equivalent continuous sound levels in A-weighted decibels, and duration of exposure. For comparison, a dosimeter also was worn by an AT working in the university's performing arts medicine clinic. Participants did not alter their typical duties during any data collection sessions. Sound data were collected with the dosimeters set at the NIOSH standards of 85 dBA threshold and 3 dBA exchange rate; the NIOSH 100% daily dose is
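
    For reference, a minimal sketch of the NIOSH daily dose computation (85 dBA criterion, 3 dB exchange rate) applied to hypothetical exposure segments for one athletic trainer; the segment levels and durations are invented, not the study's dosimeter data.

```python
def niosh_allowed_hours(level_dba: float) -> float:
    """Allowed daily exposure time (hours) under the NIOSH REL:
    8 h at 85 dBA, halved for every 3 dBA increase (3-dB exchange rate)."""
    return 8.0 / (2.0 ** ((level_dba - 85.0) / 3.0))

def niosh_dose_percent(exposures) -> float:
    """Daily noise dose (%) from (Leq_dBA, duration_hours) segments."""
    return 100.0 * sum(t / niosh_allowed_hours(leq) for leq, t in exposures)

# Hypothetical day for an athletic trainer working with the band:
day = [(88.0, 2.5),   # outdoor rehearsal
       (92.0, 1.0),   # field house rehearsal
       (96.0, 0.5)]   # pre-game / performance
print(f"daily dose: {niosh_dose_percent(day):.0f}% (100% = full allowance)")
```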

  3. The sounds of handheld audio players.

    PubMed

    Rudy, Susan F

    2007-01-01

    Hearing experts and public health organizations have longstanding hearing safety concerns about personal handheld audio devices, which are growing in both number and popularity. This paper reviews the maximum sound levels of handheld compact disc players, MP3 players, and an iPod. It further reviews device factors that influence the sound levels produced by these audio devices and ways to reduce the risk to hearing during their use.

  4. Computational study of the interaction between a shock and a near-wall vortex using a weighted compact nonlinear scheme

    NASA Astrophysics Data System (ADS)

    Zuo, Zhifeng; Maekawa, Hiroshi

    2014-02-01

    The interaction between a moderate-strength shock wave and a near-wall vortex is studied numerically by solving the two-dimensional, unsteady compressible Navier-Stokes equations using a weighted compact nonlinear scheme with a simple low-dissipation advection upstream splitting method for flux splitting. Our main purpose is to clarify the development of the flow field and the generation of sound waves resulting from the interaction. The effects of the vortex-wall distance on the sound generation associated with variations in the flow structures are also examined. The computational results show that three sound sources are involved in this problem: (i) a quadrupolar sound source due to the shock-vortex interaction; (ii) a dipolar sound source due to the vortex-wall interaction; and (iii) a dipolar sound source due to unsteady wall shear stress. The sound field is the combination of the sound waves produced by all three sound sources. In addition to the interaction of the incident shock with the vortex, a secondary shock-vortex interaction is caused by the reflection of the reflected shock (MR2) from the wall. The flow field is dominated by the primary and secondary shock-vortex interactions. The generation mechanism of the newly discovered third sound, due to the MR2-vortex interaction, is presented. The pressure variations generated by (ii) become significant with decreasing vortex-wall distance. The sound waves caused by (iii) are extremely weak compared with those caused by (i) and (ii) and are negligible in the computed sound field.

  5. The Bosstown Sound.

    ERIC Educational Resources Information Center

    Burns, Gary

    Based on the argument that (contrary to critical opinion) the musicians in the various bands associated with Bosstown Sound were indeed talented, cohesive individuals and that the bands' lack of renown was partially a result of ill-treatment by record companies and the press, this paper traces the development of the Bosstown Sound from its…

  6. The sounds of nanotechnology

    NASA Astrophysics Data System (ADS)

    Campbell, Norah; Deane, Cormac; Murphy, Padraig

    2017-07-01

    Public perceptions of nanotechnology are shaped by sound in surprising ways. Our analysis of the audiovisual techniques employed by nanotechnology stakeholders shows that well-chosen sounds can help to win public trust, create value and convey the weird reality of objects on the nanoscale.

  7. Breaking the Sound Barrier

    ERIC Educational Resources Information Center

    Brown, Tom; Boehringer, Kim

    2007-01-01

    Students in a fourth-grade class participated in a series of dynamic sound learning centers followed by a dramatic capstone event--an exploration of the amazing Trashcan Whoosh Waves. It's a notoriously difficult subject to teach, but this hands-on, exploratory approach ignited student interest in sound, promoted language acquisition, and built…

  8. Exploring Noise: Sound Pollution.

    ERIC Educational Resources Information Center

    Rillo, Thomas J.

    1979-01-01

    Part one of a three-part series about noise pollution and its effects on humans. This section presents the background information for teachers who are preparing a unit on sound. The next issues will offer learning activities for measuring the effects of sound and some references. (SA)

  9. The sound manifesto

    NASA Astrophysics Data System (ADS)

    O'Donnell, Michael J.; Bisnovatyi, Ilia

    2000-11-01

    Computing practice today depends on visual output to drive almost all user interaction. Other senses, such as audition, may be totally neglected, or used tangentially, or used in highly restricted specialized ways. We have excellent audio rendering through D-A conversion, but we lack rich general facilities for modeling and manipulating sound comparable in quality and flexibility to graphics. We need coordinated research in several disciplines to improve the use of sound as an interactive information channel. Incremental and separate improvements in synthesis, analysis, speech processing, audiology, acoustics, music, etc. will not alone produce the radical progress that we seek in sonic practice. We also need to create a new central topic of study in digital audio research. The new topic will assimilate the contributions of different disciplines on a common foundation. The key central concept that we lack is sound as a general-purpose information channel. We must investigate the structure of this information channel, which is driven by the cooperative development of auditory perception and physical sound production. Particular audible encodings, such as speech and music, illuminate sonic information by example, but they are no more sufficient for a characterization than typography is sufficient for characterization of visual information. To develop this new conceptual topic of sonic information structure, we need to integrate insights from a number of different disciplines that deal with sound. In particular, we need to coordinate central and foundational studies of the representational models of sound with specific applications that illuminate the good and bad qualities of these models. Each natural or artificial process that generates informative sound, and each perceptual mechanism that derives information from sound, will teach us something about the right structure to attribute to the sound itself. The new Sound topic will combine the work of computer

  10. Respiratory Sound Analysis for Flow Estimation During Wakefulness and Sleep, and its Applications for Sleep Apnea Detection and Monitoring

    NASA Astrophysics Data System (ADS)

    Yadollahi, Azadeh

    's oxygen saturation level (SaO2) data. It automatically classifies the sound segments into breath, snore and noise. A weighted average of features extracted from sound segments and SaO2 signal was used to detect apnea and hypopnea events. The performance of the proposed approach was evaluated on the data of 66 patients. The results show high correlation (0.96, p < 0.0001) between the outcomes of our system and those of the polysomnography. Also, sensitivity and specificity of the proposed method in differentiating simple snorers from OSA patients were found to be more than 91%. These results are superior or comparable with the existing commercialized sleep apnea portable monitors.

  11. Photoacoustic sounds from meteors

    DOE PAGES

    Spalding, Richard; Tencer, John; Sweatt, William; ...

    2017-02-01

    Concurrent sound associated with very bright meteors manifests as popping, hissing, and faint rustling sounds occurring simultaneously with the arrival of light from meteors. Numerous instances have been documented with –11 to –13 brightness. These sounds cannot be attributed to direct acoustic propagation from the upper atmosphere, for which travel time would be several minutes. Concurrent sounds must be associated with some form of electromagnetic energy generated by the meteor, propagated to the vicinity of the observer, and transduced into acoustic waves. Previously, energy propagated from meteors was assumed to be RF emissions. This has not been well validated experimentally. Herein we describe experimental results and numerical models in support of photoacoustic coupling as the mechanism. Recent photometric measurements of fireballs reveal strong millisecond flares and significant brightness oscillations at frequencies ≥40 Hz. Strongly modulated light at these frequencies with sufficient intensity can create concurrent sounds through radiative heating of common dielectric materials like hair, clothing, and leaves. This heating produces small pressure oscillations in the air contacting the absorbers. Calculations show that –12 brightness meteors can generate audible sound at ~25 dB SPL. As a result, the photoacoustic hypothesis provides an alternative explanation for this longstanding mystery about the generation of concurrent sounds by fireballs.

  12. SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization

    PubMed Central

    Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah

    2014-01-01

    Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 microelectromechanical system (MEMS) microphones, an inertial measuring unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass’s hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25-m² anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000-m² open field. PMID:24463431
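
    A narrowband delay-and-sum beamformer is one common way such an array can estimate source direction. The sketch below simulates a plane wave on a hypothetical 52-microphone circular array and steers over azimuth; it is not the SoundCompass firmware's actual algorithm, and the array radius, sample rate, and source frequency are assumptions.

```python
import numpy as np

C = 343.0          # speed of sound (m/s)
FS = 48_000        # sample rate (Hz)
FREQ = 1_875.0     # narrowband source frequency (Hz), chosen to sit on an FFT bin
N_MICS = 52
RADIUS = 0.09      # array radius (m), hypothetical

# Microphone positions on a circle (planar approximation of the array).
angles = 2 * np.pi * np.arange(N_MICS) / N_MICS
mics = RADIUS * np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (M, 2)

def unit(az):
    return np.array([np.cos(az), np.sin(az)])

# Simulate a plane wave arriving from a known azimuth (plus a little noise).
true_az = np.deg2rad(130.0)
t = np.arange(2048) / FS
delays = mics @ unit(true_az) / C                       # per-mic time advance (s)
signals = np.sin(2 * np.pi * FREQ * (t[None, :] + delays[:, None]))
signals += 0.05 * np.random.default_rng(1).normal(size=signals.shape)

# Narrowband delay-and-sum: phase-align the FFT bin at FREQ for each steering
# direction and pick the azimuth with maximum output power.
spectra = np.fft.rfft(signals, axis=1)
bin_idx = int(round(FREQ * t.size / FS))
snapshot = spectra[:, bin_idx]                          # complex amplitude per mic
k = 2 * np.pi * FREQ / C

candidates = np.deg2rad(np.arange(0, 360, 1.0))
powers = [np.abs(np.conj(np.exp(1j * k * (mics @ unit(az)))) @ snapshot) ** 2
          for az in candidates]
estimate = np.degrees(candidates[int(np.argmax(powers))])
print(f"true azimuth: {np.degrees(true_az):.0f} deg, estimated: {estimate:.0f} deg")
```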

  13. Emergent categorical representation of natural, complex sounds resulting from the early post-natal sound environment

    PubMed Central

    Bao, Shaowen; Chang, Edward F.; Teng, Ching-Ling; Heiser, Marc A.; Merzenich, Michael M.

    2013-01-01

    Cortical sensory representations can be reorganized by sensory exposure in an epoch of early development. The adaptive role of this type of plasticity for natural sounds in sensory development is, however, unclear. We have reared rats in a naturalistic, complex acoustic environment and examined their auditory representations. We found that cortical neurons became more selective to spectrotemporal features in the experienced sounds. At the neuronal population level, more neurons were involved in representing the whole set of complex sounds, but fewer neurons actually responded to each individual sound, although with greater magnitudes. A comparison of population-temporal responses to the experienced complex sounds revealed that cortical responses to different renderings of the same song motif were more similar, indicating that the cortical neurons became less sensitive to natural acoustic variations associated with stimulus context and sound renderings. By contrast, cortical responses to sounds of different motifs became more distinctive, suggesting that cortical neurons were tuned to the defining features of the experienced sounds. These effects lead to emergent “categorical” representations of the experienced sounds, which presumably facilitate their recognition. PMID:23747304

  14. Sound as artifact

    NASA Astrophysics Data System (ADS)

    Benjamin, Jeffrey L.

    A distinguishing feature of the discipline of archaeology is its reliance upon sensory dependant investigation. As perceived by all of the senses, the felt environment is a unique area of archaeological knowledge. It is generally accepted that the emergence of industrial processes in the recent past has been accompanied by unprecedented sonic extremes. The work of environmental historians has provided ample evidence that the introduction of much of this unwanted sound, or "noise" was an area of contestation. More recent research in the history of sound has called for more nuanced distinctions than the noisy/quiet dichotomy. Acoustic archaeology tends to focus upon a reconstruction of sound producing instruments and spaces with a primary goal of ascertaining intentionality. Most archaeoacoustic research is focused on learning more about the sonic world of people within prehistoric timeframes while some research has been done on historic sites. In this thesis, by way of a meditation on industrial sound and the physical remains of the Quincy Mining Company blacksmith shop (Hancock, MI) in particular, I argue for an acceptance and inclusion of sound as artifact in and of itself. I am introducing the concept of an individual sound-form, or sonifact , as a reproducible, repeatable, representable physical entity, created by tangible, perhaps even visible, host-artifacts. A sonifact is a sound that endures through time, with negligible variability. Through the piecing together of historical and archaeological evidence, in this thesis I present a plausible sonifactual assemblage at the blacksmith shop in April 1916 as it may have been experienced by an individual traversing the vicinity on foot: an 'historic soundwalk.' The sensory apprehension of abandoned industrial sites is multi-faceted. In this thesis I hope to make the case for an acceptance of sound as a primary heritage value when thinking about the industrial past, and also for an increased awareness and acceptance

  15. GPS Sounding Rocket Developments

    NASA Technical Reports Server (NTRS)

    Bull, Barton

    1999-01-01

    Sounding rockets are suborbital launch vehicles capable of carrying scientific payloads several hundred miles in altitude. These missions return a variety of scientific data including: chemical makeup and physical processes taking place in the atmosphere, natural radiation surrounding the Earth, data on the Sun, stars, galaxies and many other phenomena. In addition, sounding rockets provide a reasonably economical means of conducting engineering tests for instruments and devices used on satellites and other spacecraft prior to their use in more expensive activities. This paper addresses the NASA Wallops Island history of GPS Sounding Rocket experience since 1994 and the development of a highly accurate and useful system.

  16. The warm, rich sound of valve guitar amplifiers

    NASA Astrophysics Data System (ADS)

    Keeports, David

    2017-03-01

    Practical solid state diodes and transistors have made glass valve technology nearly obsolete. Nevertheless, valves survive largely because electric guitar players much prefer the sound of valve amplifiers to the sound of transistor amplifiers. This paper discusses the introductory-level physics behind that preference. Overdriving an amplifier adds harmonics to an input sound. While a moderately overdriven valve amplifier produces strong even harmonics that enhance a sound, an overdriven transistor amplifier creates strong odd harmonics that can cause dissonance. The functioning of a triode valve explains its creation of even and odd harmonics. Music production software enables the examination of both the wave shape and the harmonic content of amplified sounds.
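
    The even/odd-harmonic distinction can be reproduced by clipping a sine wave asymmetrically (valve-like) versus symmetrically (transistor-like) and inspecting the spectrum; the clipping thresholds below are arbitrary illustrative choices.

```python
import numpy as np

FS = 44_100
F0 = 220.0                       # fundamental (Hz)
t = np.arange(FS) / FS           # one second of signal
x = np.sin(2 * np.pi * F0 * t)

def symmetric_clip(x, level=0.5):
    """Hard, symmetric clipping (transistor-like): odd harmonics only."""
    return np.clip(x, -level, level)

def asymmetric_clip(x, pos=0.9, neg=0.4):
    """Asymmetric clipping (valve-like): adds even as well as odd harmonics."""
    return np.clip(x, -neg, pos)

def harmonic_levels(y, n_harmonics=6):
    """Magnitudes of the first harmonics, in dB relative to the fundamental."""
    spec = np.abs(np.fft.rfft(y * np.hanning(y.size)))
    bins = [int(round(k * F0 * y.size / FS)) for k in range(1, n_harmonics + 1)]
    mags = spec[bins]
    return 20 * np.log10(mags / mags[0])

for name, y in [("symmetric (odd)", symmetric_clip(x)),
                ("asymmetric (even+odd)", asymmetric_clip(x))]:
    levels = ", ".join(f"H{k+1}:{db:7.1f} dB"
                       for k, db in enumerate(harmonic_levels(y)))
    print(f"{name:>22}: {levels}")
```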

  17. Sound Visualization and Holography

    ERIC Educational Resources Information Center

    Kock, Winston E.

    1975-01-01

    Describes liquid surface holograms including their application to medicine. Discusses interference and diffraction phenomena using sound wave scanning techniques. Compares focussing by zone plate to holographic image development. (GH)

  18. Graphene-on-paper sound source devices.

    PubMed

    Tian, He; Ren, Tian-Ling; Xie, Dan; Wang, Yu-Feng; Zhou, Chang-Jian; Feng, Ting-Ting; Fu, Di; Yang, Yi; Peng, Ping-Gang; Wang, Li-Gang; Liu, Li-Tian

    2011-06-28

    We demonstrate an interesting phenomenon that graphene can emit sound. The application of graphene can be expanded in the acoustic field. Graphene-on-paper sound source devices are made by patterning graphene on paper substrates. Three graphene sheet samples with thicknesses of 100, 60, and 20 nm were fabricated. Sound emission from graphene is measured as a function of power, distance, angle, and frequency in the far field. The theoretical model of the air/graphene/paper/PCB board multilayer structure is established to analyze the sound directivity, frequency response, and efficiency. Measured sound pressure level (SPL) and efficiency are in good agreement with theoretical results. It is found that graphene has a notably flat frequency response in the wide ultrasound range of 20-50 kHz. In addition, the thinner graphene sheets can produce higher SPL due to their lower heat capacity per unit area (HCPUA). The infrared thermal images reveal that a thermoacoustic effect is the working principle. We find that the sound performance mainly depends on the HCPUA of the conductor and the thermal properties of the substrate. The paper-based graphene sound source devices are highly reliable and flexible, involve no mechanical vibration, and offer a simple structure and high performance. They could open up wide applications in multimedia, consumer electronics, biological, medical, and many other areas.

  19. Research and Implementation of Heart Sound Denoising

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Wang, Yutai; Wang, Yanxiang

    Heart sound is one of the most important physiological signals. However, the acquisition of the heart sound signal can be disturbed by many external factors. The heart sound signal is weak, and even low-level external noise may lead to misjudgment of the pathological and physiological information it carries, and thus to misdiagnosis. Removing the noise mixed with the heart sound is therefore a key step. In this paper, a systematic study of heart sound denoising based on MATLAB is presented. The noisy heart sound signals are first transformed into the wavelet domain in MATLAB and decomposed over multiple levels. Soft thresholding is then applied to the detail coefficients to suppress the noise, which significantly improves the denoising result. The denoised signal is reconstructed stepwise from the processed coefficients. Finally, 50 Hz power-line interference and 35 Hz mechanical and electrical interference signals are eliminated using notch filters.
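
    A minimal sketch of the described pipeline (multi-level wavelet decomposition, soft thresholding of the detail coefficients, reconstruction, then notch filtering), written in Python with PyWavelets and SciPy rather than MATLAB; the synthetic "phonocardiogram", wavelet choice, and threshold rule are assumptions.

```python
import numpy as np
import pywt
from scipy.signal import iirnotch, filtfilt

FS = 2_000  # Hz, assumed sampling rate

def wavelet_denoise(signal, wavelet="db6", level=5):
    """Soft-threshold the detail coefficients (universal threshold) and
    reconstruct the signal, mirroring the wavelet-domain denoising step."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(signal.size))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: signal.size]

def notch(signal, freq, fs=FS, q=30.0):
    """Remove a narrowband interference component (e.g. 50 Hz mains)."""
    b, a = iirnotch(freq, q, fs=fs)
    return filtfilt(b, a, signal)

# Hypothetical noisy phonocardiogram: low-frequency bursts plus broadband
# noise, 50 Hz mains hum and a 35 Hz mechanical component.
t = np.arange(4 * FS) / FS
pcg = np.sin(2 * np.pi * 30 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.95)
noisy = pcg + 0.2 * np.random.default_rng(0).normal(size=t.size) \
            + 0.3 * np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 35 * t)

clean = notch(notch(wavelet_denoise(noisy), 50.0), 35.0)
print(f"rms error before: {np.sqrt(np.mean((noisy - pcg) ** 2)):.3f}, "
      f"after: {np.sqrt(np.mean((clean - pcg) ** 2)):.3f}")
```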

  20. Assessment and improvement of sound quality in cochlear implant users

    PubMed Central

    Caldwell, Meredith T.; Jiam, Nicole T.

    2017-01-01

    Objectives Cochlear implants (CIs) have successfully provided speech perception to individuals with sensorineural hearing loss. Recent research has focused on more challenging acoustic stimuli such as music and voice emotion. The purpose of this review is to evaluate and describe sound quality in CI users with the purposes of summarizing novel findings and crucial information about how CI users experience complex sounds. Data Sources Here we review the existing literature on PubMed and Scopus to present what is known about perceptual sound quality in CI users, discuss existing measures of sound quality, explore how sound quality may be effectively studied, and examine potential strategies of improving sound quality in the CI population. Results Sound quality, defined here as the perceived richness of an auditory stimulus, is an attribute of implant‐mediated listening that remains poorly studied. Sound quality is distinct from appraisal, which is generally defined as the subjective likability or pleasantness of a sound. Existing studies suggest that sound quality perception in the CI population is limited by a range of factors, most notably pitch distortion and dynamic range compression. Although there are currently very few objective measures of sound quality, the CI‐MUSHRA has been used as a means of evaluating sound quality. There exist a number of promising strategies to improve sound quality perception in the CI population including apical cochlear stimulation, pitch tuning, and noise reduction processing strategies. Conclusions In the published literature, sound quality perception is severely limited among CI users. Future research should focus on developing systematic, objective, and quantitative sound quality metrics and designing therapies to mitigate poor sound quality perception in CI users. Level of Evidence NA PMID:28894831

  1. Sound localization by echolocating bats

    NASA Astrophysics Data System (ADS)

    Aytekin, Murat

    Echolocating bats emit ultrasonic vocalizations and listen to echoes reflected back from objects in the path of the sound beam to build a spatial representation of their surroundings. Important to understanding the representation of space through echolocation are detailed studies of the cues used for localization, the sonar emission patterns and how this information is assembled. This thesis includes three studies, one on the directional properties of the sonar receiver, one on the directional properties of the sonar transmitter, and a model that demonstrates the role of action in building a representation of auditory space. The general importance of this work to a broader understanding of spatial localization is discussed. Investigations of the directional properties of the sonar receiver reveal that interaural level difference and monaural spectral notch cues are both dependent on sound source azimuth and elevation. This redundancy allows flexibility that an echolocating bat may need when coping with complex computational demands for sound localization. Using a novel method to measure bat sonar emission patterns from freely behaving bats, I show that the sonar beam shape varies between vocalizations. Consequently, the auditory system of a bat may need to adapt its computations to accurately localize objects using changing acoustic inputs. Extra-auditory signals that carry information about pinna position and beam shape are required for auditory localization of sound sources. The auditory system must learn associations between extra-auditory signals and acoustic spatial cues. Furthermore, the auditory system must adapt to changes in acoustic input that occur with changes in pinna position and vocalization parameters. These demands on the nervous system suggest that sound localization is achieved through the interaction of behavioral control and acoustic inputs. A sensorimotor model demonstrates how an organism can learn space through auditory-motor contingencies

  2. Technology, Sound and Popular Music.

    ERIC Educational Resources Information Center

    Jones, Steve

    The ability to record sound is power over sound. Musicians, producers, recording engineers, and the popular music audience often refer to the sound of a recording as something distinct from the music it contains. Popular music is primarily mediated via electronics, via sound, and not by means of written notes. The ability to preserve or modify…

  3. Suppression of sound radiation to far field of near-field acoustic communication system using evanescent sound field

    NASA Astrophysics Data System (ADS)

    Fujii, Ayaka; Wakatsuki, Naoto; Mizutani, Koichi

    2016-01-01

    A method of suppressing sound radiation to the far field of a near-field acoustic communication system using an evanescent sound field is proposed. The amplitude of the evanescent sound field generated from an infinite vibrating plate attenuates exponentially with increasing a distance from the surface of the vibrating plate. However, a discontinuity of the sound field exists at the edge of the finite vibrating plate in practice, which broadens the wavenumber spectrum. A sound wave radiates over the evanescent sound field because of broadening of the wavenumber spectrum. Therefore, we calculated the optimum distribution of the particle velocity on the vibrating plate to reduce the broadening of the wavenumber spectrum. We focused on a window function that is utilized in the field of signal analysis for reducing the broadening of the frequency spectrum. The optimization calculation is necessary for the design of window function suitable for suppressing sound radiation and securing a spatial area for data communication. In addition, a wide frequency bandwidth is required to increase the data transmission speed. Therefore, we investigated a suitable method for calculating the sound pressure level at the far field to confirm the variation of the distribution of sound pressure level determined on the basis of the window shape and frequency. The distribution of the sound pressure level at a finite distance was in good agreement with that obtained at an infinite far field under the condition generating the evanescent sound field. Consequently, the window function was optimized by the method used to calculate the distribution of the sound pressure level at an infinite far field using the wavenumber spectrum on the vibrating plate. According to the result of comparing the distributions of the sound pressure level in the cases with and without the window function, it was confirmed that the area whose sound pressure level was reduced from the maximum level to -50 dB was
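
    The effect of windowing on the wavenumber spectrum can be sketched by comparing an abruptly truncated evanescent velocity pattern with a Hann-windowed one; components with |kx| below the acoustic wavenumber radiate to the far field. The plate size, drive wavenumber, and window below are illustrative assumptions, not the optimized window from the study.

```python
import numpy as np

C = 343.0                       # speed of sound (m/s)
FREQ = 40_000.0                 # ultrasonic carrier (Hz), hypothetical
K = 2 * np.pi * FREQ / C        # acoustic wavenumber
KP = 2.0 * K                    # plate-wave wavenumber (> K, i.e. evanescent)

PLATE = 0.15                    # finite plate length (m), hypothetical
N = 8192
dx = 8.0 * PLATE / N            # zero-padded aperture for wavenumber resolution
x = (np.arange(N) - N // 2) * dx
on_plate = np.abs(x) <= PLATE / 2

kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))

# Particle-velocity distributions: an abruptly truncated evanescent pattern
# versus the same pattern shaped by a Hann window.
rect = np.where(on_plate, np.cos(KP * x), 0.0)
hann = rect * np.where(on_plate, 0.5 + 0.5 * np.cos(2 * np.pi * x / PLATE), 0.0)

for name, v in [("rectangular edge", rect), ("Hann-windowed", hann)]:
    spec = np.abs(np.fft.fftshift(np.fft.fft(v)))
    spec /= spec.max()
    radiating = spec[np.abs(kx) < K].max()   # |kx| < K reaches the far field
    print(f"{name:>17}: strongest radiating component "
          f"{20 * np.log10(radiating + 1e-15):7.1f} dB re evanescent peak")
```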

  4. Experiments to investigate the acoustic properties of sound propagation

    NASA Astrophysics Data System (ADS)

    Dagdeviren, Omur E.

    2018-07-01

    Propagation of sound waves is one of the fundamental concepts in physics. Some of the properties of sound propagation such as attenuation of sound intensity with increasing distance are familiar to everybody from the experiences of daily life. However, the frequency dependence of sound propagation and the effect of acoustics in confined environments are not straightforward to estimate. In this article, we propose experiments, which can be conducted in a classroom environment with commonly available devices such as smartphones and laptops to measure sound intensity level as a function of the distance between the source and the observer and frequency of the sound. Our experiments and deviations from the theoretical calculations can be used to explain basic concepts of sound propagation and acoustics to a diverse population of students.
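
    One classroom analysis consistent with this setup is to fit the measured level against the logarithm of distance and compare the slope with the ideal free-field value of -6 dB per doubling; the readings below are hypothetical, not data from the article.

```python
import numpy as np

# Hypothetical classroom measurements: distance (m) and smartphone SPL (dB).
distance = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
spl = np.array([78.0, 72.5, 67.4, 62.8, 59.1])   # deviates from ideal free field

# Ideal free-field behaviour is -6 dB per doubling of distance; fit the
# measured slope with least squares on a log2 distance axis.
slope, intercept = np.polyfit(np.log2(distance), spl, 1)
print(f"measured slope: {slope:.1f} dB per doubling (free field: -6.0 dB)")
print("room reflections and background noise typically make it less negative")
```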

  5. Spherical loudspeaker array for local active control of sound.

    PubMed

    Rafaely, Boaz

    2009-05-01

    Active control of sound has been employed to reduce noise levels around listeners' head using destructive interference from noise-canceling sound sources. Recently, spherical loudspeaker arrays have been studied as multiple-channel sound sources, capable of generating sound fields with high complexity. In this paper, the potential use of a spherical loudspeaker array for local active control of sound is investigated. A theoretical analysis of the primary and secondary sound fields around a spherical sound source reveals that the natural quiet zones for the spherical source have a shell-shape. Using numerical optimization, quiet zones with other shapes are designed, showing potential for quiet zones with extents that are significantly larger than the well-known limit of a tenth of a wavelength for monopole sources. The paper presents several simulation examples showing quiet zones in various configurations.

  6. Sequence stratigraphy of the ANDRILL Southern McMurdo Sound (SMS) project drillcore, Antarctica: an expanded, near-field record of Antarctic Early to Middle Miocene climate and relative sea-level change

    NASA Astrophysics Data System (ADS)

    Fielding, C. R.; Browne, G. H.; Field, B.; Florindo, F.; Harwood, D. M.; Krissek, L. A.; Levy, R. H.; Panter, K.; Passchier, S.; Pekar, S. F.; SMS Science Team

    2008-12-01

    Present understanding of Antarctic climate change during the Early to Middle Miocene, including definition of major cycles of glacial expansion and contraction, relies in large part on stable isotope proxy records from Ocean Drilling Program cores. Here, we present a sequence stratigraphic analysis of the Southern McMurdo Sound drillcore (AND-2A), which was acquired during the Austral Spring of 2007. This core offers a hitherto unavailable ice-proximal stratigraphic archive of the Early to Middle Miocene from a high-accommodation Antarctic continental margin setting, and provides clear evidence of repeated fluctuations in climate, ice expansion/contraction and attendant sea-level change over the period 20-14 Ma, with a more fragmentary record of the post-14 Ma period. A succession of seventy sequences is recognized, each bounded by a significant facies dislocation (sequence boundary), composed internally of deposits of glacimarine to open shallow marine environments, and each typically dominated by the transgressive systems tract. From changes in facies abundances and sequence character, a series of long-term (m.y.) changes in climate and relative sea-level is identified. The lithostratigraphy can be correlated confidently to glacial events Mi1b and Mi2, to the Miocene Climatic Optimum, and to the global eustatic sea-level curve. SMS provides a detailed, direct, ice-proximal reference point from which to evaluate stable isotope proxy records for Neogene Antarctic paleoclimate.

  7. Nearshore Birds in Puget Sound

    DTIC Science & Technology

    2006-05-01

    Published by Seattle District, U.S. Army Corps of Engineers, Seattle, Washington. Kriete, B. 2007. Orcas in Puget Sound. Puget Sound Nearshore Partnership Technical Report 2006-05. Nearshore Birds in Puget Sound, prepared in support of the Puget Sound Nearshore Partnership by Joseph B. Buchanan, Washington Department of Fish and Wildlife. Technical Report 2006-05.

  8. Meteor fireball sounds identified

    NASA Technical Reports Server (NTRS)

    Keay, Colin

    1992-01-01

    Sounds heard simultaneously with the flight of large meteor fireballs are electrical in origin. Confirmation that Extra/Very Low Frequency (ELF/VLF) electromagnetic radiation is produced by the fireball was obtained by Japanese researchers. Although the generation mechanism is not fully understood, studies of the Meteorite Observation and Recovery Project (MORP) and other fireball data indicate that interaction with the atmosphere is definitely responsible and the cut-off magnitude of -9 found for sustained electrophonic sounds is supported by theory. Brief bursts of ELF/VLF radiation may accompany flares or explosions of smaller fireballs, producing transient sounds near favorably placed observers. Laboratory studies show that mundane physical objects can respond to electrical excitation and produce audible sounds. Reports of electrophonic sounds should no longer be discarded. A catalog of over 300 reports relating to electrophonic phenomena associated with meteor fireballs, aurorae, and lightning was assembled. Many other reports have been cataloged in Russian. These may assist the full solution of the similar long-standing and contentious mystery of audible auroral displays.

  9. GPS Sounding Rocket Developments

    NASA Technical Reports Server (NTRS)

    Bull, Barton

    1999-01-01

    Sounding rockets are suborbital launch vehicles capable of carrying scientific payloads several hundred miles in altitude. These missions return a variety of scientific data including: chemical makeup and physical processes taking place in the atmosphere, natural radiation surrounding the Earth, data on the Sun, stars, galaxies and many other phenomena. In addition, sounding rockets provide a reasonably economical means of conducting engineering tests for instruments and devices used on satellites and other spacecraft prior to their use in more expensive activities. The NASA Sounding Rocket Program is managed by personnel from Goddard Space Flight Center Wallops Flight Facility (GSFC/WFF) in Virginia. Typically around thirty of these rockets are launched each year, either from established ranges at Wallops Island, Virginia; Poker Flat Research Range, Alaska; White Sands Missile Range, New Mexico; or from Canada, Norway and Sweden. Many times launches are conducted from temporary launch ranges in remote parts of the world, requiring considerable expense to transport and operate tracking radars. An inverse differential GPS system has been developed for sounding rockets. This paper addresses the NASA Wallops Island history of GPS Sounding Rocket experience since 1994 and the development of a highly accurate and useful system.

  10. Neonatal incubators: a toxic sound environment for the preterm infant?*.

    PubMed

    Marik, Paul E; Fuller, Christopher; Levitov, Alexander; Moll, Elizabeth

    2012-11-01

    High sound pressure levels may be harmful to the maturing newborn. Current guidelines suggest that the sound pressure levels within a neonatal intensive care unit should not exceed 45 dB(A). It is likely that environmental noise as well as the noise generated by the incubator fan and respiratory equipment may contribute to the total sound pressure levels. Knowledge of the contribution of each component and source is important to develop effective strategies to reduce noise within the incubator. The objectives of this study were to determine the sound levels, sound spectra, and major sources of sound within a modern neonatal incubator (Giraffe Omnibed; GE Healthcare, Helsinki, Finland) using a sound simulation study to replicate the conditions of a preterm infant undergoing high-frequency jet ventilation (Life Pulse, Bunnell, UT). Using advanced sound data acquisition and signal processing equipment, we measured and analyzed the sound level at a dummy infant's ear and at the head level outside the enclosure. The sound data time histories were digitally acquired and processed using a digital Fast Fourier Transform algorithm to provide spectra of the sound and cumulative sound pressure levels (dBA). The simulation was done with the incubator cooling fan and ventilator switched on or off. In addition, tests were carried out with the enclosure sides closed and hood down and then with the enclosure sides open and the hood up to determine the importance of interior incubator reverberance on the interior sound levels. With all the equipment off and the hood down, the sound pressure levels were 53 dB(A) inside the incubator. The sound pressure levels increased to 68 dB(A) with all equipment switched on (approximately 10 times louder than recommended). The sound intensity was 6.0 × 10⁻⁸ W/m²; this sound level is roughly comparable with that generated by a kitchen exhaust fan on high. Turning the ventilator off reduced the overall sound pressure levels to 64 dB(A) and
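
    For context on how octave-band spectra are condensed into a single dB(A) figure, a minimal sketch applying the standard A-weighting corrections at octave-band centres and summing on an energy basis; the band levels are hypothetical, not the incubator measurements.

```python
import math

# Standard A-weighting corrections (dB) at octave-band centre frequencies.
A_WEIGHT = {31.5: -39.4, 63: -26.2, 125: -16.1, 250: -8.6, 500: -3.2,
            1000: 0.0, 2000: 1.2, 4000: 1.0, 8000: -1.1}

def overall_dba(band_levels_db):
    """Combine unweighted octave-band SPLs into a single A-weighted level."""
    total = sum(10 ** ((level + A_WEIGHT[f]) / 10.0)
                for f, level in band_levels_db.items())
    return 10.0 * math.log10(total)

# Hypothetical incubator spectrum: energy concentrated at low frequencies,
# where A-weighting discounts it heavily.
bands = {63: 66.0, 125: 62.0, 250: 58.0, 500: 55.0,
         1000: 52.0, 2000: 49.0, 4000: 45.0, 8000: 40.0}
print(f"overall level: {overall_dba(bands):.1f} dB(A)")
```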

  11. How learning to abstract shapes neural sound representations

    PubMed Central

    Ley, Anke; Vroomen, Jean; Formisano, Elia

    2014-01-01

    The transformation of acoustic signals into abstract perceptual representations is the essence of the efficient and goal-directed neural processing of sounds in complex natural environments. While the human and animal auditory system is perfectly equipped to process the spectrotemporal sound features, adequate sound identification and categorization require neural sound representations that are invariant to irrelevant stimulus parameters. Crucially, what is relevant and irrelevant is not necessarily intrinsic to the physical stimulus structure but needs to be learned over time, often through integration of information from other senses. This review discusses the main principles underlying categorical sound perception with a special focus on the role of learning and neural plasticity. We examine the role of different neural structures along the auditory processing pathway in the formation of abstract sound representations with respect to hierarchical as well as dynamic and distributed processing models. Whereas most fMRI studies on categorical sound processing employed speech sounds, the emphasis of the current review lies on the contribution of empirical studies using natural or artificial sounds that enable separating acoustic and perceptual processing levels and avoid interference with existing category representations. Finally, we discuss the opportunities of modern analyses techniques such as multivariate pattern analysis (MVPA) in studying categorical sound representations. With their increased sensitivity to distributed activation changes—even in absence of changes in overall signal level—these analyses techniques provide a promising tool to reveal the neural underpinnings of perceptually invariant sound representations. PMID:24917783

  12. A weighted reliability measure for phonetic transcription.

    PubMed

    Oller, D Kimbrough; Ramsdell, Heather L

    2006-12-01

    The purpose of the present work is to describe and illustrate the utility of a new tool for assessment of transcription agreement. Traditional measures have not characterized overall transcription agreement with sufficient resolution, specifically because they have often treated all phonetic differences between segments in transcriptions as equivalent, thus constituting an unweighted approach to agreement assessment. The measure the authors have developed calculates a weighted transcription agreement value based on principles derived from widely accepted tenets of phonological theory. To investigate the utility of the new measure, 8 coders transcribed samples of speech and infant vocalizations. Comparing the transcriptions through a computer-based implementation of the new weighted and the traditional unweighted measures, they investigated the scaling properties of both. The results illustrate better scaling with the weighted measure, in particular because the weighted measure is not subject to the floor effects that occur with the traditional measure when applied to samples that are difficult to transcribe. Furthermore, the new weighted measure shows orderly relations in degree of agreement across coded samples of early canonical-stage babbling, early meaningful speech in English, and 3 adult languages. The authors conclude that the weighted measure may provide improved foundations for research on phonetic transcription and for monitoring of transcription reliability.

  13. Atmospheric sound propagation

    NASA Technical Reports Server (NTRS)

    Cook, R. K.

    1969-01-01

    The propagation of sound waves at infrasonic frequencies (oscillation periods 1.0 - 1000 seconds) in the atmosphere is being studied by a network of seven stations separated geographically by distances of the order of thousands of kilometers. The stations measure the following characteristics of infrasonic waves: (1) the amplitude and waveform of the incident sound pressure, (2) the direction of propagation of the wave, (3) the horizontal phase velocity, and (4) the distribution of sound wave energy at various frequencies of oscillation. Some infrasonic sources which were identified and studied include the aurora borealis, tornadoes, volcanos, gravity waves on the oceans, earthquakes, and atmospheric instability waves caused by winds at the tropopause. Waves of unknown origin seem to radiate from several geographical locations, including one in the Argentine.

  14. 40 CFR 205.54-2 - Sound data acquisition system.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 40 (Protection of Environment), § 205.54-2 Sound data acquisition system. (a) Systems employing tape recorders and graphic level recorders may be... ...meets the “fast” dynamic requirement of a precision sound level meter indicating meter system for the...

  15. An Analysis of Sound Exposure in a University Music Rehearsal

    ERIC Educational Resources Information Center

    Farmer, Joe; Thrasher, Michael; Fumo, Nelson

    2014-01-01

    Exposure to high sound levels may lead to a variety of hearing abnormalities, including Noise-Induced Hearing Loss (NIHL). Pre-professional university music majors may experience frequent exposure to elevated sound levels, and this may have implications on their future career prospects (Jansen, Helleman, Dreschler & de Laat, 2009). Studies…

  16. Sounding Equipment Studies,

    DTIC Science & Technology

    1967-11-06

    considered: 1. Single sounding head per craft. 2. Multiple sounding heads per craft (paravanes or bar). 3. Mother craft with manned daughter boats. 4. Mother craft with unmanned daughter boats. 5. Craft refueling at mother ship. 6. Craft refueling (and crew change) by logistics boat. 7. Various... Sensor costs, then, are simply C = K_m C_s L / L_s (Eq. 27), where L = useful life of sensor and K_m = 1.0 plus the fraction of cost allocated to repair

  17. The heart sound preprocessor

    NASA Technical Reports Server (NTRS)

    Chen, W. T.

    1972-01-01

    Technology developed for signal and data processing was applied to diagnostic techniques in the area of phonocardiography (PCG), the graphic recording of the sounds of the heart generated by the functioning of the aortic and ventricular valves. The relatively broad bandwidth of the PCG signal (20 to 2000 Hz) was reduced to less than 100 Hz by the use of a heart sound envelope. The process involves full-wave rectification of the PCG signal, envelope detection of the rectified wave, and low pass filtering of the resultant envelope.
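
    A minimal sketch of that processing chain (full-wave rectification followed by low-pass filtering) using SciPy; the sampling rate, cutoff frequency, and synthetic PCG below are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 4_000  # Hz, assumed sampling rate of the PCG recording

def heart_sound_envelope(pcg, cutoff_hz=50.0, fs=FS):
    """Full-wave rectification followed by low-pass filtering, reducing the
    broadband PCG signal to a slowly varying envelope."""
    rectified = np.abs(pcg)                          # full-wave rectification
    b, a = butter(4, cutoff_hz, btype="low", fs=fs)  # envelope detector
    return filtfilt(b, a, rectified)

# Hypothetical PCG: short tone bursts standing in for the heart sounds.
t = np.arange(2 * FS) / FS
bursts = (np.sin(2 * np.pi * 1.2 * t) > 0.97) | (np.sin(2 * np.pi * 1.2 * t + 2.0) > 0.97)
pcg = bursts * np.sin(2 * np.pi * 120 * t) \
      + 0.02 * np.random.default_rng(2).normal(size=t.size)

envelope = heart_sound_envelope(pcg)
print(f"envelope limited to ~50 Hz, peak value {envelope.max():.3f}")
```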

  18. The Imagery of Sound

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Automated Analysis Corporation's COMET is a suite of acoustic analysis software for advanced noise prediction. It analyzes the origin, radiation, and scattering of noise, and supplies information on how to achieve noise reduction and improve sound characteristics. COMET's Structural Acoustic Foam Engineering (SAFE) module extends the sound field analysis capability of foam and other materials. SAFE shows how noise travels while airborne, how it travels within a structure, and how these media interact to affect other aspects of the transmission of noise. The COMET software reduces design time and expense while optimizing a final product's acoustical performance. COMET was developed through SBIR funding and Langley Research Center for Automated Analysis Corporation.

  19. Sounding rockets in Antarctica

    NASA Technical Reports Server (NTRS)

    Alford, G. C.; Cooper, G. W.; Peterson, N. E.

    1982-01-01

    Sounding rockets are versatile tools for scientists studying the atmospheric region which is located above balloon altitudes but below orbital satellite altitudes. Three NASA Nike-Tomahawk sounding rockets were launched from Siple Station in Antarctica in an upper atmosphere physics experiment in the austral summer of 1980-81. The 110 kg payloads were carried to 200 km apogee altitudes in a coordinated project with Arcas rocket payloads and instrumented balloons. This Siple Station Expedition demonstrated the feasibility of launching large, near 1,000 kg, rocket systems from research stations in Antarctica. The remoteness of research stations in Antarctica and the severe environment are major considerations in planning rocket launching expeditions.

  20. Exposure to excessive sounds and hearing status in academic classical music students.

    PubMed

    Pawlaczyk-Łuszczyńska, Małgorzata; Zamojska-Daniszewska, Małgorzata; Dudarewicz, Adam; Zaborowski, Kamil

    2017-02-21

The aim of this study was to assess the hearing of music students in relation to their exposure to excessive sounds. Standard pure-tone audiometry (PTA) was performed in 168 music students, aged 22.5±2.5 years. The control group included 67 subjects, non-music students and non-musicians, aged 22.8±3.3 years. Data on the study subjects' musical experience, instruments in use, time of weekly practice and additional risk factors for noise-induced hearing loss (NIHL) were identified by means of a questionnaire survey. Sound pressure levels produced by various groups of instruments during solo and group playing were also measured and analyzed. The music students' audiometric hearing threshold levels (HTLs) were compared with the theoretical predictions calculated according to the International Organization for Standardization standard ISO 1999:2013. It was estimated that the music students were exposed for 27.1±14.3 h/week to sounds at an A-weighted equivalent-continuous sound pressure level of 89.9±6.0 dB. There were no significant differences in HTLs between the music students and the control group in the frequency range of 4000-8000 Hz. Furthermore, in each group HTLs in the frequency range 1000-8000 Hz did not exceed 20 dB HL in 83% of the examined ears. Nevertheless, high-frequency notched audiograms typical of noise-induced hearing loss were found in 13.4% and 9% of the musicians and non-musicians, respectively. The odds ratio (OR) of notching in the music students increased significantly along with higher sound pressure levels (OR = 1.07, 95% confidence interval (CI): 1.014-1.13, p < 0.05). The students' HTLs were worse (higher) than those of a highly screened non-noise-exposed population. Moreover, their hearing loss was less severe than that expected from sound exposure at 3000 Hz and 4000 Hz, and more severe at 6000 Hz. The results confirm the need for further studies and development of a hearing
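
    The reported exposure (89.9 dB(A) over 27.1 h/week) can be put on the usual 40-hour-week footing with the standard energy-equivalence relation. The sketch below is that textbook normalization only, not the paper's ISO 1999:2013 procedure; the function name and reference duration are assumptions.

```python
import math

def weekly_noise_exposure_level(leq_dba, hours_per_week, reference_hours=40.0):
    """Normalize an equivalent-continuous level to a 40-h working week:
    L_EX = Leq + 10*log10(T / 40). A sketch of the usual convention only."""
    return leq_dba + 10.0 * math.log10(hours_per_week / reference_hours)

# Figures reported for the music students: 89.9 dB(A) over 27.1 h/week
print(round(weekly_noise_exposure_level(89.9, 27.1), 1))  # ~88.2 dB(A)
```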

  1. Sound absorption coefficient of coal bottom ash concrete for railway application

    NASA Astrophysics Data System (ADS)

    Ramzi Hannan, N. I. R.; Shahidan, S.; Maarof, Z.; Ali, N.; Abdullah, S. R.; Ibrahim, M. H. Wan

    2017-11-01

A porous concrete is able to reduce the sound waves that pass through it. When a sound wave strikes a material, a portion of the sound energy is reflected back, another portion is absorbed by the material, and the rest is transmitted. The larger the portion of the sound wave that is absorbed, the more the noise level can be lowered. This study investigates the sound absorption coefficient of coal bottom ash (CBA) concrete compared with that of normal concrete by carrying out impedance tube tests. Hence, this paper presents the results of the impedance tube tests on the CBA concrete and the normal concrete.
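
    An impedance tube test ultimately reports a normal-incidence absorption coefficient. The sketch below shows only the final relation between the complex pressure reflection coefficient and that coefficient; the reflection-coefficient values are hypothetical placeholders, and the two-microphone transfer-function procedure that actually produces R in the tube is omitted.

```python
import numpy as np

def absorption_coefficient(reflection_coefficient):
    """Normal-incidence absorption coefficient from the complex pressure
    reflection coefficient measured in an impedance tube: alpha = 1 - |R|^2."""
    return 1.0 - np.abs(reflection_coefficient) ** 2

# Hypothetical reflection coefficients for a porous concrete sample at a few frequencies
R = np.array([0.9 + 0.1j, 0.6 - 0.2j, 0.3 + 0.05j])
print(absorption_coefficient(R))  # a larger absorbed portion gives a higher alpha
```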

  2. Exploring Sound with Insects

    ERIC Educational Resources Information Center

    Robertson, Laura; Meyer, John R.

    2010-01-01

    Differences in insect morphology and movement during singing provide a fascinating opportunity for students to investigate insects while learning about the characteristics of sound. In the activities described here, students use a free online computer software program to explore the songs of the major singing insects and experiment with making…

  3. Creative Sound Dramatics

    ERIC Educational Resources Information Center

    Hendrix, Rebecca; Eick, Charles

    2014-01-01

    Sound propagation is not easy for children to understand because of its abstract nature, often best represented by models such as wave drawings and particle dots. Teachers Rebecca Hendrix and Charles Eick wondered how science inquiry, when combined with an unlikely discipline like drama, could produce a better understanding among their…

  4. Creating A Choral Sound.

    ERIC Educational Resources Information Center

    Leenman, Tracy E.

    1996-01-01

    Covers a variety of strategies for creating a unique and identifiable choral sound. Provides specific instructions for developing singing in unison and recommends a standing arrangement of soprano, alto, tenor, and bass quartets. Provides other tips for instrumentation, sight reading, and quality rehearsal time. (MJP)

  5. Photoacoustic Sounds from Meteors.

    SciTech Connect

    Spalding, Richard E.; Tencer, John; Sweatt, William C.

    2015-03-01

High-speed photometric observations of meteor fireballs have shown that they often produce high-amplitude light oscillations with frequency components in the kHz range, and in some cases exhibit strong millisecond flares. We built a light source with similar characteristics and illuminated various materials in the laboratory, generating audible sounds. Models suggest that light oscillations and pulses can radiatively heat dielectric materials, which in turn conductively heats the surrounding air on millisecond timescales. The sound waves can be heard if the illuminated material is sufficiently close to the observer’s ears. The mechanism described herein may explain many reports of meteors that appear to be audible while they are concurrently visible in the sky and too far away for sound to have propagated to the observer. This photoacoustic (PA) explanation provides an alternative to electrophonic (EP) sounds hypothesized to arise from electromagnetic coupling of plasma oscillation in the meteor wake to natural antennas in the vicinity of an observer.

  6. Making Sense of Sound

    ERIC Educational Resources Information Center

    Menon, Deepika; Lankford, Deanna

    2016-01-01

    From the earliest days of their lives, children are exposed to all kinds of sound, from soft, comforting voices to the frightening rumble of thunder. Consequently, children develop their own naïve explanations largely based upon their experiences with phenomena encountered every day. When new information does not support existing conceptions,…

  7. Second sound tracking system

    NASA Astrophysics Data System (ADS)

    Yang, Jihee; Ihas, Gary G.; Ekdahl, Dan

    2017-10-01

It is common for a physical system to resonate at a particular frequency that depends on physical parameters which may change in time. Often, one would like to automatically track this signal as the frequency changes, measuring, for example, its amplitude. In scientific research, one would also like to utilize standard methods, such as lock-in amplifiers, to improve the signal-to-noise ratio. We present a complete He ii second sound system that uses positive feedback to generate a sinusoidal signal of constant amplitude via automatic gain control. This signal is used to produce temperature/entropy waves (second sound) in superfluid helium-4 (He ii). A lock-in amplifier limits the oscillation to a desirable frequency and demodulates the received sound signal. Using this tracking system, a second sound signal was used to probe turbulent decay in He ii. We present results showing that the tracking system is more reliable than a conventional fixed-frequency method; there is less correlation with temperature (frequency) fluctuations when the tracking system is used.

  8. About sound mufflers sound-absorbing panels aircraft engine

    NASA Astrophysics Data System (ADS)

    Dudarev, A. S.; Bulbovich, R. V.; Svirshchev, V. I.

    2016-10-01

The article provides a formula for calculating the resonance frequency of a sound-absorbing panel with a perforated wall. Although the sound-absorbing structure is a set of Helmholtz resonators, acoustic calculations should consider the entire perforated wall panel rather than individual resonators. The analysis shows how the dimensions and other parameters of the sound-absorbing structure affect the absorption rate.
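
    The abstract does not reproduce the article's formula, so the sketch below uses the common textbook estimate for the resonance frequency of a perforated-panel (Helmholtz-type) absorber. The function name, end-correction factor, and the example dimensions are assumptions for illustration, not values from the article.

```python
import math

def perforated_panel_resonance(perforation_ratio, panel_thickness, hole_diameter,
                               cavity_depth, c=343.0):
    """Textbook estimate of the resonance frequency of a perforated-panel absorber:
        f0 = c/(2*pi) * sqrt(eps / (t_eff * D)),  t_eff = t + 0.85*d
    where eps is the open-area ratio, t the panel thickness, d the hole diameter
    and D the backing cavity depth (all lengths in metres)."""
    t_eff = panel_thickness + 0.85 * hole_diameter
    return c / (2.0 * math.pi) * math.sqrt(perforation_ratio / (t_eff * cavity_depth))

# Example: 2% open area, 1 mm panel, 2 mm holes, 30 mm cavity
print(round(perforated_panel_resonance(0.02, 0.001, 0.002, 0.030)))  # ~860 Hz
```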

  9. Auditory performance in an open sound field

    NASA Astrophysics Data System (ADS)

    Fluitt, Kim F.; Letowski, Tomasz; Mermagen, Timothy

    2003-04-01

Detection and recognition of acoustic sources in an open field are important elements of situational awareness on the battlefield. They are affected by many technical and environmental conditions such as type of sound, distance to a sound source, terrain configuration, meteorological conditions, hearing capabilities of the listener, level of background noise, and the listener's familiarity with the sound source. A limited body of knowledge about auditory perception of sources located over long distances makes it difficult to develop models predicting auditory behavior on the battlefield. The purpose of the present study was to determine listeners' abilities to detect, recognize, localize, and estimate distances to sound sources from 25 to 800 m from the listening position. Data were also collected for meteorological conditions (wind direction and strength, temperature, atmospheric pressure, humidity) and background noise level for each experimental trial. Forty subjects (men and women, ages 18 to 25) participated in the study. Nine types of sounds were presented from six loudspeakers in random order; each series was presented four times. Partial results indicate that both detection and recognition declined at distances greater than approximately 200 m and that distances were grossly underestimated by listeners. Specific results will be presented.

  10. Understanding environmental sounds in sentence context.

    PubMed

    Uddin, Sophia; Heald, Shannon L M; Van Hedger, Stephen C; Klos, Serena; Nusbaum, Howard C

    2018-03-01

    There is debate about how individuals use context to successfully predict and recognize words. One view argues that context supports neural predictions that make use of the speech motor system, whereas other views argue for a sensory or conceptual level of prediction. While environmental sounds can convey clear referential meaning, they are not linguistic signals, and are thus neither produced with the vocal tract nor typically encountered in sentence context. We compared the effect of spoken sentence context on recognition and comprehension of spoken words versus nonspeech, environmental sounds. In Experiment 1, sentence context decreased the amount of signal needed for recognition of spoken words and environmental sounds in similar fashion. In Experiment 2, listeners judged sentence meaning in both high and low contextually constraining sentence frames, when the final word was present or replaced with a matching environmental sound. Results showed that sentence constraint affected decision time similarly for speech and nonspeech, such that high constraint sentences (i.e., frame plus completion) were processed faster than low constraint sentences for speech and nonspeech. Linguistic context facilitates the recognition and understanding of nonspeech sounds in much the same way as for spoken words. This argues against a simple form of a speech-motor explanation of predictive coding in spoken language understanding, and suggests support for conceptual-level predictions. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. An assessment of dairy herd bulls in southern Australia: 2. Analysis of bull- and herd-level risk factors and their associations with pre- and postmating breeding soundness results.

    PubMed

    Hancock, A S; Younis, P J; Beggs, D S; Mansell, P D; Stevenson, M A; Pyman, M F

    2016-12-01

    In pasture-based, seasonally calving dairy herds of southern Australia, the mating period usually consists of an initial artificial insemination period followed by a period of natural service using herd bulls. The primary objective of this study was to identify associations between individual bull- and herd-level management factors and bull fertility as measured by a pre- and postmating bull breeding soundness evaluation (BBSE). Multivariable mixed effects logistic regression models were used to identify factors associated with bulls being classified as high risk of reduced fertility at the premating and postmating BBSE. Bulls older than 4 yr of age at the premating BBSE were more likely to be classified high risk compared with bulls less than 4 yr of age. Bulls that were in herds in which concentrates were fed before mating were more likely to be classified as high risk at the postmating BBSE compared with bulls that were in herds where concentrates were not fed. Univariable analyses also identified areas in need of further research, including breed differences between dairy bulls, leg conformation and joint abnormalities, preventative hoof blocking for bulls, and mating ratios. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  12. Sounds of the Ancient Universe

    NASA Image and Video Library

    2013-03-21

The tones represent sound waves that traveled through the early universe and were later detected by ESA's Planck space telescope. The primordial sound waves have been translated into frequencies we can hear.

  13. On Sound Reflection in Superfluid

    NASA Astrophysics Data System (ADS)

    Melnikovsky, L. A.

    2008-02-01

    We consider reflection of first and second sound waves by a rigid flat wall in superfluid. A nontrivial dependence of the reflection coefficients on the angle of incidence is obtained. Sound conversion is predicted at slanted incidence.

  14. Comprehensive measures of sound exposures in cinemas using smart phones.

    PubMed

    Huth, Markus E; Popelka, Gerald R; Blevins, Nikolas H

    2014-01-01

    Sensorineural hearing loss from sound overexposure has a considerable prevalence. Identification of sound hazards is crucial, as prevention, due to a lack of definitive therapies, is the sole alternative to hearing aids. One subjectively loud, yet little studied, potential sound hazard is movie theaters. This study uses smart phones to evaluate their applicability as a widely available, validated sound pressure level (SPL) meter. Therefore, this study measures sound levels in movie theaters to determine whether sound levels exceed safe occupational noise exposure limits and whether sound levels in movie theaters differ as a function of movie, movie theater, presentation time, and seat location within the theater. Six smart phones with an SPL meter software application were calibrated with a precision SPL meter and validated as an SPL meter. Additionally, three different smart phone generations were measured in comparison to an integrating SPL meter. Two different movies, an action movie and a children's movie, were measured six times each in 10 different venues (n = 117). To maximize representativeness, movies were selected focusing on large release productions with probable high attendance. Movie theaters were selected in the San Francisco, CA, area based on whether they screened both chosen movies and to represent the largest variety of theater proprietors. Measurements were analyzed in regard to differences between theaters, location within the theater, movie, as well as presentation time and day as indirect indicator of film attendance. The smart phone measurements demonstrated high accuracy and reliability. Overall, sound levels in movie theaters do not exceed safe exposure limits by occupational standards. Sound levels vary significantly across theaters and demonstrated statistically significant higher sound levels and exposures in the action movie compared to the children's movie. Sound levels decrease with distance from the screen. However, no influence on
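
    The app and reference meter readings compared in this study are integrating measurements, i.e. equivalent-continuous levels. As a reminder of what that quantity is, here is a minimal sketch of an Leq computed from short-interval readings; the sample values are hypothetical and not taken from the study.

```python
import numpy as np

def equivalent_level(spl_samples_db):
    """Energy-average (Leq) of a series of short-interval SPL readings in dB,
    the quantity an integrating SPL meter or meter app reports."""
    return 10.0 * np.log10(np.mean(10.0 ** (np.asarray(spl_samples_db) / 10.0)))

# Hypothetical per-second A-weighted readings during a loud passage
samples = [68, 72, 85, 90, 74, 70]
print(round(equivalent_level(samples), 1))  # ~83.6 dB(A), dominated by the loudest seconds
```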

  15. Distress sounds of thorny catfishes emitted underwater and in air: characteristics and potential significance.

    PubMed

    Knight, Lisa; Ladich, Friedrich

    2014-11-15

    Thorny catfishes produce stridulation (SR) sounds using their pectoral fins and drumming (DR) sounds via a swimbladder mechanism in distress situations when hand held in water and in air. It has been argued that SR and DR sounds are aimed at different receivers (predators) in different media. The aim of this study was to analyse and compare sounds emitted in both air and water in order to test different hypotheses on the functional significance of distress sounds. Five representatives of the family Doradidae were investigated. Fish were hand held and sounds emitted in air and underwater were recorded (number of sounds, sound duration, dominant and fundamental frequency, sound pressure level and peak-to-peak amplitudes). All species produced SR sounds in both media, but DR sounds could not be recorded in air for two species. Differences in sound characteristics between media were small and mainly limited to spectral differences in SR. The number of sounds emitted decreased over time, whereas the duration of SR sounds increased. The dominant frequency of SR and the fundamental frequency of DR decreased and sound pressure level of SR increased with body size across species. The hypothesis that catfish produce more SR sounds in air and more DR sounds in water as a result of different predation pressure (birds versus fish) could not be confirmed. It is assumed that SR sounds serve as distress sounds in both media, whereas DR sounds might primarily be used as intraspecific communication signals in water in species possessing both mechanisms. © 2014. Published by The Company of Biologists Ltd.

  16. Just How Does Sound Wave?

    ERIC Educational Resources Information Center

    Shipman, Bob

    2006-01-01

    When children first hear the term "sound wave" perhaps they might associate it with the way a hand waves or perhaps the squiggly line image on a television monitor when sound recordings are being made. Research suggests that children tend to think sound somehow travels as a discrete package, a fast-moving invisible thing, and not something that…

  17. Sounds Alive: A Noise Workbook.

    ERIC Educational Resources Information Center

    Dickman, Donna McCord

    Sarah Screech, Danny Decibel, Sweetie Sound and Neil Noisy describe their experiences in the world of sound and noise to elementary students. Presented are their reports, games and charts which address sound measurement, the effects of noise on people, methods of noise control, and related areas. The workbook is intended to stimulate students'…

  18. THE SOUND PATTERN OF ENGLISH.

    ERIC Educational Resources Information Center

    CHOMSKY, NOAM; HALLE, MORRIS

    "THE SOUND PATTERN OF ENGLISH" PRESENTS A THEORY OF SOUND STRUCTURE AND A DETAILED ANALYSIS OF THE SOUND STRUCTURE OF ENGLISH WITHIN THE FRAMEWORK OF GENERATIVE GRAMMAR. IN THE PREFACE TO THIS BOOK THE AUTHORS STATE THAT THEIR "WORK IN THIS AREA HAS REACHED A POINT WHERE THE GENERAL OUTLINES AND MAJOR THEORETICAL PRINCIPLES ARE FAIRLY CLEAR" AND…

  19. Data sonification and sound visualization.

    SciTech Connect

    Kaper, H. G.; Tipei, S.; Wiebel, E.

    1999-07-01

    Sound can help us explore and analyze complex data sets in scientific computing. The authors describe a digital instrument for additive sound synthesis (Diass) and a program to visualize sounds in a virtual reality environment (M4Cave). Both are part of a comprehensive music composition environment that includes additional software for computer-assisted composition and automatic music notation.

  20. Multichannel sound reinforcement systems at work in a learning environment

    NASA Astrophysics Data System (ADS)

    Malek, John; Campbell, Colin

    2003-04-01

    Many people have experienced the entertaining benefits of a surround sound system, either in their own home or in a movie theater, but another application exists for multichannel sound that has for the most part gone unused. This is the application of multichannel sound systems to the learning environment. By incorporating a 7.1 surround processor and a touch panel interface programmable control system, the main lecture hall at the University of Michigan Taubman College of Architecture and Urban Planning has been converted from an ordinary lecture hall to a working audiovisual laboratory. The multichannel sound system is used in a wide variety of experiments, including exposure to sounds to test listeners' aural perception of the tonal characteristics of varying pitch, reverberation, speech transmission index, and sound-pressure level. The touch panel's custom interface allows a variety of user groups to control different parts of the AV system and provides preset capability that allows for numerous system configurations.

  1. Light aircraft sound transmission studies - Noise reduction model

    NASA Technical Reports Server (NTRS)

    Atwal, Mahabir S.; Heitman, Karen E.; Crocker, Malcolm J.

    1987-01-01

    Experimental tests conducted on the fuselage of a single-engine Piper Cherokee light aircraft suggest that the cabin interior noise can be reduced by increasing the transmission loss of the dominant sound transmission paths and/or by increasing the cabin interior sound absorption. The validity of using a simple room equation model to predict the cabin interior sound-pressure level for different fuselage and exterior sound field conditions is also presented. The room equation model is based on the sound power flow balance for the cabin space and utilizes the measured transmitted sound intensity data. The room equation model predictions were considered good enough to be used for preliminary acoustical design studies.
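
    The "room equation model" referred to above is a diffuse-field power balance for the cabin. The sketch below is a textbook form of such a balance, not necessarily the exact formulation in the paper; the cabin surface area, absorption values, and transmitted power level are hypothetical.

```python
import math

def interior_spl_from_transmitted_power(Lw_transmitted_db, surface_area_m2,
                                        mean_absorption):
    """Diffuse-field room equation (textbook form of a power-balance model):
    Lp = Lw + 10*log10(4 / Rc), with room constant Rc = S*a / (1 - a)."""
    Rc = surface_area_m2 * mean_absorption / (1.0 - mean_absorption)
    return Lw_transmitted_db + 10.0 * math.log10(4.0 / Rc)

# Hypothetical cabin: 15 m^2 of interior surface; compare two mean absorptions
for a in (0.2, 0.5):
    print(a, round(interior_spl_from_transmitted_power(90.0, 15.0, a), 1))
# Raising absorption from 0.2 to 0.5 lowers the predicted interior level by about 6 dB.
```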

  2. Wood for sound.

    PubMed

    Wegst, Ulrike G K

    2006-10-01

The unique mechanical and acoustical properties of wood and its aesthetic appeal still make it the material of choice for musical instruments and the interior of concert halls. Worldwide, several hundred wood species are available for making wind, string, or percussion instruments. Over generations, first by trial and error and more recently by scientific approach, the most appropriate species were found for each instrument and application. Using material property charts on which acoustic properties such as the speed of sound, the characteristic impedance, the sound radiation coefficient, and the loss coefficient are plotted against one another for woods, we analyze and explain why spruce is the preferred choice for soundboards, why tropical species are favored for xylophone bars and woodwind instruments, why violinists still prefer pernambuco over other species as a bow material, and why hornbeam and birch are used in piano actions.
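
    The chart quantities named above follow directly from a wood's Young's modulus and density. The sketch below computes them for roughly typical along-grain spruce values; the numbers are assumed for illustration only and are not from the article.

```python
import math

def acoustic_figures_of_merit(E, rho):
    """Standard wood-acoustics quantities (along-the-grain): speed of sound c,
    characteristic impedance z, and sound radiation coefficient R."""
    c = math.sqrt(E / rho)        # m/s
    z = math.sqrt(E * rho)        # Pa.s/m (rayl)
    R = math.sqrt(E / rho ** 3)   # m^4/(kg.s), equivalently c/rho
    return c, z, R

# Roughly typical along-grain values for spruce (assumed, for illustration)
c, z, R = acoustic_figures_of_merit(E=11e9, rho=420.0)
print(f"c = {c:.0f} m/s, z = {z/1e6:.1f} MRayl, R = {R:.1f}")
```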

  3. Environmentally sound manufacturing

    NASA Technical Reports Server (NTRS)

    Caddy, Larry A.; Bowman, Ross; Richards, Rex A.

    1994-01-01

    The NASA/Thiokol/industry team has developed and started implementation of an environmentally sound manufacturing plan for the continued production of solid rocket motors. They have worked with other industry representatives and the U.S. Environmental Protection Agency to prepare a comprehensive plan to eliminate all ozone depleting chemicals from manufacturing processes and to reduce the use of other hazardous materials used to produce the space shuttle reusable solid rocket motors. The team used a classical approach for problem solving combined with a creative synthesis of new approaches to attack this problem. As our ability to gather data on the state of the Earth's environmental health increases, environmentally sound manufacturing must become an integral part of the business decision making process.

  4. Sound source measurement by using a passive sound insulation and a statistical approach

    NASA Astrophysics Data System (ADS)

    Dragonetti, Raffaele; Di Filippo, Sabato; Mercogliano, Francesco; Romano, Rosario A.

    2015-10-01

This paper describes a measurement technique developed by the authors that allows acoustic measurements to be carried out inside noisy environments while reducing background noise effects. The proposed method is based on the integration of a traditional passive noise insulation system with a statistical approach. The latter is applied to signals picked up by the usual sensors (microphones and accelerometers) equipping the passive sound insulation system. At low frequencies, the statistical approach improves on the sound insulation provided by the passive system alone. The developed measurement technique has been validated by means of numerical simulations and measurements carried out inside a real noisy environment. For the case studies reported here, an average improvement of about 10 dB has been obtained in a frequency range up to about 250 Hz. Considerations on the lowest sound pressure level that can be measured by applying the proposed method and the measurement error related to its application are reported as well.

  5. The Sounds of Space

    NASA Astrophysics Data System (ADS)

    Gurnett, Donald

    2009-11-01

    The popular concept of space is that it is a vacuum, with nothing of interest between the stars, planets, moons and other astronomical objects. In fact most of space is permeated by plasma, sometimes quite dense, as in the solar corona and planetary ionospheres, and sometimes quite tenuous, as is in planetary radiation belts. Even less well known is that these space plasmas support and produce an astonishing large variety of waves, the ``sounds of space.'' In this talk I will give you a tour of these space sounds, starting with the very early discovery of ``whistlers'' nearly a century ago, and proceeding through my nearly fifty years of research on space plasma waves using spacecraft-borne instrumentation. In addition to being of scientific interest, some of these sounds can even be described as ``musical,'' and have served as the basis for various musical compositions, including a production called ``Sun Rings,'' written by the well-known composer Terry Riley, that has been performed by the Kronos Quartet to audiences all around the world.

  6. The Warm, Rich Sound of Valve Guitar Amplifiers

    ERIC Educational Resources Information Center

    Keeports, David

    2017-01-01

    Practical solid state diodes and transistors have made glass valve technology nearly obsolete. Nevertheless, valves survive largely because electric guitar players much prefer the sound of valve amplifiers to the sound of transistor amplifiers. This paper discusses the introductory-level physics behind that preference. Overdriving an amplifier…

  7. Judging sound rotation when listeners and sounds rotate: Sound source localization is a multisystem process.

    PubMed

    Yost, William A; Zhong, Xuan; Najam, Anbar

    2015-11-01

    In four experiments listeners were rotated or were stationary. Sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuth array. When either sounds or listeners rotate the auditory cues used for sound source localization change, but in the everyday world listeners perceive sound rotation only when sounds rotate not when listeners rotate. In the everyday world sound source locations are referenced to positions in the environment (a world-centric reference system). The auditory cues for sound source location indicate locations relative to the head (a head-centric reference system), not locations relative to the world. This paper deals with a general hypothesis that the world-centric location of sound sources requires the auditory system to have information about auditory cues used for sound source location and cues about head position. The use of visual and vestibular information in determining rotating head position in sound rotation perception was investigated. The experiments show that sound rotation perception when sources and listeners rotate was based on acoustic, visual, and, perhaps, vestibular information. The findings are consistent with the general hypotheses and suggest that sound source localization is not based just on acoustics. It is a multisystem process.

  8. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 1 2013-10-01 2013-10-01 false Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. 7.20 Section 7.20... Atlantic Coast § 7.20 Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island...

  9. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 1 2011-10-01 2011-10-01 false Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. 7.20 Section 7.20... Atlantic Coast § 7.20 Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island...

  10. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 1 2014-10-01 2014-10-01 false Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. 7.20 Section 7.20... Atlantic Coast § 7.20 Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island...

  11. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 1 2012-10-01 2012-10-01 false Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. 7.20 Section 7.20... Atlantic Coast § 7.20 Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island...

  12. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. 7.20 Section 7.20... Atlantic Coast § 7.20 Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island...

  13. Characteristic sounds facilitate visual search.

    PubMed

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  14. Characteristic sounds facilitate visual search

    PubMed Central

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2009-01-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing “meow” did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds. PMID:18567253

  15. Sound exposure during outdoor music festivals.

    PubMed

    Tronstad, Tron V; Gelderblom, Femke B

    2016-01-01

Most countries have guidelines to regulate sound exposure at concerts and music festivals. These guidelines limit the allowed sound pressure levels and the concert/festival's duration. In Norway, where there is such a guideline, it is up to the local authorities to impose the regulations. The need to prevent hearing loss among festival participants is self-explanatory, but knowledge of the actual dose received by visitors is extremely scarce. This study looks at two Norwegian music festivals, of which only one was regulated by the Norwegian guideline for concerts and music festivals. At each festival the sound exposure of four participants was monitored with noise dose meters. This study compared the exposures experienced at the two festivals, and tested them against the Norwegian guideline and the World Health Organization's recommendations. Sound levels during the concerts were higher at the festival not regulated by any guideline, and levels there exceeded both the national and the World Health Organization's recommendations. The results also show that front-of-house measurements reliably predict participant exposure.

  16. Sound Exposure During Outdoor Music Festivals

    PubMed Central

    Tronstad, Tron V.; Gelderblom, Femke B.

    2016-01-01

Most countries have guidelines to regulate sound exposure at concerts and music festivals. These guidelines limit the allowed sound pressure levels and the concert/festival's duration. In Norway, where there is such a guideline, it is up to the local authorities to impose the regulations. The need to prevent hearing loss among festival participants is self-explanatory, but knowledge of the actual dose received by visitors is extremely scarce. This study looks at two Norwegian music festivals, of which only one was regulated by the Norwegian guideline for concerts and music festivals. At each festival the sound exposure of four participants was monitored with noise dose meters. This study compared the exposures experienced at the two festivals, and tested them against the Norwegian guideline and the World Health Organization's recommendations. Sound levels during the concerts were higher at the festival not regulated by any guideline, and levels there exceeded both the national and the World Health Organization's recommendations. The results also show that front-of-house measurements reliably predict participant exposure. PMID:27569410

  17. Why Do People Like Loud Sound? A Qualitative Study

    PubMed Central

    Welch, David; Fremaux, Guy

    2017-01-01

    Many people choose to expose themselves to potentially dangerous sounds such as loud music, either via speakers, personal audio systems, or at clubs. The Conditioning, Adaptation and Acculturation to Loud Music (CAALM) Model has proposed a theoretical basis for this behaviour. To compare the model to data, we interviewed a group of people who were either regular nightclub-goers or who controlled the sound levels in nightclubs (bar managers, musicians, DJs, and sound engineers) about loud sound. Results showed four main themes relating to the enjoyment of loud sound: arousal/excitement, facilitation of socialisation, masking of both external sound and unwanted thoughts, and an emphasis and enhancement of personal identity. Furthermore, an interesting incidental finding was that sound levels appeared to increase gradually over the course of the evening until they plateaued at approximately 97 dBA Leq around midnight. Consideration of the data generated by the analysis revealed a complex of influential factors that support people in wanting exposure to loud sound. Findings were considered in terms of the CAALM Model and could be explained in terms of its principles. From a health promotion perspective, the Social Ecological Model was applied to consider how the themes identified might influence behaviour. They were shown to influence people on multiple levels, providing a powerful system which health promotion approaches struggle to address. PMID:28800097

  18. Why Do People Like Loud Sound? A Qualitative Study.

    PubMed

    Welch, David; Fremaux, Guy

    2017-08-11

    Many people choose to expose themselves to potentially dangerous sounds such as loud music, either via speakers, personal audio systems, or at clubs. The Conditioning, Adaptation and Acculturation to Loud Music (CAALM) Model has proposed a theoretical basis for this behaviour. To compare the model to data, we interviewed a group of people who were either regular nightclub-goers or who controlled the sound levels in nightclubs (bar managers, musicians, DJs, and sound engineers) about loud sound. Results showed four main themes relating to the enjoyment of loud sound: arousal/excitement, facilitation of socialisation, masking of both external sound and unwanted thoughts, and an emphasis and enhancement of personal identity. Furthermore, an interesting incidental finding was that sound levels appeared to increase gradually over the course of the evening until they plateaued at approximately 97 dBA Leq around midnight. Consideration of the data generated by the analysis revealed a complex of influential factors that support people in wanting exposure to loud sound. Findings were considered in terms of the CAALM Model and could be explained in terms of its principles. From a health promotion perspective, the Social Ecological Model was applied to consider how the themes identified might influence behaviour. They were shown to influence people on multiple levels, providing a powerful system which health promotion approaches struggle to address.

  19. CAVITATION SOUNDS DURING CERVICOTHORACIC SPINAL MANIPULATION

    PubMed Central

    Mourad, Firas; Zingoni, Andrea; Iorio, Raffaele; Perreault, Thomas; Zacharko, Noah; de las Peñas, César Fernández; Butts, Raymond; Cleland, Joshua A.

    2017-01-01

    frequencies for all 58 manipulations. Discussion Cavitation was significantly more likely to occur unilaterally, and on the side contralateral to the short-lever applicator contact, during cervicothoracic HVLA thrust manipulation. Clinicians should expect multiple cavitation sounds when performing HVLA thrust manipulation to the CTJ. Due to the presence of multi-peak energy bursts and sounds of multiple frequencies, the cavitation hypothesis (i.e. intra-articular gas bubble collapse) alone appears unable to explain all of the audible sounds during HVLA thrust manipulation, and the possibility remains that several phenomena may be occurring simultaneously. Level of Evidence 2b PMID:28900571

  20. Psychophysiological acoustics of indoor sound due to traffic noise during sleep

    NASA Astrophysics Data System (ADS)

    Tulen, J. H. M.; Kumar, A.; Jurriëns, A. A.

    1986-10-01

The relation between the physical characteristics of sound and an individual's perception of it as annoyance is complex and unclear. Sleep disturbance by sound is manifested in the physiological responses to the sound stimuli and the quality of sleep perceived in the morning. Both may result in deterioration of functioning during wakefulness. Therefore, psychophysiological responses to noise during sleep should be studied for the evaluation of the efficacy of sound insulation. Nocturnal sleep and indoor sound level were recorded in the homes of 12 subjects living along a highway with high traffic density. Double glazing sound insulation was used to create two experimental conditions: low insulation and high insulation. Twenty recordings were made per subject, ten recordings in each condition. During the nights with low insulation the quality of sleep was so low that both performance and mood were negatively affected. The enhancement of sound insulation was not effective enough to increase the restorative effects of sleep. The transient and peaky characteristics of traffic sound were also found to result in non-adaptive physiological responses during sleep. Sound insulation did have an effect on noise peak characteristics such as peak level, peak duration and slope. However, the number of sound peaks was found to be the same in both conditions. The relation of these sound peaks detected in the indoor recorded sound level signal to characteristics of passing vehicles was established, indicating that the sound peaks causing the psychophysiological disturbances during sleep were generated by the passing vehicles. Evidence is presented to show that the reduction in sound level is not a good measure of efficacy of sound insulation. The parameters of the sound peaks, as described in this paper, are a better representation of psychophysiological efficacy of sound insulation.
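
    The analysis above hinges on extracting individual pass-by peaks (level, duration, slope) from the indoor sound level recording. The sketch below is a generic peak-extraction example, not the authors' algorithm; the threshold, minimum spacing, sampling rate, and synthetic data are all assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def traffic_noise_peaks(spl_db, fs, threshold_db=45.0, min_separation_s=2.0):
    """Minimal sketch: find vehicle pass-by peaks in an indoor SPL time series
    as local maxima above a threshold, separated by at least a few seconds."""
    idx, props = find_peaks(spl_db, height=threshold_db,
                            distance=max(1, int(min_separation_s * fs)))
    return idx / fs, props["peak_heights"]  # peak times (s) and peak levels (dB)

# Synthetic example: a ~35 dB background with three pass-by events
fs = 1.0                          # one SPL reading per second
t = np.arange(0, 300, 1 / fs)
spl = 35 + 2 * np.random.randn(t.size)
for t0 in (60, 150, 240):
    spl += 15 * np.exp(-((t - t0) ** 2) / 50.0)
times, levels = traffic_noise_peaks(spl, fs)
```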

  1. Evaluation of smartphone sound measurement applications.

    PubMed

    Kardous, Chucri A; Shaw, Peter B

    2014-04-01

This study reports on the accuracy of smartphone sound measurement applications (apps) and whether they can be appropriately employed for occupational noise measurements. A representative sample of smartphones and tablets on various platforms was acquired; more than 130 iOS apps were evaluated, but only 10 apps met our selection criteria. Only 4 out of 62 Android apps were tested. The results showed two apps with mean differences of 0.07 dB (unweighted) and -0.52 dB (A-weighted) from the reference values. Two other apps had mean differences within ±2 dB. The study suggests that certain apps may be appropriate for use in occupational noise measurements.

  2. The sound exposure of the audience at a music festival.

    PubMed

    Mercier, V; Luy, D; Hohmann, B W

    2003-01-01

During the Paleo Festival in Nyon, Switzerland, which took place from 24th to 29th July 2001, ten volunteers were equipped each evening with small sound level meters which continuously monitored their sound exposure as they circulated among the various festival events. Sound levels at the mixing console and at the place where people are most heavily exposed (in front of the speakers) were measured simultaneously. In addition, a sample of 601 people from the audience were interviewed over the six days of the festival and asked their opinion of sound level and quality, as well as to provide details of where in the arena they preferred to listen to the concerts, whether they used ear plugs, if they had experienced any tinnitus, and if so how long it had persisted. The individual sound exposure during a typical evening was on average 95 dB(A), although 8% of the volunteers were exposed to sound levels higher than 100 dB(A). Only 5% of the audience wore ear plugs throughout the concert while 34% used them occasionally. While some 36% of the people interviewed reported that they had experienced tinnitus after listening to loud music, the majority found both the music quality and the sound level good. The sound level limit of 100 dB(A) at the place where the people are most heavily exposed seems to be a good compromise between the public health issue, the demands of artists and organisers, and the expectations of the public. However, considering the average sound levels to which the public are exposed during a single evening, it is recommended that ear plugs be used by concert-goers who attend more than one day of the festival.

  3. Sound of photosynthesis

    SciTech Connect

    Amato, I.

    1989-01-01

The beauty of photosynthesis runs deep into its physicochemical details, many of which continue to elude scientific understanding. One of the big unsolved mysteries of photosynthesis is how the oxygen molecules are made, remarks David Mauzerall, a biophysicist at Rockefeller University in New York City. He and his colleagues, Ora Canaani and Shmuel Malkin, both biochemists at the Weizmann Institute of Science in Rehovot, Israel, are shining some light on this mystery. Using a technique called pulsed photoacoustic spectroscopy, the three researchers have eavesdropped on some of the intimate details of oxygen evolution. You can now hear the sound of oxygen coming out of the leaves, Mauzerall said in an interview. Mauzerall and co-workers reported their work last summer in the Proceedings of the National Academy of Sciences. Did he say, hear oxygen? As its name implies, photoacoustic spectroscopy is a sound-from-light technique. It is especially suited for getting spectra from samples like leaves that scatter the incident light so badly that even scattering- or reflection-based spectroscopic methods usually can't reveal much about the plant's chemical personality.

  4. Leveling

    USGS Publications Warehouse

    1966-01-01

    Geodetic leveling by the U.S. Geological Survey provides a framework of accurate elevations for topographic mapping. Elevations are referred to the Sea Level Datum of 1929. Lines of leveling may be run either with automatic or with precise spirit levels, by either the center-wire or the three-wire method. For future use, the surveys are monumented with bench marks, using standard metal tablets or other marking devices. The elevations are adjusted by least squares or other suitable method and are published in lists of control.

  5. Is low frequency ocean sound increasing globally?

    PubMed

    Miksis-Olds, Jennifer L; Nichols, Stephen M

    2016-01-01

    Low frequency sound has increased in the Northeast Pacific Ocean over the past 60 yr [Ross (1993) Acoust. Bull. 18, 5-8; (2005) IEEE J. Ocean. Eng. 30, 257-261; Andrew, Howe, Mercer, and Dzieciuch (2002) J. Acoust. Soc. Am. 129, 642-651; McDonald, Hildebrand, and Wiggins (2006) J. Acoust. Soc. Am. 120, 711-717; Chapman and Price (2011) J. Acoust. Soc. Am. 129, EL161-EL165] and in the Indian Ocean over the past decade, [Miksis-Olds, Bradley, and Niu (2013) J. Acoust. Soc. Am. 134, 3464-3475]. More recently, Andrew, Howe, and Mercer's [(2011) J. Acoust. Soc. Am. 129, 642-651] observations in the Northeast Pacific show a level or slightly decreasing trend in low frequency noise. It remains unclear what the low frequency trends are in other regions of the world. In this work, data from the Comprehensive Nuclear-Test Ban Treaty Organization International Monitoring System was used to examine the rate and magnitude of change in low frequency sound (5-115 Hz) over the past decade in the South Atlantic and Equatorial Pacific Oceans. The dominant source observed in the South Atlantic was seismic air gun signals, while shipping and biologic sources contributed more to the acoustic environment at the Equatorial Pacific location. Sound levels over the past 5-6 yr in the Equatorial Pacific have decreased. Decreases were also observed in the ambient sound floor in the South Atlantic Ocean. Based on these observations, it does not appear that low frequency sound levels are increasing globally.

  6. Sound therapies for tinnitus management.

    PubMed

    Jastreboff, Margaret M

    2007-01-01

Many people with bothersome (suffering) tinnitus notice that their tinnitus changes in different acoustical surroundings: it is more intrusive in silence and less profound in sound-enriched environments. This observation led to the development of treatment methods for tinnitus utilizing sound. Many of these methods are still under investigation with respect to their specific protocol and effectiveness, and only some have been objectively evaluated in clinical trials. This chapter will review therapies for tinnitus using sound stimulation.

  7. Amplitude modulation of sound from wind turbines under various meteorological conditions.

    PubMed

    Larsson, Conny; Öhlund, Olof

    2014-01-01

Wind turbine (WT) sound annoys some people even though the sound levels are relatively low. This could be because of the amplitude modulated "swishing" characteristic of the turbine sound, which is not taken into account by standard procedures for measuring average sound levels. Studies of sound immission from WTs were conducted continuously between 19 August 2011 and 19 August 2012 at two sites in Sweden. A method for quantifying the degree and strength of amplitude modulation (AM) is introduced here. The method reveals that AM at the immission points occurs under specific meteorological conditions. For WT sound immission, the wind direction and sound speed gradient are crucial for the occurrence of AM. Interference between two or more WTs could probably enhance AM. The mechanisms by which WT sound is amplitude modulated are not fully understood.
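
    The abstract does not describe the authors' AM metric, so the sketch below shows only one simple indicator of modulation depth computed from short-term levels; the window length, percentile spread, and synthetic 1 Hz modulation are assumptions for illustration and are not the paper's method.

```python
import numpy as np

def am_depth(short_term_levels_db, window=100):
    """Simple AM indicator: for consecutive windows of short-term levels,
    take the spread between the 95th and 5th percentiles after removing
    each window's mean. Larger values indicate a stronger "swish"."""
    levels = np.asarray(short_term_levels_db, dtype=float)
    n = levels.size // window
    depths = []
    for block in levels[: n * window].reshape(n, window):
        detrended = block - block.mean()
        depths.append(np.percentile(detrended, 95) - np.percentile(detrended, 5))
    return np.array(depths)  # dB per window

# Synthetic 100-ms levels with ~1 Hz modulation, roughly a blade-passing rate
t = np.arange(0, 60, 0.1)
levels = 40 + 3 * np.sin(2 * np.pi * 1.0 * t)
print(am_depth(levels).round(1))
```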

  8. Musical Sound, Instruments, and Equipment

    NASA Astrophysics Data System (ADS)

    Photinos, Panos

    2017-12-01

    'Musical Sound, Instruments, and Equipment' offers a basic understanding of sound, musical instruments and music equipment, geared towards a general audience and non-science majors. The book begins with an introduction of the fundamental properties of sound waves, and the perception of the characteristics of sound. The relation between intensity and loudness, and the relation between frequency and pitch are discussed. The basics of propagation of sound waves, and the interaction of sound waves with objects and structures of various sizes are introduced. Standing waves, harmonics and resonance are explained in simple terms, using graphics that provide a visual understanding. The development is focused on musical instruments and acoustics. The construction of musical scales and the frequency relations are reviewed and applied in the description of musical instruments. The frequency spectrum of selected instruments is explored using freely available sound analysis software. Sound amplification and sound recording, including analog and digital approaches, are discussed in two separate chapters. The book concludes with a chapter on acoustics, the physical factors that affect the quality of the music experience, and practical ways to improve the acoustics at home or small recording studios. A brief technical section is provided at the end of each chapter, where the interested reader can find the relevant physics and sample calculations. These quantitative sections can be skipped without affecting the comprehension of the basic material. Questions are provided to test the reader's understanding of the material. Answers are given in the appendix.

  9. Designing Trend-Monitoring Sounds for Helicopters: Methodological Issues and an Application

    ERIC Educational Resources Information Center

    Edworthy, Judy; Hellier, Elizabeth; Aldrich, Kirsteen; Loxley, Sarah

    2004-01-01

    This article explores methodological issues in sonification and sound design arising from the design of helicopter monitoring sounds. Six monitoring sounds (each with 5 levels) were tested for similarity and meaning with 3 different techniques: hierarchical cluster analysis, linkage analysis, and multidimensional scaling. In Experiment 1,…

  10. 33 CFR 86.05 - Sound signal intensity and range of audibility.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... direction of the forward axis of the whistle and at a distance of 1 meter from it, a sound pressure level in... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Sound signal intensity and range... HOMELAND SECURITY INLAND NAVIGATION RULES ANNEX III: TECHNICAL DETAILS OF SOUND SIGNAL APPLIANCES Whistles...

  11. Underwater sound of rigid-hulled inflatable boats.

    PubMed

    Erbe, Christine; Liong, Syafrin; Koessler, Matthew Walter; Duncan, Alec J; Gourlay, Tim

    2016-06-01

    Underwater sound of rigid-hulled inflatable boats was recorded 142 times in total, over 3 sites: 2 in southern British Columbia, Canada, and 1 off Western Australia. Underwater sound peaked between 70 and 400 Hz, exhibiting strong tones in this frequency range related to engine and propeller rotation. Sound propagation models were applied to compute monopole source levels, with the source assumed 1 m below the sea surface. Broadband source levels (10-48 000 Hz) increased from 134 to 171 dB re 1 μPa @ 1 m with speed from 3 to 16 m/s (10-56 km/h). Source power spectral density percentile levels and 1/3 octave band levels are given for use in predictive modeling of underwater sound of these boats as part of environmental impact assessments.
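
    Converting a received level into a monopole source level at 1 m requires a propagation correction. The study used site-specific propagation models; the sketch below shows only the textbook spherical-spreading shortcut, with hypothetical received level and range, to illustrate the idea.

```python
import math

def monopole_source_level(received_level_db, range_m):
    """Back-propagate a received level to a nominal source level at 1 m
    assuming simple spherical spreading: SL = RL + 20*log10(r)."""
    return received_level_db + 20.0 * math.log10(range_m)

# e.g. a 120 dB re 1 uPa received level measured at 300 m
print(round(monopole_source_level(120.0, 300.0), 1))  # ~169.5 dB re 1 uPa @ 1 m
```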

  12. Long Island Sound Tropospheric Ozone Study (LISTOS) Fact Sheet

    EPA Pesticide Factsheets

    EPA scientists are collaborating on a multi-agency field study to investigate the complex interaction of emissions, chemistry and meteorological factors contributing to elevated ozone levels along the Long Island Sound shoreline.

  13. Application of a finite-element model to low-frequency sound insulation in dwellings.

    PubMed

    Maluski, S P; Gibbs, B M

    2000-10-01

The sound transmission between adjacent rooms has been modeled using a finite-element method. Predicted sound-level differences gave good agreement with experimental data from a full-scale and a quarter-scale model. Results show that the sound insulation characteristics of a party wall at low frequencies strongly depend on the modal characteristics of the sound field of both rooms and of the partition. The effect of three edge conditions of the separating wall on the sound-level difference at low frequencies was examined: simply supported, clamped, and a combination of clamped and simply supported. It is demonstrated that a clamped partition provides a greater sound-level difference at low frequencies than a simply supported one. It is also confirmed that the sound-pressure level difference is lower in equal-room than in unequal-room configurations.
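
    The low-frequency behaviour discussed above is governed by the rooms' natural modes. As a companion to the abstract, here is a minimal sketch of the standard rigid-walled rectangular-room mode formula; the room dimensions are hypothetical and the calculation is not part of the paper's FEM model.

```python
import itertools
import math

def room_modes(Lx, Ly, Lz, c=343.0, max_order=2):
    """Natural frequencies of a rigid-walled rectangular room:
    f = (c/2) * sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2)."""
    modes = []
    for nx, ny, nz in itertools.product(range(max_order + 1), repeat=3):
        if nx == ny == nz == 0:
            continue
        f = 0.5 * c * math.sqrt((nx / Lx) ** 2 + (ny / Ly) ** 2 + (nz / Lz) ** 2)
        modes.append((f, (nx, ny, nz)))
    return sorted(modes)

# Example: a 4.0 x 3.0 x 2.4 m room; the lowest axial mode is near 43 Hz
for f, mode in room_modes(4.0, 3.0, 2.4)[:5]:
    print(f"{f:6.1f} Hz  {mode}")
```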

  14. Biological Sampling and Analysis in Sinclair and Dyes Inlets, Washington: Chemical Analyses for 2007 Puget Sound Biota Study

    SciTech Connect

    Brandenberger, Jill M.; Suslick, Carolynn R.; Johnston, Robert K.

    2008-10-09

Evaluating spatial and temporal trends in contaminant residues in Puget Sound fish and macroinvertebrates are the objectives of the Puget Sound Ambient Monitoring Program (PSAMP). In a cooperative effort between the ENVironmental inVESTment group (ENVVEST) and Washington State Department of Fish and Wildlife, additional biota samples were collected during the 2007 PSAMP biota survey and analyzed for chemical residues and stable isotopes of carbon (δ13C) and nitrogen (δ15N). Approximately three specimens of each species collected from Sinclair Inlet, Georgia Basin, and reference locations in Puget Sound were selected for whole body chemical analysis. The muscle tissue of the specimens selected for chemical analyses was also analyzed for δ13C and δ15N to provide information on relative trophic level and food sources. This data report summarizes the chemical residues for the 2007 PSAMP fish and macro-invertebrate samples. In addition, six Spiny Dogfish (Squalus acanthias) samples were necropsied to evaluate chemical residue of various parts of the fish (digestive tract, liver, embryo, muscle tissue), as well as a weight proportional whole body composite (WBWC). Whole organisms were homogenized and analyzed for silver, arsenic, cadmium, chromium, copper, nickel, lead, zinc, mercury, 19 polychlorinated biphenyl (PCB) congeners, PCB homologues, percent moisture, percent lipids, δ13C, and δ15N.

  15. Velocity and attenuation of sound in arterial tissues

    NASA Technical Reports Server (NTRS)

    Rooney, J. A.; Gammell, P. M.; Hestenes, J. D.; Chin, H. P.; Blankenhorn, D. H.

    1982-01-01

    The velocity of sound in excised human and canine arterial tissues is measured in order to serve as a basis for the development and application of ultrasonic techniques for the diagnosis of atherosclerotic lesions. Measurements of sound velocity at different regions of 11 human and six canine aortas were made by a time delay spectrometer technique at frequencies from 2 to 10 MHz, and compared with ultrasonic attenuation parameters and the results of biochemical assays. Sound velocity is found to increase with increasing attenuation at all frequencies, and with increasing collagen content. A strong dependence of sound velocity on cholesterol content or low calcium contents is not observed, although velocities of up to 2000 m/sec are observed in highly organized calcified lesions. A decrease in velocity with decreasing temperature is also noted. It is thus concluded that it is principally the differences in tissue collagen levels that contribute to image formation according to sound velocity.

  16. The monster sound pipe

    NASA Astrophysics Data System (ADS)

    Ruiz, Michael J.; Perkins, James

    2017-03-01

    Producing a deep bass tone by striking a large 3 m (10 ft) flexible corrugated drainage pipe immediately grabs student attention. The fundamental pitch of the corrugated tube is found to be a semitone lower than a non-corrugated smooth pipe of the same length. A video (https://youtu.be/FU7a9d7N60Y) of the demonstration is included, which illustrates how an Internet keyboard can be used to estimate the fundamental pitches of each pipe. Since both pipes have similar end corrections, the pitch discrepancy between the smooth pipe and drainage tube is due to the corrugations, which lower the speed of sound inside the flexible tube, dropping its pitch a semitone.

  17. The sounds of science

    NASA Astrophysics Data System (ADS)

    Carlowicz, Michael

    As scientists carefully study some aspects of the ocean environment, are they unintentionally distressing others? That is a question to be answered by Robert Benson and his colleagues in the Center for Bioacoustics at Texas A&M University.With help from a 3-year, $316,000 grant from the U.S. Office of Naval Research, Benson will study how underwater noise produced by naval operations and other sources may affect marine mammals. In Benson's study, researchers will generate random sequences of low-frequency, high-intensity (180-decibel) sounds in the Gulf of Mexico, working at an approximate distance of 1 km from sperm whale herds. Using an array of hydrophones, the scientists will listen to the characteristic clicks and whistles of the sperm whales to detect changes in the animals' direction, speed, and depth, as derived from fluctuations in their calls.

  18. Sounds Clear Enough

    NASA Technical Reports Server (NTRS)

    Zak, Alan

    2004-01-01

    I'm a vice president at Line6, where we produce electronics for musical instruments. My company recently developed a guitar that can be programmed to sound like twenty-five different classic guitars - everything from a 1928 National 'Tricone' to a 1970 Martin. It is quite an amazing piece of technology. The guitar started as a research project because we needed to know if the technology was going to be viable and if the guitar design was going to be practical. I've been in this business for about twenty years now, and I still enjoy starting up projects whenever the opportunity presents itself. During the research phase, I headed up the project myself. Once we completed our preliminary research and made the decision to move into development, that's when I handed the project off - and that's where this story really begins.

  19. Mercury in Long Island Sound sediments

    USGS Publications Warehouse

    Varekamp, J.C.; Buchholtz ten Brink, Marilyn R.; Mecray, E.I.; Kreulen, B.

    2000-01-01

    Mercury (Hg) concentrations were measured in 394 surface and core samples from Long Island Sound (LIS). The surface sediment Hg concentration data show a wide spread, with values up to 600 ppb Hg in westernmost LIS. Part of the observed range is related to variations in the bottom sedimentary environments, with higher Hg concentrations in the muddy depositional areas of central and western LIS. A strong residual trend of higher Hg values to the west remains when the data are normalized to grain size. Relationships between a tracer for sewage effluents (C. perfringens) and Hg concentrations indicate that between 0 and 50% of the Hg is derived from sewage sources for most samples from the western and central basins. A higher percentage of sewage-derived Hg is found in samples from the westernmost section of LIS and in some local spots near urban centers. The remainder of the Hg is carried into the Sound with contaminated sediments from the watersheds, and a small fraction enters the Sound as in situ atmospheric deposition. The Hg-depth profiles of several cores have well-defined contamination profiles that extend to pre-industrial background values. These data indicate that the Hg levels in the Sound have increased by a factor of 5-6 over the last few centuries, but Hg levels in LIS sediments have declined in modern times by up to 30%. The concentrations of C. perfringens increased exponentially in the top core sections, which had declining Hg concentrations, suggesting a recent decline in Hg fluxes that are unrelated to sewage effluents. The observed spatial and historical trends reflect Hg fluxes to LIS from sewage effluents, contaminated sediment input from the Connecticut River, point-source inputs of strongly contaminated sediment from the Housatonic River, variations in the abundance of Hg carrier phases such as TOC and Fe, and focusing of sediment-bound Hg in association with westward sediment transport within the Sound.

  20. What the Toadfish Ear Tells the Toadfish Brain About Sound.

    PubMed

    Edds-Walton, Peggy L

    2016-01-01

    Of the three, paired otolithic endorgans in the ear of teleost fishes, the saccule is the one most often demonstrated to have a major role in encoding frequencies of biologically relevant sounds. The toadfish saccule also encodes sound level and sound source direction in the phase-locked activity conveyed via auditory afferents to nuclei of the ipsilateral octaval column in the medulla. Although paired auditory receptors are present in teleost fishes, binaural processes were believed to be unimportant due to the speed of sound in water and the acoustic transparency of the tissues in water. In contrast, there are behavioral and anatomical data that support binaural processing in fishes. Studies in the toadfish combined anatomical tract-tracing and physiological recordings from identified sites along the ascending auditory pathway to document response characteristics at each level. Binaural computations in the medulla and midbrain sharpen the directional information provided by the saccule. Furthermore, physiological studies in the central nervous system indicated that encoding frequency, sound level, temporal pattern, and sound source direction are important components of what the toadfish ear tells the toadfish brain about sound.

  1. Source and listener directivity for interactive wave-based sound propagation.

    PubMed

    Mehra, Ravish; Antani, Lakulish; Kim, Sujeong; Manocha, Dinesh

    2014-04-01

    We present an approach to model dynamic, data-driven source and listener directivity for interactive wave-based sound propagation in virtual environments and computer games. Our directional source representation is expressed as a linear combination of elementary spherical harmonic (SH) sources. In the preprocessing stage, we precompute and encode the propagated sound fields due to each SH source. At runtime, we perform the SH decomposition of the varying source directivity interactively and compute the total sound field at the listener position as a weighted sum of precomputed SH sound fields. We propose a novel plane-wave decomposition approach based on higher-order derivatives of the sound field that enables dynamic HRTF-based listener directivity at runtime. We provide a generic framework to incorporate our source and listener directivity in any offline or online frequency-domain wave-based sound propagation algorithm. We have integrated our sound propagation system in Valve's Source game engine and use it to demonstrate realistic acoustic effects such as sound amplification, diffraction low-passing, scattering, localization, externalization, and spatial sound, generated by wave-based propagation of directional sources and listener in complex scenarios. We also present results from our preliminary user study.
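
    A minimal sketch of the runtime step described above: project a sampled source directivity onto a spherical-harmonic basis by least squares, then form the listener field as the weighted sum of per-SH fields. The sampled directivity and the "precomputed" fields are random placeholders, not the paper's simulation data or implementation.

    ```python
    import numpy as np
    from scipy.special import sph_harm

    # Hypothetical sampling of a source directivity on the sphere
    # (theta = azimuth in [0, 2*pi), phi = polar angle in [0, pi]).
    rng = np.random.default_rng(0)
    theta = rng.uniform(0.0, 2.0 * np.pi, 200)
    phi = rng.uniform(0.0, np.pi, 200)
    directivity = 1.0 + 0.5 * np.cos(phi)          # toy cardioid-like pattern

    # Real part of the spherical-harmonic basis up to order N at the sampled directions.
    N = 3
    basis = []
    for n in range(N + 1):
        for m in range(-n, n + 1):
            basis.append(sph_harm(m, n, theta, phi).real)
    B = np.column_stack(basis)                      # shape: (samples, num_SH)

    # SH decomposition of the varying directivity (least squares), done at runtime.
    coeffs, *_ = np.linalg.lstsq(B, directivity, rcond=None)

    # Placeholder "precomputed" sound fields due to each elementary SH source,
    # evaluated at a few listener positions (random numbers stand in for the
    # offline wave simulation results).
    num_listeners = 5
    precomputed_fields = rng.normal(size=(len(coeffs), num_listeners))

    # Total field at the listeners = weighted sum of the precomputed SH fields.
    total_field = coeffs @ precomputed_fields
    print(total_field)
    ```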

  2. Radio Sounding Science at High Powers

    NASA Technical Reports Server (NTRS)

    Green, J. L.; Reinisch, B. W.; Song, P.; Fung, S. F.; Benson, R. F.; Taylor, W. W. L.; Cooper, J. F.; Garcia, L.; Markus, T.; Gallagher, D. L.

    2004-01-01

    Future space missions like the Jupiter Icy Moons Orbiter (JIMO) planned to orbit Callisto, Ganymede, and Europa can fully utilize a variable-power radio sounder instrument. Radio sounding at 1 kHz to 10 MHz at medium power levels (10 W to kW) will provide long-range magnetospheric sounding (several Jovian radii) like those first pioneered by the radio plasma imager instrument on IMAGE at low power (less than 10 W) and much shorter distances (less than 5 R(sub E)). A radio sounder orbiting a Jovian icy moon would be able to globally measure time-variable electron densities in the moon's ionosphere and the local magnetospheric environment. Near-spacecraft resonance and guided echoes respectively allow measurements of local field magnitude and local field-line geometry, perturbed both by direct magnetospheric interactions and by induced components from subsurface oceans. JIMO would allow radio sounding transmissions at much higher powers (approx. 10 kW), making subsurface sounding of the Jovian icy moons possible at frequencies above the ionospheric peak plasma frequency. Subsurface variations in dielectric properties can be probed for detection of dense and solid-liquid phase boundaries associated with oceans and related structures in overlying ice crusts.

  3. Theoretical Modelling of Sound Radiation from Plate

    NASA Astrophysics Data System (ADS)

    Zaman, I.; Rozlan, S. A. M.; Yusoff, A.; Madlan, M. A.; Chan, S. W.

    2017-01-01

    Recently, the development of the aerospace, automotive and building industries has demanded the use of lightweight materials such as thin plates. However, such plates can contribute significantly to vibration and sound radiation, which eventually leads to increased noise in the community. In this study, the fundamental concept of sound pressure radiated from a simply-supported thin plate (SSP) was analyzed using the derivation of mathematical equations and numerical simulation in ANSYS®. The solution to the mathematical equations of sound radiated from an SSP was visualized using MATLAB®. The responses of sound pressure level were measured at far field as well as near field in the frequency range of 0-200 Hz. Results show that four resonance frequencies (12 Hz, 60 Hz, 106 Hz and 158 Hz) were identified, represented by the peaks in the frequency response function graph. The outcome also indicates that the mathematical derivation correlated well with the ANSYS® simulation model, with an error of less than 10%. It can be concluded that the obtained model is reliable and can be applied for further analysis, such as reducing the noise emitted from a vibrating thin plate.

  4. Applying cybernetic technology to diagnose human pulmonary sounds.

    PubMed

    Chen, Mei-Yung; Chou, Cheng-Han

    2014-06-01

    Chest auscultation is a crucial and efficient method for diagnosing lung disease; however, it is a subjective process that relies on physician experience and the ability to differentiate between various sound patterns. Because the physiological signals composed of heart sounds and pulmonary sounds (PSs) are generally lower than 120 Hz and the human ear is not sensitive to low frequencies, successfully making diagnostic classifications is difficult. To solve this problem, we constructed various PS recognition systems for classifying six PS classes: vesicular breath sounds, bronchial breath sounds, tracheal breath sounds, crackles, wheezes, and stridor sounds. First, we used a piezoelectric microphone and data acquisition card to acquire PS signals and perform signal preprocessing. A wavelet transform was used for feature extraction, and the PS signals were decomposed into frequency subbands. Using a statistical method, we extracted 17 features that were used as the input vectors of a neural network. We proposed a 2-stage classifier combining a back-propagation (BP) neural network and a learning vector quantization (LVQ) neural network, which improves classification accuracy relative to a single (haploid) neural network. The receiver operating characteristic (ROC) curve verifies the high performance level of the neural network. To expand traditional auscultation methods, we constructed various PS diagnostic systems that can correctly classify the six common PSs. The proposed device overcomes the lack of human sensitivity to low-frequency sounds, and various PS waves, characteristic values, and spectral analysis charts are provided to elucidate the design of the human-machine interface.
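
    A minimal sketch of wavelet-subband statistical feature extraction feeding a neural classifier; the wavelet family, decomposition level, toy data, and the single MLP are placeholders standing in for (not reproducing) the authors' 17 features and two-stage BP/LVQ classifier.

    ```python
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPClassifier

    def wavelet_features(signal, wavelet="db4", level=5):
        """Decompose a lung-sound frame into wavelet subbands and return simple
        per-subband statistics (mean absolute value, standard deviation, energy)."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        feats = []
        for c in coeffs:
            feats.extend([np.mean(np.abs(c)), np.std(c), np.sum(c ** 2)])
        return np.array(feats)

    # Hypothetical training data: preprocessed pulmonary-sound frames and integer
    # labels for the six classes (vesicular, bronchial, tracheal, crackle, wheeze, stridor).
    rng = np.random.default_rng(1)
    frames = rng.normal(size=(60, 4096))
    labels = rng.integers(0, 6, size=60)

    X = np.vstack([wavelet_features(f) for f in frames])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(X, labels)
    print("Training accuracy (toy data):", clf.score(X, labels))
    ```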

  5. Effects of Sound on the Behavior of Wild, Unrestrained Fish Schools.

    PubMed

    Roberts, Louise; Cheesman, Samuel; Hawkins, Anthony D

    2016-01-01

    To assess and manage the impact of man-made sounds on fish, we need information on how behavior is affected. Here, wild unrestrained pelagic fish schools were observed under quiet conditions using sonar. Fish were exposed to synthetic piling sounds at different levels using custom-built sound projectors, and behavioral changes were examined. In some cases, the depth of schools changed after noise playback; full dispersal of schools was also evident. The methods we developed for examining the behavior of unrestrained fish to sound exposure have proved successful and may allow further testing of the relationship between responsiveness and sound level.

  6. Designing a Sound Reducing Wall

    ERIC Educational Resources Information Center

    Erk, Kendra; Lumkes, John; Shambach, Jill; Braile, Larry; Brickler, Anne; Matthys, Anna

    2015-01-01

    Acoustical engineers use their knowledge of sound to design quiet environments (e.g., classrooms and libraries) as well as to design environments that are supposed to be loud (e.g., concert halls and football stadiums). They also design sound barriers, such as the walls along busy roadways that decrease the traffic noise heard by people in…

  7. Noise levels in the learning-teaching activities in a dental medicine school

    NASA Astrophysics Data System (ADS)

    Matos, Andreia; Carvalho, Antonio P. O.; Fernandes, Joao C. S.

    2002-11-01

    The noise levels produced by different clinical handpieces and laboratory engines are considered to be the main descriptors of acoustical comfort in learning spaces in a dental medicine school. Sound levels were measured in five types of classrooms and teaching laboratories at the University of Porto Dental Medicine School. Handpiece noise measurements were made while instruments were running free and during operations with cutting tools (tooth, metal, and acrylic). Noise levels were determined using a precision sound level meter, which was positioned at ear level and also at a one-meter distance from the operator. Some of the handpieces were brand new and the others had a few years of use. The sound levels encountered were between 60 and 99 dB(A) and were compared with the A-weighted sound pressure level limits for mechanical equipment installed in educational buildings included in the Portuguese Noise Code and in other European countries' codes. The daily personal noise exposure levels (LEP,d) of the students and professors were calculated to be between 85 and 90 dB(A) and were compared with the European legal limits. Some noise limits for this type of environment are proposed and suggestions for the improvement of the acoustical environment are given.
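
    A minimal sketch of the daily personal noise exposure level calculation, which normalizes an A-weighted equivalent level measured over the actual exposure time to the 8-hour reference working day; the level and duration below are illustrative values, not the measured clinic data.

    ```python
    import math

    def daily_exposure_level(laeq_db, exposure_hours, reference_hours=8.0):
        """LEP,d: normalize an A-weighted equivalent level measured over
        `exposure_hours` to the 8-hour reference working day."""
        return laeq_db + 10.0 * math.log10(exposure_hours / reference_hours)

    # Illustrative values only: 88 dB(A) equivalent level over a 5-hour clinic session.
    print(f"LEP,d = {daily_exposure_level(88.0, 5.0):.1f} dB(A)")
    ```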

  8. Contaminant loading to Puget Sound from two marinas. Puget Sound estuary program. Final report, June 1988-October 1988

    SciTech Connect

    Crecelius, E.A.; Fortman, T.J.; Kiesser, S.L.

    1989-07-01

    Concentrations of Cu, Pb, Zn, PAHs, TBT and FC bacteria were measured in surface sediment, sediment-trap, and water-column samples at two marinas in Puget Sound during the summer of 1988. Levels of contaminants inside the marinas were compared with levels outside. TBT showed the greatest elevation in marina sediments compared to reference sediments. Few of the sediments exceeded Puget Sound AET sediment quality values, but most did exceed PSDDA screening levels for in-water disposal of dredged sediment. All marinas were estimated to contribute less than one percent of the total mass loading of Cu, Pb and Zn to the main basin of Puget Sound. The contribution of TBT may be much more significant if antifouling paints are the major source for Puget Sound.

  9. Acoustic transistor: Amplification and switch of sound by sound

    NASA Astrophysics Data System (ADS)

    Liang, Bin; Kan, Wei-wei; Zou, Xin-ye; Yin, Lei-lei; Cheng, Jian-chun

    2014-08-01

    We designed an acoustic transistor to manipulate sound in a manner similar to the manipulation of electric current by its electrical counterpart. The acoustic transistor is a three-terminal device with the essential ability to use a small monochromatic acoustic signal to control a much larger output signal within a broad frequency range. The output and controlling signals have the same frequency, suggesting the possibility of cascading the structure to amplify an acoustic signal. Capable of amplifying and switching sound by sound, acoustic transistors have various potential applications and may open the way to the design of conceptual devices such as acoustic logic gates.

  10. Sound transmission loss of composite sandwich panels

    NASA Astrophysics Data System (ADS)

    Zhou, Ran

    Light composite sandwich panels are increasingly used in automobiles, ships and aircraft, because of the advantages they offer of high strength-to-weight ratios. However, the acoustical properties of these light and stiff structures can be less desirable than those of equivalent metal panels. These undesirable properties can lead to high interior noise levels. A number of researchers have studied the acoustical properties of honeycomb and foam sandwich panels. Not much work, however, has been carried out on foam-filled honeycomb sandwich panels. In this dissertation, governing equations for the forced vibration of asymmetric sandwich panels are developed. An analytical expression for modal densities of symmetric sandwich panels is derived from a sixth-order governing equation. A boundary element analysis model for the sound transmission loss of symmetric sandwich panels is proposed. Measurements of the modal density, total loss factor, radiation loss factor, and sound transmission loss of foam-filled honeycomb sandwich panels with different configurations and thicknesses are presented. Comparisons between the predicted sound transmission loss values obtained from wave impedance analysis, statistical energy analysis, boundary element analysis, and experimental values are presented. The wave impedance analysis model provides accurate predictions of sound transmission loss for the thin foam-filled honeycomb sandwich panels at frequencies above their first resonance frequencies. The predictions from the statistical energy analysis model are in better agreement with the experimental transmission loss values of the sandwich panels when the measured radiation loss factor values near coincidence are used instead of the theoretical values for single-layer panels. The proposed boundary element analysis model provides more accurate predictions of sound transmission loss for the thick foam-filled honeycomb sandwich panels than either the wave impedance analysis model or the statistical energy analysis model.
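
    A rough single-panel baseline, not one of the sandwich-panel models evaluated in the record, is the field-incidence mass law; the surface density, frequencies, and the -47 dB constant are standard textbook assumptions used here only for illustration.

    ```python
    import numpy as np

    def mass_law_tl(surface_density_kg_m2, freq_hz):
        """Approximate field-incidence mass law for a single homogeneous panel:
        TL ~ 20*log10(m'' * f) - 47 dB. This ignores coincidence and the core
        shear effects that matter for sandwich panels, so it is only a rough
        sanity-check baseline, not a substitute for the models in the record."""
        return 20.0 * np.log10(surface_density_kg_m2 * freq_hz) - 47.0

    freqs = np.array([125, 250, 500, 1000, 2000, 4000], dtype=float)
    print(np.round(mass_law_tl(8.0, freqs), 1))  # hypothetical 8 kg/m^2 panel
    ```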

  11. Precision of working memory for speech sounds.

    PubMed

    Joseph, Sabine; Iverson, Paul; Manohar, Sanjay; Fox, Zoe; Scott, Sophie K; Husain, Masud

    2015-01-01

    Memory for speech sounds is a key component of models of verbal working memory (WM). But how good is verbal WM? Most investigations assess this using binary report measures to derive a fixed number of items that can be stored. However, recent findings in visual WM have challenged such "quantized" views by employing measures of recall precision with an analogue response scale. WM for speech sounds might rely on both continuous and categorical storage mechanisms. Using a novel speech matching paradigm, we measured WM recall precision for phonemes. Vowel qualities were sampled from a formant space continuum. A probe vowel had to be adjusted to match the vowel quality of a target on a continuous, analogue response scale. Crucially, this provided an index of the variability of a memory representation around its true value and thus allowed us to estimate how memories were distorted from the original sounds. Memory load affected the quality of speech sound recall in two ways. First, there was a gradual decline in recall precision with increasing number of items, consistent with the view that WM representations of speech sounds become noisier with an increase in the number of items held in memory, just as for vision. Based on multidimensional scaling (MDS), the level of noise appeared to be reflected in distortions of the formant space. Second, as memory load increased, there was evidence of greater clustering of participants' responses around particular vowels. A mixture model captured both continuous and categorical responses, demonstrating a shift from continuous to categorical memory with increasing WM load. This suggests that direct acoustic storage can be used for single items, but when more items must be stored, categorical representations must be used.

  12. Prediction on the Enhancement of the Impact Sound Insulation to a Floating Floor with Resilient Interlayer

    NASA Astrophysics Data System (ADS)

    Huang, Xianfeng; Meng, Yao; Huang, Riming

    2017-10-01

    This paper describes a theoretical method for predicting the improvement of the impact sound insulation of a floating floor with a resilient interlayer. A statistical energy analysis (SEA) model, which is well suited to calculating floor impact sound, is set up for calculating the reduction in impact sound pressure level in the downstairs room. The sound transmission paths, which include the direct path and flanking paths, are analyzed to find the dominant one, and the factors that affect impact sound reduction for a floating floor are explored. The impact sound level in the downstairs room is then determined and comparisons between predicted and measured data are conducted. It is indicated that for impact sound transmission across a floating floor, the flanking paths contribute little to the overall sound level in the downstairs room, and a floating floor with a low-stiffness interlayer exhibits favorable sound insulation on the direct path. The SEA approach applies to floating floors with resilient interlayers, is experimentally verified, and provides guidance for sound insulation design.
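
    A minimal sketch of the common engineering estimate for a floating floor's improvement of impact sound insulation above its mass-spring resonance (roughly 30*log10(f/f0)); the interlayer stiffness and screed mass below are placeholder values, and the record's SEA model is considerably more detailed than this.

    ```python
    import math

    def floating_floor_resonance(s_prime, m_prime):
        """Mass-spring resonance of a floating slab on a resilient interlayer.
        s_prime: dynamic stiffness per unit area of the interlayer (N/m^3)
        m_prime: mass per unit area of the floating slab (kg/m^2)"""
        return math.sqrt(s_prime / m_prime) / (2.0 * math.pi)

    def impact_sound_improvement(freq_hz, f0_hz):
        """Common engineering estimate of the reduction in impact sound pressure
        level above resonance: roughly 30*log10(f/f0) dB (taken as 0 below f0)."""
        return max(0.0, 30.0 * math.log10(freq_hz / f0_hz))

    # Hypothetical 40 mm screed (~90 kg/m^2) on an interlayer with s' = 10 MN/m^3.
    f0 = floating_floor_resonance(10e6, 90.0)
    for f in (63, 125, 250, 500, 1000):
        print(f"{f} Hz: dL = {impact_sound_improvement(f, f0):.1f} dB (f0 = {f0:.0f} Hz)")
    ```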

  13. An extended research of crossmodal correspondence between color and sound in psychology and cognitive ergonomics.

    PubMed

    Sun, Xiuwen; Li, Xiaoling; Ji, Lingyu; Han, Feng; Wang, Huifen; Liu, Yang; Chen, Yao; Lou, Zhiyuan; Li, Zhuoyun

    2018-01-01

    Based on the existing research on sound symbolism and crossmodal correspondence, this study proposed an extended research on cross-modal correspondence between various sound attributes and color properties in a group of non-synesthetes. In Experiment 1, we assessed the associations between each property of sounds and colors. Twenty sounds with five auditory properties (pitch, roughness, sharpness, tempo and discontinuity), each varied in four levels, were used as the sound stimuli. Forty-nine colors with different hues, saturation and brightness were used to match to those sounds. Results revealed that besides pitch and tempo, roughness and sharpness also played roles in sound-color correspondence. Reaction times of sound-hue were a little longer than the reaction times of sound-lightness. In Experiment 2, a speeded target discrimination task was used to assess whether the associations between sound attributes and color properties could invoke natural cross-modal correspondence and improve participants' cognitive efficiency in cognitive tasks. Several typical sound-color pairings were selected according to the results of Experiment 1. Participants were divided into two groups (congruent and incongruent). In each trial participants had to judge whether the presented color could appropriately be associated with the sound stimuli. Results revealed that participants responded more quickly and accurately in the congruent group than in the incongruent group. It was also found that there was no significant difference in reaction times and error rates between sound-hue and sound-lightness. The results of Experiment 1 and 2 indicate the existence of a robust crossmodal correspondence between multiple attributes of sound and color, which also has strong influence on cognitive tasks. The inconsistency of the reaction times between sound-hue and sound-lightness in Experiment 1 and 2 is probably owing to the difference in experimental protocol, which indicates that the complexity

  15. Reviews Website: Online Graphing Calculator Video Clip: Learning From the News Phone App: Graphing Calculator Book: Challenge and Change: A History of the Nuffield A-Level Physics Project Book: SEP Sound Book: Reinventing Schools, Reforming Teaching Book: Physics and Technology for Future Presidents iPhone App: iSeismometer Web Watch

    NASA Astrophysics Data System (ADS)

    2011-01-01

    WE RECOMMEND Online Graphing Calculator Calculator plots online graphs Challenge and Change: A History of the Nuffield A-Level Physics Project Book delves deep into the history of Nuffield physics SEP Sound Booklet has ideas for teaching sound but lacks some basics Reinventing Schools, Reforming Teaching Fascinating book shows how politics impacts on the classroom Physics and Technology for Future Presidents A great book for teaching physics for the modern world iSeismometer iPhone app teaches students about seismic waves WORTH A LOOK Teachers TV Video Clip Lesson plan uses video clip to explore new galaxies Graphing Calculator App A phone app that handles formulae and graphs WEB WATCH Physics.org competition finds the best websites

  16. 75 FR 76079 - Sound Incentive Compensation Guidance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-07

    ... DEPARTMENT OF THE TREASURY Office of Thrift Supervision Sound Incentive Compensation Guidance... on the following information collection. Title of Proposal: Sound Incentive Compensation Guidance... Sound Compensation Practices adopted by the Financial Stability Board (FSB) in April 2009, as well as...

  17. Do top predators cue on sound production by mesopelagic prey?

    NASA Astrophysics Data System (ADS)

    Baumann-Pickering, S.; Checkley, D. M., Jr.; Demer, D. A.

    2016-02-01

    Deep-scattering layer (DSL) organisms, comprising a variety of mesopelagic fishes, and squids, siphonophores, crustaceans, and other invertebrates, are preferred prey for numerous large marine predators, e.g. cetaceans, seabirds, and fishes. Some of the DSL species migrate from depth during daylight to feed near the surface at night, transitioning during dusk and dawn. We investigated if any DSL organisms create sound, particularly during the crepuscular periods. Over several nights in summer 2015, underwater sound was recorded in the San Diego Trough using a high-frequency acoustic recording package (HARP, 10 Hz to 100 kHz), suspended from a drifting surface float. Acoustic backscatter from the DSL was monitored nearby using a calibrated multiple-frequency (38, 70, 120, and 200 kHz) split-beam echosounder (Simrad EK60) on a small boat. DSL organisms produced sound, between 300 and 1000 Hz, and the received levels were highest when the animals migrated past the recorder during ascent and descent. The DSL are globally present, so the observed acoustic phenomenon, if also ubiquitous, has wide-reaching implications. Sound travels farther than light or chemicals and thus can be sensed at greater distances by predators, prey, and mates. If sound is a characteristic feature of pelagic ecosystems, it likely plays a role in predator-prey relationships and overall ecosystem dynamics. Our new finding inspires numerous questions such as: Which, how, and why have DSL organisms evolved to create sound, for what do they use it and under what circumstances? Is sound production by DSL organisms truly ubiquitous, or does it depend on the local environment and species composition? How may sound production and perception be adapted to a changing environment? Do predators react to changes in sound? Can sound be used to quantify the composition of mixed-species assemblages, component densities and abundances, and hence be used in stock assessment or predictive modeling?

  18. EUVS Sounding Rocket Payload

    NASA Technical Reports Server (NTRS)

    Stern, Alan S.

    1996-01-01

    During the first half of this year (CY 1996), the EUVS project began preparations of the EUVS payload for the upcoming NASA sounding rocket flight 36.148CL, slated for launch on July 26, 1996 to observe and record a high-resolution (approx. 2 A FWHM) EUV spectrum of the planet Venus. These preparations were designed to improve the spectral resolution and sensitivity performance of the EUVS payload as well as prepare the payload for this upcoming mission. The following is a list of the EUVS project activities that have taken place since the beginning of this CY: (1) Applied a fresh, new SiC optical coating to our existing 2400 groove/mm grating to boost its reflectivity; (2) modified the Ranicon science detector to boost its detective quantum efficiency with the addition of a repeller grid; (3) constructed a new entrance slit plane to achieve 2 A FWHM spectral resolution; (4) prepared and held the Payload Initiation Conference (PIC) with the assigned NASA support team from Wallops Island for the upcoming 36.148CL flight (PIC held on March 8, 1996; see Attachment A); (5) began wavelength calibration activities of EUVS in the laboratory; (6) made arrangements for travel to WSMR to begin integration activities in preparation for the July 1996 launch; (7) paper detailing our previous EUVS Venus mission (NASA flight 36.117CL) published in Icarus (see Attachment B); and (8) continued data analysis of the previous EUVS mission 36.137CL (Spica occultation flight).

  19. Sound Clocks and Sonic Relativity

    NASA Astrophysics Data System (ADS)

    Todd, Scott L.; Menicucci, Nicolas C.

    2017-10-01

    Sound propagation within certain non-relativistic condensed matter models obeys a relativistic wave equation despite such systems admitting entirely non-relativistic descriptions. A natural question that arises upon consideration of this is, "do devices exist that will experience the relativity in these systems?" We describe a thought experiment in which `acoustic observers' possess devices called sound clocks that can be connected to form chains. Careful investigation shows that appropriately constructed chains of stationary and moving sound clocks are perceived by observers on the other chain as undergoing the relativistic phenomena of length contraction and time dilation by the Lorentz factor γ = 1/√(1 - v²/c²), with c the speed of sound. Sound clocks within moving chains actually tick less frequently than stationary ones and must be separated by a shorter distance than when stationary to satisfy simultaneity conditions. Stationary sound clocks appear to be length contracted and time dilated to moving observers due to their misunderstanding of their own state of motion with respect to the laboratory. Observers restricted to using sound clocks describe a universe kinematically consistent with the theory of special relativity, despite the preferred frame of their universe in the laboratory. Such devices show promise in further probing analogue relativity models, for example in investigating phenomena that require careful consideration of the proper time elapsed for observers.

  20. Dimensions of vehicle sounds perception.

    PubMed

    Wagner, Verena; Kallus, K Wolfgang; Foehl, Ulrich

    2017-10-01

    Vehicle sounds play an important role concerning customer satisfaction and can show another differentiating factor of brands. With an online survey of 1762 German and American customers, the requirement characteristics of high-quality vehicle sounds were determined. On the basis of these characteristics, a requirement profile was generated for every analyzed sound. These profiles were investigated in a second study with 78 customers using real vehicles. The assessment results of the vehicle sounds can be represented using the dimensions "timbre", "loudness", and "roughness/sharpness". The comparison of the requirement profiles and the assessment results show that the sounds which are perceived as pleasant and high-quality, more often correspond to the requirement profile. High-quality sounds are characterized by the fact that they are rather gentle, soft and reserved, rich, a bit dark and not too rough. For those sounds which are assessed worse by the customers, recommendations for improvements can be derived. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Pitch features of environmental sounds

    NASA Astrophysics Data System (ADS)

    Yang, Ming; Kang, Jian

    2016-07-01

    A number of soundscape studies have suggested the need for suitable parameters for soundscape measurement, in addition to the conventional acoustic parameters. This paper explores the applicability of pitch features that are often used in music analysis and their algorithms to environmental sounds. Based on the existing alternative pitch algorithms for simulating the perception of the auditory system and simplified algorithms for practical applications in the areas of music and speech, the applicable algorithms have been determined, considering common types of sound in everyday soundscapes. Considering a number of pitch parameters, including pitch value, pitch strength, and percentage of audible pitches over time, different pitch characteristics of various environmental sounds have been shown. Among the four sound categories, i.e. water, wind, birdsongs, and urban sounds, generally speaking, both water and wind sounds have low pitch values and pitch strengths; birdsongs have high pitch values and pitch strengths; and urban sounds have low pitch values and a relatively wide range of pitch strengths.
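
    A minimal autocorrelation-based sketch of the three pitch parameters named in the record (pitch value, pitch strength, and percentage of audible pitches over time); the frame length, search range, audibility threshold, and toy signal are arbitrary assumptions rather than the study's algorithms.

    ```python
    import numpy as np

    def frame_pitch(frame, sr, fmin=50.0, fmax=2000.0):
        """Return (pitch_hz, pitch_strength) for one frame from the normalized
        autocorrelation peak; strength is the peak height in [0, 1]."""
        frame = frame - np.mean(frame)
        ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        if ac[0] <= 0:
            return 0.0, 0.0
        ac = ac / ac[0]
        lo, hi = int(sr / fmax), int(sr / fmin)
        lag = lo + int(np.argmax(ac[lo:hi]))
        return sr / lag, float(ac[lag])

    def pitch_statistics(signal, sr, frame_len=2048, hop=1024, strength_thr=0.3):
        """Mean pitch value, mean pitch strength, and percentage of frames with
        an 'audible' pitch (strength above an arbitrary threshold)."""
        pitches, strengths = [], []
        for start in range(0, len(signal) - frame_len, hop):
            p, s = frame_pitch(signal[start:start + frame_len], sr)
            pitches.append(p)
            strengths.append(s)
        strengths = np.array(strengths)
        audible = strengths > strength_thr
        mean_pitch = float(np.mean(np.array(pitches)[audible])) if audible.any() else 0.0
        return mean_pitch, float(strengths.mean()), 100.0 * audible.mean()

    # Toy example: a birdsong-like 2 kHz tone in noise.
    sr = 16000
    t = np.arange(sr * 2) / sr
    sig = np.sin(2 * np.pi * 2000 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
    print(pitch_statistics(sig, sr))
    ```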

  2. NASA Sounding Rocket Program educational outreach

    NASA Astrophysics Data System (ADS)

    Eberspeaker, P. J.

    2005-08-01

    Educational and public outreach is a major focus area for the National Aeronautics and Space Administration (NASA). The NASA Sounding Rocket Program (NSRP) shares in the belief that NASA plays a unique and vital role in inspiring future generations to pursue careers in science, mathematics, and technology. To fulfill this vision, the NASA Sounding Rocket Program engages in a host of student flight projects providing unique and exciting hands-on student space flight experiences. These projects include single stage Orion missions carrying "active" high school experiments and "passive" Explorer School modules, university level Orion and Terrier-Orion flights, and small hybrid rocket flights as part of the Small-scale Educational Rocketry Initiative (SERI) currently under development. Efforts also include educational programs conducted as part of major campaigns. The student flight projects are designed to reach students ranging from Kindergarteners to university undergraduates. The programs are also designed to accommodate student teams with varying levels of technical capabilities - from teams that can fabricate their own payloads to groups that are barely capable of drilling and tapping their own holes. The program also conducts a hands-on student flight project for blind students in collaboration with the National Federation of the Blind. The NASA Sounding Rocket Program is proud of its role in inspiring the "next generation of explorers" and is working to expand its reach to all regions of the United States and the international community as well.

  3. Cortical processing of dynamic sound envelope transitions.

    PubMed

    Zhou, Yi; Wang, Xiaoqin

    2010-12-08

    Slow envelope fluctuations in the range of 2-20 Hz provide important segmental cues for processing communication sounds. For a successful segmentation, a neural processor must capture envelope features associated with the rise and fall of signal energy, a process that is often challenged by the interference of background noise. This study investigated the neural representations of slowly varying envelopes in quiet and in background noise in the primary auditory cortex (A1) of awake marmoset monkeys. We characterized envelope features based on the local average and rate of change of sound level in envelope waveforms and identified envelope features to which neurons were selective by reverse correlation. Our results showed that envelope feature selectivity of A1 neurons was correlated with the degree of nonmonotonicity in their static rate-level functions. Nonmonotonic neurons exhibited greater feature selectivity than monotonic neurons in quiet and in background noise. The diverse envelope feature selectivity decreased spike-timing correlation among A1 neurons in response to the same envelope waveforms. As a result, the variability, but not the average, of the ensemble responses of A1 neurons represented more faithfully the dynamic transitions in low-frequency sound envelopes both in quiet and in background noise.
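
    A minimal sketch of the two envelope features described in the record (the local average and the local rate of change of sound level), computed here from a Hilbert envelope; the window length and the toy amplitude-modulated stimulus are assumptions, not the study's stimuli or analysis.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def envelope_features(signal, sr, win_ms=50.0):
        """Return (local_mean_db, local_slope_db_per_s) of the envelope level,
        computed in short sliding windows."""
        env = np.abs(hilbert(signal))
        level_db = 20.0 * np.log10(env + 1e-12)
        win = max(1, int(sr * win_ms / 1000.0))
        kernel = np.ones(win) / win
        local_mean = np.convolve(level_db, kernel, mode="same")
        local_slope = np.gradient(local_mean, 1.0 / sr)   # dB per second
        return local_mean, local_slope

    # Toy stimulus: a 1 kHz tone with a 5 Hz sinusoidal amplitude modulation.
    sr = 16000
    t = np.arange(sr) / sr
    carrier = np.sin(2 * np.pi * 1000 * t)
    am = 0.5 * (1 + np.sin(2 * np.pi * 5 * t))
    mean_db, slope_db = envelope_features(am * carrier, sr)
    print(mean_db[:5], slope_db[:5])
    ```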

  4. Applications of cluster analysis to satellite soundings

    NASA Technical Reports Server (NTRS)

    Munteanu, M. J.; Jakubowicz, O.; Kalnay, E.; Piraino, P.

    1984-01-01

    The advantages of using cluster analysis to improve satellite temperature retrievals were evaluated, since natural clusters, which are associated with atmospheric temperature soundings characteristic of different types of air masses, have the potential to improve stratified regression schemes compared with currently used methods that stratify soundings by latitude, season, and land/ocean. The method of discriminatory analysis was used. The correct cluster of temperature profiles from satellite measurements was located in 85% of the cases. Considerable improvement was observed at all mandatory levels using regression retrievals derived in the clusters of temperature (weighted and nonweighted) in comparison with the control experiment and with the regression retrievals derived in the clusters of brightness temperatures of 3 MSU and 5 IR channels.

  5. Awareness Information with Speech and Sound

    NASA Astrophysics Data System (ADS)

    Kainulainen, Anssi; Turunen, Markku; Hakulinen, Jaakko

    In modern work environments, people have many tasks, collaborate with other people, and use various equipment and services. Staying aware of other people, processes and situations in work environments is important. We naturally use our hearing to maintain this awareness; hearing other people talk lets us know they are present, and the sounds of people walking, typing, etc. help us stay aware of the overall situation almost without conscious effort. Such awareness can also be supported by technology; information can be presented with varying levels of subtlety, ranging from loud warning signals to subtle cues such as the sound of a hard drive indicating activity in a computer. Creating a computer system that supports our awareness of coworkers and the overall situation in the workplace can increase our productivity and make the workplace a more social and enjoyable place.

  6. Auditory steady state response in sound field.

    PubMed

    Hernández-Pérez, H; Torres-Fortuny, A

    2013-02-01

    Physiological and behavioral responses were compared in normal-hearing subjects via analyses of the auditory steady-state response (ASSR) and conventional audiometry under sound field conditions. The auditory stimuli, presented through a loudspeaker, consisted of four carrier tones (500, 1000, 2000, and 4000 Hz), presented singly for behavioral testing but combined (multiple frequency technique), to estimate thresholds using the ASSR. Twenty normal-hearing adults were examined. The average differences between the physiological and behavioral thresholds were between 17 and 22 dB HL. The Spearman rank correlation between ASSR and behavioral thresholds was significant for all frequencies (p < 0.05). Significant differences were found in the ASSR amplitude among frequencies, and strong correlations between the ASSR amplitude and the stimulus level (p < 0.05). The ASSR in sound field testing was found to yield hearing threshold estimates deemed to be reasonably well correlated with behaviorally assessed thresholds.
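
    A minimal sketch of the threshold comparison reported above, using made-up threshold values (not the study data) to show the mean ASSR-behavioral difference and the Spearman rank correlation.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    # Hypothetical thresholds (dB HL) for one carrier frequency across subjects;
    # these numbers are illustrative only, not the published data.
    behavioral = np.array([5, 10, 0, 15, 10, 5, 20, 10])
    assr = np.array([25, 30, 20, 35, 25, 25, 40, 30])

    diff = assr - behavioral
    rho, p = spearmanr(assr, behavioral)
    print(f"Mean ASSR-behavioral difference: {diff.mean():.1f} dB")
    print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
    ```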

  7. Optimization of Sound Absorbers Number and Placement in an Enclosed Room by Finite Element Simulation

    NASA Astrophysics Data System (ADS)

    Lau, S. F.; Zainulabidin, M. H.; Yahya, M. N.; Zaman, I.; Azmir, N. A.; Madlan, M. A.; Ismon, M.; Kasron, M. Z.; Ismail, A. E.

    2017-10-01

    Giving a room proper acoustic treatment is both art and science. Acoustic design brings comfort in the built environment and reduces the noise level by using sound absorbers. There is a need to give a room acoustic treatment by installing absorbers in order to decrease the reverberant sound. However, absorbers are usually expensive to purchase and install, and there is no established method for locating the optimum number and placement of sound absorbers. It would be a waste if the room is overly treated with absorbers, and insufficient absorbers would result in improper treatment. This study aims to determine the amount of sound absorbers needed and the optimum location of sound absorber placement in order to reduce the overall sound pressure level in a specified room by using ANSYS APDL software. The size of sound absorbers needed is found to be 11 m² by using the Sabine equation, and different unit sets of absorbers are applied on walls, each with the same total area, to investigate the best configuration. All three sets (a single absorber, 11 absorbers and 44 absorbers) successfully treated the room by reducing the overall sound pressure level. The greatest reduction in overall sound pressure level is that of the 44 absorbers evenly distributed around the walls, which reduced it by as much as 24.2 dB, and the least effective configuration is the single absorber, which reduced the overall sound pressure level by 18.4 dB.
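
    A minimal sketch of how the required absorption area can be estimated from the Sabine equation given a room volume and a target reverberation time; the room volume and reverberation times below are hypothetical, not those of the simulated room in the record.

    ```python
    def required_absorption(volume_m3, current_rt_s, target_rt_s):
        """Sabine equation: RT60 = 0.161 * V / A, so the total absorption needed
        for the target RT is A_target = 0.161 * V / RT_target; return the extra
        absorption area (m^2 of equivalent open window) that must be added."""
        a_current = 0.161 * volume_m3 / current_rt_s
        a_target = 0.161 * volume_m3 / target_rt_s
        return a_target - a_current

    # Hypothetical room: 120 m^3, reverberation time to drop from 1.8 s to 0.8 s.
    print(f"Additional absorption needed: {required_absorption(120.0, 1.8, 0.8):.1f} m^2")
    ```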

  8. Steerable sound transport in a 3D acoustic network

    NASA Astrophysics Data System (ADS)

    Xia, Bai-Zhan; Jiao, Jun-Rui; Dai, Hong-Qing; Yin, Sheng-Wen; Zheng, Sheng-Jie; Liu, Ting-Ting; Chen, Ning; Yu, De-Jie

    2017-10-01

    Quasi-lossless and asymmetric sound transports, which are exceedingly desirable in various modern physical systems, are almost always based on nonlinear or angular momentum biasing effects with extremely high power levels and complex modulation schemes. A practical route for the steerable sound transport along any arbitrary acoustic pathway, especially in a three-dimensional (3D) acoustic network, can revolutionize the sound power propagation and the sound communication. Here, we design an acoustic device containing a regular-tetrahedral cavity with four cylindrical waveguides. A smaller regular-tetrahedral solid in this cavity is eccentrically emplaced to break spatial symmetry of the acoustic device. The numerical and experimental results show that the sound power flow can unimpededly transport between two waveguides away from the eccentric solid within a wide frequency range. Based on the quasi-lossless and asymmetric transport characteristic of the single acoustic device, we construct a 3D acoustic network, in which the sound power flow can flexibly propagate along arbitrary sound pathways defined by our acoustic devices with eccentrically emplaced regular-tetrahedral solids.

  9. Sound representation in higher language areas during language generation

    PubMed Central

    Magrassi, Lorenzo; Aromataris, Giuseppe; Cabrini, Alessandro; Annovazzi-Lodi, Valerio; Moro, Andrea

    2015-01-01

    How language is encoded by neural activity in the higher-level language areas of humans is still largely unknown. We investigated whether the electrophysiological activity of Broca’s area correlates with the sound of the utterances produced. During speech perception, the electric cortical activity of the auditory areas correlates with the sound envelope of the utterances. In our experiment, we compared the electrocorticogram recorded during awake neurosurgical operations in Broca’s area and in the dominant temporal lobe with the sound envelope of single words versus sentences read aloud or mentally by the patients. Our results indicate that the electrocorticogram correlates with the sound envelope of the utterances, starting before any sound is produced and even in the absence of speech, when the patient is reading mentally. No correlations were found when the electrocorticogram was recorded in the superior parietal gyrus, an area not directly involved in language generation, or in Broca’s area when the participants were executing a repetitive motor task, which did not include any linguistic content, with their dominant hand. The distribution of suprathreshold correlations across frequencies of cortical activities varied whether the sound envelope derived from words or sentences. Our results suggest the activity of language areas is organized by sound when language is generated before any utterance is produced or heard. PMID:25624479

  10. Sound propagation in light-modulated carbon nanosponge suspensions

    NASA Astrophysics Data System (ADS)

    Zhou, W.; Tiwari, R. P.; Annamalai, R.; Sooryakumar, R.; Subramaniam, V.; Stroud, D.

    2009-03-01

    Single-walled carbon nanotube bundles dispersed in a highly polar fluid are found to agglomerate into a porous structure when exposed to low levels of laser radiation. The phototunable nanoscale porous structures provide an unusual way to control the acoustic properties of the suspension. Despite the high sound speed of the nanotubes, the measured speed of longitudinal-acoustic waves in the suspension decreases sharply with increasing bundle concentration. Two possible explanations for this reduction in sound speed are considered. One is simply that the sound speed decreases because of fluid heat induced by laser light absorption by the carbon nanotubes. The second is that this decrease results from the smaller sound velocity of fluid confined in a porous medium. Using a simplified description of convective heat transport, we estimate that the increase in temperature is too small to account for the observed decrease in sound velocity. To test the second possible explanation, we calculate the sound velocity in a porous medium, using a self-consistent effective-medium approximation. The results of this calculation agree qualitatively with experiment. In this case, the observed sound wave would be the analog of the slow compressional mode of porous solids at a structural length scale of order of 100 nm.
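
    The record's self-consistent effective-medium calculation is more involved, but the classic Wood (Reuss) mixture law illustrates how effective-medium averaging can pull a suspension's sound speed well below that of either constituent; the bubbly-water values below are a standard textbook example, not the nanotube suspension's properties.

    ```python
    import math

    def wood_sound_speed(phi, rho1, c1, rho2, c2):
        """Wood's law for a two-phase suspension: volume-average the density and
        the compressibility, then c_eff = 1 / sqrt(rho_eff * kappa_eff).
        phi is the volume fraction of phase 1."""
        kappa1 = 1.0 / (rho1 * c1 ** 2)        # compressibility of phase 1
        kappa2 = 1.0 / (rho2 * c2 ** 2)        # compressibility of phase 2
        rho_eff = phi * rho1 + (1.0 - phi) * rho2
        kappa_eff = phi * kappa1 + (1.0 - phi) * kappa2
        return 1.0 / math.sqrt(rho_eff * kappa_eff)

    # Classic illustration: tiny air fractions (phase 1) in water (phase 2)
    # drive the mixture sound speed far below both 343 m/s and 1480 m/s.
    for phi in (0.0, 0.001, 0.01, 0.05):
        c = wood_sound_speed(phi, rho1=1.2, c1=343.0, rho2=1000.0, c2=1480.0)
        print(f"phi = {phi:.3f}: c_eff = {c:.0f} m/s")
    ```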

  11. Automatic adventitious respiratory sound analysis: A systematic review.

    PubMed

    Pramono, Renard Xaviero Adhi; Bowyer, Stuart; Rodriguez-Villegas, Esther

    2017-01-01

    Automatic detection or classification of adventitious sounds is useful to assist physicians in diagnosing or monitoring diseases such as asthma, Chronic Obstructive Pulmonary Disease (COPD), and pneumonia. While computerised respiratory sound analysis, specifically for the detection or classification of adventitious sounds, has recently been the focus of an increasing number of studies, a standardised approach and comparison has not been well established. To provide a review of existing algorithms for the detection or classification of adventitious respiratory sounds. This systematic review provides a complete summary of methods used in the literature to give a baseline for future works. A systematic review of English articles published between 1938 and 2016, searched using the Scopus (1938-2016) and IEEExplore (1984-2016) databases. Additional articles were further obtained by references listed in the articles found. Search terms included adventitious sound detection, adventitious sound classification, abnormal respiratory sound detection, abnormal respiratory sound classification, wheeze detection, wheeze classification, crackle detection, crackle classification, rhonchi detection, rhonchi classification, stridor detection, stridor classification, pleural rub detection, pleural rub classification, squawk detection, and squawk classification. Only articles were included that focused on adventitious sound detection or classification, based on respiratory sounds, with performance reported and sufficient information provided to be approximately repeated. Investigators extracted data about the adventitious sound type analysed, approach and level of analysis, instrumentation or data source, location of sensor, amount of data obtained, data management, features, methods, and performance achieved. A total of 77 reports from the literature were included in this review. 55 (71.43%) of the studies focused on wheeze, 40 (51.95%) on crackle, 9 (11.69%) on stridor, 9 (11

  12. The isolation of low frequency impact sounds in hotel construction

    NASA Astrophysics Data System (ADS)

    LoVerde, John J.; Dong, David W.

    2002-11-01

    One of the design challenges in the acoustical design of hotels is reducing low frequency sounds from footfalls occurring on both carpeted and hard-surfaced floors. Research on low frequency impact noise [W. Blazier and R. DuPree, J. Acoust. Soc. Am. 96, 1521-1532 (1994)] resulted in a conclusion that in wood construction low frequency impact sounds were clearly audible and that feasible control methods were not available. The results of numerous FIIC (Field Impact Insulation Class) measurements performed in accordance with ASTM E1007 indicate the lack of correlation between FIIC ratings and the reaction of occupants in the room below. The measurements presented include FIIC ratings and sound pressure level measurements below the ASTM E1007 low frequency limit of 100 Hertz, and reveal that excessive sound levels in the frequency range of 63 to 100 Hertz correlate with occupant complaints. Based upon this history, a tentative criterion for maximum impact sound level in the low frequency range is presented. The results presented of modifying existing constructions to reduce the transmission of impact sounds at low frequencies indicate that there may be practical solutions to this longstanding problem.
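
    A minimal sketch of the kind of low-frequency check the record suggests: energetically summing the 63-100 Hz one-third-octave impact sound levels and comparing the result to a tentative limit; the band levels and the 70 dB criterion below are hypothetical placeholders, not the paper's measured data or proposed value.

    ```python
    import math

    def energy_sum(levels_db):
        """Energetic (power) sum of one-third-octave band sound pressure levels."""
        return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db))

    # Hypothetical measured impact sound levels in the 63, 80 and 100 Hz
    # one-third-octave bands, and a hypothetical low-frequency limit.
    low_freq_bands = {63: 68.0, 80: 65.0, 100: 62.0}
    criterion_db = 70.0

    total = energy_sum(low_freq_bands.values())
    verdict = "exceeds" if total > criterion_db else "meets"
    print(f"63-100 Hz energy sum: {total:.1f} dB ({verdict} the tentative {criterion_db} dB limit)")
    ```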

  13. Designing sound and visual components for enhancement of urban soundscapes.

    PubMed

    Hong, Joo Young; Jeon, Jin Yong

    2013-09-01

    The aim of this study is to investigate the effect of audio-visual components on environmental quality to improve soundscape. Natural sounds with road traffic noise and visual components in urban streets were evaluated through laboratory experiments. Waterfall and stream water sounds, as well as bird sounds, were selected to enhance the soundscape. Sixteen photomontages of a streetscape were constructed in combination with two types of water features and three types of vegetation which were chosen as positive visual components. The experiments consisted of audio-only, visual-only, and audio-visual conditions. The preferences and environmental qualities of the stimuli were evaluated by a numerical scale and 12 pairs of adjectives, respectively. The results showed that bird sounds were the most preferred among the natural sounds, while the sound of falling water was found to degrade the soundscape quality when the road traffic noise level was high. The visual effects of vegetation on aesthetic preference were significant, but those of water features were relatively small. It was revealed that the perceptual dimensions of the environment differed depending on the noise levels. Particularly, the acoustic comfort factor related to soundscape quality considerably influenced preference for the overall environment at a higher level of road traffic noise.

  14. Examining INM Accuracy Using Empirical Sound Monitoring and Radar Data

    NASA Technical Reports Server (NTRS)

    Miller, Nicholas P.; Anderson, Grant S.; Horonjeff, Richard D.; Kimura, Sebastian; Miller, Jonathan S.; Senzig, David A.; Thompson, Richard H.; Shepherd, Kevin P. (Technical Monitor)

    2000-01-01

    Aircraft noise measurements were made using noise monitoring systems at Denver International and Minneapolis St. Paul Airports. Measured sound exposure levels for a large number of operations of a wide range of aircraft types were compared with predictions using the FAA's Integrated Noise Model. In general it was observed that measured levels exceeded the predicted levels by a significant margin. These differences varied according to the type of aircraft and also depended on the distance from the aircraft. Many of the assumptions which affect the predicted sound levels were examined but none were able to fully explain the observed differences.

  15. Acoustoelasticity. [sound-structure interaction

    NASA Technical Reports Server (NTRS)

    Dowell, E. H.

    1977-01-01

    Sound or pressure variations inside bounded enclosures are investigated. Mathematical models are given for determining: (1) the interaction between the sound pressure field and the flexible wall of a Helmholtz resonator; (2) coupled fluid-structural motion of an acoustic cavity with a flexible and/or absorbing wall; (3) acoustic natural modes in multiple connected cavities; and (4) the forced response of a cavity with a flexible and/or absorbing wall. Numerical results are discussed.

  16. Recognition and characterization of unstructured environmental sounds

    NASA Astrophysics Data System (ADS)

    Chu, Selina

    2011-12-01

    exploit and label new unlabeled audio data. The final components of my thesis involve investigating the learning of sound structures for generalization and applying the proposed ideas to context-aware applications. The inherent nature of environmental sound is noisy, with relatively large amounts of overlapping events between different environments. Environmental sounds contain large variances even within a single environment type, and frequently there are no divisible or clear boundaries between some types. Traditional methods of classification are generally not robust enough to handle classes with overlaps, so such audio requires representation by complex models. Using a deep learning architecture provides a way to obtain a generative, model-based method for classification. Specifically, I considered the use of Deep Belief Networks (DBNs) to model environmental audio and investigated their applicability with noisy data to improve robustness and generalization. A framework was proposed using composite-DBNs to discover high-level representations and to learn a hierarchical structure for different acoustic environments in a data-driven fashion. Experimental results on real data sets demonstrate its effectiveness over traditional methods, with over 90% recognition accuracy for a large number of environmental sound types.
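
    As a rough illustration of the kind of generative feature-learning pipeline described above, the sketch below stacks an unsupervised restricted Boltzmann machine on a supervised read-out using scikit-learn. This is only a simplified stand-in under assumed placeholder features and labels, not the composite-DBN framework of the thesis.

        # Simplified stand-in for DBN-style environmental sound classification:
        # unsupervised RBM feature learning followed by a supervised classifier.
        # Feature matrix X and labels y are random placeholders.
        import numpy as np
        from sklearn.neural_network import BernoulliRBM
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import Pipeline
        from sklearn.preprocessing import MinMaxScaler

        rng = np.random.default_rng(0)
        X = rng.random((200, 64))          # per-clip acoustic features (placeholder)
        y = rng.integers(0, 5, size=200)   # labels for 5 assumed environment types

        model = Pipeline([
            ("scale", MinMaxScaler()),                          # RBMs expect inputs in [0, 1]
            ("rbm", BernoulliRBM(n_components=128, learning_rate=0.05,
                                 n_iter=20, random_state=0)),   # unsupervised feature layer
            ("clf", LogisticRegression(max_iter=1000)),         # supervised read-out
        ])
        model.fit(X, y)
        print("training accuracy:", model.score(X, y))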

  17. 46 CFR 298.14 - Economic soundness.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 8 2010-10-01 2010-10-01 false Economic soundness. 298.14 Section 298.14 Shipping... Eligibility § 298.14 Economic soundness. (a) Economic Evaluation. We shall not issue a Letter Commitment for... you seek Title XI financing or refinancing, will be economically sound. The economic soundness and...

  18. 46 CFR 298.14 - Economic soundness.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 8 2012-10-01 2012-10-01 false Economic soundness. 298.14 Section 298.14 Shipping... Eligibility § 298.14 Economic soundness. (a) Economic Evaluation. We shall not issue a Letter Commitment for... you seek Title XI financing or refinancing, will be economically sound. The economic soundness and...

  19. 46 CFR 298.14 - Economic soundness.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 8 2013-10-01 2013-10-01 false Economic soundness. 298.14 Section 298.14 Shipping... Eligibility § 298.14 Economic soundness. (a) Economic Evaluation. We shall not issue a Letter Commitment for... you seek Title XI financing or refinancing, will be economically sound. The economic soundness and...

  20. 46 CFR 298.14 - Economic soundness.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 8 2014-10-01 2014-10-01 false Economic soundness. 298.14 Section 298.14 Shipping... Eligibility § 298.14 Economic soundness. (a) Economic Evaluation. We shall not issue a Letter Commitment for... you seek Title XI financing or refinancing, will be economically sound. The economic soundness and...

  1. 46 CFR 298.14 - Economic soundness.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 8 2011-10-01 2011-10-01 false Economic soundness. 298.14 Section 298.14 Shipping... Eligibility § 298.14 Economic soundness. (a) Economic Evaluation. We shall not issue a Letter Commitment for... you seek Title XI financing or refinancing, will be economically sound. The economic soundness and...

  2. Modular and Adaptive Control of Sound Processing

    NASA Astrophysics Data System (ADS)

    van Nort, Douglas

    This dissertation presents research into the creation of systems for the control of sound synthesis and processing. The focus differs from much of the work related to digital musical instrument design, which has rightly concentrated on the physicality of the instrument and interface: sensor design, choice of controller, feedback to performer and so on. Often times a particular choice of sound processing is made, and the resultant parameters from the physical interface are conditioned and mapped to the available sound parameters in an exploratory fashion. The main goal of the work presented here is to demonstrate the importance of the space that lies between physical interface design and the choice of sound manipulation algorithm, and to present a new framework for instrument design that strongly considers this essential part of the design process. In particular, this research takes the viewpoint that instrument designs should be considered in a musical control context, and that both control and sound dynamics must be considered in tandem. In order to achieve this holistic approach, the work presented in this dissertation assumes complementary points of view. Instrument design is first seen as a function of musical context, focusing on electroacoustic music and leading to a view on gesture that relates perceived musical intent to the dynamics of an instrumental system. The important design concept of mapping is then discussed from a theoretical and conceptual point of view, relating perceptual, systems and mathematically-oriented ways of examining the subject. This theoretical framework gives rise to a mapping design space, functional analysis of pertinent existing literature, implementations of mapping tools, instrumental control designs and several perceptual studies that explore the influence of mapping structure. Each of these reflect a high-level approach in which control structures are imposed on top of a high-dimensional space of control and sound synthesis

  3. Sand dollar: a weight belt for the juvenile.

    PubMed

    Chia, F S

    1973-07-06

    Juvenile sand dollars (Dendraster excentricus) selectively ingest heavy sand grains from the substrate and store them in an intestinal diverticulum which may function as a weight belt, assisting the young animal to remain in the shifting sandy environment. The sand disappears from the diverticulum when the animal reaches a length of 30 millimeters.

  4. A Corticothalamic Circuit Model for Sound Identification in Complex Scenes

    PubMed Central

    Otazu, Gonzalo H.; Leibold, Christian

    2011-01-01

    The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668

  5. Learning-Related Shifts in Generalization Gradients for Complex Sounds

    PubMed Central

    Wisniewski, Matthew G.; Church, Barbara A.; Mercado, Eduardo

    2010-01-01

    Learning to discriminate stimuli can alter how one distinguishes related stimuli. For instance, training an individual to differentiate between two stimuli along a single dimension can alter how that individual generalizes learned responses. In this study, we examined the persistence of shifts in generalization gradients after training with sounds. University students were trained to differentiate two sounds that varied along a complex acoustic dimension. Students subsequently were tested on their ability to recognize a sound they experienced during training when it was presented among several novel sounds varying along this same dimension. Peak shift was observed in Experiment 1 when generalization tests immediately followed training, and in Experiment 2 when tests were delayed by 24 hours. These findings further support the universality of generalization processes across species, modalities, and levels of stimulus complexity. They also raise new questions about the mechanisms underlying learning-related shifts in generalization gradients. PMID:19815929

  6. Sound absorption and morphology characteristic of porous concrete paving blocks

    NASA Astrophysics Data System (ADS)

    Halim, N. H. Abd; Nor, H. Md; Ramadhansyah, P. J.; Mohamed, A.; Hassan, N. Abdul; Ibrahim, M. H. Wan; Ramli, N. I.; Nazri, F. Mohamed

    2017-11-01

    In this study, the sound absorption and morphology characteristics of Porous Concrete Paving Blocks (PCPB) made with different coarse aggregate sizes are presented. Three coarse aggregate sizes were used: passing 10 mm retained 5 mm (as Control), passing 8 mm retained 5 mm (8 - 5), and passing 10 mm retained 8 mm (10 - 8). The sound absorption test was conducted in an impedance tube at different frequencies. It was found that the coarse aggregate size affects the absorption level of the specimens, and that PCPB 10 - 8 gave higher sound absorption than the other blocks. In addition, the microstructure morphology of the PCPB shows more clearly the micro-cracks and voids inside the specimens, which affect the sound absorption results.
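
    As background for the impedance-tube measurement mentioned above, the sketch below computes a normal-incidence absorption coefficient from a standing-wave ratio. This is one common single-microphone procedure and is given only for illustration; the study does not state which impedance-tube method was used, and the pressure values here are hypothetical.

        # Normal-incidence absorption coefficient from a standing-wave ratio
        # (illustrative only; the block study may have used another procedure,
        # such as the two-microphone transfer-function method).
        def absorption_coefficient(p_max, p_min):
            """p_max, p_min: maximum and minimum pressure amplitudes measured
            along the tube axis at one test frequency (hypothetical values)."""
            s = p_max / p_min                    # standing-wave ratio
            reflection = (s - 1.0) / (s + 1.0)   # |R|, reflection coefficient magnitude
            return 1.0 - reflection ** 2         # alpha = 1 - |R|**2

        print(absorption_coefficient(4.0, 1.0))  # strong standing wave -> alpha ~ 0.64
        print(absorption_coefficient(1.5, 1.0))  # weak standing wave   -> alpha ~ 0.96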

  7. A spatially collocated sound thrusts a flash into awareness

    PubMed Central

    Aller, Máté; Giani, Anette; Conrad, Verena; Watanabe, Masataka; Noppeney, Uta

    2015-01-01

    To interact effectively with the environment the brain integrates signals from multiple senses. It is currently unclear to what extent spatial information can be integrated across different senses in the absence of awareness. Combining dynamic continuous flash suppression (CFS) and spatial audiovisual stimulation, the current study investigated whether a sound facilitates a concurrent visual flash to elude flash suppression and enter perceptual awareness depending on audiovisual spatial congruency. Our results demonstrate that a concurrent sound boosts unaware visual signals into perceptual awareness. Critically, this process depended on the spatial congruency of the auditory and visual signals pointing towards low level mechanisms of audiovisual integration. Moreover, the concurrent sound biased the reported location of the flash as a function of flash visibility. The spatial bias of sounds on reported flash location was strongest for flashes that were judged invisible. Our results suggest that multisensory integration is a critical mechanism that enables signals to enter conscious perception. PMID:25774126

  8. Towards parameter-free classification of sound effects in movies

    NASA Astrophysics Data System (ADS)

    Chu, Selina; Narayanan, Shrikanth; Kuo, C.-C. J.

    2005-08-01

    The problem of identifying intense events via multimedia data mining in films is investigated in this work. Movies are mainly characterized by dialog, music, and sound effects. We begin our investigation by detecting interesting events through sound effects. Sound effects are neither speech nor music, but are closely associated with interesting events such as car chases and gun shots. In this work, we utilize low-level audio features, including MFCCs and energy, to identify sound effects. Previous work has shown that the hidden Markov model (HMM) works well for speech and audio signals; however, this technique requires a careful choice of model design and parameters. In this work, we introduce a framework that avoids this requirement and works well with semi- and non-parametric learning algorithms.
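
    A minimal sketch of the low-level feature extraction step named above (MFCCs plus short-time energy) is given below. The librosa library and the file name are assumptions made for illustration; the paper does not specify a toolkit.

        # Per-frame MFCC and short-time energy features for sound-effect
        # classification (librosa and "clip.wav" are illustrative assumptions).
        import numpy as np
        import librosa

        y, sr = librosa.load("clip.wav", sr=None)           # hypothetical audio clip
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape (13, n_frames)
        energy = librosa.feature.rms(y=y)                   # shape (1, n_frames)
        features = np.vstack([mfcc, energy]).T              # (n_frames, 14) feature vectors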

  9. Optimum employment of satellite indirect soundings as numerical model input

    NASA Technical Reports Server (NTRS)

    Horn, L. H.; Derber, J. C.; Koehler, T. L.; Schmidt, B. D.

    1981-01-01

    The characteristics of satellite-derived temperature soundings that would significantly affect their use as input for numerical weather prediction models were examined. Independent evaluations of satellite soundings were emphasized to better define error characteristics. Results of a Nimbus-6 sounding study reveal an underestimation of the strength of synoptic scale troughs and ridges, and associated gradients in isobaric height and temperature fields. The most significant errors occurred near the Earth's surface and the tropopause. Soundings from the TIROS-N and NOAA-6 satellites were also evaluated. Results again showed an underestimation of upper level trough amplitudes leading to weaker thermal gradient depictions in satellite-only fields. These errors show a definite correlation to the synoptic flow patterns. In a satellite-only analysis used to initialize a numerical model forecast, it was found that these synoptically correlated errors were retained in the forecast sequence.

  10. Aerodynamic sound generation of flapping wing.

    PubMed

    Bae, Youngmin; Moon, Young J

    2008-07-01

    The unsteady flow and acoustic characteristics of the flapping wing are numerically investigated for a two-dimensional model of Bombus terrestris bumblebee at hovering and forward flight conditions. The Reynolds number Re, based on the maximum translational velocity of the wing and the chord length, is 8800 and the Mach number M is 0.0485. The computational results show that the flapping wing sound is generated by two different sound generation mechanisms. A primary dipole tone is generated at wing beat frequency by the transverse motion of the wing, while other higher frequency dipole tones are produced via vortex edge scattering during a tangential motion. It is also found that the primary tone is directional because of the torsional angle in wing motion. These features are only distinct for hovering, while in forward flight condition, the wing-vortex interaction becomes more prominent due to the free stream effect. Thereby, the sound pressure level spectrum is more broadband at higher frequencies and the frequency compositions become similar in all directions.

  11. Nonlinear acoustics in cicada mating calls enhance sound propagation.

    PubMed

    Hughes, Derke R; Nuttall, Albert H; Katz, Richard A; Carter, G Clifford

    2009-02-01

    An analysis of cicada mating calls, measured in field experiments, indicates that the very high levels of acoustic energy radiated by this relatively small insect are mainly attributed to the nonlinear characteristics of the signal. The cicada emits one of the loudest sounds in all of the insect population with a sound production system occupying a physical space typically less than 3 cc. The sounds made by tymbals are amplified by the hollow abdomen, functioning as a tuned resonator, but models of the signal based solely on linear techniques do not fully account for a sound radiation capability that is so disproportionate to the insect's size. The nonlinear behavior of the cicada signal is demonstrated by combining the mutual information and surrogate data techniques; the results obtained indicate decorrelation when the phase-randomized and non-phase-randomized data separate. The Volterra expansion technique is used to fit the nonlinearity in the insect's call. The second-order Volterra estimate provides further evidence that the cicada mating calls are dominated by nonlinear characteristics and also suggests that the medium contributes to the cicada's efficient sound propagation. Application of the same principles has the potential to improve radiated sound levels for sonar applications.
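
    The surrogate-data comparison mentioned above is conventionally built from phase-randomized copies of the recorded signal; a minimal sketch of that construction is shown below. It is offered only as an illustration of the standard technique, not as the authors' exact analysis pipeline, and the input signal is synthetic.

        # Phase-randomized surrogate: keeps the power spectrum (linear structure)
        # but destroys nonlinear phase coupling, so statistics that separate the
        # original from its surrogates indicate nonlinearity.
        import numpy as np

        def phase_randomized_surrogate(x, rng=None):
            rng = np.random.default_rng() if rng is None else rng
            spectrum = np.fft.rfft(x - np.mean(x))
            phases = rng.uniform(0.0, 2.0 * np.pi, size=spectrum.shape)
            phases[0] = 0.0                                   # keep the DC bin real
            surrogate = np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=len(x))
            return surrogate + np.mean(x)

        x = np.sin(np.linspace(0.0, 60.0, 4096)) ** 3         # toy signal, not a cicada call
        s = phase_randomized_surrogate(x)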

  12. Effect of the spectrum of a high-intensity sound source on the sound-absorbing properties of a resonance-type acoustic lining

    NASA Astrophysics Data System (ADS)

    Ipatov, M. S.; Ostroumov, M. N.; Sobolev, A. F.

    2012-07-01

    Experimental results are presented on the effect of both the sound pressure level and the type of spectrum of a sound source on the impedance of an acoustic lining. The spectra under study include those of white noise, a narrow-band signal, and a signal with a preset waveform. It is found that, to obtain reliable data on the impedance of an acoustic lining from the results of interferometric measurements, the total sound pressure level of white noise or the maximal sound pressure level of a pure tone (at every oscillation frequency) needs to be identical to the total sound pressure level of the actual source at the site of acoustic lining on the channel wall.

  13. Sounds of a Star

    NASA Astrophysics Data System (ADS)

    2001-06-01

    Acoustic Oscillations in Solar-Twin "Alpha Cen A" Observed from La Silla by Swiss Team. Summary: Sound waves running through a star can help astronomers reveal its inner properties. This particular branch of modern astrophysics is known as "asteroseismology". In the case of our Sun, the brightest star in the sky, such waves have been observed for some time and have greatly improved our knowledge about what is going on inside. However, because they are much fainter, it has turned out to be very difficult to detect similar waves in other stars. Nevertheless, tiny oscillations in a solar-twin star have now been unambiguously detected by Swiss astronomers François Bouchy and Fabien Carrier from the Geneva Observatory, using the CORALIE spectrometer on the Swiss 1.2-m Leonard Euler telescope at the ESO La Silla Observatory. This telescope is mostly used for discovering exoplanets (see ESO PR 07/01). The star Alpha Centauri A is the nearest star visible to the naked eye, at a distance of a little more than 4 light-years. The new measurements show that it pulsates with a 7-minute cycle, very similar to what is observed in the Sun. Asteroseismology for Sun-like stars is likely to become an important probe of stellar theory in the near future. The state-of-the-art HARPS spectrograph, to be mounted on the ESO 3.6-m telescope at La Silla, will be able to search for oscillations in stars that are 100 times fainter than those for which such demanding observations are possible with CORALIE. [Accompanying images: PR Photo 23a/01, a graphical representation of resonating acoustic waves in the interior of a solar-like star; PR Photo 23b/01, the acoustic spectrum of Alpha Centauri A as observed with CORALIE.]

  14. Behavioral responses of a harbor porpoise (Phocoena phocoena) to playbacks of broadband pile driving sounds.

    PubMed

    Kastelein, Ronald A; van Heerden, Dorianne; Gransier, Robin; Hoek, Lean

    2013-12-01

    The high underwater sound pressure levels (SPLs) produced during pile driving to build offshore wind turbines may affect harbor porpoises. To estimate the discomfort threshold for pile driving sounds, a porpoise in a quiet pool was exposed to playbacks (46 strikes/min) at five SPLs (6 dB steps: 130-154 dB re 1 μPa). The spectrum of the impulsive sound resembled the spectrum of pile driving sound at tens of kilometers from the pile driving location in shallow water such as that found in the North Sea. The animal's behavior during test and baseline periods was compared. At and above a received broadband SPL of 136 dB re 1 μPa [zero-to-peak sound pressure level: 151 dB re 1 μPa; t90: 126 ms; sound exposure level of a single strike (SELss): 127 dB re 1 μPa²s] the porpoise's respiration rate increased in response to the pile driving sounds. At higher levels, he also jumped out of the water more often. Wild porpoises are expected to move tens of kilometers away from offshore pile driving locations; response distances will vary with context, the sounds' source level, parameters influencing sound propagation, and background noise levels. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Sounds of silence: How to animate virtual worlds with sound

    NASA Technical Reports Server (NTRS)

    Astheimer, Peter

    1993-01-01

    Sounds are an integral and sometimes annoying part of our daily life. Virtual worlds which imitate natural environments gain a lot of authenticity from fast, high quality visualization combined with sound effects. Sounds help to increase the degree of immersion for human dwellers in imaginary worlds significantly. The virtual reality toolkit of IGD (Institute for Computer Graphics) features a broad range of standard visual and advanced real-time audio components which interpret an object-oriented definition of the scene. The virtual reality system 'Virtual Design' realized with the toolkit enables the designer of virtual worlds to create a true audiovisual environment. Several examples on video demonstrate the usage of the audio features in Virtual Design.

  16. Marine Forage Fishes in Puget Sound

    DTIC Science & Technology

    2007-03-01

    Cover and front-matter text only: Technical Report 2007-03, Marine Forage Fishes in Puget Sound, prepared in support of the Puget Sound Nearshore Partnership by Dan Penttila; Valued Ecosystem Components Report Series; published by Seattle District, U.S. Army Corps of Engineers, Seattle. Front cover: Pacific herring (courtesy of Washington Sea Grant). The excerpt also references Orcas in Puget Sound, Puget Sound Nearshore Partnership Report No. 2007-01.

  17. The hearing threshold of a harbor porpoise (Phocoena phocoena) for impulsive sounds (L).

    PubMed

    Kastelein, Ronald A; Gransier, Robin; Hoek, Lean; de Jong, Christ A F

    2012-08-01

    The distance at which harbor porpoises can hear underwater detonation sounds is unknown, but depends, among other factors, on the hearing threshold of the species for impulsive sounds. Therefore, the underwater hearing threshold of a young harbor porpoise for an impulsive sound, designed to mimic a detonation pulse, was quantified by using a psychophysical technique. The synthetic exponential pulse with a 5 ms time constant was produced and transmitted by an underwater projector in a pool. The resulting underwater sound, though modified by the response of the projection system and by the pool, exhibited the characteristic features of detonation sounds: a zero-to-peak sound pressure level at least 30 dB (re 1 s⁻¹) higher than the sound exposure level, and a short duration (34 ms). The animal's 50% detection threshold for this impulsive sound occurred at a received unweighted broadband sound exposure level of 60 dB re 1 μPa²s. It is shown that the porpoise's audiogram for short-duration tonal signals [Kastelein et al., J. Acoust. Soc. Am. 128, 3211-3222 (2010)] can be used to estimate its hearing threshold for impulsive sounds.
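
    The two metrics quoted above (zero-to-peak sound pressure level and sound exposure level) can be computed from a pressure time series as in the short sketch below; the synthetic pulse, sample rate, and amplitudes are hypothetical and only the standard definitions are taken as given.

        # Zero-to-peak SPL and sound exposure level (SEL) of an impulsive sound,
        # computed from a pressure waveform p(t) in pascals (synthetic example).
        import numpy as np

        fs = 96000                                           # sample rate, Hz (assumed)
        t = np.arange(0, 0.2, 1.0 / fs)
        p = 5.0 * np.exp(-t / 0.005) * np.sin(2 * np.pi * 800 * t)   # decaying pulse, Pa

        p_ref = 1e-6                                         # 1 uPa reference (underwater)
        spl_peak = 20 * np.log10(np.max(np.abs(p)) / p_ref)  # dB re 1 uPa, zero-to-peak
        sel = 10 * np.log10(np.sum(p ** 2) / fs / (p_ref ** 2 * 1.0))  # dB re 1 uPa^2 s
        print(f"zero-to-peak SPL: {spl_peak:.1f} dB, SEL: {sel:.1f} dB")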

  18. Sound source tracking device for telematic spatial sound field reproduction

    NASA Astrophysics Data System (ADS)

    Cardenas, Bruno

    This research describes an algorithm that localizes sound sources for use in telematic applications. The localization algorithm is based on amplitude differences between the channels of a microphone array of directional shotgun microphones. The amplitude differences are used to locate multiple performers and to reproduce their voices, which were recorded at close distance with lavalier microphones, spatially corrected using a loudspeaker rendering system. In order to track multiple sound sources in parallel, the information gained from the lavalier microphones is used to estimate the signal-to-noise ratio between each performer and the concurrent performers.
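
    A minimal sketch of a bearing estimate from inter-channel amplitude differences is given below. The energy-weighted average of the microphones' aiming angles is an illustrative scheme chosen here for brevity, not the dissertation's algorithm, and the angles and amplitudes are invented.

        # Coarse source bearing from amplitude differences across directional
        # microphone channels (illustrative weighting scheme, invented values).
        import numpy as np

        mic_azimuths_deg = np.array([-60.0, -20.0, 20.0, 60.0])  # assumed aiming angles
        frame_rms = np.array([0.02, 0.11, 0.35, 0.09])           # per-channel RMS amplitude

        weights = frame_rms ** 2 / np.sum(frame_rms ** 2)        # energy-based weights
        bearing = float(np.sum(weights * mic_azimuths_deg))      # weighted mean azimuth
        print(f"estimated bearing: {bearing:.1f} deg")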

  19. Sounding the field: recent works in sound studies.

    PubMed

    Boon, Tim

    2015-09-01

    For sound studies, the publication of a 593-page handbook, not to mention the establishment of at least one society - the European Sound Studies Association - might seem to signify the emergence of a new academic discipline. Certainly, the books under consideration here, alongside many others, testify to an intensification of concern with the aural dimensions of culture. Some of this work comes from HPS and STS, some from musicology and cultural studies. But all of it should concern members of our disciplines, as it represents a long-overdue foregrounding of the aural in how we think about the intersections of science, technology and culture.

  20. Encoding of sound envelope transients in the auditory cortex of juvenile rats and adult rats.

    PubMed

    Lu, Qi; Jiang, Cuiping; Zhang, Jiping

    2016-02-01

    Accurate neural processing of time-varying sound amplitude and spectral information is vital for species-specific communication. During postnatal development, cortical processing of sound frequency undergoes progressive refinement; however, it is not clear whether cortical processing of sound envelope transients also undergoes age-related changes. We determined the dependence of neural response strength and first-spike latency on sound rise-fall time across sound levels in the primary auditory cortex (A1) of juvenile (P20-P30) and adult (8-10 weeks) rats. A1 neurons were categorized as "all-pass", "short-pass", or "mixed" ("all-pass" at high sound levels to "short-pass" at lower sound levels) based on the normalized response strength vs. rise-fall time functions across sound levels. The proportions of A1 neurons within each of the three categories were similar in juvenile and adult rats. In general, with increasing rise-fall time, the average response strength decreased and the average first-spike latency increased in A1 neurons of both groups. At a given sound level and rise-fall time, the average normalized neural response strength did not differ significantly between the two age groups. However, A1 neurons in juvenile rats showed greater absolute response strength and longer first-spike latency than those in adult rats. In addition, at a constant sound level, the average first-spike latency of juvenile A1 neurons was more sensitive to changes in rise-fall time. Our results demonstrate the dependence of rat A1 neuron responses on sound rise-fall time, and suggest that response latency exhibits some age-related changes in the cortical representation of sound envelope rise time. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Sound isolation performance of interior acoustical sash

    NASA Astrophysics Data System (ADS)

    Tocci, Gregory

    2002-05-01

    In existing as well as new buildings, an interior light of glass mounted on the inside of a prime window is used to improve the sound transmission loss otherwise obtained by the prime window alone. Interior acoustical sash is most often 1/4 in. (6 mm) monolithic or laminated glass, and is typically spaced 3 in. to 6 in. from the glass of the prime window. This paper presents TL data measured at Riverbank Acoustical Laboratories by Solutia (formerly Monsanto) for lightweight prime windows of various types, with and without interior acoustical sash glazed with 1/4 in. laminated glass. The TL data are used to estimate the A-weighted insertion loss of interior acoustical sash when applied to prime windows glazed with lightweight glass for four transportation noise source types: highway traffic, aircraft, electric rail, and diesel rail. The analysis has also been extended to determine the insertion loss expressed as a change in OITC. The data also exhibit the reductions in insertion loss that can result from short-circuiting the interior acoustical sash with the prime window. [Work supported by Solutia, Inc.]
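
    The A-weighted insertion loss estimate described above amounts to applying a source spectrum to the band transmission losses with and without the sash and comparing the resulting A-weighted levels; a minimal sketch follows. The band levels and TL values below are invented for illustration, while the A-weighting corrections are the standard octave-band values.

        # A-weighted insertion loss of an interior sash, estimated from octave-band
        # exterior source levels and band transmission losses (band values are
        # hypothetical; A-weighting corrections are the standard ones).
        import numpy as np

        a_wt = np.array([-26.2, -16.1, -8.6, -3.2, 0.0, 1.2, 1.0])  # 63 Hz ... 4 kHz
        source = np.array([78, 76, 74, 72, 70, 66, 60])             # traffic-like spectrum, dB
        tl_prime = np.array([18, 20, 24, 28, 32, 34, 36])           # prime window alone, dB
        tl_sash = np.array([22, 26, 32, 38, 44, 48, 50])            # with interior sash, dB

        def dba(band_levels):
            # Energetic sum of A-weighted band levels.
            return 10 * np.log10(np.sum(10 ** ((band_levels + a_wt) / 10.0)))

        insertion_loss = dba(source - tl_prime) - dba(source - tl_sash)
        print(f"A-weighted insertion loss: {insertion_loss:.1f} dBA")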

  2. Scattering of sound by atmospheric turbulence predictions in a refractive shadow zone

    NASA Technical Reports Server (NTRS)

    Mcbride, Walton E.; Bass, Henry E.; Raspet, Richard; Gilbert, Kenneth E.

    1990-01-01

    According to ray theory, regions exist in an upward refracting atmosphere where no sound should be present. Experiments show, however, that appreciable sound levels penetrate these so-called shadow zones. Two mechanisms contribute to sound in the shadow zone: diffraction and turbulent scattering of sound. Diffractive effects can be pronounced at lower frequencies but are small at high frequencies. In the short wavelength limit, then, scattering due to turbulence should be the predominant mechanism involved in producing the sound levels measured in shadow zones. No existing analytical method includes turbulence effects in the prediction of sound pressure levels in upward refractive shadow zones. In order to obtain quantitative average sound pressure level predictions, a numerical simulation of the effect of atmospheric turbulence on sound propagation is performed. The simulation is based on scattering from randomly distributed scattering centers ('turbules'). Sound pressure levels are computed for many realizations of a turbulent atmosphere. Predictions from the numerical simulation are compared with existing theories and experimental data.

  3. Sounds Like a Winner.

    ERIC Educational Resources Information Center

    Rittner-Heir, Robbin M.

    2001-01-01

    Explains how the Ocoee Middle School (Orlando, Florida) improved the ability of students to hear in their classrooms and gained improvements in their attention levels and their conduct. Specific design concepts that make Ocoee Middle School the SMART school of the future while also controlling design and construction costs are examined. (GR)

  4. A Flexible 360-Degree Thermal Sound Source Based on Laser Induced Graphene

    PubMed Central

    Tao, Lu-Qi; Liu, Ying; Ju, Zhen-Yi; Tian, He; Xie, Qian-Yi; Yang, Yi; Ren, Tian-Ling

    2016-01-01

    A flexible sound source is essential in a whole flexible system. It’s hard to integrate a conventional sound source based on a piezoelectric part into a whole flexible system. Moreover, the sound pressure from the back side of a sound source is usually weaker than that from the front side. With the help of direct laser writing (DLW) technology, the fabrication of a flexible 360-degree thermal sound source becomes possible. A 650-nm low-power laser was used to reduce the graphene oxide (GO). The stripped laser induced graphene thermal sound source was then attached to the surface of a cylindrical bottle so that it could emit sound in a 360-degree direction. The sound pressure level and directivity of the sound source were tested, and the results were in good agreement with the theoretical results. Because of its 360-degree sound field, high flexibility, high efficiency, low cost, and good reliability, the 360-degree thermal acoustic sound source will be widely applied in consumer electronics, multi-media systems, and ultrasonic detection and imaging. PMID:28335239

  5. The influence of company identity on the perception of vehicle sounds.

    PubMed

    Humphreys, Louise; Giudice, Sebastiano; Jennings, Paul; Cain, Rebecca; Song, Wookeun; Dunne, Garry

    2011-04-01

    In order to determine how the interior of a car should sound, automotive manufacturers often rely on obtaining data from individual evaluations of vehicle sounds. Company identity could play a role in these appraisals, particularly when individuals are comparing cars from opposite ends of the performance spectrum. This research addressed the question: does company identity influence the evaluation of automotive sounds belonging to cars of a similar performance level and from the same market segment? Participants listened to car sounds from two competing manufacturers, together with control sounds. Before listening to each sound, participants were presented with the correct company identity for that sound, the incorrect identity or were given no information about the identity of the sound. The results showed that company identity did not influence appraisals of high performance cars belonging to different manufacturers. These results have positive implications for methodologies employed to capture the perceptions of individuals. STATEMENT OF RELEVANCE: A challenge in automotive design is to set appropriate targets for vehicle sounds, relying on understanding subjective reactions of individuals to such sounds. This paper assesses the role of company identity in influencing these subjective reactions and will guide sound evaluation studies, in which the manufacturer is often apparent.

  6. Dynamic sound localization in cats

    PubMed Central

    Ruhland, Janet L.; Jones, Amy E.

    2015-01-01

    Sound localization in cats and humans relies on head-centered acoustic cues. Studies have shown that humans are able to localize sounds during rapid head movements that are directed toward the target or other objects of interest. We studied whether cats are able to utilize similar dynamic acoustic cues to localize acoustic targets delivered during rapid eye-head gaze shifts. We trained cats with visual-auditory two-step tasks in which we presented a brief sound burst during saccadic eye-head gaze shifts toward a prior visual target. No consistent or significant differences in accuracy or precision were found between this dynamic task (2-step saccade) and the comparable static task (single saccade when the head is stable) in either horizontal or vertical direction. Cats appear to be able to process dynamic auditory cues and execute complex motor adjustments to accurately localize auditory targets during rapid eye-head gaze shifts. PMID:26063772

  7. Sound localization in the alligator.

    PubMed

    Bierman, Hilary S; Carr, Catherine E

    2015-11-01

    In early tetrapods, it is assumed that the tympana were acoustically coupled through the pharynx and therefore inherently directional, acting as pressure difference receivers. The later closure of the middle ear cavity in turtles, archosaurs, and mammals is a derived condition, and would have changed the ear by decoupling the tympana. Isolation of the middle ears would then have led to selection for structural and neural strategies to compute sound source localization in both archosaurs and mammalian ancestors. In the archosaurs (birds and crocodilians) the presence of air spaces in the skull provided connections between the ears that have been exploited to improve directional hearing, while neural circuits mediating sound localization are well developed. In this review, we will focus primarily on directional hearing in crocodilians, where vocalization and sound localization are thought to be ecologically important, and indicate important issues still awaiting resolution. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. The prediction of en route noise levels for a DC-9 aircraft

    NASA Technical Reports Server (NTRS)

    Weir, Donald S.

    1988-01-01

    En route noise for advanced propfan powered aircraft has become an issue of concern for the Federal Aviation Administration. The NASA Aircraft Noise Prediction Program (ANOPP) is used to demonstrate the source noise and propagation effects for an aircraft in level flight up to 35,000 feet altitude. One-third octave band spectra of the source noise, atmospheric absorption loss, and received noise are presented. The predicted maximum A-weighted sound pressure level is compared to measured data from the Aeronautical Research Institute of Sweden. ANOPP is shown to be an effective tool in evaluating the en route noise characteristics of a DC-9 aircraft.

  9. An exploratory survey of noise levels associated with a 100kW wind turbine

    NASA Technical Reports Server (NTRS)

    Balombin, J. R.

    1980-01-01

    Noise measurements of a 125-foot-diameter, 100 kW wind turbine are presented. The data include measurements as functions of distance from the turbine and directivity angle, and cover a frequency range from 1 Hz to several kHz. Potential community impact is discussed in terms of A-weighted noise levels relative to background levels and the infrasonic spectral content. Finally, the change in the sound power spectrum associated with a change in rotor speed is described. The acoustic impact of a wind turbine of this size is judged to be minimal.

  10. 40 CFR 205.54-2 - Sound data acquisition system.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... of 86 dB (rms) and the level indicated for an octave band of random noise of equal energy as the... Publication 179, Precision Sound Level Meters. (v) Magnetic tape recorders. No requirements are described in...) Calibrate tape recorders using the brand and type of magnetic tape used for actual data acquisition...

  11. 46 CFR 56.50-90 - Sounding devices.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... fuel-oil tank may terminate in any space where the risk of ignition of spillage from the pipe might... following requirements are met: (1) In addition to the sounding pipe, the fuel-oil tank has an oil-level... of oil-level gauges with flat glasses and self-closing valves between the gauges and fuel tanks is...

  12. Sound recordings of road maintenance equipment on the Lincoln National Forest, New Mexico

    Treesearch

    D. K. Delaney; T. G. Grubb

    2004-01-01

    The purpose of this pilot study was to record, characterize, and quantify road maintenance activity in Mexican spotted owl (Strix occidentalis lucida) habitat to gauge potential sound level exposure for owls during road maintenance activities. We measured sound levels from three different types of road maintenance equipment (rock crusher/loader,...

  13. Relations among pure-tone sound stimuli, neural activity, and the loudness sensation

    NASA Technical Reports Server (NTRS)

    Howes, W. L.

    1972-01-01

    Both the physiological and psychological responses to pure-tone sound stimuli are used to derive formulas which: (1) relate the loudness, loudness level, and sound-pressure level of pure tones; (2) apply continuously over most of the acoustic regime, including the loudness threshold; and (3) contain no undetermined coefficients. Some of the formulas are fundamental for calculating the loudness of any sound. Power-law formulas relating the pure-tone sound stimulus, neural activity, and loudness are derived from published data.
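
    The abstract above does not reproduce the formulas themselves; as background, the sketch below shows the conventional textbook relations that such loudness work builds on (the sone-phon definition and a Stevens-type power law for a 1 kHz tone). These standard relations are given only for orientation and are not the paper's coefficient-free formulas.

        # Conventional loudness relations for a 1 kHz pure tone, for orientation only.
        def sones_from_phons(phons):
            # 1 sone is defined at 40 phons; loudness roughly doubles per +10 phons.
            return 2.0 ** ((phons - 40.0) / 10.0)

        def stevens_loudness(p, p_ref=20e-6, k=1.0):
            # Stevens-type power law: loudness grows roughly as pressure**0.6
            # (k is an arbitrary scale factor in this sketch).
            return k * (p / p_ref) ** 0.6

        print(sones_from_phons(40.0), sones_from_phons(50.0))   # 1.0, 2.0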

  14. Making sound vortices by metasurfaces

    SciTech Connect

    Ye, Liping; Qiu, Chunyin, E-mail: cyqiu@whu.edu.cn; Lu, Jiuyang

    Based on the Huygens-Fresnel principle, a metasurface structure is designed to generate a sound vortex beam in airborne environment. The metasurface is constructed by a thin planar plate perforated with a circular array of deep subwavelength resonators with desired phase and amplitude responses. The metasurface approach in making sound vortices is validated well by full-wave simulations and experimental measurements. Potential applications of such artificial spiral beams can be anticipated, as exemplified experimentally by the torque effect exerting on an absorbing disk.

  15. Making sound vortices by metasurfaces

    NASA Astrophysics Data System (ADS)

    Ye, Liping; Qiu, Chunyin; Lu, Jiuyang; Tang, Kun; Jia, Han; Ke, Manzhu; Peng, Shasha; Liu, Zhengyou

    2016-08-01

    Based on the Huygens-Fresnel principle, a metasurface structure is designed to generate a sound vortex beam in airborne environment. The metasurface is constructed by a thin planar plate perforated with a circular array of deep subwavelength resonators with desired phase and amplitude responses. The metasurface approach in making sound vortices is validated well by full-wave simulations and experimental measurements. Potential applications of such artificial spiral beams can be anticipated, as exemplified experimentally by the torque effect exerting on an absorbing disk.
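
    The essential design step described in both records above is imposing an azimuthal phase ramp across the ring of resonator cells; a minimal sketch of that phase assignment is shown below. The cell count and topological charge are arbitrary illustration values, not the dimensions of the fabricated metasurface.

        # Discrete azimuthal phase profile for a ring of N resonator cells intended
        # to radiate a vortex beam of topological charge l (values are illustrative).
        import numpy as np

        N, l = 16, 1                                  # number of cells, topological charge
        theta = 2 * np.pi * np.arange(N) / N          # cell azimuths around the ring
        phase = np.mod(l * theta, 2 * np.pi)          # required transmission phase per cell
        for n, (th, ph) in enumerate(zip(theta, phase)):
            print(f"cell {n:2d}: azimuth {np.degrees(th):6.1f} deg -> phase {np.degrees(ph):6.1f} deg")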

  16. The Multisensory Sound Lab: Sounds You Can See and Feel.

    ERIC Educational Resources Information Center

    Lederman, Norman; Hendricks, Paula

    1994-01-01

    A multisensory sound lab has been developed at the Model Secondary School for the Deaf (District of Columbia). A special floor allows vibrations to be felt, and a spectrum analyzer displays frequencies and harmonics visually. The lab is used for science education, auditory training, speech therapy, music and dance instruction, and relaxation…

  17. Attentional Capacity Limits Gap Detection during Concurrent Sound Segregation.

    PubMed

    Leung, Ada W S; Jolicoeur, Pierre; Alain, Claude

    2015-11-01

    Detecting a brief silent interval (i.e., a gap) is more difficult when listeners perceive two concurrent sounds rather than one in a sound containing a mistuned harmonic in otherwise in-tune harmonics. This impairment in gap detection may reflect the interaction of low-level encoding or the division of attention between two sound objects, both of which could interfere with signal detection. To distinguish between these two alternatives, we compared ERPs during active and passive listening with complex harmonic tones that could include a gap, a mistuned harmonic, both features, or neither. During active listening, participants indicated whether they heard a gap irrespective of mistuning. During passive listening, participants watched a subtitled muted movie of their choice while the same sounds were presented. Gap detection was impaired when the complex sounds included a mistuned harmonic that popped out as a separate object. The ERP analysis revealed an early gap-related activity that was little affected by mistuning during the active or passive listening condition. However, during active listening, there was a marked decrease in the late positive wave that was thought to index attention and response-related processes. These results suggest that the limitation in detecting the gap is related to attentional processing, possibly divided attention induced by the concurrent sound objects, rather than deficits in preattentional sensory encoding.

  18. Human brain regions involved in recognizing environmental sounds.

    PubMed

    Lewis, James W; Wightman, Frederic L; Brefczynski, Julie A; Phinney, Raymond E; Binder, Jeffrey R; DeYoe, Edgar A

    2004-09-01

    To identify the brain regions preferentially involved in environmental sound recognition (comprising portions of a putative auditory 'what' pathway), we collected functional imaging data while listeners attended to a wide range of sounds, including those produced by tools, animals, liquids and dropped objects. These recognizable sounds, in contrast to unrecognizable, temporally reversed control sounds, evoked activity in a distributed network of brain regions previously associated with semantic processing, located predominantly in the left hemisphere, but also included strong bilateral activity in posterior portions of the middle temporal gyri (pMTG). Comparisons with earlier studies suggest that these bilateral pMTG foci partially overlap cortex implicated in high-level visual processing of complex biological motion and recognition of tools and other artifacts. We propose that the pMTG foci process multimodal (or supramodal) information about objects and object-associated motion, and that this may represent 'action' knowledge that can be recruited for purposes of recognition of familiar environmental sound-sources. These data also provide a functional and anatomical explanation for the symptoms of pure auditory agnosia for environmental sounds reported in human lesion studies.

  19. Selecting a pharmacy layout design using a weighted scoring system.

    PubMed

    McDowell, Alissa L; Huang, Yu-Li

    2012-05-01

    A weighted scoring system was used to select a pharmacy layout redesign. Facilities layout design techniques were applied at a local hospital pharmacy using a step-by-step design process. The process involved observing and analyzing the current situation, observing the current available space, completing activity flow charts of the pharmacy processes, completing communication and material relationship charts to detail which areas in the pharmacy were related to one another and how they were related, researching applications in other pharmacies or in scholarly works that could be beneficial, numerically defining space requirements for areas within the pharmacy, measuring the available space within the pharmacy, developing a set of preliminary designs, and modifying preliminary designs so they were all acceptable to the pharmacy staff. To select a final layout that could be implemented in the pharmacy, those layouts were compared via a weighted scoring system. The weighted aspect further allowed additional emphasis on categories based on their effect on pharmacy performance. The results produced a beneficial layout design as determined through simulated models of the pharmacy operation that more effectively allocated and strategically located space to improve transportation distances and materials handling, employee utilization, and ergonomics. Facilities layout designs for a hospital pharmacy were evaluated using a weighted scoring system to identify a design that was superior to both the current layout and alternative layouts in terms of feasibility, cost, patient safety, employee safety, flexibility, robustness, transportation distance, employee utilization, objective adherence, maintainability, usability, and environmental impact.
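
    A weighted scoring comparison of the kind described above reduces to multiplying each layout's criterion scores by the criterion weights and summing; a minimal sketch follows. The criteria, weights, and scores below are hypothetical and chosen only to illustrate the arithmetic, not the hospital's actual evaluation.

        # Weighted scoring of candidate layouts: weighted sum of criterion scores
        # (criteria, weights, and 1-5 scores are hypothetical).
        criteria = {"patient safety": 0.25, "transport distance": 0.20,
                    "employee utilization": 0.20, "cost": 0.15,
                    "flexibility": 0.10, "ergonomics": 0.10}

        scores = {
            "current":  {"patient safety": 3, "transport distance": 2, "employee utilization": 3,
                         "cost": 5, "flexibility": 2, "ergonomics": 3},
            "layout A": {"patient safety": 4, "transport distance": 4, "employee utilization": 4,
                         "cost": 3, "flexibility": 4, "ergonomics": 4},
            "layout B": {"patient safety": 5, "transport distance": 3, "employee utilization": 4,
                         "cost": 2, "flexibility": 3, "ergonomics": 5},
        }

        for layout, s in scores.items():
            total = sum(criteria[c] * s[c] for c in criteria)
            print(f"{layout}: weighted score {total:.2f}")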

  20. What is the link between synaesthesia and sound symbolism?

    PubMed Central

    Bankieris, Kaitlyn; Simner, Julia

    2015-01-01

    Sound symbolism is a property of certain words which have a direct link between their phonological form and their semantic meaning. In certain instances, sound symbolism can allow non-native speakers to understand the meanings of etymologically unfamiliar foreign words, although the mechanisms driving this are not well understood. We examined whether sound symbolism might be mediated by the same types of cross-modal processes that typify synaesthetic experiences. Synaesthesia is an inherited condition in which sensory or cognitive stimuli (e.g., sounds, words) cause additional, unusual cross-modal percepts (e.g., sounds trigger colours, words trigger tastes). Synaesthesia may be an exaggeration of normal cross-modal processing, and if so, there may be a link between synaesthesia and the type of cross-modality inherent in sound symbolism. To test this we predicted that synaesthetes would have superior understanding of unfamiliar (sound symbolic) foreign words. In our study, 19 grapheme-colour synaesthetes and 57 non-synaesthete controls were presented with 400 adjectives from 10 unfamiliar languages and were asked to guess the meaning of each word in a two-alternative forced-choice task. Both groups showed superior understanding compared to chance levels, but synaesthetes significantly outperformed controls. This heightened ability suggests that sound symbolism may rely on the types of cross-modal integration that drive synaesthetes’ unusual experiences. It also suggests that synaesthesia endows or co-occurs with heightened multi-modal skills, and that this can arise in domains unrelated to the specific form of synaesthesia. PMID:25498744

  1. Cell type-specific suppression of mechanosensitive genes by audible sound stimulation.

    PubMed

    Kumeta, Masahiro; Takahashi, Daiji; Takeyasu, Kunio; Yoshimura, Shige H

    2018-01-01

    Audible sound is a ubiquitous environmental factor in nature that transmits oscillatory compressional pressure through the substances. To investigate the property of the sound as a mechanical stimulus for cells, an experimental system was set up using 94.0 dB sound which transmits approximately 10 mPa pressure to the cultured cells. Based on research on mechanotransduction and ultrasound effects on cells, gene responses to the audible sound stimulation were analyzed by varying several sound parameters: frequency, wave form, composition, and exposure time. Real-time quantitative PCR analyses revealed a distinct suppressive effect for several mechanosensitive and ultrasound-sensitive genes that were triggered by sounds. The effect was clearly observed in a wave form- and pressure level-specific manner, rather than the frequency, and persisted for several hours. At least two mechanisms are likely to be involved in this sound response: transcriptional control and RNA degradation. ST2 stromal cells and C2C12 myoblasts exhibited a robust response, whereas NIH3T3 cells were partially and NB2a neuroblastoma cells were completely insensitive, suggesting a cell type-specific response to sound. These findings reveal a cell-level systematic response to audible sound and uncover novel relationships between life and sound.

  2. Cell type-specific suppression of mechanosensitive genes by audible sound stimulation

    PubMed Central

    Takahashi, Daiji; Takeyasu, Kunio; Yoshimura, Shige H.

    2018-01-01

    Audible sound is a ubiquitous environmental factor in nature that transmits oscillatory compressional pressure through the substances. To investigate the property of the sound as a mechanical stimulus for cells, an experimental system was set up using 94.0 dB sound which transmits approximately 10 mPa pressure to the cultured cells. Based on research on mechanotransduction and ultrasound effects on cells, gene responses to the audible sound stimulation were analyzed by varying several sound parameters: frequency, wave form, composition, and exposure time. Real-time quantitative PCR analyses revealed a distinct suppressive effect for several mechanosensitive and ultrasound-sensitive genes that were triggered by sounds. The effect was clearly observed in a wave form- and pressure level-specific manner, rather than the frequency, and persisted for several hours. At least two mechanisms are likely to be involved in this sound response: transcriptional control and RNA degradation. ST2 stromal cells and C2C12 myoblasts exhibited a robust response, whereas NIH3T3 cells were partially and NB2a neuroblastoma cells were completely insensitive, suggesting a cell type-specific response to sound. These findings reveal a cell-level systematic response to audible sound and uncover novel relationships between life and sound. PMID:29385174

  3. Loudness-dependent behavioral responses and habituation to sound by the longfin squid (Doryteuthis pealeii).

    PubMed

    Mooney, T Aran; Samson, Julia E; Schlunk, Andrea D; Zacarias, Samantha

    2016-07-01

    Sound is an abundant cue in the marine environment, yet we know little regarding the frequency range and levels which induce behavioral responses in ecologically key marine invertebrates. Here we address the range of sounds that elicit unconditioned behavioral responses in the squid Doryteuthis pealeii, the types of responses generated, and how responses change over multiple sound exposures. A variety of response types were evoked, from inking and jetting to body pattern changes and fin movements. Squid responded to sounds from 80 to 1000 Hz, with response rates diminishing at the higher and lower ends of this frequency range. Animals responded to the lowest sound levels in the 200-400 Hz range. Inking, an escape response, was confined to the lower frequencies and highest sound levels; jetting was more widespread. Response latencies were variable but typically occurred after 0.36 s (mean) for jetting and 0.14 s for body pattern changes; pattern changes occurred significantly faster. These results demonstrate that squid can exhibit a range of behavioral responses to sound, including fleeing, deimatic, and protean behaviors, all of which are associated with predator evasion. Response types were frequency and sound level dependent, reflecting a relative loudness concept of sound perception in squid.

  4. Vocal Imitations of Non-Vocal Sounds

    PubMed Central

    Houix, Olivier; Voisin, Frédéric; Misdariis, Nicolas; Susini, Patrick

    2016-01-01

    Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes at no surprise that human speakers use a lot of imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g. the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g. a car driving by) through a different system (the voice apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational “auditory sketches” (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to the best auditory sketches for most categories of sounds, and even to the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. Analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. These results offer perspectives for understanding how human listeners store and access long

  5. Seismic and Biological Sources of Ambient Ocean Sound

    NASA Astrophysics Data System (ADS)

    Freeman, Simon Eric

    Sound is the most efficient radiation in the ocean. Sounds of seismic and biological origin contain information about the underlying processes that created them. A single hydrophone records summary time-frequency information from the volume within acoustic range. Beamforming using a hydrophone array additionally produces azimuthal estimates of sound sources. A two-dimensional array and acoustic focusing produce an unambiguous two-dimensional 'image' of sources. This dissertation describes the application of these techniques in three cases. The first utilizes hydrophone arrays to investigate T-phases (water-borne seismic waves) in the Philippine Sea. Ninety T-phases were recorded over a 12-day period, implying that a greater number of seismic events occur than are detected by terrestrial seismic monitoring in the region. Observation of an azimuthally migrating T-phase suggests that reverberation of such sounds from bathymetric features can occur over megameter scales. In the second case, single-hydrophone recordings from coral reefs in the Line Islands archipelago reveal that local ambient reef sound is spectrally similar to sounds produced by small, hard-shelled benthic invertebrates in captivity. Time-lapse photography of the reef reveals an increase in benthic invertebrate activity at sundown, consistent with an increase in sound level. The dominant acoustic phenomenon on these reefs may thus originate from the interaction between a large number of small invertebrates and the substrate. Such sounds could be used to take a census of hard-shelled benthic invertebrates that are otherwise extremely difficult to survey. In the third case, a two-dimensional 'map' of sound production over a coral reef in the Hawaiian Islands was obtained using a two-dimensional hydrophone array. Heterogeneously distributed bio-acoustic sources were generally co-located with rocky reef areas. Acoustically dominant snapping shrimp were largely restricted to one location within the area surveyed.
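
    Beamforming of the kind mentioned above can be illustrated with a simple delay-and-sum steer over candidate azimuths, as in the sketch below. The array geometry, signal, and noise are synthetic, and the processing is a generic textbook beamformer, not the survey's actual chain.

        # Delay-and-sum beamforming on a linear hydrophone array: steer over
        # candidate azimuths and pick the direction with maximum output power.
        import numpy as np

        c, fs = 1500.0, 48000                      # sound speed (m/s), sample rate (Hz)
        n_hyd, spacing = 8, 0.5                    # hydrophone count and spacing (m), assumed
        positions = np.arange(n_hyd) * spacing

        # Synthetic 500 Hz plane wave arriving from 30 degrees off broadside, plus noise.
        rng = np.random.default_rng(1)
        t = np.arange(0, 0.1, 1 / fs)
        true_az = np.radians(30.0)
        delays = positions * np.sin(true_az) / c
        x = np.stack([np.sin(2 * np.pi * 500 * (t - d)) for d in delays])
        x += 0.3 * rng.standard_normal(x.shape)

        def steered_power(az):
            d = positions * np.sin(az) / c
            shifted = [np.interp(t, t - di, xi) for di, xi in zip(d, x)]  # undo each delay
            return np.mean(np.sum(shifted, axis=0) ** 2)

        azimuths = np.radians(np.linspace(-90, 90, 181))
        best = azimuths[np.argmax([steered_power(a) for a in azimuths])]
        print(f"estimated azimuth: {np.degrees(best):.1f} deg")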

  6. Divergent Human Cortical Regions for Processing Distinct Acoustic-Semantic Categories of Natural Sounds: Animal Action Sounds vs. Vocalizations

    PubMed Central

    Webster, Paula J.; Skipper-Kallal, Laura M.; Frum, Chris A.; Still, Hayley N.; Ward, B. Douglas; Lewis, James W.

    2017-01-01

    A major gap in our understanding of natural sound processing is knowledge of where or how in a cortical hierarchy differential processing leads to categorical perception at a semantic level. Here, using functional magnetic resonance imaging (fMRI) we sought to determine if and where cortical pathways in humans might diverge for processing action sounds vs. vocalizations as distinct acoustic-semantic categories of real-world sound when matched for duration and intensity. This was tested by using relatively less semantically complex natural sounds produced by non-conspecific animals rather than humans. Our results revealed a striking double-dissociation of activated networks bilaterally. This included a previously well described pathway preferential for processing vocalization signals directed laterally from functionally defined primary auditory cortices to the anterior superior temporal gyri, and a less well-described pathway preferential for processing animal action sounds directed medially to the posterior insulae. We additionally found that some of these regions and associated cortical networks showed parametric sensitivity to high-order quantifiable acoustic signal attributes and/or to perceptual features of the natural stimuli, such as the degree of perceived recognition or intentional understanding. Overall, these results supported a neurobiological theoretical framework for how the mammalian brain may be fundamentally organized to process acoustically and acoustic-semantically distinct categories of ethologically valid, real-world sounds. PMID:28111538

  7. Problems in nonlinear acoustics: Scattering of sound by sound, parametric receiving arrays, nonlinear effects in asymmetric sound beams and pulsed finite amplitude sound beams

    NASA Astrophysics Data System (ADS)

    Hamilton, Mark F.

    1989-08-01

    Four projects are discussed in this annual summary report, all of which involve basic research in nonlinear acoustics: Scattering of Sound by Sound, a theoretical study of two noncollinear Gaussian beams which interact to produce sum and difference frequency sound; Parametric Receiving Arrays, a theoretical study of parametric reception in a reverberant environment; Nonlinear Effects in Asymmetric Sound Beams, a numerical study of two-dimensional finite amplitude sound fields; and Pulsed Finite Amplitude Sound Beams, a numerical time domain solution of the KZK equation.
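
    For reference, the standard form of the KZK equation studied in the last two projects can be written as follows (the notation is the common one from the nonlinear-acoustics literature; the report's own notation may differ):

    ```latex
    \frac{\partial^{2} p}{\partial z\,\partial \tau}
      = \frac{c_0}{2}\,\nabla_{\!\perp}^{2} p
      + \frac{\delta}{2 c_0^{3}}\,\frac{\partial^{3} p}{\partial \tau^{3}}
      + \frac{\beta}{2 \rho_0 c_0^{3}}\,\frac{\partial^{2} p^{2}}{\partial \tau^{2}}
    ```

    Here p is the acoustic pressure, z the distance along the beam axis, τ = t − z/c₀ the retarded time, ∇⊥² the transverse Laplacian, δ the sound diffusivity, β the coefficient of nonlinearity, and ρ₀, c₀ the ambient density and small-signal sound speed.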

  8. The Sound Broadcasting System of the Bullfrog

    NASA Astrophysics Data System (ADS)

    Purgue, Alejandro P.

    1995-01-01

    This work presents a comparison across selected species of several aspects of the mechanism of sound broadcasting in anuran amphibians. These studies indicate that all anuran species studied to date broadcast their calls through structures that resonate at the dominant frequency in their calls. Measurements of the magnitude of the transfer function of the radiating structures show that the structures responsible for radiating the bulk of the energy present in the call vary depending on the species considered. Bullfrogs (Rana catesbeiana) radiate most of the energy (89% sound level) present in their calls through their eardrums. In this species the transfer function of the eardrum displays several peaks coincident in frequency and amplitude with the energy distribution observed in the mating and release call of the species. The vocal sac and gular area contribute energy only in the lower band (150 to 400 Hz) of the call. The ears are responsible for radiating additional frequency bands to the ones being radiated through the gular area and vocal sacs. This condition appears to be derived. In Rana pipiens the ears also broadcast a significant portion of the energy present in the call (63% sound level) but the frequencies of the aural emissions are a subset of those frequencies radiated through the vocal sac and gular area. Character optimization suggests that this is the primitive condition for ranid frogs. Finally, the barking treefrog (Hyla gratiosa) appears to use two different structures to radiate different portions of the call. The low frequency band appears to be preferentially radiated through the lungs while the high frequency components of the call are radiated through the vocal sac.

  9. Sound Naming in Neurodegenerative Disease

    ERIC Educational Resources Information Center

    Chow, Maggie L.; Brambati, Simona M.; Gorno-Tempini, Maria Luisa; Miller, Bruce L.; Johnson, Julene K.

    2010-01-01

    Modern cognitive neuroscientific theories and empirical evidence suggest that brain structures involved in movement may be related to action-related semantic knowledge. To test this hypothesis, we examined the naming of environmental sounds in patients with corticobasal degeneration (CBD) and progressive supranuclear palsy (PSP), two…

  10. Sound, Noise, and Vibration Control.

    ERIC Educational Resources Information Center

    Yerges, Lyle F.

    This working guide on the principles and techniques of controlling acoustical environment is discussed in the light of human, environmental and building needs. The nature of sound and its variables are defined. The acoustical environment and its many materials, spaces and functional requirements are described, with specific methods for planning,…

  11. Sound Assessment through Proper Policy

    ERIC Educational Resources Information Center

    Chappuis, Stephen J.

    2007-01-01

    Aligning a school board policy manual with the faculty handbook would be an excellent application of systems thinking in support of school district mission and goals. This article talks about changing sound assessment practice in accordance with the school's proper policy. One obstacle to changing assessment practice is the prevailing belief that…

  12. Sound control by temperature gradients

    NASA Astrophysics Data System (ADS)

    Sánchez-Dehesa, José; Angelov, Mitko I.; Cervera, Francisco; Cai, Liang-Wu

    2009-11-01

    This work reports experiments showing that airborne sound propagation can be controlled by temperature gradients. A system of two heated tubes is here used to demonstrate the collimation and focusing of an ultrasonic beam by the refractive index profile created by the temperature gradients existing around the tubes. Numerical simulations supporting the experimental findings are also reported.
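
    A minimal sketch of the physics being exploited: the speed of sound in air increases with temperature, so a temperature gradient acts as a graded acoustic refractive index that can bend and focus a beam. The 331.3 m/s reference value is the standard dry-air approximation; the example temperatures below are generic assumptions, not values from the experiment.

    ```python
    import numpy as np

    def sound_speed_air(T_celsius):
        """Approximate speed of sound in dry air [m/s] at temperature T [deg C]."""
        return 331.3 * np.sqrt(1.0 + T_celsius / 273.15)

    def refractive_index(T_celsius, T_ref=20.0):
        """Acoustic refractive index relative to air at the reference temperature:
        n = c_ref / c(T); hotter air gives a lower n, so rays bend toward cooler air."""
        return sound_speed_air(T_ref) / sound_speed_air(T_celsius)

    # Example: a gradient from 200 deg C near a heated tube down to 20 deg C ambient
    for T in (200.0, 100.0, 50.0, 20.0):
        print(f"T = {T:5.1f} C   c = {sound_speed_air(T):6.1f} m/s   n = {refractive_index(T):.3f}")
    ```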

  13. Demonstrating Sound Impulses in Pipes.

    ERIC Educational Resources Information Center

    Raymer, M. G.; Micklavzina, Stan

    1995-01-01

    Describes a simple, direct method to demonstrate the effects of the boundary conditions on sound impulse reflections in pipes. A graphical display of the results can be made using a pipe, cork, small hammer, microphone, and fast recording electronics. Explains the principles involved. (LZ)

  14. Rocket ozone sounding network data

    NASA Technical Reports Server (NTRS)

    Wright, D. U.; Krueger, A. J.; Foster, G. M.

    1978-01-01

    During the period December 1976 through February 1977, three regular monthly ozone profiles were measured at Wallops Flight Center; two special soundings were taken at Antigua, West Indies; and at the Churchill Research Range, monthly activities were initiated to establish a stratospheric ozone climatology. This report presents the data results and flight profiles for the period covered.

  15. Optical Measurement Of Sound Pressure

    NASA Technical Reports Server (NTRS)

    Trinh, Eugene H.; Gaspar, Mark; Leung, Emily W.

    1989-01-01

    Noninvasive technique does not disturb field it measures. Sound field deflects laser beam proportionally to its amplitude. Knife edge intercepts undeflected beam, allowing only deflected beam to reach photodetector. Apparatus calibrated by comparing output of photodetector with that of microphone. Optical technique valuable where necessary to measure in remote, inaccessible, or hostile environment or to avoid perturbation of measured region.

  16. Intercepting a sound without vision

    PubMed Central

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2017-01-01

    Visual information is extremely important to generate internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities might result impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early blind individuals’ performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by matching localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and a little bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939

  17. Feasibility of making sound power measurements in the NASA Langley V/STOL tunnel test section

    NASA Technical Reports Server (NTRS)

    Brooks, T. F.; Scheiman, J.; Silcox, R. J.

    1976-01-01

    Based on exploratory acoustic measurements in Langley's V/STOL wind tunnel, recommendations are made on the methodology for making sound power measurements of aircraft components in the closed tunnel test section. During airflow, tunnel self-noise and microphone flow-induced noise place restrictions on the amplitude and spectrum of the sound source to be measured. Models of aircraft components with high sound level sources, such as thrust engines and powered lift systems, seem likely candidates for acoustic testing.

  18. What makes for sound science?

    PubMed

    Costa, Fabrizio; Cramer, Grant; Finnegan, E Jean

    2017-11-10

    The inclusive threshold policy for publication in BMC journals including BMC Plant Biology means that editorial decisions are largely based on the soundness of the research presented rather than the novelty or potential impact of the work. Here we discuss what is required to ensure that research meets the requirement of scientific soundness. BMC Plant Biology and the other BMC-series journals (https://www.biomedcentral.com/p/the-bmc-series-journals) differ in policy from many other journals as they aim to provide a home for all publishable research. The inclusive threshold policy for publication means that editorial decisions are largely based on the soundness of the research presented rather than the novelty or potential impact of the work. The emphasis on scientific soundness (http://blogs.biomedcentral.com/bmcseriesblog/2016/12/05/vital-importance-inclusive/) rather than novelty or impact is important because it means that manuscripts that may be judged to be of low impact due to the nature of the study, as well as those reporting negative results or that largely replicate earlier studies, all of which can be difficult to publish elsewhere, are available to the research community. Here we discuss the importance of the soundness of research and provide some basic guidelines to assist authors to determine whether their research is appropriate for submission to BMC Plant Biology. Prior to a research article being sent out for review, the handling editor will first determine whether the research presented is scientifically valid. To be valid, the research must address a question of biological significance using suitable methods and analyses, and must follow community-agreed standards relevant to the research field.

  19. Geometric Constraints on Human Speech Sound Inventories

    PubMed Central

    Dunbar, Ewan; Dupoux, Emmanuel

    2016-01-01

    We investigate the idea that the languages of the world have developed coherent sound systems in which having one sound increases or decreases the chances of having certain other sounds, depending on shared properties of those sounds. We investigate the geometries of sound systems that are defined by the inherent properties of sounds. We document three typological tendencies in sound system geometries: economy, a tendency for the differences between sounds in a system to be definable on a relatively small number of independent dimensions; local symmetry, a tendency for sound systems to have relatively large numbers of pairs of sounds that differ only on one dimension; and global symmetry, a tendency for sound systems to be relatively balanced. The finding of economy corroborates previous results; the two symmetry properties have not been previously documented. We also investigate the relation between the typology of inventory geometries and the typology of individual sounds, showing that the frequency distribution with which individual sounds occur across languages works in favor of both local and global symmetry. PMID:27462296
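
    One simple way to operationalize the "local symmetry" tendency described above is to count pairs of sounds in an inventory that differ on exactly one feature dimension. The sketch below does this for a hypothetical binary feature coding; it illustrates the idea and is not the authors' actual metric.

    ```python
    import numpy as np

    def local_symmetry_count(inventory):
        """Count pairs of sounds that differ on exactly one feature dimension.
        `inventory` is an (n_sounds, n_features) matrix of 0/1 feature values."""
        inventory = np.asarray(inventory)
        count = 0
        for i in range(len(inventory)):
            for j in range(i + 1, len(inventory)):
                if np.sum(inventory[i] != inventory[j]) == 1:
                    count += 1
        return count

    # Hypothetical 3-feature coding of /p t b d/ as [labial, voiced, coronal]
    inventory = [[1, 0, 0],   # /p/
                 [0, 0, 1],   # /t/
                 [1, 1, 0],   # /b/
                 [0, 1, 1]]   # /d/
    print(local_symmetry_count(inventory))  # /p/-/b/ and /t/-/d/ differ only in voicing -> 2
    ```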

  20. Threshold for Onset of Injury in Chinook Salmon from Exposure to Impulsive Pile Driving Sounds

    PubMed Central

    Halvorsen, Michele B.; Casper, Brandon M.; Woodley, Christa M.; Carlson, Thomas J.; Popper, Arthur N.

    2012-01-01

    The risk of effects to fishes and other aquatic life from impulsive sound produced by activities such as pile driving and seismic exploration is increasing throughout the world, particularly with the increased exploitation of oceans for energy production. At the same time, there are few data that provide insight into the effects of these sounds on fishes. The goal of this study was to provide quantitative data to define the levels of impulsive sound that could result in the onset of barotrauma to fish. A High Intensity Controlled Impedance Fluid filled wave Tube was developed that enabled laboratory simulation of high-energy impulsive sounds that were characteristic of aquatic far-field, plane-wave acoustic conditions. The sounds used were based upon the impulsive sounds generated by an impact hammer striking a steel shell pile. Neutrally buoyant juvenile Chinook salmon (Oncorhynchus tshawytscha) were exposed to impulsive sounds and subsequently evaluated for barotrauma injuries. Observed injuries ranged from mild hematomas at the lowest sound exposure levels to organ hemorrhage at the highest sound exposure levels. Frequencies of observed injuries were used to compute a biological response weighted index (RWI) to evaluate the physiological impact of injuries at the different exposure levels. As single strike and cumulative sound exposure levels (SELss and SELcum, respectively) increased, RWI values increased. Based on the results, tissue damage associated with adverse physiological costs occurred when the RWI was greater than 2. In terms of sound exposure levels, an RWI of 2 was achieved for 1920 strikes by 177 dB re 1 µPa²·s SELss yielding a SELcum of 210 dB re 1 µPa²·s, and for 960 strikes by 180 dB re 1 µPa²·s SELss yielding a SELcum of 210 dB re 1 µPa²·s. These metrics define thresholds for onset of injury in juvenile Chinook salmon. PMID:22745695
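
    The cumulative levels quoted above follow from energy summation of equal single-strike exposures, SELcum = SELss + 10·log10(N). A minimal check of the two reported exposure conditions (the helper name is illustrative, not from the study):

    ```python
    import math

    def sel_cum(sel_ss_db, n_strikes):
        """Cumulative sound exposure level for n equal-energy strikes,
        SELcum = SELss + 10*log10(n), in dB re 1 uPa^2*s."""
        return sel_ss_db + 10.0 * math.log10(n_strikes)

    print(round(sel_cum(177.0, 1920), 1))  # ~209.8 dB, i.e. ~210 dB re 1 uPa^2*s
    print(round(sel_cum(180.0,  960), 1))  # ~209.8 dB, i.e. ~210 dB re 1 uPa^2*s
    ```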

  1. The NASA-LeRC wind turbine sound prediction code

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1981-01-01

    Development of the wind turbine sound prediction code began as part of an effort to understand and reduce the noise generated by Mod-1. Tone sound levels predicted with this code are in good agreement with measured data taken in the vicinity of the Mod-1 wind turbine (less than 2 rotor diameters). Comparison in the far field indicates that propagation effects due to terrain and atmospheric conditions may amplify the actual sound levels by 6 dB. Parametric analysis using the code shows that the predominant contributors to Mod-1 rotor noise are (1) the velocity deficit in the wake of the support tower, (2) the high rotor speed, and (3) off-optimum operation.

  2. The NASA-LeRC wind turbine sound prediction code

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1981-01-01

    Since regular operation of the DOE/NASA MOD-1 wind turbine began in October 1979, about 10 nearby households have complained of noise from the machine. Development of the NASA-LeRC wind turbine sound prediction code began in May 1980 as part of an effort to understand and reduce the noise generated by MOD-1. Tone sound levels predicted with this code are in generally good agreement with measured data taken in the vicinity of the MOD-1 wind turbine (less than 2 rotor diameters). Comparison in the far field indicates that propagation effects due to terrain and atmospheric conditions may be amplifying the actual sound levels by about 6 dB. Parametric analysis using the code has shown that the predominant contributors to MOD-1 rotor noise are: (1) the velocity deficit in the wake of the support tower; (2) the high rotor speed; and (3) off-optimum operation.

  3. Annoyance resulting from intrusion of aircraft sounds upon various activities

    NASA Technical Reports Server (NTRS)

    Gunn, W. J.; Shepherd, W. T.; Fletcher, J. L.

    1975-01-01

    An experiment was conducted in which subjects were engaged in TV viewing, telephone listening, or reverie (no activity) for a 1/2-hour session. During the session, they were exposed to a series of recorded aircraft sounds at the rate of one flight every 2 minutes. Within each session, four levels of flyover noise, separated by dB increments, were presented several times in a Latin Square balanced sequence. The peak level of the noisiest flyover in any session was fixed at 95, 90, 85, 75, or 70 dBA. At the end of the test session, subjects recorded their responses to the aircraft sounds, using a bipolar scale which covered the range from very pleasant to extremely annoying. Responses to aircraft noises were found to be significantly affected by the particular activity in which the subjects were engaged. Not all subjects found the aircraft sounds to be annoying.

  4. Using the sound of nuclear energy

    DOE PAGES

    Garrett, Steven; Smith, James; Smith, Robert; ...

    2016-08-01

    The generation of sound by heat has been documented as an “acoustical curiosity” since a Buddhist monk reported the loud tone generated by a ceremonial rice-cooker in his diary, in 1568. Over the last four decades, significant progress has been made in understanding “thermoacoustic processes,” enabling the design of thermoacoustic engines and refrigerators. Motivated by the Fukushima nuclear reactor disaster, we have developed and tested a thermoacoustic engine that exploits the energy-rich conditions in the core of a nuclear reactor to provide core condition information to the operators without a need for external electrical power. The heat engine is self-powered and can wirelessly transmit the temperature and reactor power level by generation of a pure tone which can be detected outside the reactor. We report here the first use of a fission-powered thermoacoustic engine capable of serving as a performance and safety sensor in the core of a research reactor and present data from the hydrophones in the coolant (far from the core) and an accelerometer attached to a structure outside the reactor. These measurements confirmed that the frequency of the sound produced indicates the reactor’s coolant temperature and that the amplitude (above an onset threshold) is related to the reactor’s operating power level. Furthermore, these signals can be detected even in the presence of substantial background noise generated by the reactor’s fluid pumps.
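
    A minimal sketch of why the tone frequency can track coolant temperature, assuming an idealized half-wavelength resonator filled with an ideal gas. The gas properties and resonator length below are hypothetical placeholders, not the engine's actual design or calibration values.

    ```python
    import math

    GAMMA = 1.4    # heat capacity ratio (assumed diatomic fill gas)
    R = 8.314      # universal gas constant [J/(mol K)]
    M = 0.028      # molar mass [kg/mol] (nitrogen-like gas, assumed)
    L = 0.15       # resonator length [m] (hypothetical)

    def frequency_from_temperature(T_kelvin):
        c = math.sqrt(GAMMA * R * T_kelvin / M)   # speed of sound in the fill gas
        return c / (2.0 * L)                      # fundamental of a half-wave resonator

    def temperature_from_frequency(f_hz):
        c = 2.0 * L * f_hz
        return c * c * M / (GAMMA * R)            # invert the relation above

    print(round(frequency_from_temperature(573.15)))  # tone expected near 300 deg C
    print(round(temperature_from_frequency(1500.0)))  # temperature implied by a 1.5 kHz tone
    ```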

  5. Using the sound of nuclear energy

    SciTech Connect

    Garrett, Steven; Smith, James; Smith, Robert

    The generation of sound by heat has been documented as an “acoustical curiosity” since a Buddhist monk reported the loud tone generated by a ceremonial rice-cooker in his diary, in 1568. Over the last four decades, significant progress has been made in understanding “thermoacoustic processes,” enabling the design of thermoacoustic engines and refrigerators. Motivated by the Fukushima nuclear reactor disaster, we have developed and tested a thermoacoustic engine that exploits the energy-rich conditions in the core of a nuclear reactor to provide core condition information to the operators without a need for external electrical power. The heat engine is self-powered and can wirelessly transmit the temperature and reactor power level by generation of a pure tone which can be detected outside the reactor. We report here the first use of a fission-powered thermoacoustic engine capable of serving as a performance and safety sensor in the core of a research reactor and present data from the hydrophones in the coolant (far from the core) and an accelerometer attached to a structure outside the reactor. These measurements confirmed that the frequency of the sound produced indicates the reactor’s coolant temperature and that the amplitude (above an onset threshold) is related to the reactor’s operating power level. Furthermore, these signals can be detected even in the presence of substantial background noise generated by the reactor’s fluid pumps.

  6. Sound propagation from a ridge wind turbine across a valley.

    PubMed

    Van Renterghem, Timothy

    2017-04-13

    Sound propagation outdoors can be strongly affected by ground topography. The existence of hills and valleys between a source and receiver can lead to the shielding or focusing of sound waves. Such effects can result in significant variations in received sound levels. In addition, wind speed and air temperature gradients in the atmospheric boundary layer also play an important role. All of the foregoing factors can become especially important for the case of wind turbines located on a ridge overlooking a valley. Ridges are often selected for wind turbines in order to increase their energy capture potential through the wind speed-up effects often experienced in such locations. In this paper, a hybrid calculation method is presented to model such a case, relying on an analytical solution for sound diffraction around an impedance cylinder and the conformal mapping (CM) Green's function parabolic equation (GFPE) technique. The various aspects of the model have been successfully validated against alternative prediction methods. Example calculations with this hybrid analytical-CM-GFPE model show the complex sound pressure level distribution across the valley and the effect of valley ground type. The proposed method has the potential to include the effect of refraction through the inclusion of complex wind and temperature fields, although this aspect has been highly simplified in the current simulations. This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).

  7. Interaction of Sound from Supersonic Jets with Nearby Structures

    NASA Technical Reports Server (NTRS)

    Fenno, C. C., Jr.; Bayliss, A.; Maestrello, L.

    1997-01-01

    A model of sound generated in an ideally expanded supersonic (Mach 2) jet is solved numerically. Two configurations are considered: (1) a free jet and (2) an installed jet with a nearby array of flexible aircraft-type panels. In the latter case the panels vibrate in response to loading by sound from the jet and the full coupling between the panels and the jet is considered, accounting for panel response and radiation. The long time behavior of the jet is considered. Results for the near field and far field disturbance, the far field pressure, and the vibration of and radiation from the panels are presented. Panel response crucially depends on the location of the panels. Panels located upstream of the Mach cone are subject to a low level, nearly continuous spectral excitation and consequently exhibit a low level, relatively continuous spectral response. In contrast, panels located within the Mach cone are subject to a significant loading due to the intense Mach wave radiation of sound and exhibit a large, relatively peaked spectral response centered around the peak frequency of sound radiation. The panels radiate in a similar fashion to the sound in the jet, in particular exhibiting a relatively peaked spectral response at approximately the Mach angle from the bounding wall.

  8. Urban sound energy reduction by means of sound barriers

    NASA Astrophysics Data System (ADS)

    Iordache, Vlad; Ionita, Mihai Vlad

    2018-02-01

    In the urban environment, various heating, ventilation, and air conditioning appliances designed to maintain indoor comfort become vectors of acoustic pollution due to the sound energy they produce. Acoustic barriers are the recommended method for reducing sound energy in the urban environment. The current method for sizing these acoustic barriers is laborious and impractical for arbitrary 3D locations of the noisy equipment and the reception point. In this study we develop, based on the same method, a new simplified tool for acoustic barrier sizing that retains the precision of the classical method. Abacuses for acoustic barrier sizing are built that can be used for different 3D locations of the source and reception points, for several frequencies, and for several acoustic barrier heights. The case study presented in the article confirms the rapidity and ease of use of these abacuses in the design of acoustic barriers.
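
    For orientation, a widely used curve fit to Maekawa's barrier chart (a classical approximation, not the abacus method developed in the article) estimates the insertion loss of a thin screen from the Fresnel number N = 2δ/λ, where δ is the extra path length over the barrier edge. The helper below is an illustrative sketch with assumed values.

    ```python
    import math

    def maekawa_insertion_loss(path_diff_m, frequency_hz, c=343.0):
        """Approximate barrier insertion loss [dB] via IL = 10*log10(3 + 20*N)."""
        wavelength = c / frequency_hz
        N = 2.0 * path_diff_m / wavelength        # Fresnel number
        if N <= 0.0:
            return 0.0                            # receiver not shielded; no screening assumed
        return 10.0 * math.log10(3.0 + 20.0 * N)

    # Insertion loss for a 0.5 m path difference at octave-band centre frequencies
    for f in (125, 250, 500, 1000, 2000):
        print(f, "Hz:", round(maekawa_insertion_loss(0.5, f), 1), "dB")
    ```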

  9. Dredged Material Management in Long Island Sound

    EPA Pesticide Factsheets

    Information on Western and Central Long Island Sound Dredged Material Disposal Sites including the Dredged Material Management Plan and Regional Dredging Team. Information regarding the Eastern Long Island Sound Selected Site including public meetings.

  10. Faster quantum walk search on a weighted graph

    NASA Astrophysics Data System (ADS)

    Wong, Thomas G.

    2015-09-01

    A randomly walking quantum particle evolving by Schrödinger's equation searches for a unique marked vertex on the "simplex of complete graphs" in time Θ(N^{3/4}). We give a weighted version of this graph that preserves vertex transitivity, and we show that the time to search on it can be reduced to nearly Θ(√N). To prove this, we introduce two extensions to degenerate perturbation theory: an adjustment that distinguishes the weights of the edges and a method to determine how precisely the jumping rate of the quantum walk must be chosen.

  11. Usefulness of bowel sound auscultation: a prospective evaluation.

    PubMed

    Felder, Seth; Margel, David; Murrell, Zuri; Fleshner, Phillip

    2014-01-01

    Although the auscultation of bowel sounds is considered an essential component of an adequate physical examination, its clinical value remains largely unstudied and subjective. The aim of this study was to determine whether an accurate diagnosis of normal controls, mechanical small bowel obstruction (SBO), or postoperative ileus (POI) is possible based on bowel sound characteristics. Recordings of bowel sounds were prospectively collected from patients with normal gastrointestinal motility, from patients with SBO diagnosed by computed tomography and confirmed at surgery, and from patients with POI diagnosed by clinical symptoms and a computed tomography scan without a transition point. Study clinicians were instructed to categorize the patient recording as normal, obstructed, ileus, or not sure. Using an electronic stethoscope, bowel sounds of healthy volunteers (n = 177), patients with SBO (n = 19), and patients with POI (n = 15) were recorded. A total of 10 recordings randomly selected from each category, with 15 of the recordings duplicated, were replayed through speakers to surgical and internal medicine clinicians (n = 41) blinded to the clinical scenario. The sensitivity, positive predictive value, and intra-rater variability were determined based on the clinician's ability to properly categorize the bowel sound recording when blinded to additional clinical information. Secondary outcomes were the clinician's perceived level of expertise in interpreting bowel sounds. The overall sensitivity for normal, SBO, and POI recordings was 32%, 22%, and 22%, respectively. The positive predictive value of normal, SBO, and POI recordings was 23%, 28%, and 44%, respectively. Intra-rater reliability of duplicated recordings was 59%, 52%, and 53% for normal, SBO, and POI, respectively. No statistically significant differences were found between the surgical and internal medicine clinicians for sensitivity, positive predictive value, or intra-rater variability. Overall, 44% of clinicians reported that they rarely listened
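
    The accuracy figures above are standard diagnostic measures. The small helper below (illustrative only, not the study's analysis code, with hypothetical tallies) shows how sensitivity and positive predictive value are computed from classification counts.

    ```python
    def sensitivity_and_ppv(true_positive, false_negative, false_positive):
        """Sensitivity = TP / (TP + FN); positive predictive value = TP / (TP + FP)."""
        sensitivity = true_positive / (true_positive + false_negative)
        ppv = true_positive / (true_positive + false_positive)
        return sensitivity, ppv

    # Hypothetical tallies for one recording category
    print(sensitivity_and_ppv(true_positive=9, false_negative=19, false_positive=23))
    ```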

  12. 33 CFR 62.47 - Sound signals.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...

  13. 47 CFR 74.603 - Sound channels.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Sound channels. 74.603 Section 74.603... Stations § 74.603 Sound channels. (a) The frequencies listed in § 74.602(a) may be used for the simultaneous transmission of the picture and sound portions of TV broadcast programs and for cue and order...

  14. 47 CFR 74.603 - Sound channels.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Sound channels. 74.603 Section 74.603... Stations § 74.603 Sound channels. (a) The frequencies listed in § 74.602(a) may be used for the simultaneous transmission of the picture and sound portions of TV broadcast programs and for cue and order...

  15. 47 CFR 74.603 - Sound channels.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Sound channels. 74.603 Section 74.603... Stations § 74.603 Sound channels. (a) The frequencies listed in § 74.602(a) may be used for the simultaneous transmission of the picture and sound portions of TV broadcast programs and for cue and order...

  16. Sound-Symbolism Boosts Novel Word Learning

    ERIC Educational Resources Information Center

    Lockwood, Gwilym; Dingemanse, Mark; Hagoort, Peter

    2016-01-01

    The existence of sound-symbolism (or a non-arbitrary link between form and meaning) is well-attested. However, sound-symbolism has mostly been investigated with nonwords in forced choice tasks, neither of which are representative of natural language. This study uses ideophones, which are naturally occurring sound-symbolic words that depict sensory…

  17. Evaluating Warning Sound Urgency with Reaction Times

    ERIC Educational Resources Information Center

    Suied, Clara; Susini, Patrick; McAdams, Stephen

    2008-01-01

    It is well-established that subjective judgments of perceived urgency of alarm sounds can be affected by acoustic parameters. In this study, the authors investigated an objective measurement, the reaction time (RT), to test the effectiveness of temporal parameters of sounds in the context of warning sounds. Three experiments were performed using a…

  18. Sound production in the clownfish Amphiprion clarkii.

    PubMed

    Parmentier, Eric; Colleye, Orphal; Fine, Michael L; Frédérich, Bruno; Vandewalle, Pierre; Herrel, Anthony

    2007-05-18

    Although clownfish sounds were recorded as early as 1930, the mechanism of sound production has remained obscure. Yet, clownfish are prolific "singers" that produce a wide variety of sounds, described as "chirps" and "pops" in both reproductive and agonistic behavioral contexts. Here, we describe the sonic mechanism of the clownfish Amphiprion clarkii.

  19. A Lexical Analysis of Environmental Sound Categories

    ERIC Educational Resources Information Center

    Houix, Olivier; Lemaitre, Guillaume; Misdariis, Nicolas; Susini, Patrick; Urdapilleta, Isabel

    2012-01-01

    In this article we report on listener categorization of meaningful environmental sounds. A starting point for this study was the phenomenological taxonomy proposed by Gaver (1993b). In the first experimental study, 15 participants classified 60 environmental sounds and indicated the properties shared by the sounds in each class. In a second…

  20. Bubbles That Change the Speed of Sound

    ERIC Educational Resources Information Center

    Planinsic, Gorazd; Etkina, Eugenia

    2012-01-01

    The influence of bubbles on sound has long attracted the attention of physicists. In his 1920 book Sir William Bragg described sound absorption caused by foam in a glass of beer tapped by a spoon. Frank S. Crawford described and analyzed the change in the pitch of sound in a similar experiment and named the phenomenon the "hot chocolate effect."…

  1. The Early Years: Becoming Attuned to Sound

    ERIC Educational Resources Information Center

    Ashbrook, Peggy

    2014-01-01

    Exploration of making and changing sounds is part of the first-grade performance expectation 1-PS4-1, "Plan and conduct investigations to provide evidence that vibrating materials can make sound and that sound can make materials vibrate" (NGSS Lead States 2013, p. 10; see Internet Resource). Early learning experiences build toward…

  2. 33 CFR 62.47 - Sound signals.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...

  3. 47 CFR 74.603 - Sound channels.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Sound channels. 74.603 Section 74.603... Stations § 74.603 Sound channels. (a) The frequencies listed in § 74.602(a) may be used for the simultaneous transmission of the picture and sound portions of TV broadcast programs and for cue and order...

  4. 47 CFR 74.603 - Sound channels.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Sound channels. 74.603 Section 74.603... Stations § 74.603 Sound channels. (a) The frequencies listed in § 74.602(a) may be used for the simultaneous transmission of the picture and sound portions of TV broadcast programs and for cue and order...

  5. 33 CFR 62.47 - Sound signals.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 1 2012-07-01 2012-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...

  6. 33 CFR 62.47 - Sound signals.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 1 2014-07-01 2014-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...

  7. 33 CFR 62.47 - Sound signals.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 1 2013-07-01 2013-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...

  8. A Simple Experiment to Explore Standing Waves in a Flexible Corrugated Sound Tube

    NASA Astrophysics Data System (ADS)

    Amorim, Maria Eva; Sousa, Teresa Delmira; Carvalho, P. Simeão; Sousa, Adriano Sampaioe

    2011-09-01

    Sound tubes, pipes, and singing rods are used as musical instruments and as toys to perform amusing experiments. In particular, corrugated tubes present unique characteristics with respect to the sounds they can produce; that is why they have been studied so intensively, both at theoretical and experimental levels.1-4 Experimental studies usually involve expensive and sophisticated equipment that is out of reach of school laboratory facilities.3-6 In this paper we show how to investigate quantitatively the sounds produced by a flexible sound tube corrugated on the inside by using educational equipment readily available in school laboratories, such as the oscilloscope, the microphone, the anemometer, and the air pump. We show that it is possible for students to study the discontinuous spectrum of sounds produced by a flexible corrugated tube and go even further, computing the speed of sound in air with a simple experimental procedure.
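
    A minimal sketch of the speed-of-sound computation students can perform with such measurements: for a tube open at both ends the resonances are evenly spaced by v/(2L), so v follows from the mean spacing of the measured peaks. The tube length and peak frequencies below are hypothetical example values, not data from the article.

    ```python
    import numpy as np

    def speed_of_sound_from_resonances(peak_freqs_hz, tube_length_m):
        """Estimate the speed of sound from resonance peaks of an open-open tube,
        where f_n = n * v / (2 * L), so adjacent peaks are separated by v / (2 * L)."""
        spacings = np.diff(np.sort(peak_freqs_hz))
        return 2.0 * tube_length_m * np.mean(spacings)

    # Hypothetical peaks read off the microphone spectrum for a 0.74 m tube
    peaks = [232.0, 466.0, 697.0, 930.0]
    print(round(speed_of_sound_from_resonances(peaks, 0.74), 1), "m/s")
    ```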

  9. 33 CFR 167.1702 - In Prince William Sound: Prince William Sound Traffic Separation Scheme.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... William Sound Traffic Separation Scheme. 167.1702 Section 167.1702 Navigation and Navigable Waters COAST... SEPARATION SCHEMES Description of Traffic Separation Schemes and Precautionary Areas Pacific West Coast § 167.1702 In Prince William Sound: Prince William Sound Traffic Separation Scheme. The Prince William Sound...

  10. 33 CFR 167.1702 - In Prince William Sound: Prince William Sound Traffic Separation Scheme.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... William Sound Traffic Separation Scheme. 167.1702 Section 167.1702 Navigation and Navigable Waters COAST... SEPARATION SCHEMES Description of Traffic Separation Schemes and Precautionary Areas Pacific West Coast § 167.1702 In Prince William Sound: Prince William Sound Traffic Separation Scheme. The Prince William Sound...

  11. 33 CFR 167.1702 - In Prince William Sound: Prince William Sound Traffic Separation Scheme.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... William Sound Traffic Separation Scheme. 167.1702 Section 167.1702 Navigation and Navigable Waters COAST... SEPARATION SCHEMES Description of Traffic Separation Schemes and Precautionary Areas Pacific West Coast § 167.1702 In Prince William Sound: Prince William Sound Traffic Separation Scheme. The Prince William Sound...

  12. 33 CFR 167.1702 - In Prince William Sound: Prince William Sound Traffic Separation Scheme.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... William Sound Traffic Separation Scheme. 167.1702 Section 167.1702 Navigation and Navigable Waters COAST... SEPARATION SCHEMES Description of Traffic Separation Schemes and Precautionary Areas Pacific West Coast § 167.1702 In Prince William Sound: Prince William Sound Traffic Separation Scheme. The Prince William Sound...

  13. 33 CFR 167.1702 - In Prince William Sound: Prince William Sound Traffic Separation Scheme.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... William Sound Traffic Separation Scheme. 167.1702 Section 167.1702 Navigation and Navigable Waters COAST... SEPARATION SCHEMES Description of Traffic Separation Schemes and Precautionary Areas Pacific West Coast § 167.1702 In Prince William Sound: Prince William Sound Traffic Separation Scheme. The Prince William Sound...

  14. Sound Beams with Shockwave Pulses

    NASA Astrophysics Data System (ADS)

    Enflo, B. O.

    2000-11-01

    The beam equation for a sound beam in a diffusive medium, called the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, has a class of solutions, which are power series in the transverse variable with the terms given by a solution of a generalized Burgers’ equation. A free parameter in this generalized Burgers’ equation can be chosen so that the equation describes an N-wave which does not decay. If the beam source has the form of a spherical cap, then a beam with a preserved shock can be prepared. This is done by satisfying an inequality containing the spherical radius, the N-wave pulse duration, the N-wave pulse amplitude, and the sound velocity in the fluid.
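
    For context, the classical thermoviscous Burgers equation from which such generalized forms are built can be written as follows; the paper's specific variable-coefficient version and its free parameter are not reproduced here.

    ```latex
    \frac{\partial p}{\partial z}
      = \frac{\beta}{\rho_0 c_0^{3}}\, p\,\frac{\partial p}{\partial \tau}
      + \frac{\delta}{2 c_0^{3}}\,\frac{\partial^{2} p}{\partial \tau^{2}}
    ```

    Here p is the acoustic pressure of a plane progressive wave, z the propagation distance, τ the retarded time, β the coefficient of nonlinearity, δ the sound diffusivity, and ρ₀, c₀ the ambient density and sound speed.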

  15. Sparse representation of Gravitational Sound

    NASA Astrophysics Data System (ADS)

    Rebollo-Neira, Laura; Plastino, A.

    2018-03-01

    Gravitational Sound clips produced by the Laser Interferometer Gravitational-Wave Observatory (LIGO) and the Massachusetts Institute of Technology (MIT) are considered within the particular context of data reduction. We advance a procedure to this effect and show that these types of signals can be approximated with high quality using significantly fewer elementary components than those required within the standard orthogonal basis framework. Furthermore, a local measure of sparsity is shown to render meaningful information about the variation of a signal along time, by generating a set of local sparsity values which is much smaller than the dimension of the signal. This point is further illustrated by recourse to a more complex signal, generated by Milde Science Communication to disseminate Gravitational Sound in the form of a ring tone.
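
    A minimal sketch of the kind of sparse approximation described above, using plain greedy matching pursuit over a toy cosine dictionary. It illustrates the general idea of representing a signal with far fewer elementary components than samples; it is not the authors' dictionary or algorithm.

    ```python
    import numpy as np

    def matching_pursuit(signal, dictionary, n_atoms):
        """Greedily approximate `signal` with `n_atoms` columns of a unit-norm `dictionary`.
        Returns the coefficient vector and the sparse approximation."""
        residual = signal.astype(float).copy()
        coeffs = np.zeros(dictionary.shape[1])
        for _ in range(n_atoms):
            correlations = dictionary.T @ residual
            k = int(np.argmax(np.abs(correlations)))   # best-matching atom
            coeffs[k] += correlations[k]
            residual -= correlations[k] * dictionary[:, k]
        return coeffs, dictionary @ coeffs

    # Toy dictionary: unit-norm cosine atoms (a stand-in for the redundant
    # dictionaries used for audio); the signal is built from just two of them.
    n = 256
    t = np.arange(n)
    atoms = np.stack([np.cos(2 * np.pi * f * t / n) for f in range(1, 33)], axis=1)
    atoms /= np.linalg.norm(atoms, axis=0)
    signal = 3.0 * atoms[:, 4] + 0.5 * atoms[:, 20]
    coeffs, approx = matching_pursuit(signal, atoms, n_atoms=2)
    print(np.nonzero(coeffs)[0], float(np.linalg.norm(signal - approx)))
    ```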

  16. Thermoacoustic sound projector: exceeding the fundamental efficiency of carbon nanotubes.

    PubMed

    Aliev, Ali E; Codoluto, Daniel; Baughman, Ray H; Ovalle-Robles, Raquel; Inoue, Kanzan; Romanov, Stepan A; Nasibulin, Albert G; Kumar, Prashant; Priya, Shashank; Mayo, Nathanael K; Blottman, John B

    2018-08-10

    The combination of smooth, continuous sound spectra produced by a sound source having no vibrating parts, a nanoscale thickness of a flexible active layer, and the feasibility of creating large, conformal projectors provokes interest in thermoacoustic phenomena. However, at low frequencies, the sound pressure level (SPL) and the sound generation efficiency of an open carbon nanotube sheet (CNTS) are low. In addition, the nanoscale thickness of fragile heating elements, their high sensitivity to the environment, and the high surface temperatures practical for thermoacoustic sound generation necessitate protective encapsulation of a freestanding CNTS in inert gases. Encapsulation provides the desired increase of sound pressure towards low frequencies. However, the protective enclosure restricts heat dissipation from the resistively heated CNTS and the interior of the encapsulated device. Here, the heat dissipation issue is addressed by short pulse excitations of the CNTS. An overall increase of energy conversion efficiency by more than four orders of magnitude (from 10⁻⁵ to 0.1) and an SPL of 120 dB re 20 μPa @ 1 m in air and 170 dB re 1 μPa @ 1 m in water were demonstrated. The short pulse excitation provides a stable linear increase of output sound pressure with substantially increased input power density (>2.5 W cm⁻²). We provide an extensive experimental study of pulse excitations in different thermodynamic regimes for freestanding CNTSs with varying thermal inertias (single-walled and multiwalled, with varying diameters and numbers of superimposed sheet layers) in vacuum and in air. The acoustical and geometrical parameters providing further enhancement of energy conversion efficiency are discussed.
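
    For orientation, the two decibel references quoted above (20 µPa in air, 1 µPa in water) enter through SPL = 20·log10(p/p_ref); the pressures in this small check are chosen only to reproduce the quoted levels.

    ```python
    import math

    def spl_db(pressure_pa, reference_pa):
        """Sound pressure level: SPL = 20 * log10(p / p_ref)."""
        return 20.0 * math.log10(pressure_pa / reference_pa)

    print(round(spl_db(20.0,  20e-6)))   # 20 Pa in air     -> 120 dB re 20 uPa
    print(round(spl_db(316.2, 1e-6)))    # ~316 Pa in water -> ~170 dB re 1 uPa
    ```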

  17. Hierarchical neurocomputations underlying concurrent sound segregation: connecting periphery to percept.

    PubMed

    Bidelman, Gavin M; Alain, Claude

    2015-02-01

    Natural soundscapes often contain multiple sound sources at any given time. Numerous studies have reported that in human observers, the perception and identification of concurrent sounds is paralleled by specific changes in cortical event-related potentials (ERPs). Although these studies provide a window into the cerebral mechanisms governing sound segregation, little is known about the subcortical neural architecture and hierarchy of neurocomputations that lead to this robust perceptual process. Using computational modeling, scalp-recorded brainstem/cortical ERPs, and human psychophysics, we demonstrate that a primary cue for sound segregation, i.e., harmonicity, is encoded at the auditory nerve level within tens of milliseconds after the onset of sound and is maintained, largely untransformed, in phase-locked activity of the rostral brainstem. As then indexed by auditory cortical responses, (in)harmonicity is coded in the signature and magnitude of the cortical object-related negativity (ORN) response (150-200 ms). The salience of the resulting percept is then captured in a discrete, categorical-like coding scheme by a late negativity response (N5; ~500 ms latency), just prior to the elicitation of a behavioral judgment. Subcortical activity correlated with cortical evoked responses such that weaker phase-locked brainstem responses (lower neural harmonicity) generated larger ORN amplitude, reflecting the cortical registration of multiple sound objects. Studying multiple brain indices simultaneously helps illuminate the mechanisms and time-course of neural processing underlying concurrent sound segregation and may lead to further development and refinement of physiologically driven models of auditory scene analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. The contribution of sound intensity in vocal emotion perception: behavioral and electrophysiological evidence.

    PubMed

    Chen, Xuhai; Yang, Jianfeng; Gan, Shuzhen; Yang, Yufang

    2012-01-01

    Although its role is frequently stressed in the acoustic profile for vocal emotion, sound intensity is frequently regarded as a control parameter in neurocognitive studies of vocal emotion, leaving its role and neural underpinnings unclear. To investigate these issues, we asked participants to rate the anger level of neutral and angry prosodies before and after sound intensity modification in Experiment 1, and recorded electroencephalogram (EEG) for mismatching emotional prosodies with and without sound intensity modification and for matching emotional prosodies while participants performed emotional feature or sound intensity congruity judgment in Experiment 2. It was found that sound intensity modification had a significant effect on the rating of anger level for angry prosodies, but not for neutral ones. Moreover, mismatching emotional prosodies, relative to matching ones, induced enhanced N2/P3 complex and theta band synchronization irrespective of sound intensity modification and task demands. However, mismatching emotional prosodies with reduced sound intensity showed prolonged peak latency and decreased amplitude in N2/P3 complex and smaller theta band synchronization. These findings suggest that though it cannot categorically affect emotionality conveyed in emotional prosodies, sound intensity contributes to emotional significance quantitatively, implying that sound intensity should not simply be taken as a control parameter and its unique role needs to be specified in vocal emotion studies.

  19. The Contribution of Sound Intensity in Vocal Emotion Perception: Behavioral and Electrophysiological Evidence

    PubMed Central

    Chen, Xuhai; Yang, Jianfeng; Gan, Shuzhen; Yang, Yufang

    2012-01-01

    Although its role is frequently stressed in the acoustic profile for vocal emotion, sound intensity is frequently regarded as a control parameter in neurocognitive studies of vocal emotion, leaving its role and neural underpinnings unclear. To investigate these issues, we asked participants to rate the anger level of neutral and angry prosodies before and after sound intensity modification in Experiment 1, and recorded electroencephalogram (EEG) for mismatching emotional prosodies with and without sound intensity modification and for matching emotional prosodies while participants performed emotional feature or sound intensity congruity judgment in Experiment 2. It was found that sound intensity modification had a significant effect on the rating of anger level for angry prosodies, but not for neutral ones. Moreover, mismatching emotional prosodies, relative to matching ones, induced enhanced N2/P3 complex and theta band synchronization irrespective of sound intensity modification and task demands. However, mismatching emotional prosodies with reduced sound intensity showed prolonged peak latency and decreased amplitude in N2/P3 complex and smaller theta band synchronization. These findings suggest that though it cannot categorically affect emotionality conveyed in emotional prosodies, sound intensity contributes to emotional significance quantitatively, implying that sound intensity should not simply be taken as a control parameter and its unique role needs to be specified in vocal emotion studies. PMID:22291928

  20. Sound at the zoo: Using animal monitoring, sound measurement, and noise reduction in zoo animal management.

    PubMed

    Orban, David A; Soltis, Joseph; Perkins, Lori; Mellen, Jill D

    2017-05-01

    A clear need for evidence-based animal management in zoos and aquariums has been expressed by industry leaders. Here, we show how individual animal welfare monitoring can be combined with measurement of environmental conditions to inform science-based animal management decisions. Over the last several years, Disney's Animal Kingdom® has been undergoing significant construction and exhibit renovation, warranting institution-wide animal welfare monitoring. Animal care and science staff developed a model that tracked animal keepers' daily assessments of an animal's physical health, behavior, and responses to husbandry activity; these data were matched to different external stimuli and environmental conditions, including sound levels. A case study of a female giant anteater and her environment is presented to illustrate how this process worked. Associated with this case, several sound-reducing barriers were tested for efficacy in mitigating sound. Integrating daily animal welfare assessment with environmental monitoring can lead to a better understanding of animals and their sensory environment and positively impact animal welfare. © 2017 Wiley Periodicals, Inc.