33 CFR 67.10-20 - Sound signal tests.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Sound signal tests. 67.10-20... NAVIGATION AIDS TO NAVIGATION ON ARTIFICIAL ISLANDS AND FIXED STRUCTURES General Requirements for Sound signals § 67.10-20 Sound signal tests. (a) Sound signal tests must: (1) Be made by the applicant in the...
33 CFR 67.10-20 - Sound signal tests.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Sound signal tests. 67.10-20... signals § 67.10-20 Sound signal tests. (a) Sound signal tests must: (1) Be made by the applicant in the... meters; and (3) Be made in an anechoic chamber large enough to accommodate the entire sound signal, as if...
33 CFR 67.10-20 - Sound signal tests.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 1 2014-07-01 2014-07-01 false Sound signal tests. 67.10-20... signals § 67.10-20 Sound signal tests. (a) Sound signal tests must: (1) Be made by the applicant in the... meters; and (3) Be made in an anechoic chamber large enough to accommodate the entire sound signal, as if...
33 CFR 67.10-20 - Sound signal tests.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 33 Navigation and Navigable Waters 1 2013-07-01 2013-07-01 false Sound signal tests. 67.10-20... signals § 67.10-20 Sound signal tests. (a) Sound signal tests must: (1) Be made by the applicant in the... meters; and (3) Be made in an anechoic chamber large enough to accommodate the entire sound signal, as if...
33 CFR 67.10-20 - Sound signal tests.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 33 Navigation and Navigable Waters 1 2012-07-01 2012-07-01 false Sound signal tests. 67.10-20... signals § 67.10-20 Sound signal tests. (a) Sound signal tests must: (1) Be made by the applicant in the... meters; and (3) Be made in an anechoic chamber large enough to accommodate the entire sound signal, as if...
Perception of environmental sounds by experienced cochlear implant patients.
Shafiro, Valeriy; Gygi, Brian; Cheng, Min-Yu; Vachhani, Jay; Mulvey, Megan
2011-01-01
Environmental sound perception serves an important ecological function by providing listeners with information about objects and events in their immediate environment. Environmental sounds such as car horns, baby cries, or chirping birds can alert listeners to imminent dangers as well as contribute to one's sense of awareness and well-being. Perception of environmental sounds as acoustically and semantically complex stimuli may also involve some factors common to the processing of speech. However, very limited research has investigated the abilities of cochlear implant (CI) patients to identify common environmental sounds, despite patients' general enthusiasm about them. This project (1) investigated the ability of patients with modern-day CIs to perceive environmental sounds, (2) explored associations among speech, environmental sounds, and basic auditory abilities, and (3) examined acoustic factors that might be involved in environmental sound perception. Seventeen experienced postlingually deafened CI patients participated in the study. Environmental sound perception was assessed with a large-item test composed of 40 sound sources, each represented by four different tokens. The relationship between speech and environmental sound perception and the role of working memory and some basic auditory abilities were examined based on patient performance on a battery of speech tests (HINT, CNC, and individual consonant and vowel tests), tests of basic auditory abilities (audiometric thresholds, gap detection, temporal pattern, and temporal order for tones tests), and a backward digit recall test. The results indicated substantially reduced ability to identify common environmental sounds in CI patients (45.3%). Except for vowels, all speech test scores significantly correlated with the environmental sound test scores: r = 0.73 for HINT in quiet, r = 0.69 for HINT in noise, r = 0.70 for CNC, r = 0.64 for consonants, and r = 0.48 for vowels.
HINT and CNC scores in quiet moderately correlated with the temporal order for tones. However, the correlation between speech and environmental sounds changed little after partialling out the variance due to other variables. Present findings indicate that environmental sound identification is difficult for CI patients. They further suggest that speech and environmental sounds may overlap considerably in their perceptual processing. Certain spectrotemporal processing abilities are separately associated with speech and environmental sound performance. However, they do not appear to mediate the relationship between speech and environmental sounds in CI patients. Environmental sound rehabilitation may be beneficial to some patients. Environmental sound testing may have potential diagnostic applications, especially with difficult-to-test populations, and might also be predictive of speech performance for prelingually deafened patients with cochlear implants.
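The abstract above reports pairwise Pearson correlations and then partials out variance due to other variables. A minimal sketch of both computations is shown below; the score vectors are synthetic placeholders, not the study's data, and the variable names are illustrative assumptions.

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation between two score vectors.
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

def partial_r(x, y, z):
    # First-order partial correlation of x and y, controlling for z.
    rxy, rxz, ryz = pearson_r(x, y), pearson_r(x, z), pearson_r(y, z)
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

# Hypothetical scores for 17 subjects (illustration only, not study data).
rng = np.random.default_rng(0)
z = rng.normal(size=17)                # e.g., a temporal-processing measure
x = 0.7 * z + rng.normal(size=17)     # e.g., a speech score
y = 0.7 * z + rng.normal(size=17)     # e.g., an environmental sound score
print(pearson_r(x, y), partial_r(x, y, z))
```

If the partial correlation stays close to the raw correlation, as the abstract reports, the shared variance with the controlled variable does not account for the speech/environmental-sound relationship.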
Tanaka, Kazunori; Ogawa, Munehiro; Inagaki, Yusuke; Tanaka, Yasuhito; Nishikawa, Hitoshi; Hattori, Koji
2017-05-01
The Lachman test is clinically considered to be a reliable physical examination for anterior cruciate ligament (ACL) deficiency. However, the test involves subjective judgement of differences in tibial translation and endpoint quality. An auscultation system has been developed to allow assessment of the Lachman test. The purpose of the present study was to quantitatively evaluate knee joint sounds in healthy and ACL-deficient human knees. Sixty healthy volunteers and 24 patients with ACL injury were examined. The Lachman test with joint auscultation was evaluated using a microphone. The knee joint sound during the Lachman test (Lachman sound) was analyzed by fast Fourier transformation. As quantitative indices of the Lachman sound, the peak sound (Lachman peak sound), defined as the maximum relative amplitude (acoustic pressure), and its frequency were used. In healthy volunteers, the mean Lachman peak sound of intact knees was 100.6 Hz in frequency and -45 dB in acoustic pressure. Moreover, a sex difference was found in the frequency of the Lachman peak sound. In patients with ACL injury, the frequency of the Lachman peak sound of the ACL-deficient knees was widely dispersed. In the ACL-deficient knees, the mean Lachman peak sound was 306.8 Hz in frequency and -63.1 dB in acoustic pressure. If the reference range was set to the frequency range of the healthy volunteers' Lachman peak sounds, the sensitivity, specificity, positive predictive value, and negative predictive value were 83.3%, 95.6%, 95.2%, and 85.2%, respectively. Knee joint auscultation during the Lachman test was capable of judging ACL deficiency on the basis of objective data. In particular, the frequency of the Lachman peak sound was able to assess ACL condition. Copyright © 2016 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.
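The quantitative index in this and the related Lachman study is the peak of the FFT magnitude spectrum: its frequency and its relative amplitude in dB. A minimal sketch of that extraction is below; the synthetic signal, the dB reference (full scale = 1.0), and the function name are illustrative assumptions, not the papers' calibrated setup.

```python
import numpy as np

def lachman_peak(signal, fs):
    """Return (peak_frequency_hz, peak_level_db) of the magnitude spectrum.

    The level is relative amplitude in dB re full scale (1.0), standing in
    for the papers' acoustic-pressure scale (an assumption).
    """
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = int(np.argmax(spectrum[1:]) + 1)  # skip the DC bin
    return float(freqs[k]), float(20 * np.log10(spectrum[k] + 1e-12))

# Synthetic joint sound: a 100 Hz component in low-level noise (illustration).
fs = 8000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 100 * t) \
    + 0.01 * np.random.default_rng(1).normal(size=fs)
f_peak, level_db = lachman_peak(x, fs)
print(f_peak, level_db)
```

With a recorded auscultation signal in place of the synthetic one, the returned pair corresponds to the "Lachman peak sound" frequency and level reported in the abstracts.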
Sheft, Stanley; Gygi, Brian; Ho, Kim Thien N.
2012-01-01
Perceptual training with spectrally degraded environmental sounds results in improved environmental sound identification, with benefits shown to extend to untrained speech perception as well. The present study extended those findings to examine longer-term training effects as well as effects of mere repeated exposure to sounds over time. Participants received two pretests (1 week apart) prior to a week-long environmental sound training regimen, which was followed by two posttest sessions, separated by another week without training. Spectrally degraded stimuli, processed with a four-channel vocoder, consisted of a 160-item environmental sound test, word and sentence tests, and a battery of basic auditory abilities and cognitive tests. Results indicated significant improvements in all speech and environmental sound scores between the initial pretest and the last posttest with performance increments following both exposure and training. For environmental sounds (the stimulus class that was trained), the magnitude of positive change that accompanied training was much greater than that due to exposure alone, with improvement for untrained sounds roughly comparable to the speech benefit from exposure. Additional tests of auditory and cognitive abilities showed that speech and environmental sound performance were differentially correlated with tests of spectral and temporal-fine-structure processing, whereas working memory and executive function were correlated with speech, but not environmental sound perception. These findings indicate generalizability of environmental sound training and provide a basis for implementing environmental sound training programs for cochlear implant (CI) patients. PMID:22891070
Recognition of Modified Conditioning Sounds by Competitively Trained Guinea Pigs
Ojima, Hisayuki; Horikawa, Junsei
2016-01-01
The guinea pig (GP) is an often-used species in hearing research. However, behavioral studies are rare, especially in the context of sound recognition, because of difficulties in training these animals. We examined sound recognition in a social competitive setting in order to determine whether this setting could serve as an easy model. Two food-deprived GPs were placed in the same training arena and compelled to compete for food after hearing a conditioning sound (CS), which was a repeat of nearly identical sound segments. Through 2 weeks of intensive training, the animals learned to demonstrate a set of distinct behaviors solely in response to the CS. Each animal was then subjected to generalization tests for recognition of sounds that had been modified from the CS in the spectral, fine temporal, and tempo (i.e., intersegment interval, ISI) dimensions. Results showed that the animals discriminated between the CS and band-rejected test sounds but had no preference for a particular frequency range in this recognition. In contrast, sounds modified in the fine temporal domain were largely perceived as belonging to the same category as the CS, except for the test sound generated by fully reversing the CS in time. The animals also discriminated sounds played at different tempos. Test sounds with ISIs shorter than that of the multi-segment CS were discriminated from the CS, while test sounds with ISIs longer than that of the CS segments were not. For the shorter ISIs, most animals initiated apparently positive food-access behavior as they did in response to the CS, but discontinued it during the sound-on period, probably because of later recognition of the tempo. Interestingly, the population range and mean of the delay time before animals initiated the food-access behavior were very similar among the different ISI test sounds. This study, for the first time, demonstrates a wide range of sound discrimination abilities in the GP and provides a way to examine tempo perception mechanisms using this animal species.
PMID:26858617
Can joint sound assess soft and hard endpoints of the Lachman test?: A preliminary study.
Hattori, Koji; Ogawa, Munehiro; Tanaka, Kazunori; Matsuya, Ayako; Uematsu, Kota; Tanaka, Yasuhito
2016-05-12
The Lachman test is considered to be a reliable physical examination for anterior cruciate ligament (ACL) injury. Patients with a damaged ACL demonstrate a soft endpoint feeling. However, examiners judge the soft and hard endpoints subjectively. The purpose of our study was to confirm the objective performance of the Lachman test using joint auscultation. Human and porcine knee joints were examined. Knee joint sound during the Lachman test (Lachman sound) was analyzed by fast Fourier transformation. As quantitative indices of the Lachman sound, the peak sound, defined as the maximum relative amplitude (acoustic pressure), and its frequency were used. The mean Lachman peak sound for healthy volunteer knees was 86.9 ± 12.9 Hz in frequency and -40 ± 2.5 dB in acoustic pressure. The mean Lachman peak sound for intact porcine knees was 84.1 ± 9.4 Hz and -40.5 ± 1.7 dB. Porcine knees with ACL deficiency had a soft endpoint feeling during the Lachman test. The Lachman peak sounds of porcine knees with ACL deficiency were dispersed into four distinct groups, with center frequencies of around 40, 160, 450, and 1600 Hz. The Lachman peak sound was capable of assessing the soft and hard endpoints of the Lachman test objectively.
Environmental Sound Training in Cochlear Implant Users
Sheft, Stanley; Kuvadia, Sejal; Gygi, Brian
2015-01-01
Purpose The study investigated the effect of a short computer-based environmental sound training regimen on the perception of environmental sounds and speech in experienced cochlear implant (CI) patients. Method Fourteen CI patients with an average of 5 years of CI experience participated. The protocol consisted of 2 pretests, 1 week apart, followed by 4 environmental sound training sessions conducted on separate days in 1 week, and concluded with 2 posttest sessions, separated by another week without training. Each testing session included an environmental sound test, which consisted of 40 familiar everyday sounds, each represented by 4 different tokens, as well as the Consonant Nucleus Consonant (CNC) word test and the Revised Speech Perception in Noise (SPIN-R) sentence test. Results Environmental sound scores were lower than those for either of the speech tests. Following training, there was a significant average improvement of 15.8 points in environmental sound perception, which persisted 1 week after training was discontinued. No significant improvements were observed for either speech test. Conclusions The findings demonstrate that environmental sound perception, which remains problematic even for experienced CI patients, can be improved with a home-based computer training regimen. Such computer-based training may thus provide an effective low-cost approach to rehabilitation for CI users and, potentially, other hearing-impaired populations. PMID:25633579
Auditory enhancement of increments in spectral amplitude stems from more than one source.
Carcagno, Samuele; Semal, Catherine; Demany, Laurent
2012-10-01
A component of a test sound consisting of simultaneous pure tones perceptually "pops out" if the test sound is preceded by a copy of itself with that component attenuated. Although this "enhancement" effect was initially thought to be purely monaural, it is also observable when the test sound and the precursor sound are presented contralaterally (i.e., to opposite ears). In experiment 1, we assessed the magnitude of ipsilateral and contralateral enhancement as a function of the time interval between the precursor and test sounds (10, 100, or 600 ms). The test sound, randomly transposed in frequency from trial to trial, was followed by a probe tone, either matched or mismatched in frequency to the test sound component which was the target of enhancement. Listeners' ability to discriminate matched probes from mismatched probes was taken as an index of enhancement magnitude. The results showed that enhancement decays more rapidly for ipsilateral than for contralateral precursors, suggesting that ipsilateral enhancement and contralateral enhancement stem from at least partly different sources. It could be hypothesized that, in experiment 1, contralateral precursors were effective only because they provided attentional cues about the target tone frequency. In experiment 2, this hypothesis was tested by presenting the probe tone before the precursor sound rather than after the test sound. Although the probe tone was then serving as a frequency cue, contralateral precursors were again found to produce enhancement. This indicates that contralateral enhancement cannot be explained by cuing alone and is a genuine sensory phenomenon.
49 CFR 325.57 - Location and operation of sound level measurement systems; stationary test.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 5 2012-10-01 2012-10-01 false Location and operation of sound level measurement...; Stationary Test § 325.57 Location and operation of sound level measurement systems; stationary test. (a) The microphone of a sound level measurement system that conforms to the rules in § 325.23 shall be located at a...
49 CFR 325.57 - Location and operation of sound level measurement systems; stationary test.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 5 2013-10-01 2013-10-01 false Location and operation of sound level measurement...; Stationary Test § 325.57 Location and operation of sound level measurement systems; stationary test. (a) The microphone of a sound level measurement system that conforms to the rules in § 325.23 shall be located at a...
49 CFR 325.57 - Location and operation of sound level measurement systems; stationary test.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 5 2011-10-01 2011-10-01 false Location and operation of sound level measurement...; Stationary Test § 325.57 Location and operation of sound level measurement systems; stationary test. (a) The microphone of a sound level measurement system that conforms to the rules in § 325.23 shall be located at a...
Assessing Auditory Discrimination Skill of Malay Children Using Computer-based Method.
Ting, H; Yunus, J; Mohd Nordin, M Z
2005-01-01
The purpose of this paper is to investigate the auditory discrimination skill of Malay children using a computer-based method. Currently, most auditory discrimination assessments are conducted manually by a Speech-Language Pathologist. These conventional tests are general tests of sound discrimination, which do not reflect the client's specific speech sound errors. Thus, we propose a computer-based Malay auditory discrimination test to automate the whole assessment process as well as to customize the test according to the client's specific speech error sounds. The ability to discriminate voiced and unvoiced Malay speech sounds was studied in Malay children aged between 7 and 10 years. The study showed no major difficulty for the children in discriminating the Malay speech sounds except in differentiating the /g/-/k/ sounds. On average, the 7-year-old children failed to discriminate the /g/-/k/ sounds.
Acoustical characteristics of the NASA Langley full scale wind tunnel test section
NASA Technical Reports Server (NTRS)
Abrahamson, A. L.; Kasper, P. K.; Pappa, R. S.
1975-01-01
The full-scale wind tunnel at NASA-Langley Research Center was designed for low-speed aerodynamic testing of aircraft. Sound-absorbing treatment has been added to the ceiling and walls of the tunnel test section to create a more anechoic condition for taking acoustical measurements during aerodynamic tests. The results of an experimental investigation of the present acoustical characteristics of the tunnel test section are presented. The experimental program included measurements of ambient noise levels existing during various tunnel operating conditions, investigation of the sound field produced by an omnidirectional source, and determination of sound field decay rates for impulsive noise excitation. A comparison of the current results with previous measurements shows that the added sound treatment has improved the acoustical condition of the tunnel test section. An analysis of the data indicates that sound reflections from the tunnel ground-board platform could create difficulties in the interpretation of actual test results.
Sound sensitivity of neurons in rat hippocampus during performance of a sound-guided task
Vinnik, Ekaterina; Honey, Christian; Schnupp, Jan; Diamond, Mathew E.
2012-01-01
To investigate how hippocampal neurons encode sound stimuli, and the conjunction of sound stimuli with the animal's position in space, we recorded from neurons in the CA1 region of hippocampus in rats while they performed a sound discrimination task. Four different sounds were used, two associated with water reward on the right side of the animal and the other two with water reward on the left side. This allowed us to separate neuronal activity related to sound identity from activity related to response direction. To test the effect of spatial context on sound coding, we trained rats to carry out the task on two identical testing platforms at different locations in the same room. Twenty-one percent of the recorded neurons exhibited sensitivity to sound identity, as quantified by the difference in firing rate for the two sounds associated with the same response direction. Sensitivity to sound identity was often observed on only one of the two testing platforms, indicating an effect of spatial context on sensory responses. Forty-three percent of the neurons were sensitive to response direction, and the probability that any one neuron was sensitive to response direction was statistically independent from its sensitivity to sound identity. There was no significant coding for sound identity when the rats heard the same sounds outside the behavioral task. These results suggest that CA1 neurons encode sound stimuli, but only when those sounds are associated with actions. PMID:22219030
40 CFR 205.54-1 - Low speed sound emission test procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Low speed sound emission test....54-1 Low speed sound emission test procedures. (a) Instrumentation. The following instrumentation... checked annually to verify that its output has not changed. (3) An engine-speed tachometer which is...
Sound absorption coefficient of coal bottom ash concrete for railway application
NASA Astrophysics Data System (ADS)
Ramzi Hannan, N. I. R.; Shahidan, S.; Maarof, Z.; Ali, N.; Abdullah, S. R.; Ibrahim, M. H. Wan
2017-11-01
A porous concrete is able to attenuate the sound waves that pass through it. When a sound wave strikes a material, a portion of the sound energy is reflected back and another portion is absorbed by the material, while the rest is transmitted. The larger the portion of the sound wave that is absorbed, the more the noise level can be reduced. This study investigates the sound absorption coefficient of coal bottom ash (CBA) concrete compared with that of normal concrete by carrying out the impedance tube test. Hence, this paper presents the results of the impedance tube tests on the CBA concrete and normal concrete.
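The impedance tube test mentioned above yields a complex reflection coefficient R at the sample face, from which the absorption coefficient follows as α = 1 − |R|². A sketch of the standard two-microphone transfer-function relation (as in ISO 10534-2) is below; the geometry values and function name are illustrative assumptions, not a calibrated implementation.

```python
import numpy as np

def absorption_from_transfer(H12, f, s, x1, c=343.0):
    """Sound absorption coefficient from the two-microphone transfer function.

    H12 : measured complex transfer function p2/p1 at frequency f
    s   : microphone spacing (m); x1: distance from sample to the farther mic (m)
    Sketch of the ISO 10534-2 transfer-function relation.
    """
    k = 2 * np.pi * f / c
    HI = np.exp(-1j * k * s)   # transfer function of the incident wave
    HR = np.exp(1j * k * s)    # transfer function of the reflected wave
    R = (H12 - HI) / (HR - H12) * np.exp(2j * k * x1)
    return 1.0 - abs(R) ** 2

# Consistency check: synthesize H12 from a known reflection coefficient.
f, s, x1, c = 500.0, 0.05, 0.20, 343.0
k = 2 * np.pi * f / c
R_true = 0.6 * np.exp(1j * 0.3)
p = lambda x: np.exp(1j * k * x) + R_true * np.exp(-1j * k * x)  # field at distance x
H12 = p(x1 - s) / p(x1)
alpha = absorption_from_transfer(H12, f, s, x1)
print(alpha)   # recovers 1 - |0.6|^2 = 0.64
```

A more absorptive material (e.g., a porous CBA concrete) pushes |R| toward zero and α toward one.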
2004-2006 Puget Sound Traffic Choices Study | Transportation Secure Data Center | NREL
The 2004-2006 Puget Sound Traffic Choices Study tested the hypothesis that time-of-day variable ... Administration for a pilot project on congestion-based tolling. Methodology: To test the hypothesis, the study ...
Yan, W Y; Li, L; Yang, Y G; Lin, X L; Wu, J Z
2016-08-01
We designed a computer-based respiratory sound analysis system to identify normal pediatric lung sounds, and aimed to verify the validity of this system. First, we downloaded standard lung sounds from the network database (website: http://www.easyauscultation.com/lung-sounds-reference-guide) and recorded 3 samples of abnormal lung sounds (rhonchi, wheeze, and crackles) from three patients of the Department of Pediatrics, the First Affiliated Hospital of Xiamen University. We regarded these lung sounds as "reference lung sounds". The "test lung sounds" were recorded from 29 children from the Kindergarten of Xiamen University. We recorded the lung sounds with a portable electronic stethoscope, and valid lung sounds were selected by manual identification. We introduced Mel-frequency cepstral coefficients (MFCC) to extract lung sound features and dynamic time warping (DTW) for signal classification. We had 39 standard lung sounds and recorded 58 test lung sounds. The system performed 58 lung sound recognitions, with 52 correct identifications and 6 errors, for an accuracy of 89.7%. Based on MFCC and DTW, our computer-based respiratory sound analysis system can effectively identify the healthy lung sounds of children (accuracy of 89.7%), demonstrating the reliability of the lung sound analysis system.
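The classification step described above, matching an MFCC sequence to reference templates by DTW distance, can be sketched as below. The feature extraction is assumed to happen elsewhere (e.g., with a library such as librosa); the toy one-dimensional "feature" sequences and labels are hypothetical.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences.

    a, b : arrays of shape (frames, coeffs), e.g. per-frame MFCC vectors.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three allowed warping moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def classify(seq, references):
    # references: {label: feature sequence}; pick the nearest template.
    return min(references, key=lambda lab: dtw_distance(seq, references[lab]))

# Toy 1-D "feature" sequences standing in for MFCC frames (illustration only).
normal = np.array([[0.0], [1.0], [0.0], [1.0]])
wheeze = np.array([[3.0], [3.0], [3.0], [3.0]])
probe = np.array([[0.0], [1.0], [1.0], [0.0], [1.0]])  # time-warped "normal"
print(classify(probe, {"normal": normal, "wheeze": wheeze}))  # → normal
```

DTW's warping path is what lets templates match recordings of different lengths and breathing rates, which is why it pairs naturally with per-frame MFCC features here.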
Feasibility of making sound power measurements in the NASA Langley V/STOL tunnel test section
NASA Technical Reports Server (NTRS)
Brooks, T. F.; Scheiman, J.; Silcox, R. J.
1976-01-01
Based on exploratory acoustic measurements in Langley's V/STOL wind tunnel, recommendations are made on the methodology for making sound power measurements of aircraft components in the closed tunnel test section. During airflow, tunnel self-noise and microphone flow-induced noise place restrictions on the amplitude and spectrum of the sound source to be measured. Models of aircraft components with high sound level sources, such as thrust engines and powered lift systems, seem likely candidates for acoustic testing.
NASA Technical Reports Server (NTRS)
Atwal, Mahabir S.; Heitman, Karen E.; Crocker, Malcolm J.
1986-01-01
The validity of the room equation of Crocker and Price (1982) for predicting the cabin interior sound pressure level was experimentally tested using a specially constructed setup for simultaneous measurements of transmitted sound intensity and interior sound pressure levels. Using measured values of the reverberation time and transmitted intensities, the equation was used to predict the space-averaged interior sound pressure level for three different fuselage conditions. The general agreement between the room equation and experimental test data is considered good enough for this equation to be used for preliminary design studies.
Improvement of impact noise in a passenger car utilizing sound metric based on wavelet transform
NASA Astrophysics Data System (ADS)
Lee, Sang-Kwon; Kim, Ho-Wuk; Na, Eun-Woo
2010-08-01
A new sound metric for impact sound is developed based on the continuous wavelet transform (CWT), a useful tool for the analysis of non-stationary signals such as impact noise. Together with the new metric, two other conventional sound metrics, related to sound modulation and fluctuation, are also considered. In all, three sound metrics are employed to develop impact sound quality indexes for several specific impact courses on the road. Impact sounds are evaluated subjectively by 25 jurors. The indexes are verified by comparing the correlation between the index output and the results of a subjective evaluation based on a jury test. These indexes are successfully applied to an objective evaluation for improvement of the impact sound quality in cases where some parts of the suspension system of the test car are modified.
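The CWT underlying the metric above localizes energy jointly in time and frequency, which is what makes it suitable for transient impact noise. A minimal Morlet-wavelet CWT is sketched below; the synthetic burst, the frequency grid, and the function names are illustrative assumptions, not the paper's metric.

```python
import numpy as np

def morlet_cwt(x, fs, freqs, w0=6.0):
    """Continuous wavelet transform of x with a Morlet wavelet.

    Returns a (len(freqs), len(x)) array of complex coefficients whose
    magnitude shows when each frequency is active (a sketch, not the
    paper's exact formulation).
    """
    out = np.empty((len(freqs), len(x)), dtype=complex)
    for i, f in enumerate(freqs):
        scale = w0 * fs / (2 * np.pi * f)        # samples per wavelet time unit
        m = int(np.ceil(8 * scale)) | 1          # odd support, ~±4 std devs
        t = (np.arange(m) - m // 2) / scale
        wavelet = np.exp(1j * w0 * t) * np.exp(-0.5 * t**2) / np.sqrt(scale)
        out[i] = np.convolve(x, np.conj(wavelet[::-1]), mode="same")
    return out

# Synthetic "impact": a short 200 Hz burst at t = 0.5 s (illustration only).
fs = 2000
t = np.arange(2 * fs) / fs
x = np.exp(-((t - 0.5) / 0.02) ** 2) * np.sin(2 * np.pi * 200 * t)
coeffs = morlet_cwt(x, fs, freqs=[50, 200, 800])
peak_row, peak_col = np.unravel_index(np.argmax(np.abs(coeffs)), coeffs.shape)
print(peak_row, peak_col / fs)  # strongest response: the 200 Hz row, near t = 0.5 s
```

Unlike a plain FFT, the coefficient magnitude pinpoints the instant of the impact as well as its dominant frequency, which is the property a time-localized impact metric exploits.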
Sheft, Stanley; Norris, Molly; Spanos, George; Radasevich, Katherine; Formsma, Paige; Gygi, Brian
2016-01-01
Objective Sounds in everyday environments tend to follow one another as events unfold over time. The tacit knowledge of contextual relationships among environmental sounds can influence their perception. We examined the effect of semantic context on the identification of sequences of environmental sounds by adults of varying age and hearing abilities, with an aim to develop a nonspeech test of auditory cognition. Method The familiar environmental sound test (FEST) consisted of 25 individual sounds arranged into ten five-sound sequences: five contextually coherent and five incoherent. After hearing each sequence, listeners identified each sound and arranged them in the presentation order. FEST was administered to young normal-hearing, middle-to-older normal-hearing, and middle-to-older hearing-impaired adults (Experiment 1), and to postlingual cochlear-implant users and young normal-hearing adults tested through vocoder-simulated implants (Experiment 2). Results FEST scores revealed a strong positive effect of semantic context in all listener groups, with young normal-hearing listeners outperforming other groups. FEST scores also correlated with other measures of cognitive ability, and for CI users, with the intelligibility of speech-in-noise. Conclusions Being sensitive to semantic context effects, FEST can serve as a nonspeech test of auditory cognition for diverse listener populations to assess and potentially improve everyday listening skills. PMID:27893791
Multidimensional Approach to the Development of a Mandarin Chinese-Oriented Sound Test
ERIC Educational Resources Information Center
Hung, Yu-Chen; Lin, Chun-Yi; Tsai, Li-Chiun; Lee, Ya-Jung
2016-01-01
Purpose: Because the Ling six-sound test is based on American English phonemes, it can yield unreliable results when administered to non-English speakers. In this study, we aimed to improve specifically the diagnostic palette for Mandarin Chinese users by developing an adapted version of the Ling six-sound test. Method: To determine the set of…
30 CFR 75.211 - Roof testing and scaling.
Code of Federal Regulations, 2011 CFR
2011-07-01
... examination does not disclose a hazardous condition, sound and vibration roof tests, or other equivalent tests, shall be made where supports are to be installed. When sound and vibration tests are made, they shall be...
30 CFR 75.211 - Roof testing and scaling.
Code of Federal Regulations, 2014 CFR
2014-07-01
... examination does not disclose a hazardous condition, sound and vibration roof tests, or other equivalent tests, shall be made where supports are to be installed. When sound and vibration tests are made, they shall be...
30 CFR 75.211 - Roof testing and scaling.
Code of Federal Regulations, 2012 CFR
2012-07-01
... examination does not disclose a hazardous condition, sound and vibration roof tests, or other equivalent tests, shall be made where supports are to be installed. When sound and vibration tests are made, they shall be...
30 CFR 75.211 - Roof testing and scaling.
Code of Federal Regulations, 2013 CFR
2013-07-01
... examination does not disclose a hazardous condition, sound and vibration roof tests, or other equivalent tests, shall be made where supports are to be installed. When sound and vibration tests are made, they shall be...
Nachtigall, Paul E; Supin, Alexander Ya; Estaban, Jose-Antonio; Pacini, Aude F
2016-02-01
Ice-dwelling beluga whales are increasingly being exposed to loud anthropogenic sounds. The beluga's hearing sensitivity during a warning sound just preceding a loud sound was measured using pip-train stimuli and auditory evoked potential recording. When the test/warning stimulus, with a frequency of 32 or 45 kHz, preceded the loud sound, which had a frequency of 32 kHz and a sound pressure level of 153 dB re 1 μPa for 2 s, hearing thresholds before the loud sound increased relative to the baseline. The threshold increased by up to 15 dB for the test frequency of 45 kHz and up to 13 dB for the test frequency of 32 kHz. These threshold increases were observed during two sessions of 36 trials each. Extinction tests revealed no change during three experimental sessions, followed by a jump-like return to baseline thresholds. The low exposure level producing the hearing-dampening effect (156 dB re 1 μPa²·s in each trial), and the manner of extinction, may be considered evidence that the observed hearing threshold increases were a demonstration of conditioned dampening of hearing when the whale anticipated the quick appearance of a loud sound, in the same way demonstrated in the false killer whale and bottlenose dolphin.
NASA Rat Acoustic Tolerance Test 1994-1995: 8 kHz, 16 kHz, 32 kHz Experiments
NASA Technical Reports Server (NTRS)
Mele, Gary D.; Holley, Daniel C.; Naidu, Sujata
1996-01-01
Adult male Sprague-Dawley rats were exposed to chronic applied sound (74 to 79 dB SPL) with octave band center frequencies of 8, 16, or 32 kHz for up to 60 days. Control cages had ambient sound levels of about 62 dB (SPL). Groups of rats (test vs. control; N = 9 per group) were euthanized after 0, 5, 14, 30, and 60 days. On each euthanasia day, objective evaluation of their physiology and behavior was performed using a Stress Assessment Battery (SAB) of measures. In addition, rat hearing was assessed using the brainstem auditory evoked response (BAER) method after 60 days of exposure. No statistically significant differences in mean daily food use could be attributed to the presence of the applied test sound. Test rats used 5% more water than control rats; in the 8 kHz and 32 kHz tests this amount was statistically significant (P < .05). This is a minor difference of questionable physiological significance; however, it may be an indication of a small reaction to the constant applied sound. Across all test frequencies, day-5 test rats had 6% larger spleens than control rats. No other body or organ weight differences were found to be statistically significant with respect to the application of sound. This spleen effect may be a transient adaptive process related to adaptation to the constant applied noise. No significant test effect on differential white blood cell counts could be demonstrated. One group demonstrated a low eosinophil count (16 kHz experiment, day-14 test group); however, this finding was highly suspect. Across all test frequencies studied, day-5 test rats had 17% fewer total leukocytes than day-5 control rats. Sound-exposed test rats exhibited 44% lower plasma corticosterone concentrations than control rats. Note that the plasma corticosterone concentration was lower in the sound-exposed test animals than in the control animals in every instance (frequency exposure and number of days exposed).
3D Sound Techniques for Sound Source Elevation in a Loudspeaker Listening Environment
NASA Astrophysics Data System (ADS)
Kim, Yong Guk; Jo, Sungdong; Kim, Hong Kook; Jang, Sei-Jin; Lee, Seok-Pil
In this paper, we propose several 3D sound techniques for sound source elevation in stereo loudspeaker listening environments. The proposed method integrates a head-related transfer function (HRTF) for sound positioning and early reflections for adding reverberance. In addition, spectral notch filtering and directional band boosting techniques are included to increase the capability for elevation perception. In order to evaluate the elevation performance of the proposed method, subjective listening tests were conducted using several kinds of sound sources, such as white noise, sound effects, speech, and music samples. The tests show that the perceived elevation achieved by the proposed method is around 17° to 21° when the stereo loudspeakers are located on the horizontal plane.
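The HRTF positioning stage described above can be sketched as a convolution of a mono source with a left/right pair of head-related impulse responses (HRIRs). The HRIR arrays below are illustrative placeholders, not measured data; a real elevation renderer would use a measured set such as KEMAR:

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Position a mono source by convolving it with a left/right HRIR pair."""
    left = np.convolve(mono, hrir_left)    # full convolution, length M+N-1
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Placeholder HRIRs for illustration only:
hrir_l = np.array([0.0, 1.0, 0.5, 0.1])
hrir_r = np.array([0.0, 0.6, 0.3, 0.05])
src = np.random.randn(1000)
out = render_binaural(src, hrir_l, hrir_r)  # shape (2, 1003)
```

The notch filtering and band boosting stages of the paper would be applied as additional filters on each channel before playback.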
Zeitler, Daniel M; Dorman, Michael F; Natale, Sarah J; Loiselle, Louise; Yost, William A; Gifford, Rene H
2015-09-01
To assess improvements in sound source localization and speech understanding in complex listening environments after unilateral cochlear implantation for single-sided deafness (SSD). Nonrandomized, open, prospective case series. Tertiary referral center. Nine subjects with a unilateral cochlear implant (CI) for SSD (SSD-CI) were tested. Reference groups for the task of sound source localization included young (n = 45) and older (n = 12) normal-hearing (NH) subjects and 27 bilateral CI (BCI) subjects. Unilateral cochlear implantation. Sound source localization was tested with 13 loudspeakers in a 180-degree arc in front of the subject. Speech understanding was tested with the subject seated in an 8-loudspeaker sound system arrayed in a 360-degree pattern. Directionally appropriate noise, originally recorded in a restaurant, was played from each loudspeaker. Speech understanding in noise was tested using the AzBio sentence test, and sound source localization was quantified using root mean square error. All CI subjects showed poorer-than-normal sound source localization. SSD-CI subjects showed a bimodal distribution of scores: six subjects had scores near the mean of those obtained by BCI subjects, whereas three had scores just outside the 95th percentile of NH listeners. Speech understanding improved significantly in the restaurant environment when the signal was presented to the side of the CI. Cochlear implantation for SSD can offer improved speech understanding in complex listening environments and improved sound source localization in both children and adults. On tasks of sound source localization, SSD-CI patients typically perform as well as BCI patients and, in some cases, achieve scores at the upper boundary of normal performance.
33 CFR 67.10-15 - Approval of sound signals.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Approval of sound signals. 67.10... Sound signals § 67.10-15 Approval of sound signals. (a) The Coast Guard approves a sound signal if: (1) It meets the requirements for sound signals in § 67.10-1 (a), (b), (c), (d), and (e) when tested...
ERIC Educational Resources Information Center
Eshach, Haim
2014-01-01
This article describes the development and field test of the Sound Concept Inventory Instrument (SCII), designed to measure middle school students' concepts of sound. The instrument was designed based on known students' difficulties in understanding sound and the history of science related to sound and focuses on two main aspects of sound: sound…
Swanepoel, De Wet; Matthysen, Cornelia; Eikelboom, Robert H; Clark, Jackie L; Hall, James W
2015-01-01
Accessibility of audiometry is hindered by the cost of sound booths and a shortage of hearing health personnel. This study investigated the validity of an automated mobile diagnostic audiometer with increased attenuation and real-time noise monitoring for clinical testing outside a sound booth. Attenuation characteristics and reference ambient noise levels for the computer-based audiometer (KUDUwave) were evaluated, alongside the validity of environmental noise monitoring. Clinical validity was determined by comparing air- and bone-conduction thresholds obtained inside and outside the sound booth in 23 normal-hearing subjects (age range 20-75 years; average age 35.5), with a subgroup of 11 subjects retested to establish test-retest reliability. Improved passive attenuation and valid environmental noise monitoring were demonstrated. Clinically, air-conduction thresholds inside and outside the sound booth corresponded within 5 dB or less in more than 90% of instances (mean absolute difference 3.3 dB ± 3.2 SD). Bone-conduction thresholds corresponded within 5 dB or less in 80% of comparisons between test environments, with a mean absolute difference of 4.6 dB (3.7 SD). Threshold differences were not statistically significant. Mean absolute test-retest differences outside the sound booth were similar to those in the booth. Automated diagnostic pure-tone audiometry outside a sound booth, with improved passive attenuation and real-time environmental noise monitoring, demonstrated reliable hearing assessments.
49 CFR Appendix D to Part 227 - Audiometric Test Rooms
Code of Federal Regulations, 2011 CFR
2011-10-01
... sound pressure levels exceeding those in Table D-1 when measured by equipment conforming at least to the..._regulations/ibr_locations.html. Table D-1—Maximum Allowable Octave-Band Sound Pressure Levels for Audiometric Test Rooms Octave-band center frequency (Hz) 500 1000 2000 4000 8000 Sound pressure levels—supra-aural...
Makeyev, Oleksandr; Sazonov, Edward; Schuckers, Stephanie; Lopez-Meyer, Paulo; Melanson, Ed; Neuman, Michael
2007-01-01
In this paper we propose a sound recognition technique based on the limited receptive area (LIRA) neural classifier and the continuous wavelet transform (CWT). The LIRA neural classifier was developed as a multipurpose image recognition system. Previous tests of LIRA demonstrated good results in different image recognition tasks, including handwritten digit recognition, face recognition, metal surface texture recognition, and micro workpiece shape recognition. We propose a sound recognition technique in which scalograms of sound instances serve as inputs to the LIRA neural classifier. The methodology was tested on recognition of swallowing sounds. Swallowing sound recognition may be employed in systems for automated swallowing assessment and diagnosis of swallowing disorders. The experimental results suggest high efficiency and reliability of the proposed approach.
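The scalogram front end described above can be sketched with a plain-numpy continuous wavelet transform, assuming a Morlet mother wavelet; the LIRA classifier itself is not reproduced here, and all names and parameters are illustrative:

```python
import numpy as np

def morlet(n, w=5.0, s=1.0):
    """Complex Morlet wavelet, n samples, scale s, center frequency w."""
    t = np.arange(-n // 2, n // 2) / s
    return np.exp(1j * w * t) * np.exp(-t**2 / 2.0)

def scalogram(signal, scales, w=5.0):
    """CWT magnitude (scalogram): convolve the signal with scaled wavelets."""
    out = np.empty((len(scales), len(signal)))
    for i, s in enumerate(scales):
        n = min(int(10 * s), len(signal))   # wavelet support grows with scale
        wav = morlet(n, w=w, s=s) / np.sqrt(s)  # energy normalization
        out[i] = np.abs(np.convolve(signal, wav, mode='same'))
    return out

# Illustrative use on a synthetic "sound instance" (not swallowing data):
fs = 1000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 50 * t)
S = scalogram(sig, scales=np.arange(1, 31))  # image-like input, (30, 1000)
```

The resulting 2-D magnitude array is what would be fed, as an image, to an image classifier such as LIRA.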
Preliminary laboratory testing on the sound absorption of coupled cavity sonic crystal
NASA Astrophysics Data System (ADS)
Kristiani, R.; Yahya, I.; Harjana; Suparmi
2016-11-01
This paper focuses on the sound absorption performance of a coupled cavity sonic crystal, constructed from a pair of cylindrical tubes of different diameters. A laboratory test procedure following ASTM E1050 was conducted to measure the sound absorption of the sonic crystal elements. The test procedure was applied to a single coupled scatterer and also to a pair of similar structures. The results showed that using the paired structure improves sound absorption over a wider frequency range. It also brings a practical advantage for tuning the local Helmholtz resonance to a particular intended frequency.
49 CFR 210.31 - Operation standards (stationary locomotives at 30 meters).
Code of Federal Regulations, 2011 CFR
2011-10-01
... prescribed in paragraph (a)(2) of this section, the A-weighted sound level reading in decibels shall be... A-weighted sound level reading in decibels that is observed during the 30-second period of time... test; (3) Date of test; and (4) The A-weighted sound level reading in decibels obtained during the...
Going wireless and booth-less for hearing testing in industry.
Meinke, Deanna K; Norris, Jesse A; Flynn, Brendan P; Clavier, Odile H
2017-01-01
To assess the test-retest variability of hearing thresholds obtained with an innovative, mobile wireless automated hearing-test system (WAHTS) with enhanced sound attenuation, used to test industrial workers at a worksite, as compared to standardised automated hearing thresholds obtained in a mobile trailer sound booth. A within-subject repeated-measures design was used to compare air-conduction threshold tests (500-8000 Hz) measured with the WAHTS in six workplace locations with a third test using computer-controlled audiometry obtained in a mobile trailer sound booth. Ambient noise levels were measured in all test environments. Twenty workers served as listeners and 20 workers served as operators. On average, the WAHTS yielded thresholds equivalent to the mobile trailer audiometry at 1000, 2000, 3000 and 8000 Hz, and thresholds were within ±5 dB at 500, 4000 and 6000 Hz. Comparable performance may be obtained with the WAHTS in occupational audiometry, and valid thresholds may be obtained in diverse test locations without the use of sound-attenuating enclosures.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-16
...; high-pitched sounds contain high frequencies and low-pitched sounds contain low frequencies. Natural... estimated to occur between approximately 150 Hz and 160 kHz. High-frequency cetaceans (eight species of true... masking by high frequency sound. Human data indicate low-frequency sound can mask high-frequency sounds (i...
NASA Astrophysics Data System (ADS)
Yahya, I.; Kusuma, J. I.; Harjana; Kristiani, R.; Hanina, R.
2016-02-01
This paper examines the influence of inserting a tubular-shaped microresonator phononic crystal on the sound absorption coefficient of a profiled sound absorber. A simple cubic lattice model and two different body-centered cubic phononic crystal lattice models were analyzed in a laboratory test procedure. The experiment used the transfer-function-based two-microphone impedance tube method, per ASTM E1050-98. The results show that the sound absorption coefficient increases significantly in the mid- and high-frequency bands (600-700 Hz and 1-1.6 kHz) when the tubular-shaped microresonator phononic crystal is inserted into the tested sound absorber element. The increase is related to a multi-resonance effect: sound waves propagating through the phononic crystal lattice undergo multiple reflections and scattering in the mid- and high-frequency bands, which increases the sound absorption coefficient accordingly.
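The two-microphone transfer-function method used above (ASTM E1050) recovers the normal-incidence absorption coefficient from the complex transfer function H12 between the two microphones. A minimal sketch under the standard plane-wave assumptions, with microphone 1 farther from the sample; the geometry and the synthetic check value are illustrative, not the paper's data:

```python
import numpy as np

def absorption_coefficient(H12, f, s, l, c=343.0):
    """Normal-incidence absorption from the two-microphone transfer function
    H12 = p2/p1 (mic 1 farther from the sample).
    s: mic spacing [m], l: sample-to-near-mic distance [m], c: sound speed."""
    k = 2 * np.pi * f / c
    H_i = np.exp(-1j * k * s)            # incident-wave transfer function
    H_r = np.exp(1j * k * s)             # reflected-wave transfer function
    R = (H12 - H_i) / (H_r - H12) * np.exp(2j * k * (l + s))
    return 1.0 - np.abs(R) ** 2

# Sanity check: synthesize H12 for a known reflection coefficient and
# confirm the formula recovers it.
f, spacing, dist = 1000.0, 0.05, 0.10
k = 2 * np.pi * f / 343.0
R_true = 0.5                              # hypothetical sample reflection
p = lambda x: np.exp(1j * k * x) + R_true * np.exp(-1j * k * x)
H12 = p(dist) / p(dist + spacing)         # mic 2 near, mic 1 far
alpha = absorption_coefficient(H12, f, spacing, dist)  # expect 1 - 0.5**2
```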
Acceptability of VTOL aircraft noise determined by absolute subjective testing
NASA Technical Reports Server (NTRS)
Sternfeld, H., Jr.; Hinterkeuser, E. G.; Hackman, R. B.; Davis, J.
1972-01-01
A program was conducted during which test subjects evaluated the simulated sounds of a helicopter, a tilt wing aircraft, and a 15 second, 90 PNdB (indoors) turbojet aircraft used as reference. Over 20,000 evaluations were made while the test subjects were engaged in work and leisure activities. The effects of level, exposure time, distance and aircraft design on subjective acceptability were evaluated. Some of the important conclusions are: (1) To be judged equal in annoyance to the reference jet sound, the helicopter and tilt wing sounds must be 4 to 5 PNdB lower when lasting 15 seconds in duration. (2) To be judged significantly more acceptable than the reference jet sound, the helicopter sound must be 10 PNdB lower when lasting 15 seconds in duration. (3) To be judged significantly more acceptable than the reference jet sound, the tilt wing sound must be 12 PNdB lower when lasting 15 seconds in duration. (4) The relative effect of changing the duration of a sound upon its subjectively rated annoyance diminishes with increasing duration. It varies from 2 PNdB per doubling of duration for intervals of 15 to 30 seconds, to 0.75 PNdB per doubling of duration for intervals of 120 to 240 seconds.
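Conclusion (4) implies a logarithmic duration correction. A minimal sketch of that arithmetic, using the per-doubling rates reported above (the function name is illustrative):

```python
import math

def duration_adjustment(d1, d2, pndb_per_doubling):
    """Annoyance change (PNdB) when duration grows from d1 to d2 seconds,
    at a given rate per doubling of duration."""
    return pndb_per_doubling * math.log2(d2 / d1)

# 15 s -> 30 s at the reported 2 PNdB per doubling:
short = duration_adjustment(15, 30, 2.0)
# 120 s -> 240 s at the reported 0.75 PNdB per doubling:
long_ = duration_adjustment(120, 240, 0.75)
```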
An analysis of sound absorbing linings for the interior of the NASA Ames 80 x 120-foot wind tunnel
NASA Technical Reports Server (NTRS)
Wilby, J. F.; White, P. H.
1985-01-01
It is desirable to achieve low frequency sound absorption in the test section of the NASA Ames 80 x 120-ft wind tunnel. However, it is difficult to obtain information regarding the sound absorption characteristics of potential treatments because of the restrictions placed on the dimensions of the test chambers. In the present case, measurements were made in a large enclosure used for aircraft ground run-up tests. The normal impedance of the acoustic treatment was measured using two microphones located close to the surface of the treatment. The data showed reasonably good agreement with analytical methods, which were then used to design treatments for the wind tunnel test section. A sound-absorbing lining is proposed for the 80 x 120-ft wind tunnel.
Absorption of sound by tree bark
G. Reethof; L. D. Frank; O. H. McDaniel
1976-01-01
Laboratory tests were conducted with a standing wave tube to measure the acoustic absorption of normally incident sound by the bark of six species of trees. Twelve bark samples, 10 cm in diameter, were tested. Sound of seven frequencies between 400 and 1600 Hz was used in the measurements. Absorption was generally about 5 percent; it exceeded 10 percent for only three...
Automated lung sound analysis for detecting pulmonary abnormalities.
Datta, Shreyasi; Dutta Choudhury, Anirban; Deshpande, Parijat; Bhattacharya, Sakyajit; Pal, Arpan
2017-07-01
Identification of pulmonary diseases requires accurate auscultation as well as elaborate and expensive pulmonary function tests. Prior art has shown that pulmonary diseases lead to abnormal lung sounds such as wheezes and crackles. This paper introduces novel spectral and spectrogram features, which are further refined by the Maximal Information Coefficient, leading to the classification of healthy and abnormal lung sounds. A balanced lung sound dataset, consisting of publicly available data and data collected with a low-cost in-house digital stethoscope, is used. The performance of the classifier is validated over several randomly selected non-overlapping training and validation samples and tested on separate subjects for two test cases: (a) overlapping and (b) non-overlapping data sources in training and testing. The results reveal that the proposed method sustains an accuracy of 80% even for non-overlapping data sources in training and testing.
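As an illustration of the kind of spectral features such a classifier might use (the paper's exact feature set is not reproduced here), a minimal numpy sketch computing the spectral centroid and 85% roll-off of a sound frame:

```python
import numpy as np

def spectral_features(frame, fs):
    """Two generic spectral features of an audio frame: centroid (Hz) and
    the frequency below which 85% of spectral magnitude lies (roll-off)."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    centroid = np.sum(freqs * mag) / np.sum(mag)
    cumulative = np.cumsum(mag)
    rolloff = freqs[np.searchsorted(cumulative, 0.85 * cumulative[-1])]
    return centroid, rolloff

# Synthetic 200 Hz tone as a stand-in for a lung-sound frame:
fs = 4000
t = np.arange(fs) / fs
frame = np.sin(2 * np.pi * 200 * t)
c, r = spectral_features(frame, fs)   # both near 200 Hz for a pure tone
```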
Laboratory Headphone Studies of Human Response to Low-Amplitude Sonic Booms and Rattle Heard Indoors
NASA Technical Reports Server (NTRS)
Loubeau, Alexandra; Sullivan, Brenda M.; Klos, Jacob; Rathsam, Jonathan; Gavin, Joseph R.
2013-01-01
Human response to sonic booms heard indoors is affected by the generation of contact-induced rattle noise. The annoyance caused by sonic boom-induced rattle noise was studied in a series of psychoacoustic tests. Stimuli were divided into three categories and presented in three different studies: isolated rattles at the same calculated Perceived Level (PL), sonic booms combined with rattles with the mixed sound at a single PL, and sonic booms combined with rattles with the mixed sound at three different PLs. Subjects listened to sounds over headphones and were asked to report their annoyance. Annoyance to different rattles was shown to vary significantly according to rattle object size. In addition, the combination of low-amplitude sonic booms and rattles can be more annoying than the sonic boom alone. Correlation and regression analyses for the combined sonic boom and rattle sounds identified the Moore and Glasberg Stationary Loudness (MGSL) metric as a primary predictor of annoyance for the tested sounds. Multiple linear regression models were developed to describe annoyance to the tested sounds, and simplifications for applicability to a wider range of sounds are presented.
Learning-Related Shifts in Generalization Gradients for Complex Sounds
Wisniewski, Matthew G.; Church, Barbara A.; Mercado, Eduardo
2010-01-01
Learning to discriminate stimuli can alter how one distinguishes related stimuli. For instance, training an individual to differentiate between two stimuli along a single dimension can alter how that individual generalizes learned responses. In this study, we examined the persistence of shifts in generalization gradients after training with sounds. University students were trained to differentiate two sounds that varied along a complex acoustic dimension. Students subsequently were tested on their ability to recognize a sound they experienced during training when it was presented among several novel sounds varying along this same dimension. Peak shift was observed in Experiment 1 when generalization tests immediately followed training, and in Experiment 2 when tests were delayed by 24 hours. These findings further support the universality of generalization processes across species, modalities, and levels of stimulus complexity. They also raise new questions about the mechanisms underlying learning-related shifts in generalization gradients. PMID:19815929
The Effects of Phonetic Similarity and List Length on Children's Sound Categorization Performance.
ERIC Educational Resources Information Center
Snowling, Margaret J.; And Others
1994-01-01
Examined the phonological analysis and verbal working memory components of the sound categorization task and their relationships to reading skill differences. Children were tested on sound categorization by having them identify odd words in sequences. Sound categorization performance was sensitive to individual differences in speech perception…
Designing Trend-Monitoring Sounds for Helicopters: Methodological Issues and an Application
ERIC Educational Resources Information Center
Edworthy, Judy; Hellier, Elizabeth; Aldrich, Kirsteen; Loxley, Sarah
2004-01-01
This article explores methodological issues in sonification and sound design arising from the design of helicopter monitoring sounds. Six monitoring sounds (each with 5 levels) were tested for similarity and meaning with 3 different techniques: hierarchical cluster analysis, linkage analysis, and multidimensional scaling. In Experiment 1,…
40 CFR Appendix I to Subpart B of... - Appendix I to Subpart B of Part 205
Code of Federal Regulations, 2011 CFR
2011-07-01
...: Acceleration Test: Deceleration Test: Acceleration Test Run No. 1 2 3 4 5 dBA Left Right Highest RPM attained in End Zone Calculated Sound Pressure dBA Deceleration Test with Exhaust Brake Applied dBA Left Right Calculated Sound Pressure dBA TEST Personnel: (Name) Recorded By: Date:......... (Signature) Supervisor...
Evaluation of selective attention in patients with misophonia.
Silva, Fúlvia Eduarda da; Sanchez, Tanit Ganz
2018-03-21
Misophonia is characterized by aversion to very selective sounds, which evoke a strong emotional reaction. It has been inferred that misophonia, like tinnitus, is associated with hyperconnectivity between the auditory and limbic systems. Individuals with bothersome tinnitus may have selective attention impairment, but this has not yet been demonstrated in the case of misophonia. To characterize a sample of misophonic subjects and compare it with two control groups regarding selective attention: one of tinnitus individuals (without misophonia) and the other of asymptomatic individuals (without misophonia and without tinnitus). We evaluated 40 normal-hearing participants: 10 with misophonia, 10 with tinnitus (without misophonia), and 20 without tinnitus and without misophonia. To evaluate selective attention, the dichotic sentence identification test was applied in three situations: first, the Brazilian Portuguese test alone; then the same test combined with each of two competing sounds: a chewing sound (representing a sound that commonly triggers misophonia) and white noise (representing a common type of tinnitus that causes discomfort to patients). With the chewing sound, the average of correct responses differed between the misophonia group and the asymptomatic group (p=0.027) and between the misophonia group and the tinnitus group (p=0.002), being lower in the misophonia group in both cases. The dichotic sentence identification test, both alone and with white noise, failed to show differences in the average of correct responses among the three groups (p≥0.452). The misophonia participants presented a lower percentage of correct responses in the dichotic sentence identification test with the chewing sound, suggesting that individuals with misophonia may have selective attention impairment when exposed to sounds that trigger the condition.
Investigation of Liner Characteristics in the NASA Langley Curved Duct Test Rig
NASA Technical Reports Server (NTRS)
Gerhold, Carl H.; Brown, Martha C.; Watson, Willie R.; Jones, Michael G.
2007-01-01
The Curved Duct Test Rig (CDTR), designed to investigate the propagation of sound in a duct with flow, has been developed at NASA Langley Research Center. The duct incorporates an adaptive control system to generate a tone in the duct at a specific frequency with a target Sound Pressure Level and a target mode shape. The size of the duct, the ability to isolate higher order modes, and the ability to modify the duct configuration make this rig unique among experimental duct acoustics facilities. An experiment is described in which the facility's performance is evaluated by measuring the sound attenuation of a sample duct liner, which comprises one wall of the liner test section. Tones from 500 to 2400 Hz can be generated incident on the liner test section, with modes of order 0 to 5 parallel to the liner surface and of order 0 to 2 normal to it. Tests are performed with no axial flow in the duct and with flow at a Mach number of 0.275. The attenuation of the liner is determined by comparing the sound power in a hard wall section downstream of the liner test section to the sound power in a hard wall section upstream of it. These experimentally determined attenuations are compared to attenuations calculated by means of a finite element analysis code. The code incorporates liner impedance values educed from data measured in the NASA Langley Grazing Incidence Tube, a test rig used for investigating liner performance with flow and with the (0,0) mode incident at grazing. The analytical and experimental results compare favorably, indicating the validity of the finite element method and demonstrating that finite element prediction tools can be used together with experiment to characterize liner attenuation.
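The attenuation measurement described above reduces to a decibel ratio of the upstream and downstream sound powers. A minimal sketch with hypothetical power values (the rig's actual measurements are not reproduced here):

```python
import math

def liner_attenuation_db(power_upstream, power_downstream):
    """Liner attenuation in dB from sound power measured in hard-wall
    sections upstream and downstream of the liner test section."""
    return 10.0 * math.log10(power_upstream / power_downstream)

# Hypothetical sound powers (watts) for illustration:
att = liner_attenuation_db(1.0e-3, 1.0e-4)   # a factor-of-10 power drop
```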
Assessment of noise metrics for application to rotorcraft
NASA Astrophysics Data System (ADS)
McMullen, Andrew L.
It is anticipated that the use of rotorcraft passenger vehicles for shorter journeys will increase, because their use can reduce the time between boarding and take-off. The characteristics of rotorcraft noise are very different from those of fixed wing aircraft: there can be strong tonal components, fluctuations that can make the noise sound impulsive, and future rotorcraft may produce proportionally more low frequency noise content. Most metrics used today to predict noise impact on communities around airports (e.g., Ldn) are simply functions of A-weighted sound pressure level. To build a better noise annoyance model that can be applied to assess the impact of future and current rotorcraft, it is important to understand the perceived sound attributes and how they influence annoyance. A series of psychoacoustic tests were designed and performed to further our understanding of how rotorcraft sound characteristics affect annoyance, as well as to evaluate the applicability of existing noise metrics as predictors of annoyance due to rotorcraft noise. The effect of the method used to reproduce sounds in the psychoacoustic tests was also investigated, so tests were conducted both in the NASA Langley Exterior Effects Room, using loudspeaker arrays to simulate flyovers, and in a double-walled sound booth using earphones for playback. A semantic differential test was performed, and analysis of subject responses showed the presence of several independent perceptual factors relating to loudness, sharpness, roughness, tonality, and impulsiveness. A simulation method was developed to alter tonal components in existing rotorcraft flyover recordings to change the impulsiveness and tonality of the sounds. Flyover recordings and simulations with varied attributes were used as stimuli in an annoyance test.
Results showed that EPNL and SELA performed well as predictors of annoyance, but outliers to the general trends have tonal-related characteristics that could be contributing to annoyance. General trends in the results were similar for both test environments, though differences were greater for the annoyance tests than for the semantic differential tests.
49 CFR 325.57 - Location and operation of sound level measurement systems; stationary test.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the vehicle at an angle that is consistent with the recommendation of the system's manufacturer. If... systems; stationary test. 325.57 Section 325.57 Transportation Other Regulations Relating to...; Stationary Test § 325.57 Location and operation of sound level measurement systems; stationary test. (a) The...
Articulation generalization of voiced-voiceless sounds in hearing-impaired children.
McReynolds, L V; Jetzke, E
1986-11-01
Eight hearing-impaired children participated in a study exploring the effect of training (+) or (-) voicing on generalization to cognates. In an experimental multiple baseline study across behaviors, children were trained on pairs of voiced and voiceless target sounds that they had previously omitted in final position. The pairs consisted of the /t/ and /g/ and the /d/ and /k/. When /t/ was trained, generalization was tested to (a) untrained words with the /t/ in the final position and (b) untrained words containing /d/ (the cognate) of the /t/. In like manner, when /d/ was trained, generalization was tested to both the /d/ and /t/ words. The /g/ and /k/ received identical treatment. A contrast procedure was used to teach the children to produce the final consonants. When training criterion was reached, generalization was tested. Results showed that 6 of the 8 children generalized both the voiced and unvoiced target sounds to 50% or more of the target sound probe items. Results also indicated that more generalization occurred to the voiceless cognate from voiced target sound training than occurred to voiced cognates from voiceless target sound training.
Acoustic analysis of swallowing sounds: a new technique for assessing dysphagia.
Santamato, Andrea; Panza, Francesco; Solfrizzi, Vincenzo; Russo, Anna; Frisardi, Vincenza; Megna, Marisa; Ranieri, Maurizio; Fiore, Pietro
2009-07-01
To perform acoustic analysis of swallowing sounds, using a microphone and a notebook computer system, in healthy subjects and in patients with dysphagia caused by neurological diseases, testing the positive/negative predictive value of a pathological pattern of swallowing sounds for penetration/aspiration. Diagnostic test study, prospective, not blinded, with penetration/aspiration evaluated by fibreoptic endoscopy of swallowing as the criterion standard. Data from a previously recorded database of normal swallowing sounds for 60 healthy subjects, stratified by gender, age, and bolus consistency, were compared with those of 15 patients with dysphagia from a university hospital referral centre who were affected by various neurological diseases. The mean durations of the swallowing sounds and of post-swallowing apnoea were recorded. Penetration/aspiration was verified by fibreoptic endoscopy of swallowing in all patients with dysphagia. The mean duration of swallowing sounds for a liquid bolus of 10 ml of water was significantly different between patients with dysphagia and healthy subjects. We also described patterns of swallowing sounds and tested the negative/positive predictive value of post-swallowing apnoea for penetration/aspiration verified by fibreoptic endoscopy of swallowing (sensitivity 0.67, 95% confidence interval 0.24-0.94; specificity 1.00, 95% confidence interval 0.56-1.00). The proposed technique for recording and measuring swallowing sounds could be incorporated into the bedside evaluation, but it should not replace more valuable diagnostic measures.
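The reported sensitivity and specificity follow from a standard 2x2 diagnostic table. A minimal sketch with hypothetical counts chosen to match the point estimates (the paper's raw table is not given, so these counts are illustrative only):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity of a diagnostic test from its 2x2 table:
    tp/fn = diseased correctly/incorrectly classified,
    tn/fp = healthy correctly/incorrectly classified."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts consistent with sensitivity ~0.67 and specificity 1.00:
se, sp = sens_spec(tp=6, fn=3, tn=6, fp=0)
```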
Evaluating Warning Sound Urgency with Reaction Times
ERIC Educational Resources Information Center
Suied, Clara; Susini, Patrick; McAdams, Stephen
2008-01-01
It is well-established that subjective judgments of perceived urgency of alarm sounds can be affected by acoustic parameters. In this study, the authors investigated an objective measurement, the reaction time (RT), to test the effectiveness of temporal parameters of sounds in the context of warning sounds. Three experiments were performed using a…
Jiang, Tinglei; Long, Zhenyu; Ran, Xin; Zhao, Xue; Xu, Fei; Qiu, Fuyuan; Kanwal, Jagmeet S.
2016-01-01
Bats vocalize extensively within different social contexts. The type and extent of information conveyed via their vocalizations, and their perceptual significance, however, remain controversial and difficult to assess. Greater tube-nosed bats, Murina leucogaster, emit calls consisting of long rectangular broadband noise burst (rBNBl) syllables during aggression between males. To experimentally test the behavioral impact of these sounds on feeding, we deployed an approach and place-preference paradigm. Two food trays were placed on opposite sides of a specially constructed tent, within different acoustic microenvironments created by sound playback. Specifically, we tested whether the presence of rBNBl sounds at a food source effectively deters the approach of male bats in comparison to echolocation sounds and white noise. In each case, contrary to our expectation, males preferred to feed at the location where rBNBl sounds were present. We propose that the species-specific rBNBl provides contextual information, not present in non-communicative sounds, that facilitates approach towards a food source. PMID:27815241
Sound Fields in Complex Listening Environments
2011-01-01
The conditions of sound fields used in research, especially in the testing and fitting of hearing aids, are usually simplified or reduced to fundamental physical fields, such as the free or the diffuse sound field. The concepts of such ideal conditions are easily introduced in theoretical and experimental investigations and in models for directional microphones, for example. When it comes to real-world application of hearing aids, however, the field conditions are more complex with regard to specific stationary and transient properties in room transfer functions and the corresponding impulse responses and binaural parameters. Sound fields can be categorized into outdoor (rural and urban) and indoor environments. Furthermore, sound fields in closed spaces of various sizes and shapes and in situations of transport in vehicles, trains, and aircraft are compared with regard to the binaural signals. In laboratory tests, sources of uncertainty include individual differences in binaural cues and insufficiently controlled sound field conditions. Furthermore, laboratory sound fields do not cover the variety of complex sound environments. Spatial audio formats such as higher-order ambisonics are candidates for sound field references not only in room acoustics and audio engineering but also in audiology. PMID:21676999
Effects of HearFones on speaking and singing voice quality.
Laukkanen, Anne-Maria; Mickelson, Nils Peter; Laitala, Marja; Syrjä, Tiina; Salo, Arla; Sihvo, Marketta
2004-12-01
HearFones (HF) have been designed to enhance auditory feedback during phonation. This study investigated the effects of HF (1) on sound perceivable by the subject, (2) on voice quality in reading and singing, and (3) on voice production in speech and singing at the same pitch and sound level. Test 1: Text reading was recorded with two identical microphones in the ears of a subject. One ear was covered with HF, and the other was free. Four subjects participated in this test. Tests 2 and 3: A reading sample was recorded from 13 subjects and a song from 12 subjects without and with HF on. Test 4: Six females repeated [pa:p:a] in speaking and singing modes without and with HF at the same pitch and sound level. Long-term average spectra were computed (Tests 1-3), and formant frequencies, fundamental frequency, and sound level were measured (Tests 2 and 3). Subglottic pressure was estimated from oral pressure in [p], and electroglottography (EGG) was simultaneously registered during voicing on [a:] (Test 4). Voice quality in speech and singing was evaluated by three professional voice trainers (Tests 2-4). HF seemed to enhance the sound perceivable across the whole range studied (0-8 kHz), with the greatest enhancement (up to ca 25 dB) at 1-3 kHz and at 4-7 kHz. The subjects tended to decrease loudness with HF (when sound level was not being monitored). In more than half of the cases, voice quality was evaluated as "less strained" and "better controlled" with HF. When pitch and loudness were constant, no clear differences were heard, but the closed quotient of the EGG signal was higher and the signal more skewed, suggesting better glottal closure and/or diminished activity of the thyroarytenoid muscle.
Gruen, Margaret E; Case, Beth C; Foster, Melanie L; Lazarowski, Lucia; Fish, Richard E; Landsberg, Gary; DePuy, Venita; Dorman, David C; Sherman, Barbara L
2015-01-01
Previous studies have shown that the playing of thunderstorm recordings during an open-field task elicits fearful or anxious responses in adult beagles. The goal of our study was to apply this open field test to assess sound-induced behaviors in Labrador retrievers drawn from a pool of candidate improvised explosive device (IED)-detection dogs. Being robust to fear-inducing sounds and recovering quickly is a critical requirement of these military working dogs. This study presented male and female dogs with 3 minutes of either ambient noise (Days 1, 3 and 5), recorded thunderstorm (Day 2), or gunfire (Day 4) sounds in an open field arena. Behavioral and physiological responses were assessed and compared to control (ambient noise) periods. An observer blinded to sound treatment analyzed video records of the 9-minute daily test sessions. Additional assessments included measurement of distance traveled (activity), heart rate, body temperature, and salivary cortisol concentrations. Overall, there was a decline in distance traveled and heart rate within each day and over the five-day test period, suggesting that dogs habituated to the open field arena. Behavioral postures and expressions were assessed using a standardized rubric to score behaviors linked to canine fear and anxiety. These fear/anxiety scores were used to evaluate changes in behaviors following exposure to a sound stressor. Compared to control periods, there was an overall increase in fear/anxiety scores during thunderstorm and gunfire sound stimuli treatment periods. Fear/anxiety scores were correlated with distance traveled and heart rate. Fear/anxiety scores in response to thunderstorm and gunfire were correlated. Dogs showed higher fear/anxiety scores during periods after the sound stimuli compared to control periods. In general, candidate IED-detection Labrador retrievers responded to sound stimuli and recovered quickly, although dogs stratified in their response to sound stimuli.
Some dogs were robust to fear/anxiety responses. The results suggest that the open field sound test may be a useful method to evaluate the suitability of dogs for IED-detection training. PMID:26273235
Psychometric Characteristics of Single-Word Tests of Children's Speech Sound Production
ERIC Educational Resources Information Center
Flipsen, Peter, Jr.; Ogiela, Diane A.
2015-01-01
Purpose: Our understanding of test construction has improved since the now-classic review by McCauley and Swisher (1984) . The current review article examines the psychometric characteristics of current single-word tests of speech sound production in an attempt to determine whether our tests have improved since then. It also provides a resource…
NASA Technical Reports Server (NTRS)
Heffner, R. J.
1998-01-01
This is the Engineering Test Report, AMSU-A1 METSAT Instrument (S/N 105) Qualification Level Vibration Tests of December 1998 (S/O 605445, OC-419), for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A).
NASA Technical Reports Server (NTRS)
Valdez, A.
2000-01-01
This is the Engineering Test Report, Radiated Emissions and SARR, SARP, DCS Receivers, Link Frequencies EMI Sensitive Band Test Results, AMSU-A1, S/N 109, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A).
NASA Technical Reports Server (NTRS)
Platt, R.
1999-01-01
This is the Performance Verification Report, Initial Comprehensive Performance Test Report, P/N 1331200-2-IT, S/N 105/A2, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A). The specification establishes the requirements for the Comprehensive Performance Test (CPT) and Limited Performance Test (LPT) of the Advanced Microwave Sounding Unit-A2 (AMSU-A2), referred to herein as the unit. The unit is defined on Drawing 1331200. 1.2 Test procedure sequence. The sequence in which the several phases of this test procedure take place is shown in Figure 1, although the phases may be performed in any order.
Sound preference test in animal models of addicts and phobias.
Soga, Ryo; Shiramatsu, Tomoyo I; Kanzaki, Ryohei; Takahashi, Hirokazu
2016-08-01
Biased or excessively strong preference for a particular object is often problematic, resulting in addiction or phobia. In animal models, alternative forced-choice tasks have been routinely used, but such preference tests are far from the everyday situations that addicts or phobics face. In the present study, we developed a behavioral assay to evaluate sound preference in rodents. In the assay, several sounds were presented according to the position of free-moving rats, and sound preference was quantified from their behavior. A particular tone was paired with microstimulation of the ventral tegmental area (VTA), which plays a central role in reward processing, to increase sound preference. The behavior of the rats was logged during classical conditioning over six days. Several behavioral indices suggest that the rats searched for the conditioned sound. Our data thus demonstrate that quantitative evaluation of preference with this behavioral assay is feasible.
[Synchronous playing and acquiring of heart sounds and electrocardiogram based on labVIEW].
Dan, Chunmei; He, Wei; Zhou, Jing; Que, Xiaosheng
2008-12-01
This paper describes a comprehensive system that acquires heart sounds and the electrocardiogram (ECG) in parallel and synchronizes their display and playback, so that auscultation and phonocardiogram review can be tied together. The hardware system, with a C8051F340 as its core, acquires the heart sounds and ECG synchronously and then sends them to their respective indicators. Heart sounds are displayed and played simultaneously by controlling the moments of writing to the indicator and to the sound output device. In clinical testing, heart sounds were successfully located against the ECG and played in real time.
NASA Technical Reports Server (NTRS)
Powell, Clemans A.; Sullivan, Brenda M.
2004-01-01
Two experiments were conducted, using sound quality engineering practices, to determine the subjective effectiveness of hypothetical active noise control systems in a range of propeller aircraft. The two tests differed by the type of judgments made by the subjects: pair comparisons in the first test and numerical category scaling in the second. Although the results of the two tests were in general agreement that the hypothetical active control measures improved the interior noise environments, the pair comparison method appears to be more sensitive to subtle changes in the characteristics of the sounds which are related to passenger preference.
49 CFR 325.71 - Scope of the rules in this subpart.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the sound level generated by a motor vehicle, as displayed on a sound level measurement system, during the measurement of the motor vehicle's sound level emissions at a test site which is not a standard site. (b) The purpose of adding or subtracting a correction factor is to equate the sound level reading...
49 CFR 325.71 - Scope of the rules in this subpart.
Code of Federal Regulations, 2011 CFR
2011-10-01
... the sound level generated by a motor vehicle, as displayed on a sound level measurement system, during the measurement of the motor vehicle's sound level emissions at a test site which is not a standard site. (b) The purpose of adding or subtracting a correction factor is to equate the sound level reading...
Sound absorption study on acoustic panel from kapok fiber and egg tray
NASA Astrophysics Data System (ADS)
Kaamin, Masiri; Mahir, Nurul Syazwani Mohd; Kadir, Aslila Abd; Hamid, Nor Baizura; Mokhtar, Mardiha; Ngadiman, Norhayati
2017-12-01
Noise is a sound, especially one that is loud or unpleasant or that causes disruption. The level of noise can be reduced by using a sound absorption panel. Panels currently on the market use synthetic fibers that can be harmful to the health of consumers. Awareness of natural fibers from natural materials has drawn the attention of some parties to their use as sound-absorbing materials. Therefore, this study was conducted to investigate the potential of a sound absorption panel made from egg trays and kapok fibers. The test involved in this study was the impedance tube test, which yields the sound absorption coefficient (SAC). The results showed good sound absorption at low frequencies from 0 Hz up to 900 Hz, where the maximum absorption coefficient was 0.950, while the maximum absorption at high frequencies was 0.799. The noise reduction coefficient (NRC) of 0.57 indicates that the material is highly absorbent. In addition, a reverberation room test was carried out to obtain the reverberation time (RT) in seconds. Overall, the panel showed good results at low frequencies between 0 Hz and 1500 Hz. In that frequency range, the maximum reverberation time with the panel was 3.784 seconds, compared to a maximum reverberation time of 5.798 seconds for the empty room. This study indicates that kapok fiber and egg tray, as absorption panel materials, have potential as environmentally friendly and cheap products for absorbing sound at low frequencies.
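For context on the NRC figure above: by convention (ASTM C423), the noise reduction coefficient is the mean of the absorption coefficients at 250, 500, 1000 and 2000 Hz, rounded to the nearest 0.05. A minimal sketch, using made-up coefficients rather than the panel's measured values:

```python
def nrc(sac_250, sac_500, sac_1000, sac_2000):
    """Noise reduction coefficient: mean of the four mid-band sound
    absorption coefficients, rounded to the nearest 0.05 (ASTM C423)."""
    mean = (sac_250 + sac_500 + sac_1000 + sac_2000) / 4.0
    return round(mean / 0.05) * 0.05

print(nrc(0.45, 0.55, 0.60, 0.68))  # illustrative coefficients only
```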
Kawashima, Takayuki; Sato, Takao
2012-01-01
When a second sound follows a long first sound, its location appears to be perceived away from the first one (the localization/lateralization aftereffect). This aftereffect has often been considered to reflect an efficient neural coding of sound locations in the auditory system. To understand determinants of the localization aftereffect, the current study examined whether it is induced by an interaural temporal difference (ITD) in the amplitude envelope of high frequency transposed tones (over 2 kHz), which is known to function as a sound localization cue. In Experiment 1, participants were required to adjust the position of a pointer to the perceived location of test stimuli before and after adaptation. Test and adapter stimuli were amplitude modulated (AM) sounds presented at high frequencies and their positional differences were manipulated solely by the envelope ITD. Results showed that the adapter's ITD systematically affected the perceived position of test sounds to the directions expected from the localization/lateralization aftereffect when the adapter was presented at ±600 µs ITD; a corresponding significant effect was not observed for a 0 µs ITD adapter. In Experiment 2, the observed adapter effect was confirmed using a forced-choice task. It was also found that adaptation to the AM sounds at high frequencies did not significantly change the perceived position of pure-tone test stimuli in the low frequency region (128 and 256 Hz). The findings in the current study indicate that ITD in the envelope at high frequencies induces the localization aftereffect. This suggests that ITD in the high frequency region is involved in adaptive plasticity of auditory localization processing.
NASA Astrophysics Data System (ADS)
Carr, Daniel; Davies, Patricia
2015-10-01
Aircraft manufacturers are interested in designing and building a new generation of supersonic aircraft that produce shaped sonic booms of lower peak amplitude than the booms created by current supersonic aircraft. To determine if the noise exposure from these "low" booms is more acceptable to communities, new laboratory testing to evaluate people's responses must occur. To guide supersonic aircraft design, objective measures that predict human response to modified sonic boom waveforms and other impulsive sounds are needed. The present research phase is focused on understanding people's reactions to booms when heard indoors, and therefore includes consideration of the effects of house type and the indoor acoustic environment. A test was conducted in NASA Langley's Interior Effects Room (IER), with the collaboration of NASA Langley engineers. This test focused on the effects of low-frequency content and of vibration, and subjects sat in a small living room environment. A second test was conducted in a sound booth at Purdue University, using similar sounds played back over earphones. The sounds in this test contained less very-low-frequency energy due to limitations of the playback, and the laboratory setting is a less natural environment. For the purpose of comparison, and to improve the robustness of the model, both sonic booms and other more familiar transient sounds were used in the tests. The design of the tests and the signals are briefly described, and the results of both tests will be presented.
NASA Technical Reports Server (NTRS)
Akers, James C.; Cooper, Beth A.
2004-01-01
NASA Glenn Research Center's Acoustical Testing Laboratory (ATL) provides a comprehensive array of acoustical testing services, including sound pressure level, sound intensity level, and sound-power-level testing per International Organization for Standardization (ISO) 3744. Since its establishment in September 2000, the ATL has provided acoustic emission testing and noise control services for a variety of customers, particularly microgravity space flight hardware that must meet International Space Station acoustic emission requirements. The ATL consists of a 23- by 27- by 20-ft (height) convertible hemi/anechoic test chamber and a separate sound-attenuating test support enclosure. The ATL employs a personal-computer-based data acquisition system that provides up to 26 channels of simultaneous data acquisition with real-time analysis (ref. 4). Specialized diagnostic tools, including a scanning sound-intensity system, allow the ATL's technical staff to support its clients' aggressive low-noise design efforts to meet the space station's acoustic emission requirement. From its inception, the ATL has pursued the goal of developing a comprehensive ISO 17025-compliant quality program that would incorporate Glenn's existing ISO 9000 quality system policies as well as ATL-specific technical policies and procedures. In March 2003, the ATL quality program was awarded accreditation by the National Voluntary Laboratory Accreditation Program (NVLAP) for sound-power-level testing in accordance with ISO 3744. The NVLAP program is administered by the National Institute of Standards and Technology (NIST) of the U.S. Department of Commerce and provides third-party accreditation for testing and calibration laboratories. There are currently 24 NVLAP-accredited acoustical testing laboratories in the United States.
NVLAP accreditation covering one or more specific testing procedures conducted in accordance with established test standards is awarded upon successful completion of an intensive onsite assessment that includes proficiency testing and documentation review. The ATL NVLAP accreditation currently applies specifically to its ISO 3744 sound-power-level determination procedure (see the photograph) and supporting ISO 17025 quality system, although all ATL operations are conducted in accordance with its quality system. The ATL staff is currently developing additional procedures to adapt this quality system to the testing of space flight hardware in accordance with International Space Station acoustic emission requirements.
NASA Technical Reports Server (NTRS)
Heffner, R.
2000-01-01
This is the Engineering Test Report, AMSU-A2 METSAT Instrument (S/N 108) Acceptance Level Vibration Test of Dec 1999/Jan 2000 (S/O 784077, OC-454), for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A).
NASA Technical Reports Server (NTRS)
Valdez, A.
2000-01-01
This is the Engineering Test Report, Radiated Emissions and SARR, SARP, DCS Receivers, Link Frequencies EMI Sensitive Band Test Results, AMSU-A2, S/N 108, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A).
JPL-20140817-LDSDf-0001-Flying Saucer Test Flight
2014-08-17
Ian Clark, Low Density Supersonic Decelerator (LDSD) Principal Investigator, narrates balloon launch, rocket firing and parachute testing on June 28, 2014. The LDSD is a concept for slowing a spacecraft entering Mars' atmosphere at supersonic speeds. For this test, the goal was to slow the test vehicle from four times the speed of sound to 2.5 times the speed of sound.
Four odontocete species change hearing levels when warned of impending loud sound.
Nachtigall, Paul E; Supin, Alexander Ya; Pacini, Aude F; Kastelein, Ronald A
2018-03-01
Hearing sensitivity change was investigated when a warning sound preceded a loud sound in the false killer whale (Pseudorca crassidens), the bottlenose dolphin (Tursiops truncatus), the beluga whale (Delphinapterus leucas) and the harbor porpoise (Phocoena phocoena). Hearing sensitivity was measured using pip-train test stimuli and auditory evoked potential recording. When the test/warning stimuli preceded a loud sound, hearing thresholds before the loud sound increased relative to the baseline by 13 to 17 dB. Experiments with multiple frequencies of exposure and shift provided evidence of different amounts of hearing change depending on frequency, indicating that the hearing sensation level changes were not likely due to a simple stapedial reflex. © 2017 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.
An integrated system for dynamic control of auditory perspective in a multichannel sound field
NASA Astrophysics Data System (ADS)
Corey, Jason Andrew
An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. 
All of the parameters are grouped and controlled together to create a perceptually strong impression of source location and movement within a simulated space.
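The azimuth control described above is typically built from pairwise panning between adjacent loudspeakers of the 3/2 layout. The snippet below is a generic constant-power pan law, not the proposed system's actual control functions:

```python
import math

def constant_power_pan(position):
    """Gains for a source placed between two adjacent loudspeakers;
    position runs from 0 (fully in the left speaker) to 1 (fully right)."""
    theta = position * math.pi / 2.0
    return math.cos(theta), math.sin(theta)  # (left gain, right gain)

g_left, g_right = constant_power_pan(0.5)
# at the midpoint the gains are equal and g_left**2 + g_right**2 == 1,
# so the radiated power stays constant as the source moves
```

Constant power (rather than constant amplitude) is the usual choice because the two speaker signals are largely uncorrelated at the listener, so powers, not amplitudes, add.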
Stimulus Characteristics of Single-Word Tests of Children's Speech Sound Production
ERIC Educational Resources Information Center
Macrae, Toby
2017-01-01
Purpose: This clinical focus article provides readers with a description of the stimulus characteristics of 12 popular tests of speech sound production. Method: Using significance testing and descriptive analyses, stimulus items were compared in terms of the number of opportunities for production of all consonant singletons, clusters, and rhotic…
NASA Technical Reports Server (NTRS)
Beckwith, I. E.; Spokowski, A. J.; Harvey, W. D.; Stainback, P. C.
1975-01-01
The basic theory and sound attenuation mechanisms, the design procedures, and preliminary experimental results are presented for a small axisymmetric sound shield for supersonic wind tunnels. The shield consists of an array of small-diameter rods aligned nearly parallel to the entrance flow, with small gaps between the rods for boundary layer suction. Results show that at the lowest test Reynolds number (based on rod diameter) of 52,000, the noise shield reduced the test section noise by about 60 percent (or 8 dB attenuation), but no attenuation was measured for the higher range of test Reynolds numbers from 73,000 to 190,000. These results are below expectations based on data reported elsewhere for a flat sound shield model. The smaller attenuation in the present tests is attributed to insufficient suction at the gaps to prevent feedback of vacuum manifold noise into the shielded test flow and to insufficient suction to prevent transition of the rod boundary layers to turbulent flow at the higher Reynolds numbers. Schlieren photographs of the flow are shown.
The Incongruency Advantage for Environmental Sounds Presented in Natural Auditory Scenes
Gygi, Brian; Shafiro, Valeriy
2011-01-01
The effect of context on the identification of common environmental sounds (e.g., dogs barking or cars honking) was tested by embedding them in familiar auditory background scenes (street ambience, restaurants). Initial results with subjects trained on both the scenes and the sounds to be identified showed a significant advantage of about 5 percentage points better accuracy for sounds that were contextually incongruous with the background scene (e.g., a rooster crowing in a hospital). Further studies with naïve (untrained) listeners showed that this Incongruency Advantage (IA) is level-dependent: there is no advantage for incongruent sounds lower than a Sound/Scene ratio (So/Sc) of −7.5 dB, but there is about 5 percentage points better accuracy for sounds with greater So/Sc. Testing a new group of trained listeners on a larger corpus of sounds and scenes showed that the effect is robust and not confined to specific stimulus set. Modeling using spectral-temporal measures showed that neither analyses based on acoustic features, nor semantic assessments of sound-scene congruency can account for this difference, indicating the Incongruency Advantage is a complex effect, possibly arising from the sensitivity of the auditory system to new and unexpected events, under particular listening conditions. PMID:21355664
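The Sound/Scene ratio (So/Sc) that gates the Incongruency Advantage above is a level difference in dB between the target sound and the background scene. A minimal RMS-based sketch (the study's exact level computation is not specified here, so this is one plausible formulation):

```python
import math

def sound_scene_ratio_db(sound, scene):
    """So/Sc in dB computed from two sequences of audio samples."""
    rms = lambda x: math.sqrt(sum(s * s for s in x) / len(x))
    return 20.0 * math.log10(rms(sound) / rms(scene))

# a target at half the scene's RMS amplitude sits at about -6 dB,
# i.e. above the -7.5 dB threshold reported above
ratio = sound_scene_ratio_db([0.5] * 100, [1.0] * 100)
```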
The acoustic performance of double-skin facades: A design support tool for architects
NASA Astrophysics Data System (ADS)
Batungbakal, Aireen
This study assesses and validates the influence of measuring sound in the urban environment and the role of glass facade components in reducing sound transmission to the indoor environment. Among the most commonly reported issues affecting workspaces, increased awareness of the need to minimize noise has led building designers to reconsider the design of building envelopes and their site environment. Outdoor sound conditions, such as traffic noise, challenge designers to accurately estimate the capability of glass facades to achieve appropriate indoor sound quality. Reflecting the density of the urban environment, field tests acquired existing sound levels in areas of high commercial development, employment, and traffic activity, establishing a baseline for sound levels common in urban work areas. Drawn from the direct sound transmission loss of glass facades simulated through INSUL, sound insulation software, the data are used as an informative tool correlating the response of glass facade components to the existing outdoor sound levels of a project site in order to achieve desired indoor sound levels. This study goes on to bridge the disconnect in validating the acoustic performance of glass facades early in a project's design, from conditioned settings such as field testing and simulations through project completion. Results obtained from the study's facade simulations and facade comparison support the view that acoustic comfort is not limited to a singular solution, but admits multiple design options responsive to the environment.
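As a point of reference for the simulated transmission losses, a first-order estimate for a single glass pane is the field-incidence mass law, TL ≈ 20·log10(m·f) − 47 dB, with surface mass m in kg/m² and frequency f in Hz; tools like INSUL refine this with coincidence-dip and edge corrections. A sketch under that assumption (the constant and the glass density are textbook values, not figures from this study):

```python
import math

def mass_law_tl(freq_hz, thickness_m, density=2500.0):
    """Field-incidence mass-law transmission loss (dB) for a single pane.

    density defaults to a typical value for glass in kg/m^3.
    """
    surface_mass = density * thickness_m  # kg/m^2
    return 20.0 * math.log10(surface_mass * freq_hz) - 47.0

print(round(mass_law_tl(1000.0, 0.006), 1))  # 6 mm glass at 1 kHz -> 36.5
```

The estimate ignores the coincidence dip, where real glazing performs markedly worse than the mass law predicts, which is one reason simulation tools are used instead.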
NASA Astrophysics Data System (ADS)
Su, Guoshao; Shi, Yanjiong; Feng, Xiating; Jiang, Jianqing; Zhang, Jie; Jiang, Quan
2018-02-01
Rockbursts are markedly characterized by the ejection of rock fragments from host rocks at certain speeds. The rockburst process is always accompanied by acoustic signals that include acoustic emissions (AE) and sounds. A deep insight into the evolutionary features of AE and sound signals is important to improve the accuracy of rockburst prediction. To investigate the evolutionary features of AE and sound signals, rockburst tests on granite rock specimens under true-triaxial loading conditions were performed using an improved rockburst testing system, and the AE and sounds during rockburst development were recorded and analyzed. The results show that the evolutionary features of the AE and sound signals were obvious and similar. On the eve of a rockburst, a 'quiescent period' could be observed in both the evolutionary process of the AE hits and the sound waveform. Furthermore, the time-dependent fractal dimensions of the AE hits and sound amplitude both showed a tendency to continuously decrease on the eve of the rockbursts. In addition, on the eve of the rockbursts, the main frequency of the AE and sound signals both showed decreasing trends, and the frequency spectrum distributions were both characterized by low amplitudes, wide frequency bands and multiple peak shapes. Thus, the evolutionary features of sound signals on the eve of rockbursts, as well as those of AE signals, can be used as beneficial information for rockburst prediction.
Twelve-Month-Olds Privilege Words over Other Linguistic Sounds in an Associative Learning Task
ERIC Educational Resources Information Center
MacKenzie, Heather; Graham, Susan A.; Curtin, Suzanne
2011-01-01
We examined whether 12-month-old infants privilege words over other linguistic stimuli in an associative learning task. Sixty-four infants were presented with sets of either word-object, communicative sound-object, or consonantal sound-object pairings until they habituated. They were then tested on a "switch" in the sound to determine whether they…
NASA Technical Reports Server (NTRS)
Soderman, Paul T.; Jaeger, Stephen M.; Hayes, Julie A.; Allen, Christopher S.
2002-01-01
A recessed, 42-inch-deep acoustic lining has been designed and installed in the 40- by 80-Foot Wind Tunnel (40x80) test section to greatly improve the acoustic quality of the facility. This report describes the test section acoustic performance as determined by a detailed static calibration; all data were acquired without wind. Global measurements of sound decay from steady noise sources showed that the facility is suitable for acoustic studies of jet noise or similar randomly generated sound. The wall sound absorption, the size of the facility, and the averaging effects of wide-band random noise all tend to minimize interference effects from wall reflections. The decay of white noise with distance was close to free field above 250 Hz. However, tonal sound data from propellers and fans, for example, will have an error band, described in the report, caused by the sensitivity of tones to even weak interference. That error band could be minimized by the use of directional instruments such as phased microphone arrays. Above 10 kHz, air absorption began to dominate the sound field in the large test section, reflections became weaker, and the test section tended toward an anechoic environment as frequency increased.
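The free-field decay described above can be checked against the textbook spherical-spreading law plus a linear air-absorption term; the reference level, distances, and absorption coefficient in this sketch are illustrative, not facility data:

```python
import math

def spl_at_distance(lp_ref_db, r_ref_m, r_m, alpha_db_per_m=0.0):
    """Free-field level at distance r: 6 dB per doubling from spherical
    spreading, plus a linear air-absorption term (alpha in dB/m)."""
    return lp_ref_db - 20 * math.log10(r_m / r_ref_m) - alpha_db_per_m * (r_m - r_ref_m)

# Doubling the distance costs 6 dB from spreading alone; at high frequencies
# air absorption adds to the decay, pushing the room toward anechoic behavior.
print(round(spl_at_distance(94.0, 1.0, 2.0), 1))                       # 88.0
print(round(spl_at_distance(94.0, 1.0, 2.0, alpha_db_per_m=0.2), 1))   # 87.8
```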
High Definition Sounding System Test and Integration with NASA Atmospheric Science Program Aircraft
2013-09-30
of the High Definition Sounding System (HDSS) on NASA high-altitude Airborne Science Program platforms, specifically the NASA P-3 and NASA WB-57. When…demonstrate the system reliability in a Global Hawk's 62,000-ft altitude regime of thin air and very cold temperatures. APPROACH: Mission Profile: One or more WB-57 test flights will prove airworthiness and verify the High Definition Sounding System (HDSS) is safe and functional at high altitudes, essentially
Jones, Heath G; Kan, Alan; Litovsky, Ruth Y
2016-01-01
This study examined the effect of microphone placement on the interaural level differences (ILDs) available to bilateral cochlear implant (BiCI) users, and the subsequent effects on horizontal-plane sound localization. Virtual acoustic stimuli for sound localization testing were created individually for eight BiCI users by making acoustic transfer function measurements for microphones placed in the ear (ITE), behind the ear (BTE), and on the shoulders (SHD). The ILDs across source locations were calculated for each placement to analyze their effect on sound localization performance. Sound localization was tested using a repeated-measures, within-participant design for the three microphone placements. The ITE microphone placement provided significantly larger ILDs compared to BTE and SHD placements, which correlated with overall localization errors. However, differences in localization errors across the microphone conditions were small. The BTE microphones worn by many BiCI users in everyday life do not capture the full range of acoustic ILDs available, and also reduce the change in cue magnitudes for sound sources across the horizontal plane. Acute testing with an ITE placement reduced sound localization errors along the horizontal plane compared to the other placements in some patients. Larger improvements may be observed if patients had more experience with the new ILD cues provided by an ITE placement.
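The ILD computation described above reduces, per frequency band or per source position, to a level ratio between the two ear signals; a minimal sketch with hypothetical ear-signal levels (not the study's measured transfer functions):

```python
import math

def ild_db(left_rms, right_rms):
    """Interaural level difference from RMS signal levels at the two ears.
    Positive values favor the left ear."""
    return 20 * math.log10(left_rms / right_rms)

# Hypothetical head-shadow example: a source to the left yields a stronger
# left-ear signal. An ITE-style placement preserves more of this shadow
# (larger |ILD|) than a BTE or shoulder placement would.
print(round(ild_db(0.5, 0.25), 1))   # 6.0 dB, left ear favored
print(round(ild_db(0.25, 0.5), 1))   # -6.0 dB, right ear favored
```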
Sound absorption of low-temperature reusable surface insulation candidate materials
NASA Technical Reports Server (NTRS)
Johnston, J. D.
1974-01-01
Sound absorption data from tests of four candidate low-temperature reusable surface insulation materials are presented. Limitations on the use of the data are discussed, conclusions concerning the effective absorption of the materials are drawn, and the relative significance to Vibration and Acoustic Test Facility test planning of the absorption of each material is assessed.
Autistic traits and attention to speech: Evidence from typically developing individuals.
Korhonen, Vesa; Werner, Stefan
2017-04-01
Individuals with autism spectrum disorder have a preference for attending to non-speech stimuli over speech stimuli. We are interested in whether non-speech preference is only a feature of diagnosed individuals, and whether we can test implicit preference experimentally. In typically developing individuals, serial recall is disrupted more by speech stimuli than by non-speech stimuli. Since the behaviour of individuals with autistic traits resembles that of individuals with autism, we used serial recall to test whether autistic traits influence task performance during irrelevant speech sounds. The errors made on the serial recall task during speech or non-speech sounds were counted as a measure of speech or non-speech preference relative to a no-sound condition. We replicated the serial order effect and found speech to be more disruptive than non-speech sounds, but were unable to find any association between autism quotient scores and the non-speech sounds. Our results may indicate a learnt behavioural response to speech sounds.
Effects of sound level fluctuations on annoyance caused by aircraft-flyover noise
NASA Technical Reports Server (NTRS)
Mccurdy, D. A.
1979-01-01
A laboratory experiment was conducted to determine the effects of variations in the rate and magnitude of sound level fluctuations on the annoyance caused by aircraft-flyover noise. The effects of tonal content, noise duration, and sound pressure level on annoyance were also studied. An aircraft-noise synthesis system was used to synthesize 32 aircraft-flyover noise stimuli representing the factorial combinations of 2 tone conditions, 2 noise durations, 2 sound pressure levels, 2 level fluctuation rates, and 2 level fluctuation magnitudes. Thirty-two test subjects made annoyance judgements on a total of 64 stimuli in a subjective listening test facility simulating an outdoor acoustic environment. Variations in the rate and magnitude of level fluctuations were found to have little, if any, effect on annoyance. Tonal content, noise duration, sound pressure level, and the interaction of tonal content with sound pressure level were found to affect the judged annoyance significantly. The addition of tone corrections and/or duration corrections significantly improved the annoyance prediction ability of noise rating scales.
NASA Astrophysics Data System (ADS)
Sak, Mark; Duric, Neb; Littrup, Peter; Sherman, Mark; Gierach, Gretchen
2017-03-01
Ultrasound tomography (UST) is an emerging modality that can offer quantitative measurements of breast density. Recent breakthroughs in UST image reconstruction involve the use of a waveform reconstruction as opposed to a ray-based reconstruction. The sound speed (SS) images created using the waveform reconstruction have much higher image quality, offering improved resolution and contrast between regions of dense and fatty tissue. As part of a study designed to assess breast density changes using UST sound speed imaging among women undergoing tamoxifen therapy, waveform sound speed images were reconstructed for a subset of participants. These initial results show that changes to the parenchymal tissue can be visualized more clearly in the waveform sound speed images. Additional quantitative testing of the waveform images was also started to test the hypothesis that waveform sound speed images are a more robust measure of breast density than ray-based reconstructions. Further analysis is still needed to better understand how tamoxifen affects breast tissue.
Castro-Camacho, Wendy; Peñaloza-López, Yolanda; Pérez-Ruiz, Santiago J; García-Pedroza, Felipe; Padilla-Ortiz, Ana L; Poblano, Adrián; Villarruel-Rivas, Concepción; Romero-Díaz, Alfredo; Careaga-Olvera, Aidé
2015-04-01
To compare whether localization of sounds and word discrimination in a reverberant environment differ between children with dyslexia and controls, we studied 30 children with dyslexia and 30 controls. Sound and word localization and discrimination were studied at five angles from the left to the right auditory field (-90°, -45°, 0°, +45°, +90°), under reverberant and non-reverberant conditions; correct answers were compared. Spatial localization of words in the non-reverberant test was deficient in children with dyslexia at 0° and +90°. Spatial localization in the reverberant test was altered in children with dyslexia at all angles except -90°. Word discrimination in the non-reverberant test showed poor performance in children with dyslexia at the left angles. In the reverberant test, children with dyslexia exhibited deficiencies at the -45°, -90°, and +45° angles. Children with dyslexia may have problems when they must localize sounds and discriminate words at extreme locations of the horizontal plane in classrooms with reverberation.
Development of the mathematical model for design and verification of acoustic modal analysis methods
NASA Astrophysics Data System (ADS)
Siner, Alexander; Startseva, Maria
2016-10-01
To reduce turbofan noise it is necessary to develop methods for analyzing the sound field generated by the blade machinery, known as modal analysis. Because modal analysis methods are difficult, and testing them against full-scale measurements is expensive and tedious, it is necessary to construct mathematical models that allow modal analysis algorithms to be tested quickly and cheaply. This work presents a model that allows single modes to be set in the channel and the generated sound field to be analyzed. A modal analysis of the sound generated by a ring array of point sound sources is performed, and experimental and numerical modal analysis results are compared.
Model-based synthesis of aircraft noise to quantify human perception of sound quality and annoyance
NASA Astrophysics Data System (ADS)
Berckmans, D.; Janssens, K.; Van der Auweraer, H.; Sas, P.; Desmet, W.
2008-04-01
This paper presents a method to synthesize aircraft noise as perceived on the ground. The developed method gives designers the opportunity to make a quick and economical evaluation of the sound quality of different design alternatives or improvements to existing aircraft. By presenting several synthesized sounds to a jury, it is possible to evaluate the quality of different aircraft sounds and to construct a sound that can serve as a target for future aircraft designs. Combining a sound synthesis method that can modify a recorded aircraft sound with jury testing makes it possible to quantify the human perception of aircraft noise.
Light aircraft sound transmission studies - Noise reduction model
NASA Technical Reports Server (NTRS)
Atwal, Mahabir S.; Heitman, Karen E.; Crocker, Malcolm J.
1987-01-01
Experimental tests conducted on the fuselage of a single-engine Piper Cherokee light aircraft suggest that cabin interior noise can be reduced by increasing the transmission loss of the dominant sound transmission paths and/or by increasing the cabin interior sound absorption. The validity of using a simple room equation model to predict the cabin interior sound pressure level for different fuselage and exterior sound field conditions is also examined. The room equation model is based on the sound power flow balance for the cabin space and utilizes the measured transmitted sound intensity data. The model's predictions were considered good enough for use in preliminary acoustical design studies.
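A room equation model of the kind described balances transmitted sound power against cabin absorption. This sketch uses the standard diffuse-field form with made-up cabin numbers, not the Piper Cherokee measurements:

```python
import math

def room_constant(surface_m2, avg_absorption):
    """Room constant R = S*a/(1-a) from total surface area and mean absorption."""
    return surface_m2 * avg_absorption / (1.0 - avg_absorption)

def cabin_spl(transmitted_power_w, room_constant_m2):
    """Diffuse-field SPL in the cabin from the transmitted sound power:
    Lp = Lw + 10*log10(4 / R), with Lw referenced to 1e-12 W."""
    lw = 10 * math.log10(transmitted_power_w / 1e-12)
    return lw + 10 * math.log10(4.0 / room_constant_m2)

# Hypothetical cabin: 1e-5 W transmitted through the fuselage paths,
# 15 m^2 of interior surface with mean absorption coefficient 0.3.
r = room_constant(15.0, 0.3)
print(round(cabin_spl(1e-5, r), 1))  # about 68 dB
```

Doubling the cabin absorption raises R and lowers the predicted interior level, which is the second noise-reduction route the abstract mentions.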
Improving the Reliability of Tinnitus Screening in Laboratory Animals.
Jones, Aikeen; May, Bradford J
2017-02-01
Behavioral screening remains a contentious issue for animal studies of tinnitus. Most paradigms base a positive tinnitus test on an animal's natural tendency to respond to the "sound" of tinnitus as if it were an actual sound. As a result, animals with tinnitus are expected to display sound-conditioned behaviors when no sound is present or to miss gaps in background sounds because tinnitus "fills in the gap." Reliable confirmation of the behavioral indications of tinnitus can be problematic because the reinforcement contingencies of conventional discrimination tasks break down an animal's tendency to group tinnitus with sound. When responses in silence are rewarded, animals respond in silence regardless of their tinnitus status. When responses in silence are punished, animals stop responding. This study introduces stimulus classification as an alternative approach to tinnitus screening. Classification procedures train animals to respond to the common perceptual features that define a group of sounds (e.g., high pitch or narrow bandwidth). Our procedure trains animals to drink when they hear tinnitus and to suppress drinking when they hear other sounds. Animals with tinnitus are revealed by their tendency to drink in the presence of unreinforced probe sounds that share the perceptual features of the tinnitus classification. The advantages of this approach are illustrated by taking laboratory rats through a testing sequence that includes classification training, the experimental induction of tinnitus, and postinduction screening. Behavioral indications of tinnitus are interpreted and then verified by simulating a known tinnitus percept with objective sounds.
ERIC Educational Resources Information Center
Tiernan, Kristine N.; Schenk, Kelli; Swadberg, Danielle; Shimonova, Marianna; Schollaert, Daniel; Boorkman, Patti; Cherrier, Monique M.
2004-01-01
The validity and reliability of a novel route learning test were examined to assess the effectiveness of its use in evaluating spatial memory in healthy older adults and patients with Alzheimer's disease (AD). The Puget Sound Route Learning Test was significantly correlated with an existing measure of cognitive ability, the Dementia Rating Scale.…
Relation of sound intensity and accuracy of localization.
Farrimond, T
1989-08-01
Tests were carried out on 17 subjects to determine the accuracy of monaural sound localization when the head is not free to turn toward the sound source. Maximum accuracy of localization for a constant-volume sound source coincided with the position for maximum perceived intensity of the sound in the front quadrant. There was a tendency for sounds to be perceived more often as coming from a position directly toward the ear. That is, for sounds in the front quadrant, errors of localization tended to be predominantly clockwise (i.e., biased toward a line directly facing the ear). Errors for sounds occurring in the rear quadrant tended to be anticlockwise. The pinna's differential effect on sound intensity between front and rear quadrants would assist in identifying the direction of movement of objects, for example an insect, passing the ear.
Nair, Erika L; Sousa, Rhonda; Wannagot, Shannon
Guidelines established by the AAA currently recommend behavioral testing when fitting frequency modulated (FM) systems to individuals with cochlear implants (CIs). A protocol for completing electroacoustic measures has not yet been validated for personal FM systems or digital modulation (DM) systems coupled to CI sound processors. In response, some professionals have used or altered the AAA electroacoustic verification steps for fitting FM systems to hearing aids when fitting FM systems to CI sound processors; more recently, steps were outlined in a proposed protocol. The purpose of this research is to review and compare the electroacoustic test measures outlined in a 2013 article by Schafer and colleagues in the Journal of the American Academy of Audiology, "A Proposed Electroacoustic Test Protocol for Personal FM Receivers Coupled to Cochlear Implant Sound Processors," with the AAA electroacoustic verification steps for fitting FM systems to hearing aids, as applied to fitting DM systems to CI users. Electroacoustic measures were conducted on 71 CI sound processors and Phonak Roger DM systems using the proposed protocol and an adapted AAA protocol. Phonak's recommended default receiver gain setting was used for each CI sound processor manufacturer and adjusted if necessary to achieve transparency. Electroacoustic measures were conducted on Cochlear and Advanced Bionics (AB) sound processors. In this study, 28 Cochlear Nucleus 5/CP810 sound processors, 26 Cochlear Nucleus 6/CP910 sound processors, and 17 AB Naida CI Q70 sound processors were coupled in various combinations to Phonak Roger DM dedicated receivers (25 Phonak Roger 14 receivers, the Cochlear dedicated receiver, and 9 Phonak Roger 17 receivers, the AB dedicated receiver) and 20 Phonak Roger Inspiro transmitters.
Employing both the AAA and the Schafer et al protocols, electroacoustic measurements were conducted with the Audioscan Verifit in a clinical setting on 71 CI sound processors and Phonak Roger DM systems to determine transparency and verify FM advantage, comparing speech inputs (65 dB SPL) in an effort to achieve equal outputs. If transparency was not achieved at Phonak's recommended default receiver gain, adjustments were made to the receiver gain. The integrity of the signal was monitored with the appropriate manufacturer's monitor earphones. Using the AAA hearing aid protocol, 50 of the 71 CI sound processors achieved transparency, and 59 of the 71 CI sound processors achieved transparency when using the proposed protocol at Phonak's recommended default receiver gain. After the receiver gain was adjusted, 3 of 21 CI sound processors still did not meet transparency using the AAA protocol, and 2 of 12 CI sound processors still did not meet transparency using the Schafer et al proposed protocol. Both protocols were shown to be effective in taking reliable electroacoustic measurements and demonstrating transparency. Both protocols are felt to be clinically feasible and to address the needs of populations that are unable to report reliably on the integrity of their personal DM systems.
Ultrasound transmission measurements for tensile strength evaluation of tablets.
Simonaho, Simo-Pekka; Takala, T Aleksi; Kuosmanen, Marko; Ketolainen, Jarkko
2011-05-16
Ultrasound transmission measurements were performed to evaluate the tensile strength of tablets. Tablets consisting of a single ingredient were compressed from dibasic calcium phosphate dihydrate, two grades of microcrystalline cellulose and two grades of lactose monohydrate powders. From each powder, tablets with five different tensile strengths were directly compressed. Ultrasound transmission measurements were conducted on every tablet at frequencies of 2.25 MHz, 5 MHz and 10 MHz, and the speed of sound was calculated from the acquired waveforms. The tensile strength of the tablets was determined using a diametrical mechanical testing machine and compared to the calculated speed-of-sound values. It was found that the speed of sound increased with tensile strength for the tested excipients, with a good correlation between the two. Moreover, based on the statistical tests, groups with different tensile strengths can be differentiated from each other by measuring the speed of sound. Thus, the ultrasound transmission measurement technique is a potentially useful method for non-destructive and fast evaluation of the tensile strength of tablets.
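Extracting the speed of sound from a through-transmission measurement is a time-of-flight calculation, and the reported relationship is a correlation between that speed and the measured tensile strength. The sketch below uses a hypothetical tablet thickness and transit time, and illustrative (not the study's) speed/strength pairs:

```python
def speed_of_sound(thickness_m, transit_time_s):
    """Through-transmission sound speed: tablet thickness over time of flight."""
    return thickness_m / transit_time_s

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    dx = sum((x - mx) ** 2 for x in xs) ** 0.5
    dy = sum((y - my) ** 2 for y in ys) ** 0.5
    return num / (dx * dy)

# Hypothetical tablet: 3.0 mm thick, 2.0 microsecond transit time.
c = speed_of_sound(3.0e-3, 2.0e-6)
print(c)  # 1500.0 m/s

# Made-up paired values: sound speed (m/s) versus tensile strength (MPa).
speeds = [1200.0, 1350.0, 1500.0, 1640.0, 1800.0]
strengths = [0.8, 1.2, 1.7, 2.1, 2.6]
print(pearson(speeds, strengths))  # close to 1 for a monotone relationship
```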
Investigating a compact phantom and setup for testing body sound transducers
Mansy, Hansen A; Grahe, Joshua; Royston, Thomas J; Sandler, Richard H
2011-01-01
Contact transducers are a key element in experiments involving body sounds, yet the characteristics of these devices are often not known with accuracy, and there are no standardized calibration setups or procedures for testing these sensors. This study investigated the characteristics of a new computer-controlled sound-source phantom for testing sensors. Results suggested that sensors of different sizes impose distinct requirements on the phantom. The effectiveness of certain approaches at increasing the spatial and spectral uniformity of the phantom surface signal was studied. Non-uniformities >20 dB were removable, which can be particularly helpful in comparing the characteristics of different-size sensors more accurately. PMID:21496795
Articulation of sounds in Serbian language in patients who learned esophageal speech successfully.
Vekić, Maja; Veselinović, Mila; Mumović, Gordana; Mitrović, Slobodan M
2014-01-01
Articulation of sounds during the training and subsequent use of esophageal speech is very important because it contributes significantly to the intelligibility and aesthetics of spoken words and sentences, as well as of speech and language itself. The aim of this research was to determine the quality of articulation of the sounds of the Serbian language, by sound group, in patients who had learned esophageal speech successfully, as well as the effect of age and tooth loss on the quality of articulation. This retrospective-prospective study included 16 patients who had undergone total laryngectomy. Having completed speech rehabilitation, these patients used esophageal voice and speech. The quality of articulation was tested with the "Global test of articulation." Esophageal speech was rated grade 5 in 62.5% of patients, grade 4 in 31.3%, and grade 3 in one patient. Serbian was the native language of all the patients. The study covered 30 sounds of the Serbian language in 16 subjects (480 sounds in total). Only two patients (12.5%) articulated all sounds properly, whereas 87.5% had incorrect articulation. The articulation of affricates and fricatives, especially the sound /h/ from the fricative group, was found to be the worst in the patients who had successfully mastered esophageal speech. The age and tooth loss of patients who have mastered esophageal speech do not affect the articulation of sounds in the Serbian language.
Psychometric characteristics of single-word tests of children's speech sound production.
Flipsen, Peter; Ogiela, Diane A
2015-04-01
Our understanding of test construction has improved since the now-classic review by McCauley and Swisher (1984). The current review article examines the psychometric characteristics of current single-word tests of speech sound production in an attempt to determine whether our tests have improved since then. It also provides a resource that clinicians may use to help them make test selection decisions for their particular client populations. Ten tests published since 1990 were reviewed to determine whether they met the 10 criteria set out by McCauley and Swisher (1984), as well as 7 additional criteria. All of the tests reviewed met at least 3 of McCauley and Swisher's (1984) original criteria, and 9 of 10 tests met at least 5 of them. Most of the tests met some of the additional criteria as well. The state of the art for single-word tests of speech sound production in children appears to have improved in the last 30 years. There remains, however, room for improvement.
Nike-Cajun Sounding Rocket with University of Iowa Payload
1959-05-22
L59-3802 Nike-Cajun sounding rocket with University of Iowa payload on launcher at Wallops for flight test, May 20, 1959. Photograph published in A New Dimension Wallops Island Flight Test Range: The First Fifteen Years by Joseph Shortal. A NASA publication. Page 698.
A neurally inspired musical instrument classification system based upon the sound onset.
Newton, Michael J; Smith, Leslie S
2012-06-01
Physiological evidence suggests that sound onset detection in the auditory system may be performed by specialized neurons as early as the cochlear nucleus. Psychoacoustic evidence shows that the sound onset can be important for the recognition of musical sounds. Here the sound onset is used in isolation to form tone descriptors for a musical instrument classification task. The task involves 2085 isolated musical tones from the McGill dataset across five instrument categories. A neurally inspired tone descriptor is created using a model of the auditory system's response to sound onset. A gammatone filterbank and spiking onset detectors, built from dynamic synapses and leaky integrate-and-fire neurons, create parallel spike trains that emphasize the sound onset. These are coded as a descriptor called the onset fingerprint. Classification uses a time-domain neural network, the echo state network. Reference strategies, based upon mel-frequency cepstral coefficients, evaluated either over the whole tone or only during the sound onset, provide context to the method. Classification success rates for the neurally-inspired method are around 75%. The cepstral methods perform between 73% and 76%. Further testing with tones from the Iowa MIS collection shows that the neurally inspired method is considerably more robust when tested with data from an unrelated dataset.
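The study's onset detectors are built from dynamic synapses and leaky integrate-and-fire neurons; as a far simpler stand-in, a sound onset can be emphasized by half-wave-rectifying frame-energy increments. The signal and frame size below are synthetic and illustrative:

```python
import math

def onset_envelope(signal, frame=64):
    """Half-wave-rectified frame-energy increments: a crude onset emphasis,
    standing in for the paper's spiking onset detectors."""
    energies = []
    for start in range(0, len(signal) - frame + 1, frame):
        e = sum(s * s for s in signal[start:start + frame]) / frame
        energies.append(e)
    # positive energy jumps only; steady sound and offsets contribute nothing
    return [max(0.0, b - a) for a, b in zip(energies, energies[1:])]

# A tone that switches on mid-signal: the envelope peaks at the onset frame.
sig = [0.0] * 256 + [math.sin(0.3 * n) for n in range(256)]
env = onset_envelope(sig)
print(env.index(max(env)))  # 3: the frame boundary where the tone starts
```

A descriptor built from such onset-emphasized frames, one per filterbank channel, is the flavor of "onset fingerprint" the classifier consumes.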
Auditory agnosia as a clinical symptom of childhood adrenoleukodystrophy.
Furushima, Wakana; Kaga, Makiko; Nakamura, Masako; Gunji, Atsuko; Inagaki, Masumi
2015-08-01
To investigate detailed auditory features in patients with auditory impairment as the first clinical symptom of childhood adrenoleukodystrophy (CSALD), three patients who had hearing difficulty as the first clinical sign and/or symptom of ALD were studied. Precise examination of the clinical characteristics of hearing and auditory function was performed, including assessments of pure-tone audiometry, verbal sound discrimination, otoacoustic emission (OAE), and auditory brainstem response (ABR), as well as an environmental sound discrimination test, a sound lateralization test, and a dichotic listening test (DLT). The auditory pathway was evaluated by MRI in each patient. Poor response to calling was detected in all patients. Two patients were not aware of their hearing difficulty and had at first been diagnosed with normal hearing by otolaryngologists. Pure-tone audiometry disclosed normal hearing in all patients, and all patients showed a normal wave V ABR threshold. All three patients showed obvious difficulty in discriminating verbal sounds, environmental sounds, and sound lateralization, with strong left-ear suppression in the dichotic listening test. However, once they discriminated verbal sounds, they correctly understood the meaning. Two patients showed elongation of the I-V and III-V interwave intervals in ABR, but one showed no abnormality. MRIs of the three patients revealed signal changes in the auditory radiation as well as in other subcortical areas. The hearing features of these subjects were diagnosed as auditory agnosia, not aphasia. It should be emphasized that when patients are suspected of having hearing impairment but have no abnormalities in pure-tone audiometry and/or ABR, the condition should not be diagnosed immediately as a psychogenic response or pathomimesis; auditory agnosia must also be considered.
Reed, Amanda C.; Centanni, Tracy M.; Borland, Michael S.; Matney, Chanel J.; Engineer, Crystal T.; Kilgard, Michael P.
2015-01-01
Objectives Hearing loss is a commonly experienced disability in a variety of populations including veterans and the elderly and can often cause significant impairment in the ability to understand spoken language. In this study, we tested the hypothesis that neural and behavioral responses to speech will be differentially impaired in an animal model after two forms of hearing loss. Design Sixteen female Sprague–Dawley rats were exposed to one of two types of broadband noise which was either moderate or intense. In nine of these rats, auditory cortex recordings were taken 4 weeks after noise exposure (NE). The other seven were pretrained on a speech sound discrimination task prior to NE and were then tested on the same task after hearing loss. Results Following intense NE, rats had few neural responses to speech stimuli. These rats were able to detect speech sounds but were no longer able to discriminate between speech sounds. Following moderate NE, rats had reorganized cortical maps and altered neural responses to speech stimuli but were still able to accurately discriminate between similar speech sounds during behavioral testing. Conclusions These results suggest that rats are able to adjust to the neural changes after moderate NE and discriminate speech sounds, but they are not able to recover behavioral abilities after intense NE. Animal models could help clarify the adaptive and pathological neural changes that contribute to speech processing in hearing-impaired populations and could be used to test potential behavioral and pharmacological therapies. PMID:25072238
Potential sound production by a deep-sea fish
NASA Astrophysics Data System (ADS)
Mann, David A.; Jarvis, Susan M.
2004-05-01
Swimbladder sonic muscles of deep-sea fishes were described over 35 years ago, yet until now no recordings of probable deep-sea fish sounds have been published. A sound likely produced by a deep-sea fish has been isolated and localized from an analysis of acoustic recordings made at the AUTEC test range in the Tongue of the Ocean, Bahamas, from four deep-sea hydrophones. This sound is typical of a fish sound in that it is pulsed and relatively low frequency (800-1000 Hz). Using time-of-arrival differences, the sound was localized to 548-696 m depth, where the bottom was at 1620 m. The ability to localize this sound in real time on the hydrophone range provides a great advantage for identifying the sound producer using a remotely operated vehicle.
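Localization from time-of-arrival differences can be sketched as a least-squares search over candidate source positions. The hydrophone layout, sound speed, and source below are hypothetical, and the grid search is a 2-D simplification of the range's 3-D problem:

```python
import math

SOUND_SPEED = 1500.0  # m/s, a nominal sea-water value

def tdoa_residual(candidate, hydrophones, measured_tdoas, ref=0):
    """Sum of squared differences between predicted and measured TDOAs,
    all delays taken relative to hydrophone `ref`."""
    d = [math.dist(candidate, h) for h in hydrophones]
    return sum(((d[i] - d[ref]) / SOUND_SPEED - t) ** 2
               for i, t in enumerate(measured_tdoas))

def localize(hydrophones, measured_tdoas, grid_step=10.0, extent=2000.0):
    """Coarse grid search over the plane for the best-fitting source position."""
    best, best_err = None, float("inf")
    steps = int(extent / grid_step)
    for ix in range(steps + 1):
        for iy in range(steps + 1):
            p = (ix * grid_step, iy * grid_step)
            e = tdoa_residual(p, hydrophones, measured_tdoas)
            if e < best_err:
                best, best_err = p, e
    return best

# Hypothetical square array and source; the TDOAs are simulated, then inverted.
hydros = [(0.0, 0.0), (2000.0, 0.0), (0.0, 2000.0), (2000.0, 2000.0)]
true_src = (550.0, 700.0)
dists = [math.dist(true_src, h) for h in hydros]
tdoas = [(d - dists[0]) / SOUND_SPEED for d in dists]
print(localize(hydros, tdoas))  # (550.0, 700.0)
```

In practice the range's real-time solver would refine such a coarse fix and include depth as a third unknown.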
Schouten, Ben; Troje, Nikolaus F.; Vroomen, Jean; Verfaillie, Karl
2011-01-01
Background The focus in the research on biological motion perception traditionally has been restricted to the visual modality. Recent neurophysiological and behavioural evidence, however, supports the idea that actions are not represented merely visually but rather audiovisually. The goal of the present study was to test whether the perceived in-depth orientation of depth-ambiguous point-light walkers (plws) is affected by the presentation of looming or receding sounds synchronized with the footsteps. Methodology/Principal Findings In Experiment 1 orthographic frontal/back projections of plws were presented either without sound or with sounds of which the intensity level was rising (looming), falling (receding) or stationary. Despite instructions to ignore the sounds and to only report the visually perceived in-depth orientation, plws accompanied with looming sounds were more often judged to be facing the viewer whereas plws paired with receding sounds were more often judged to be facing away from the viewer. To test whether the effects observed in Experiment 1 act at a perceptual level rather than at the decisional level, in Experiment 2 observers perceptually compared orthographic plws without sound or paired with either looming or receding sounds to plws without sound but with perspective cues making them objectively either facing towards or facing away from the viewer. Judging whether either an orthographic plw or a plw with looming (receding) perspective cues is visually most looming becomes harder (easier) when the orthographic plw is paired with looming sounds. Conclusions/Significance The present results suggest that looming and receding sounds alter the judgements of the in-depth orientation of depth-ambiguous point-light walkers. 
While looming sounds are demonstrated to act at a perceptual level and make plws look more looming, it remains a challenge for future research to clarify at what level in the processing hierarchy receding sounds affect how observers judge the in-depth orientation of plws. PMID:21373181
Auditory-Oral Matching Behavior in Newborns
ERIC Educational Resources Information Center
Chen, Xin; Striano, Tricia; Rakoczy, Hannes
2004-01-01
Twenty-five newborn infants were tested for auditory-oral matching behavior when presented with the consonant sound /m/ and the vowel sound /a/--a precursor behavior to vocal imitation. Auditory-oral matching behavior by the infant was operationally defined as showing the mouth movement appropriate for producing the model sound just heard (mouth…
NASA Astrophysics Data System (ADS)
Gauthier, P.-A.; Camier, C.; Lebel, F.-A.; Pasco, Y.; Berry, A.; Langlois, J.; Verron, C.; Guastavino, C.
2016-08-01
Sound environment reproduction of various flight conditions in aircraft mock-ups is a valuable tool for the study, prediction, demonstration and jury testing of interior aircraft sound quality and annoyance. To provide a faithful reproduced sound environment, time, frequency and spatial characteristics should be preserved. Physical sound field reproduction methods for spatial sound reproduction are mandatory to immerse the listener's body in the proper sound fields so that localization cues are recreated at the listener's ears. Vehicle mock-ups pose specific problems for sound field reproduction: confined spaces, the need for invisible sound sources, and a very specific acoustical environment make open-loop sound field reproduction technologies such as wave field synthesis (based on free-field models of monopole sources) less than ideal. In this paper, experiments in an aircraft mock-up with multichannel least-squares methods and equalization are reported. The novelty is the actual implementation of sound field reproduction with 3180 transfer paths and trim-panel reproduction sources in laboratory conditions with a synthetic target sound field. The paper presents objective evaluations of the reproduced sound fields using various metrics, as well as sound field extrapolation and sound field characterization.
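At a single frequency, the multichannel least-squares approach mentioned above amounts to solving a regularized linear system: given measured transfer functions H from the reproduction sources to the control microphones and a target pressure vector, choose source strengths minimizing the reproduction error. The sketch below is a generic Tikhonov-regularized solve with invented dimensions, not the paper's 3180-path implementation.

```python
import numpy as np

def ls_source_strengths(H, p_target, reg=1e-3):
    """Tikhonov-regularized least-squares source strengths q minimizing
    ||H q - p_target||^2 + reg * ||q||^2 at one frequency bin.

    H:        (M, L) complex transfer functions (M mics, L sources)
    p_target: (M,) complex target pressures at the control mics
    """
    L = H.shape[1]
    return np.linalg.solve(H.conj().T @ H + reg * np.eye(L),
                           H.conj().T @ p_target)

def reproduction_error_db(H, q, p_target):
    """Normalized reproduction error at the control microphones, in dB."""
    e = H @ q - p_target
    return 10.0 * np.log10(np.vdot(e, e).real / np.vdot(p_target, p_target).real)
```

The regularization weight trades reproduction accuracy against source effort; a normalized reproduction-error metric in dB, of the kind commonly reported for such systems, is included for evaluation.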
NASA Astrophysics Data System (ADS)
Eshach, Haim
2014-06-01
This article describes the development and field test of the Sound Concept Inventory Instrument (SCII), designed to measure middle school students' concepts of sound. The instrument was designed based on known students' difficulties in understanding sound and the history of science related to sound and focuses on two main aspects of sound: sound has material properties, and sound has process properties. The final SCII consists of 71 statements that respondents rate as either true or false and also indicate their confidence on a five-point scale. Administration to 355 middle school students resulted in a Cronbach alpha of 0.906, suggesting a high reliability. In addition, the average percentage of students' answers to statements that associate sound with material properties is significantly higher than the average percentage of statements associating sound with process properties (p <0.001). The SCII is a valid and reliable tool that can be used to determine students' conceptions of sound.
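Cronbach's alpha, the reliability statistic reported for the SCII, can be computed directly from an item-score matrix. A minimal sketch (the data in the example are made up, not the SCII responses):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    s = np.asarray(scores, float)
    k = s.shape[1]
    item_vars = s.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = s.sum(axis=1).var(ddof=1)     # variance of each respondent's total
    return k / (k - 1) * (1.0 - item_vars / total_var)
```

Alpha approaches 1 when items covary strongly (as with the SCII's reported 0.906) and falls toward zero, or below, when items are unrelated.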
Anatomical Correlates of Non-Verbal Perception in Dementia Patients
Lin, Pin-Hsuan; Chen, Hsiu-Hui; Chen, Nai-Ching; Chang, Wen-Neng; Huang, Chi-Wei; Chang, Ya-Ting; Hsu, Shih-Wei; Hsu, Che-Wei; Chang, Chiung-Chih
2016-01-01
Purpose: Patients with dementia who show dissociations in verbal and non-verbal sound processing may offer insights into the anatomic basis for these highly related auditory modes. Methods: To determine the neuronal networks underlying non-verbal perception, 16 patients with Alzheimer's dementia (AD), 15 with behavioral-variant fronto-temporal dementia (bv-FTD), and 14 with semantic dementia (SD) were evaluated and compared with 15 age-matched controls. Neuropsychological and auditory perceptual tasks were included to test the ability to detect pitch changes and scale-violated melodies and to name and associate environmental sounds. Brain 3D T1 images were acquired, and voxel-based morphometry (VBM) was used to compare groups and to correlate the volumetric measures with task scores. Results: The SD group scored lowest among the three groups on the pitch and scale-violated melody tasks. In the environmental sound test, the SD group was also impaired both in naming and in associating sounds with pictures. The AD and bv-FTD groups showed no differences from the controls in any test. VBM with task-score correlation showed that atrophy in the right supramarginal and superior temporal gyri was strongly related to deficits in detecting violated scales, while atrophy in the bilateral anterior temporal poles and left medial temporal structures was related to deficits in environmental sound recognition. Conclusions: Auditory perception of pitch, scale-violated melody and environmental sound reflects anatomical degeneration in dementia patients, and the processing of non-verbal sounds is mediated by distinct neural circuits. PMID:27630558
Effects of Sound on the Behavior of Wild, Unrestrained Fish Schools.
Roberts, Louise; Cheesman, Samuel; Hawkins, Anthony D
2016-01-01
To assess and manage the impact of man-made sounds on fish, we need information on how behavior is affected. Here, wild unrestrained pelagic fish schools were observed under quiet conditions using sonar. Fish were exposed to synthetic piling sounds at different levels using custom-built sound projectors, and behavioral changes were examined. In some cases, the depth of schools changed after noise playback; full dispersal of schools was also evident. The methods we developed for examining the behavior of unrestrained fish to sound exposure have proved successful and may allow further testing of the relationship between responsiveness and sound level.
40 CFR 204.55-2 - Requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... § 204.54. (c)(1) In lieu of testing compressors of every configuration, as described in paragraph (b) of... category which emits the highest sound level in dBA based on best technical judgment, emission test data... section as having the highest sound level (estimated or actual) within the category. (iv) Compliance of...
40 CFR 204.55-2 - Requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... § 204.54. (c)(1) In lieu of testing compressors of every configuration, as described in paragraph (b) of... category which emits the highest sound level in dBA based on best technical judgment, emission test data... section as having the highest sound level (estimated or actual) within the category. (iv) Compliance of...
20. FIFTH FLOOR BLDG. 28A, VIEW SOUND TEST ROOMS LOOKING ...
20. FIFTH FLOOR BLDG. 28A, VIEW SOUND TEST ROOMS LOOKING NORTHEAST. - Fafnir Bearing Plant, Bounded on North side by Myrtle Street, on South side by Orange Street, on East side by Booth Street & on West side by Grove Street, New Britain, Hartford County, CT
22. FIFTH FLOOR BLDG. 28A, DETAIL DOUBLE DOORS SOUND TEST ...
22. FIFTH FLOOR BLDG. 28A, DETAIL DOUBLE DOORS SOUND TEST ROOM LOOKING NORTH. - Fafnir Bearing Plant, Bounded on North side by Myrtle Street, on South side by Orange Street, on East side by Booth Street & on West side by Grove Street, New Britain, Hartford County, CT
21. FIFTH FLOOR BLDG. 28A, ELEVATION WEST END SOUND TEST ...
21. FIFTH FLOOR BLDG. 28A, ELEVATION WEST END SOUND TEST ROOM. - Fafnir Bearing Plant, Bounded on North side by Myrtle Street, on South side by Orange Street, on East side by Booth Street & on West side by Grove Street, New Britain, Hartford County, CT
40 CFR 205.54-1 - Low speed sound emission test procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Low speed sound emission test procedures. 205.54-1 Section 205.54-1 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED.... Operating manuals or other literature furnished by the instrument manufacturer shall be referred to for both...
40 CFR 205.54-1 - Low speed sound emission test procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Low speed sound emission test procedures. 205.54-1 Section 205.54-1 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) NOISE ABATEMENT PROGRAMS TRANSPORTATION EQUIPMENT NOISE EMISSION CONTROLS Medium and Heavy Trucks § 205...
Psychoacoustic Testing of Modulated Blade Spacing for Main Rotors
NASA Technical Reports Server (NTRS)
Edwards, Bryan; Booth, Earl R., Jr. (Technical Monitor)
2002-01-01
Psychoacoustic testing of simulated helicopter main rotor noise is described, and the subjective results are presented. The objective of these tests was to evaluate the potential acoustic benefits of main rotors with modulated (uneven) blade spacing. Sound simulations were prepared for six main rotor configurations. A baseline 4-blade main rotor with regular blade spacing was based on the Bell Model 427 helicopter. A 5-blade main rotor with regular spacing was designed to approximate the performance of the 427, but at reduced tip speed. Four modulated rotors - one with "optimum" spacing and three alternate configurations - were derived from the 5-blade regular-spacing rotor. The sounds were played to two subjects at a time, with care taken in speaker selection and placement to ensure that the sounds were identical for each subject. A total of 40 subjects participated. For each rotor configuration, the listeners were asked to evaluate the sounds in terms of noisiness. The test results indicate little to no "annoyance" benefit for modulated blade spacing. In general, the subjects preferred the sound of the 5-blade regularly spaced rotor over any of the modulated ones. The conclusion is that modulated blade spacing is not a promising design feature for reducing the annoyance of helicopter main rotors.
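The acoustic rationale for modulated spacing can be illustrated with a toy pulse-train model (not the simulation method used in the study): with regular spacing, blade passages put energy only at harmonics of the blade-passage frequency, while uneven spacing redistributes energy across all shaft-order harmonics. All rotor parameters below are invented.

```python
import numpy as np

def blade_pulse_train(blade_angles_deg, rpm, fs, seconds):
    """Unit impulse each time a blade (at the given azimuth angles) passes
    a fixed observer point; returns the sampled pulse train."""
    n = int(fs * seconds)
    x = np.zeros(n)
    rev_period = 60.0 / rpm
    for a in blade_angles_deg:
        t = (a / 360.0) * rev_period        # first passage of this blade
        while t < seconds:
            idx = int(round(t * fs))
            if idx < n:
                x[idx] += 1.0
            t += rev_period                 # one passage per revolution
    return x
```

For a regular 5-blade rotor the spectrum is zero at the 1-per-rev shaft rate and peaks at 5-per-rev; a modulated rotor fills in the intermediate shaft orders, which changes the tonal character the listening tests were designed to judge.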
Loss of urban forest canopy and the related effects on soundscape and human directed attention
NASA Astrophysics Data System (ADS)
Laverne, Robert James Paul
The specific questions addressed in this research are: Will the loss of trees in residential neighborhoods result in a change to the local soundscape? This question leads to a related inquiry: Do the sounds of the environment in which a person is present affect their directed attention? An invasive insect pest, the Emerald Ash Borer (Agrilus planipennis), is killing millions of ash trees (genus Fraxinus) throughout North America. As tree canopy is lost, urban ecosystems change (with higher summer temperatures, more stormwater runoff, and poorer air quality), causing associated changes to human physical and mental health. Previous studies suggest that conditions in urban environments can result in chronic stress in humans and fatigue of directed attention, the ability to focus on tasks and pay attention. Access to nature in cities can help refresh directed attention. The sights and sounds associated with parks, open spaces, and trees can serve as beneficial counterbalances to the irritating conditions associated with cities. This research examines changes to the quantity and quality of sounds in Arlington Heights, Illinois. A series of before-and-after sound recordings was gathered as trees died and were removed between 2013 and 2015. Comparison of recordings using the Raven sound analysis program revealed significant differences in some, but not all, measures of sound attributes as tree canopy decreased. In general, more human-produced mechanical sound (anthrophony) and less sound associated with weather (geophony) were detected. Changes in sounds associated with animals (biophony) varied seasonally. Monitoring changes in the proportions of anthrophony, biophony and geophony can provide insight into changes in biodiversity, environmental health, and quality of life for humans.
Before-tree-removal and after-tree-removal sound recordings served as the independent variable for randomly-assigned human volunteers as they performed the Stroop Test and the Necker Cube Pattern Control test to measure directed attention. The sound treatments were not found to have significant effects on the directed attention test scores. Future research is needed to investigate the characteristics of urban soundscapes that are detrimental or potentially conducive to human cognitive functioning.
Structural and Acoustic Damping Characteristics of Polyimide Microspheres
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Park, Junhong
2005-01-01
A broad range of tests was performed to evaluate the capability of tiny lightweight polyimide spheres to reduce sound and vibration. The types of testing include impedance-tube measurement of the propagation constant, sound power insertion loss for single- and double-wall systems, particle frame wave characterization, and beam vibration reduction. The tests were performed using spheres made of two types of polyimide and with varying diameter. Baseline results were established using common noise reduction treatment materials such as fiberglass and foam. The spheres were difficult to test due to their inherent mobility, and most tests required some adaptation to contain them. One test returned obvious non-linear behavior, a result which has come to be expected for treatments of this type. The polyimide spheres are found to be a competent treatment for both sound and vibration energy, with the reservation that more work needs to be done to better characterize the non-linear behavior.
Marine biomass: New York State species and site studies. Annual report December 1982-November 1983
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKay, L.B.
1983-11-01
The Energy Authority has been conducting tests since 1979 in an effort to develop a feasible system for the production in Long Island Sound of marine biomass from indigenous macroalgae for economically competitive conversion to synthetic natural gas. During 1983 this goal was brought closer to realization when a 120 x 41 foot seaweed test farm was placed in 60 feet of water in Long Island Sound. The structure is basically a flexible wire cable and rope grid, buoyed at the surface and moored to the bottom of the Sound. It is suitable for cultivating seaweeds that attach themselves to surfaces, such as the brown kelp, Laminaria saccharina. The test farm design was chosen from among four previously developed by the engineering team. An ongoing program is taking place in Long Island Sound to test the strength of the structure and to obtain information on plants growing in the structure. The program will test the strain on the lines, corrosion on metal parts, and available light and temperature at various times. Also, since 1982, bioengineering tests have focused on biofouling experiments, and seaweed and rope strength tests. The report also includes discussion of a laboratory research program focused on seeding techniques and strain selection.
SAFETY ON UNTRUSTED NETWORK DEVICES (SOUND)
2017-10-10
...in the Cyber & Communication Technologies Group, but not on the SOUND project, would review the code, design and perform attacks against a live... 3.5 Red Team: As part of our testing, we planned to conduct Red Team assessments. In these assessments, a group of engineers from BAE who worked... developed under the DARPA CRASH program and SOUND were designed to be companion projects. SAFE focused on the processor and the host, SOUND focused on
Design and development of multipurpose Kundt’s tube as physics learning media
NASA Astrophysics Data System (ADS)
Nursulistiyo, E.
2018-03-01
Research was conducted to develop a multipurpose Kundt's tube as a physics learning media. The background of the research was the absence of a visualization of sound waves to improve learners' understanding. The purposes of this research were to develop the multipurpose Kundt's tube as a physics learning media and to test its feasibility. The developed tool was tested to find the speed of sound in air, to show the double-slit interference phenomenon of sound, and to show the temperature changes in the cold and hot reservoirs in the thermoacoustic process. The development steps used were Preliminary Study, Development, Field Test, and Dissemination, known as the PDFD model; in this implementation, the dissemination step was not carried out. The feasibility test was done by experts, peers, and college students. The speed of sound in air measured using the multipurpose Kundt's tube was v ± Δv = 263 ± 24 m/s, with a closeness of 76.63% to the theoretical value; a calibration factor of 1.32 was also found. The tool was able to show sound waves in the open-end tube. The distances between interference minima and maxima in the experimental results were almost the same as in theory, so it was concluded that the double-slit interference of sound could be shown with the tool. The thermoacoustic phenomenon could be observed, giving a maximum temperature of 31.4°C in the hot reservoir and a minimum temperature of 24°C in the cold reservoir at a frequency of 119 Hz, a temperature difference of 7.4°C. The feasibility test gave an average result of 88.23, in the "Very Good" category.
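The speed-of-sound measurement rests on the standing-wave relation in the tube: adjacent dust nodes sit half a wavelength apart, so v = f·λ = 2fd. A minimal sketch with made-up readings (not the study's data):

```python
def speed_of_sound_from_nodes(frequency_hz, node_spacing_m):
    """Speed of sound from Kundt's-tube standing waves: adjacent dust nodes
    are half a wavelength apart, so v = f * lambda = f * (2 * node spacing)."""
    return frequency_hz * 2.0 * node_spacing_m
```

Applying the reported calibration factor to the reported raw value, 263 m/s x 1.32 ≈ 347 m/s, lands near the accepted room-temperature value of about 343 m/s, which is presumably the point of the calibration.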
Ares I Scale Model Acoustic Test Above Deck Water Sound Suppression Results
NASA Technical Reports Server (NTRS)
Counter, Douglas D.; Houston, Janice D.
2011-01-01
The Ares I Scale Model Acoustic Test (ASMAT) program test matrix was designed to determine the acoustic reduction for the Liftoff acoustics (LOA) environment with an above deck water sound suppression system. The scale model test can be used to quantify the effectiveness of the water suppression system as well as optimize the systems necessary for the LOA noise reduction. Several water flow rates were tested to determine which rate provides the greatest acoustic reductions. Preliminary results are presented.
NASA Technical Reports Server (NTRS)
Platt, R.
1998-01-01
This is the Performance Verification Report. The process specification establishes the requirements for the comprehensive performance test (CPT) and limited performance test (LPT) of the Earth Observing System Advanced Microwave Sounding Unit-A2 (EOS/AMSU-A2), referred to as the unit. The unit is defined on drawing 1356006.
1959-11-10
L59-7932 First University of Michigan Strongarm sounding rocket on launcher at Wallops for test, November 10, 1959. Photograph published in A New Dimension. Wallops Island Flight Test Range: The First Fifteen Years by Joseph Shortal, a NASA publication, page 701. E5-188 Shop and Launcher Pictures.
Developing an Achievement Test for the Subject of Sound in Science Education
ERIC Educational Resources Information Center
Sözen, Merve; Bolat, Mualla
2016-01-01
The purpose of this study is to develop an achievement test which includes the basic concepts about the subject of sound and its properties in middle school science lessons and which at the same time aims to reveal the alternative concepts that the students already have. During the process of the development of the test, studies in the field and…
A Flexible 360-Degree Thermal Sound Source Based on Laser Induced Graphene
Tao, Lu-Qi; Liu, Ying; Ju, Zhen-Yi; Tian, He; Xie, Qian-Yi; Yang, Yi; Ren, Tian-Ling
2016-01-01
A flexible sound source is essential in a fully flexible system, and it is hard to integrate a conventional sound source based on a piezoelectric part into such a system. Moreover, the sound pressure from the back side of a sound source is usually weaker than that from the front side. With the help of direct laser writing (DLW) technology, the fabrication of a flexible 360-degree thermal sound source becomes possible. A 650-nm low-power laser was used to reduce the graphene oxide (GO). The stripped laser-induced graphene thermal sound source was then attached to the surface of a cylindrical bottle so that it could emit sound in a 360-degree direction. The sound pressure level and directivity of the sound source were tested, and the results were in good agreement with the theoretical results. Because of its 360-degree sound field, high flexibility, high efficiency, low cost, and good reliability, the 360-degree thermal acoustic sound source will be widely applicable in consumer electronics, multi-media systems, and ultrasonic detection and imaging. PMID:28335239
Kagaya, Yutaka; Tabata, Masao; Arata, Yutaro; Kameoka, Junichi; Ishii, Seiichi
2017-08-01
Effectiveness of simulation-based education in cardiac auscultation training is controversial and may vary among a variety of heart sounds and murmurs. We investigated whether a single auscultation training class using a cardiology patient simulator provides medical students with the competence required for clinical clerkship, and whether students' proficiency after the training differs among heart sounds and murmurs. A total of 324 fourth-year medical students (93-117/year for 3 years) were divided into groups of 6-8 students; each group participated in a three-hour training session using a cardiology patient simulator. After a mini-lecture and facilitated training, each student took two different tests. In the first test, they tried to identify three sounds of Category A (non-split, respiratory split, and abnormally wide split S2s) in random order, after being informed that they were from Category A. They then did the same with sounds of Category B (S3, S4, and S3+S4) and Category C (four heart murmurs). In the second test, they tried to identify only one sound from each of the three categories in random order, without any category information. The overall accuracy rate declined from 80.4% in the first test to 62.0% in the second test (p<0.0001). The accuracy rate for all the heart murmurs was similar in the first (81.3%) and second tests (77.5%), whereas that for all the heart sounds (S2/S3/S4) decreased from 79.9% to 54.3% in the second test (p<0.0001). The individual accuracy rate decreased in the second test as compared with the first test for all three S2s, S3, and S3+S4 (p<0.0001). Medical students may thus be less likely to correctly identify S2/S3/S4 than heart murmurs in a situation close to the clinical setting, even immediately after training. We may have to consider this characteristic of students when providing cardiac auscultation training. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
Kok, Annebelle C M; Engelberts, J Pamela; Kastelein, Ronald A; Helder-Hoek, Lean; Van de Voorde, Shirley; Visser, Fleur; Slabbekoorn, Hans
2018-02-01
The continuing rise in underwater sound levels in the oceans leads to disturbance of marine life. It is thought that one of the main impacts of sound exposure is the alteration of foraging behaviour of marine species, for example by deterring animals from a prey location, or by distracting them while they are trying to catch prey. So far, only limited knowledge is available on both mechanisms in the same species. The harbour porpoise (Phocoena phocoena) is a relatively small marine mammal that could quickly suffer fitness consequences from a reduction of foraging success. To investigate effects of anthropogenic sound on their foraging efficiency, we tested whether experimentally elevated sound levels would deter two captive harbour porpoises from a noisy pool into a quiet pool (Experiment 1) and reduce their prey-search performance, measured as prey-search time in the noisy pool (Experiment 2). Furthermore, we tested the influence of the temporal structure and amplitude of the sound on the avoidance response of both animals. Both individuals avoided the pool with elevated sound levels, but they did not show a change in search time for prey when trying to find a fish hidden in one of three cages. The combination of temporal structure and sound pressure level (SPL) caused variable patterns. When the sound was intermittent, increased SPL caused increased avoidance times. When the sound was continuous, avoidance was equal for all SPLs above a threshold of 100 dB re 1 μPa. Hence, we found no evidence for an effect of sound exposure on search efficiency, but sounds of different temporal patterns did cause spatial avoidance with distinct dose-response patterns. Copyright © 2017 Elsevier Ltd. All rights reserved.
Discrimination of brief speech sounds is impaired in rats with auditory cortex lesions
Porter, Benjamin A.; Rosenthal, Tara R.; Ranasinghe, Kamalini G.; Kilgard, Michael P.
2011-01-01
Auditory cortex (AC) lesions impair complex sound discrimination. However, a recent study demonstrated spared performance on an acoustic startle response test of speech discrimination following AC lesions (Floody et al., 2010). The current study reports the effects of AC lesions on two operant speech discrimination tasks. AC lesions caused a modest and quickly recovered impairment in the ability of rats to discriminate consonant-vowel-consonant speech sounds. This result seems to suggest that AC does not play a role in speech discrimination. However, the speech sounds used in both studies differed in many acoustic dimensions, and an adaptive change in discrimination strategy could allow the rats to use an acoustic difference that does not require an intact AC to discriminate. Based on our earlier observation that the first 40 ms of the spatiotemporal activity patterns elicited by speech sounds best correlate with behavioral discriminations of these sounds (Engineer et al., 2008), we predicted that eliminating additional cues by truncating speech sounds to the first 40 ms would render the stimuli indistinguishable to a rat with AC lesions. Although the initial discrimination of truncated sounds took longer to learn, the final performance paralleled that of rats using full-length consonant-vowel-consonant sounds. After 20 days of testing, half of the rats using speech onsets received bilateral AC lesions. The lesions severely impaired speech onset discrimination for at least one month post-lesion. These results support the hypothesis that auditory cortex is required to accurately discriminate the subtle differences between similar consonant and vowel sounds. PMID:21167211
Absolute auditory threshold: testing the absolute.
Heil, Peter; Matysiak, Artur
2017-11-02
The mechanisms underlying the detection of sounds in quiet, one of the simplest tasks for auditory systems, are debated. Several models proposed to explain the threshold for sounds in quiet and its dependence on sound parameters include a minimum sound intensity ('hard threshold'), below which sound has no effect on the ear. Also, many models are based on the assumption that threshold is mediated by integration of a neural response proportional to sound intensity. Here, we test these ideas. Using an adaptive forced choice procedure, we obtained thresholds of 95 normal-hearing human ears for 18 tones (3.125 kHz carrier) in quiet, each with a different temporal amplitude envelope. Grand-mean thresholds and standard deviations were well described by a probabilistic model according to which sensory events are generated by a Poisson point process with a low rate in the absence, and higher, time-varying rates in the presence, of stimulation. The subject actively evaluates the process and bases the decision on the number of events observed. The sound-driven rate of events is proportional to the temporal amplitude envelope of the bandpass-filtered sound raised to an exponent. We find no evidence for a hard threshold: When the model is extended to include such a threshold, the fit does not improve. Furthermore, we find an exponent of 3, consistent with our previous studies and further challenging models that are based on the assumption of the integration of a neural response that, at threshold sound levels, is directly proportional to sound amplitude or intensity. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
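The model class described above (a Poisson point process whose sound-driven event rate is proportional to the amplitude envelope raised to an exponent of about 3, with no hard threshold) can be sketched as a Monte-Carlo counting observer. All rates and the envelope below are invented, and the decision rule is a simplified two-interval comparison rather than the authors' fitting procedure.

```python
import numpy as np

def percent_correct_2afc(envelope, dt, spont_rate, gain, exponent=3,
                         n_trials=20000, seed=0):
    """Monte-Carlo 2AFC percent correct for a Poisson counting observer.

    Event rate = spont_rate + gain * envelope**exponent; the observer picks
    the interval with the larger event count (ties decided by coin flip).
    """
    rng = np.random.default_rng(seed)
    mean_signal = (spont_rate + gain * envelope ** exponent).sum() * dt
    mean_noise = spont_rate * envelope.size * dt   # spontaneous events only
    n_sig = rng.poisson(mean_signal, n_trials)
    n_noise = rng.poisson(mean_noise, n_trials)
    return ((n_sig > n_noise) + 0.5 * (n_sig == n_noise)).mean()
```

With zero gain the two intervals are statistically identical and performance sits at chance; raising the gain (louder sound) raises the driven event count and pushes performance toward 100%, tracing out a psychometric function without any hard intensity threshold in the model.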
Sigmundsson, Hermundur; Eriksen, Adrian D.; Ofteland, Greta Storm; Haga, Monika
2017-01-01
This study explored whether there is a gender difference in letter-sound knowledge when children start school. 485 children aged 5-6 years completed an assessment of letter-sound knowledge, i.e., large letters; sound of large letters; small letters; sound of small letters. The findings indicate a significant difference between girls and boys, in favor of the girls, on all four factors tested in this study. There are still no clear explanations for the basis of a presumed gender difference in letter-sound knowledge. That the findings originate in neurobiological factors cannot be excluded; however, the fact that girls have probably been exposed to more language experience/stimulation than boys lends support to explanations derived from environmental aspects. PMID:28951726
Teaching Acoustic Properties of Materials in Secondary School: Testing Sound Insulators
ERIC Educational Resources Information Center
Hernandez, M. I.; Couso, D.; Pinto, R.
2011-01-01
Teaching the acoustic properties of materials is a good way to teach physics concepts, extending them into the technological arena related to materials science. This article describes an innovative approach for teaching sound and acoustics in combination with sound insulating materials in secondary school (15-16-year-old students). Concerning the…
Digital servo control of random sound fields
NASA Technical Reports Server (NTRS)
Nakich, R. B.
1973-01-01
It is necessary to place a number of sensors at different positions in the sound field to determine the actual sound intensities to which the test object is subjected. It is then possible to determine whether the specification is being met adequately or exceeded. Since the excitation is random in nature, the signals are essentially coherent, and it is impossible to obtain a true average.
49 CFR 325.79 - Application of correction factors.
Code of Federal Regulations, 2011 CFR
2011-10-01
... illustrate the application of correction factors to sound level measurement readings: (1) Example 1—Highway operations. Assume that a motor vehicle generates a maximum observed sound level reading of 86 dB(A) during a... of the test site is acoustically “hard.” The corrected sound level generated by the motor vehicle...
49 CFR 325.79 - Application of correction factors.
Code of Federal Regulations, 2010 CFR
2010-10-01
... illustrate the application of correction factors to sound level measurement readings: (1) Example 1—Highway operations. Assume that a motor vehicle generates a maximum observed sound level reading of 86 dB(A) during a... of the test site is acoustically “hard.” The corrected sound level generated by the motor vehicle...
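The correction-factor arithmetic these examples walk through is simple decibel addition. A minimal sketch, assuming an inverse-square distance correction and a flat -2 dB adjustment for an acoustically "hard" site; both the 50-ft (~15.2 m) reference distance and the -2 dB value are illustrative assumptions, not the regulation's actual correction table.

```python
import math

def corrected_level(observed_db, distance_m, ref_distance_m=15.2, hard_site=False):
    """Correct an observed sound level reading to the reference distance.

    Adds an inverse-square distance correction of 20*log10(d/d_ref) and,
    for an acoustically "hard" site, subtracts 2 dB.  The reference
    distance and the -2 dB value are illustrative assumptions, not the
    regulation's actual table.
    """
    level = observed_db + 20 * math.log10(distance_m / ref_distance_m)
    if hard_site:
        level -= 2.0
    return level

# Hypothetical: 86 dB(A) observed at the reference distance on a hard site.
print(corrected_level(86.0, 15.2, hard_site=True))  # 84.0
```

Doubling the microphone distance adds roughly 6 dB under this inverse-square assumption.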
Majdak, Piotr; Goupell, Matthew J; Laback, Bernhard
2010-02-01
The ability to localize sound sources in three-dimensional space was tested in humans. In Experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE; darkness or virtual VE). The localization performance was not significantly different between the pointing methods. The virtual VE significantly improved the horizontal precision and reduced the number of front-back confusions. These results show the benefit of using a virtual VE in sound localization tasks. In Experiment 2, subjects were provided with sound localization training. Over the course of training, the performance improved for all subjects, with the largest improvements occurring during the first 400 trials. The improvements beyond the first 400 trials were smaller. After the training, there was still no significant effect of pointing method, showing that the choice of either head- or manual-pointing method plays a minor role in sound localization performance. The results of Experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies.
Sound absorption of a porous material with a perforated facing at high sound pressure levels
NASA Astrophysics Data System (ADS)
Peng, Feng
2018-07-01
A semi-empirical model is proposed to predict the sound absorption of an acoustical unit consisting of a rigid-porous material layer with a perforated facing under normal incidence at high sound pressure levels (SPLs) of pure tones. The nonlinearity of the perforated facing and the porous material, and the interference between them, are considered in the model. The sound absorptive performance of the acoustical unit is tested at different incident SPLs and in three typical configurations: 1) when the perforated panel (PP) directly contacts the porous layer, 2) when the PP is separated from the porous layer by an air gap, and 3) when an air cavity is set between the porous material and the hard backing wall. The test results agree well with the corresponding theoretical predictions. Moreover, the results show that the interference effect is correlated with the width of the air gap between the PP and the porous layer, which alters not only the linear acoustic impedance but also the nonlinear acoustic impedance of the unit and hence its sound absorptive properties.
Video indexing based on image and sound
NASA Astrophysics Data System (ADS)
Faudemay, Pascal; Montacie, Claude; Caraty, Marie-Jose
1997-10-01
Video indexing is a major challenge for both scientific and economic reasons. Information extraction can sometimes be easier from the sound channel than from the image channel. We first present a multi-channel and multi-modal query interface, to query sound, image and script through 'pull' and 'push' queries. We then summarize the segmentation phase, which needs information from the image channel. Detection of critical segments is proposed; it should speed up both automatic and manual indexing. We then present an overview of the information extraction phase. Information can be extracted from the sound channel through speaker recognition, vocal dictation with unconstrained vocabularies, and script alignment with speech. We present experimental results for these various techniques. Speaker recognition methods were tested on the TIMIT and NTIMIT databases. Vocal dictation was tested on newspaper sentences spoken by several speakers. Script alignment was tested on part of a cartoon movie, 'Ivanhoe'. For good-quality sound segments, error rates are low enough for use in indexing applications. Major issues are the processing of sound segments with noise or music, and performance improvement through the use of appropriate, low-cost architectures or networks of workstations.
Popov, Vladimir V; Supin, Alexander Ya; Rozhnov, Viatcheslav V; Nechaev, Dmitry I; Sysueva, Evgenia V
2014-05-15
The influence of fatiguing sound level and duration on post-exposure temporary threshold shift (TTS) was investigated in two beluga whales (Delphinapterus leucas). The fatiguing sound was half-octave noise with a center frequency of 22.5 kHz. TTS was measured at a test frequency of 32 kHz. Thresholds were measured by recording rhythmic evoked potentials (the envelope following response) to a test series of short (eight-cycle) tone pips with a pip rate of 1000 s^-1. TTS increased approximately proportionally to the dB measure of both sound pressure (sound pressure level, SPL) and duration of the fatiguing noise, as a product of these two variables. In particular, when the noise parameters varied in a manner that maintained the product of squared sound pressure and time (sound exposure level, SEL, which is equivalent to the overall noise energy) at a constant level, TTS was not constant. Keeping SEL constant, the highest TTS appeared at an intermediate ratio of SPL to sound duration and decreased at both higher and lower ratios. Multiplication (SPL multiplied by log duration) described the experimental data better than an equal-energy (equal-SEL) model. The use of SEL as a sole universal metric may result in an implausible assessment of the impact of a fatiguing sound on hearing thresholds in odontocetes, including under-evaluation of potential risks. © 2014. Published by The Company of Biologists Ltd.
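The contrast between the equal-energy (SEL) metric and the multiplicative SPL × log-duration description can be made concrete with a short numerical sketch; the scale factor `a` below is an arbitrary illustrative constant, not a value fitted to the beluga data.

```python
import math

def sel(spl_db, duration_s):
    """Sound exposure level (equal-energy metric): SPL + 10*log10(T)."""
    return spl_db + 10 * math.log10(duration_s)

def tts_product(spl_db, duration_s, a=0.002):
    """Multiplicative model: the shift grows with SPL times log duration.

    The scale factor `a` is arbitrary; the study reports only that such a
    product described the data better than an equal-SEL model.
    """
    return a * spl_db * math.log10(duration_s)

# Two exposures with identical SEL (170 dB) but different SPL/duration
# trade-offs predict different shifts under the product model.
for spl, dur in [(160.0, 10.0), (150.0, 100.0)]:
    print(sel(spl, dur), round(tts_product(spl, dur), 2))
```

Because both exposures share the same SEL yet yield different predicted shifts, SEL alone cannot rank them, which is the abstract's central point.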
49 CFR 325.57 - Location and operation of sound level measurement systems; stationary test.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 5 2014-10-01 2014-10-01 false Location and operation of sound level measurement systems; stationary test. 325.57 Section 325.57 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL MOTOR CARRIER SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION GENERAL REGULATIONS COMPLIANCE WITH INTERSTATE MOTOR...
First AFSWC Javelin Sounding Rocket On Launcher at Wallops Island.
1959-07-07
Air Force Javelin Rocket on Launcher (USAF JV-1) Wallops Model D4-78 L59-5144 First AFSWC Javelin sounding rocket ready for flight test, July 7, 1959. Photograph published in A New Dimension Wallops Island Flight Test Range: The First Fifteen Years by Joseph Shortal. A NASA publication. Page 704.
NASA Astrophysics Data System (ADS)
Belakova, D.; Seile, A.; Kukle, S.; Plamus, T.
2018-04-01
Within the present study, the effect of hemp (40 wt%) and polylactide (60 wt%) non-woven surface density, thickness and number of fibre web layers on the sound absorption coefficient and the sound transmission loss in the frequency range from 50 to 5000 Hz is analysed. The sound insulation properties of the experimental samples have been determined and compared to materials in practical use, and the possible uses of the material have been defined. Non-woven materials are ideally suited for use in acoustic insulation products because the arrangement of fibres produces a porous material structure, which leads to a greater interaction between sound waves and the fibre structure. Of all the tested samples (A, B and D), the surface density of non-woven variant B exceeded that of sample A by 1.22 times and that of sample D by 1.15 times. By placing non-wovens one above the other in 2 layers, it is possible to increase the absorption coefficient of the material, which, depending on the frequency, corresponds to C, D and E sound absorption classes. Sample A demonstrates the best sound absorption of the three samples in the frequency range from 250 to 2000 Hz. In the test frequency range from 50 to 5000 Hz, the sound transmission loss varies from 0.76 dB (sample D at 63 Hz) to 3.90 dB (sample B at 5000 Hz).
Sound attenuation of fiberglass lined ventilation ducts
NASA Astrophysics Data System (ADS)
Albright, Jacob
Sound attenuation is a crucial part of designing any HVAC system. Most ventilation systems are designed to be in areas occupied by one or more persons. If these systems do not adequately attenuate the sound of the supply fan, compressor, or any other source of sound, the affected area could be subject to an array of problems ranging from an annoying hum to a deafening howl. The goals of this project are to quantify the sound attenuation properties of fiberglass duct liner and to perform a regression analysis to develop equations to predict insertion loss values for both rectangular and round duct liners. The first goal was accomplished via insertion loss testing; the tests performed conformed to the ASTM E477 standard. Using the insertion loss test data, regression equations were developed to predict insertion loss values for rectangular ducts ranging in size from 12-in x 18-in to 48-in x 48-in, in lengths ranging from 3 ft to 30 ft. Regression equations were also developed to predict insertion loss values for round ducts ranging in diameter from 12-in to 48-in, in lengths ranging from 3 ft to 30 ft.
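A regression of insertion loss on liner length can be sketched with plain least squares; the data points and the purely linear form below are hypothetical stand-ins for the ASTM E477 measurements and the project's actual regression equations.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = b0 + b1*x (pure Python)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    return my - b1 * mx, b1

# Hypothetical insertion-loss data: IL (dB) versus lined-duct length (ft).
lengths = [3.0, 10.0, 20.0, 30.0]
il_db = [4.0, 12.5, 24.0, 36.5]
b0, b1 = fit_linear(lengths, il_db)
print(round(b0, 2), round(b1, 2))  # intercept (dB) and slope (dB per foot)
```

A practical model would likely add terms for duct cross-section and frequency band; the single-predictor fit is only meant to show the mechanics.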
49 CFR 325.9 - Measurement tolerances.
Code of Federal Regulations, 2011 CFR
2011-10-01
... reporting filed sound level measurements to the nearest whole decibel. (2) Variations resulting from... atmospheric pressure. (5) Variations resulting from reflected sound from small objects allowed within the test...
Testing a Method for Quantifying the Output of Implantable Middle Ear Hearing Devices
Rosowski, J.J.; Chien, W.; Ravicz, M.E.; Merchant, S.N.
2008-01-01
This report describes tests of a standard practice for quantifying the performance of implantable middle ear hearing devices (also known as implantable hearing aids). The standard and these tests were initiated by the Food and Drug Administration of the United States Government. The tests involved measurements on two hearing devices, one commercially available and the other home built, that were implanted into ears removed from human cadavers. The tests were conducted to investigate the utility of the practice and its outcome measures: the equivalent ear canal sound pressure transfer function that relates electrically driven middle ear velocities to the equivalent sound pressure needed to produce those velocities, and the maximum effective ear canal sound pressure. The practice calls for measurements in cadaveric ears in order to account for the varied anatomy and function of different human middle ears. PMID:17406105
Knight, Lisa; Ladich, Friedrich
2014-11-15
Thorny catfishes produce stridulation (SR) sounds using their pectoral fins and drumming (DR) sounds via a swimbladder mechanism in distress situations when hand held in water and in air. It has been argued that SR and DR sounds are aimed at different receivers (predators) in different media. The aim of this study was to analyse and compare sounds emitted in both air and water in order to test different hypotheses on the functional significance of distress sounds. Five representatives of the family Doradidae were investigated. Fish were hand held and sounds emitted in air and underwater were recorded (number of sounds, sound duration, dominant and fundamental frequency, sound pressure level and peak-to-peak amplitudes). All species produced SR sounds in both media, but DR sounds could not be recorded in air for two species. Differences in sound characteristics between media were small and mainly limited to spectral differences in SR. The number of sounds emitted decreased over time, whereas the duration of SR sounds increased. The dominant frequency of SR and the fundamental frequency of DR decreased and sound pressure level of SR increased with body size across species. The hypothesis that catfish produce more SR sounds in air and more DR sounds in water as a result of different predation pressure (birds versus fish) could not be confirmed. It is assumed that SR sounds serve as distress sounds in both media, whereas DR sounds might primarily be used as intraspecific communication signals in water in species possessing both mechanisms. © 2014. Published by The Company of Biologists Ltd.
Mackrill, J B; Jennings, P A; Cain, R
2013-01-01
Work on the perception of urban soundscapes has generated a number of perceptual models which are proposed as tools to test and evaluate soundscape interventions. However, despite the excessive sound levels and noise within hospital environments, perceptual models have not been developed for these spaces. To address this, a two-stage approach was developed by the authors to create such a model. First, semantics were obtained from listening evaluations which captured the feelings of individuals from hearing hospital sounds. Then, 30 participants rated a range of sound clips representative of a ward soundscape based on these semantics. Principal component analysis extracted a two-dimensional space representing an emotional-cognitive response. The framework enables soundscape interventions to be tested which may improve the perception of these hospital environments.
Masalski, Marcin; Kipiński, Lech; Grysiński, Tomasz; Kręcicki, Tomasz
2016-05-30
Hearing tests carried out in a home setting by means of mobile devices require prior calibration of the reference sound level. Mobile devices with bundled headphones create the possibility of applying a predefined level for a particular model as an alternative to calibrating each device separately. The objective of this study was to determine the reference sound level for sets composed of a mobile device and bundled headphones. Reference sound levels for Android-based mobile devices were determined using an open-access mobile phone app by means of biological calibration, that is, in relation to the normal-hearing threshold. The examinations were conducted in 2 groups: an uncontrolled and a controlled one. In the uncontrolled group, fully automated self-measurements were carried out in home conditions by 18- to 35-year-old subjects without prior hearing problems, recruited online. Calibration was conducted as a preliminary step in preparation for further examination. In the controlled group, audiologist-assisted examinations were performed in a sound booth on normal-hearing subjects verified through pure-tone audiometry, recruited offline from among the workers and patients of the clinic. In both groups, the reference sound levels were determined on a subject's mobile device using Bekesy audiometry. The reference sound levels were compared between the groups, and intramodel and intermodel analyses were carried out as well. In the uncontrolled group, 8988 calibrations were conducted on 8620 different devices representing 2040 models. In the controlled group, 158 calibrations (test and retest) were conducted on 79 devices representing 50 models. Result analysis was performed for the 10 most frequently used models in both groups. The difference in reference sound levels between the uncontrolled and controlled groups was 1.50 dB (SD 4.42). The mean SD of the reference sound level determined for devices within the same model was 4.03 dB (95% CI 3.93-4.11).
Statistically significant differences were found across models. Reference sound levels determined in the uncontrolled group are comparable to the values obtained in the controlled group. This validates the use of biological calibration in the uncontrolled group for determining the predefined reference sound level for new devices. Moreover, due to a relatively small deviation of the reference sound level for devices of the same model, it is feasible to conduct hearing screening on devices calibrated with the predefined reference sound level.
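The per-model pooling underlying these statistics, grouping biological-calibration results by device model and summarizing their mean and spread, can be sketched as follows; the record layout and the sample values are hypothetical.

```python
from collections import defaultdict
from statistics import mean, stdev

def per_model_stats(calibrations):
    """Summarize biologically calibrated reference levels per device model.

    `calibrations` is a list of (model, reference_level_db) pairs; every
    model with at least two calibrations gets a (mean_db, sd_db) summary.
    The record layout and the sample values below are hypothetical.
    """
    by_model = defaultdict(list)
    for model, level_db in calibrations:
        by_model[model].append(level_db)
    return {m: (mean(v), stdev(v)) for m, v in by_model.items() if len(v) > 1}

data = [("modelA", 10.0), ("modelA", 14.0), ("modelA", 12.0),
        ("modelB", 20.0), ("modelB", 22.0)]
print(per_model_stats(data))
```

A small within-model SD is what justifies shipping one predefined reference level per model instead of calibrating every device.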
The Development of Infants’ use of Property-poor Sounds to Individuate Objects
Wilcox, Teresa; Smith, Tracy R.
2010-01-01
There is evidence that infants as young as 4.5 months use property-rich but not property-poor sounds as the basis for individuating objects (Wilcox et al., 2006). The current research sought to identify the age at which infants demonstrate the capacity to use property-poor sounds. Using the task of Wilcox et al., infants aged 7 and 9 months were tested. The results revealed that 9- but not 7-month-olds demonstrated sensitivity to property-poor sounds (electronic tones) in an object individuation task. Additional results confirmed that the younger infants were sensitive to property-rich sounds (rattle sounds). These are the first positive results obtained with property-poor sounds in infants and lay the foundation for future research to identify the underlying basis for the developmental hierarchy favoring property-rich over property-poor sounds and possible mechanisms for change. PMID:20701977
Noise Reduction in Breath Sound Files Using Wavelet Transform Based Filter
NASA Astrophysics Data System (ADS)
Syahputra, M. F.; Situmeang, S. I. G.; Rahmat, R. F.; Budiarto, R.
2017-04-01
The development of science and technology in the field of healthcare increasingly provides convenience in diagnosing respiratory system problems. Recording breath sounds is one example of these developments. Breath sounds are recorded using a digital stethoscope and then stored in a sound-format file. These breath sounds are analyzed by health practitioners to diagnose symptoms of disease or illness. However, the breath sound recordings are not free from interference signals. Therefore, a noise filter or signal interference reduction system is required so that the breath sound component carrying the information signal can be clarified. In this study, we designed a wavelet transform based filter, using a Daubechies wavelet with four wavelet transform coefficients. Based on testing with the ten types of breath sound data, the largest SNRdB, 74.3685 dB, was obtained for bronchial sounds.
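Wavelet-threshold denoising of the kind described can be sketched with a one-level Haar transform standing in for the paper's four-coefficient Daubechies filter; the threshold value, the test signal, and the SNR check are illustrative.

```python
import math

def haar_denoise(signal, threshold):
    """One-level Haar wavelet soft-threshold denoising (even-length input).

    A simplified stand-in for the Daubechies-4 filter: transform, soft-
    threshold the detail coefficients, invert.  With threshold 0 the
    round trip reproduces the input.
    """
    s2 = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s2 for i in range(0, len(signal), 2)]
    soft = [math.copysign(max(abs(d) - threshold, 0.0), d) for d in detail]
    out = []
    for a, d in zip(approx, soft):
        out += [(a + d) / s2, (a - d) / s2]
    return out

def snr_db(clean, estimate):
    """SNR (dB) of an estimate against the clean reference signal."""
    sig = sum(c * c for c in clean)
    err = sum((c - e) ** 2 for c, e in zip(clean, estimate)) or 1e-12
    return 10 * math.log10(sig / err)

# Alternating-sign noise on a slow sine: thresholding the details raises SNR.
clean = [math.sin(i / 5) for i in range(64)]
noisy = [c + 0.2 * (-1) ** i for i, c in enumerate(clean)]
print(snr_db(clean, noisy), snr_db(clean, haar_denoise(noisy, 0.3)))
```

A multi-level Daubechies decomposition (as in the paper) applies the same threshold-and-invert idea recursively to the approximation coefficients.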
The Incongruency Advantage for Environmental Sounds Presented in Natural Auditory Scenes
ERIC Educational Resources Information Center
Gygi, Brian; Shafiro, Valeriy
2011-01-01
The effect of context on the identification of common environmental sounds (e.g., dogs barking or cars honking) was tested by embedding them in familiar auditory background scenes (street ambience, restaurants). Initial results with subjects trained on both the scenes and the sounds to be identified showed a significant advantage of about five…
ERIC Educational Resources Information Center
Nash, Hannah M.; Gooch, Debbie; Hulme, Charles; Mahajan, Yatin; McArthur, Genevieve; Steinmetzger, Kurt; Snowling, Margaret J.
2017-01-01
The "automatic letter-sound integration hypothesis" (Blomert, [Blomert, L., 2011]) proposes that dyslexia results from a failure to fully integrate letters and speech sounds into automated audio-visual objects. We tested this hypothesis in a sample of English-speaking children with dyslexic difficulties (N = 13) and samples of…
49 CFR 325.59 - Measurement procedure; stationary test.
Code of Federal Regulations, 2011 CFR
2011-10-01
... made of the sound level generated by a stationary motor vehicle as follows: (a) Park the motor vehicle... open throttle. Return the engine's speed to idle. (e) Observe the maximum reading on the sound level... this section until the first two maximum sound level readings that are within 2 dB(A) of each other are...
Use of Authentic-Speech Technique for Teaching Sound Recognition to EFL Students
ERIC Educational Resources Information Center
Sersen, William J.
2011-01-01
The main objective of this research was to test an authentic-speech technique for improving the sound-recognition skills of EFL (English as a foreign language) students at Roi-Et Rajabhat University. The secondary objective was to determine the correlation, if any, between students' self-evaluation of sound-recognition progress and the actual…
Auditory and visual localization accuracy in young children and adults.
Martin, Karen; Johnstone, Patti; Hedrick, Mark
2015-06-01
This study aimed to measure and compare sound and light source localization ability in young children and adults who have normal hearing and normal/corrected vision, in order to determine the extent to which age, type of stimulus, and stimulus order affect sound localization accuracy. Two experiments were conducted. The first involved a group of adults only; the second involved a group of 30 children aged 3 to 5 years. Testing occurred in a sound-treated booth containing a semi-circular array of 15 loudspeakers set at 10° intervals from -70° to 70° azimuth. Each loudspeaker had a tiny light bulb and a small picture fastened underneath. Seven of the loudspeakers were used to randomly test sound and light source identification. The sound stimulus was the word "baseball"; the light stimulus was a flashing of a light bulb triggered by the digital signal of the word "baseball". Each participant was asked to face 0° azimuth and identify the location of the test stimulus upon presentation. Adults used a computer mouse to click on an icon; children responded by verbally naming or walking toward the picture underneath the corresponding loudspeaker or light. A mixed experimental design using repeated measures was used to determine the effect of age and stimulus type on localization accuracy, and to compare the effect of stimulus order (light first/last) and of varying or fixed intensity sound, in children and adults. Localization accuracy was significantly better for light stimuli than sound stimuli for both children and adults. Children, compared to adults, showed significantly greater localization errors for audition. Three-year-old children had significantly greater sound localization errors compared to 4- and 5-year-olds. Adults performed better on the sound localization task when the light localization task occurred first.
Young children can understand and attend to localization tasks, but show poorer accuracy than adults in sound localization. This may reflect differences in sensory modality development and/or central processes in young children compared to adults. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Giordano, Bruno L; Egermann, Hauke; Bresin, Roberto
2014-01-01
Several studies have investigated the encoding and perception of emotional expressivity in music performance. A relevant question concerns how the ability to communicate emotions in music performance is acquired. In accordance with recent theories on the embodiment of emotion, we suggest here that both the expression and recognition of emotion in music might at least in part rely on knowledge about the sounds of expressive body movements. We test this hypothesis by drawing parallels between the musical expression of emotions and the expression of emotions in sounds associated with a non-musical motor activity: walking. In a combined production-perception design, two experiments were conducted, and expressive acoustical features were compared across modalities. An initial performance experiment tested for similar feature use in walking sounds and music performance, and revealed that strong similarities exist. Features related to sound intensity, tempo and tempo regularity were identified as being used similarly in both domains. Participants in a subsequent perception experiment were able to recognize both non-emotional and emotional properties of the sound-generating walkers. An analysis of the acoustical correlates of the behavioral data revealed that variations in sound intensity, tempo, and tempo regularity were likely used to recognize the expressed emotions. Taken together, these results lend support to the motor-origin hypothesis for the musical expression of emotions.
Continuous robust sound event classification using time-frequency features and deep learning
McLoughlin, Ian; Zhang, Haomin; Xie, Zhipeng; Song, Yan; Xiao, Wei; Phan, Huy
2017-01-01
The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human-computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for the classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high-performing isolated sound classifiers on continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition, it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification. PMID:28892478
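The energy-based event detection front end mentioned above can be sketched as frame-energy thresholding; the frame length and threshold below are illustrative values, not those of the benchmarked systems.

```python
def detect_events(samples, frame_len=256, threshold=0.01):
    """Energy-based event detection over a continuous sample stream.

    Splits the signal into non-overlapping frames, computes mean-square
    energy per frame, and returns (start, end) sample indices of runs of
    frames at or above the threshold.  Detected segments would then be
    passed to a classifier; frame length and threshold are illustrative.
    """
    events, start = [], None
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        energy = sum(s * s for s in samples[i:i + frame_len]) / frame_len
        if energy >= threshold and start is None:
            start = i                      # event onset
        elif energy < threshold and start is not None:
            events.append((start, i))      # event offset
            start = None
    if start is not None:                  # event still running at stream end
        events.append((start, i + frame_len))
    return events

# Silence, a 0.5-amplitude burst, silence: one event spanning the burst.
stream = [0.0] * 512 + [0.5] * 512 + [0.0] * 512
print(detect_events(stream))  # [(512, 1024)]
```

In noisy recordings the fixed threshold would typically be replaced by an adaptive one estimated from the background level.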
Nonverbal auditory working memory: Can music indicate the capacity?
Jeong, Eunju; Ryu, Hokyoung
2016-06-01
Different working memory (WM) mechanisms that underlie words, tones, and timbres have been proposed in previous studies. In this regard, the present study developed a WM test with nonverbal sounds and compared it to the conventional verbal WM test. A total of twenty-five non-music-major, right-handed college students were presented with four different types of sounds (words, syllables, pitches, timbres) that varied from two to eight digits in length. Both accuracy and oxygenated hemoglobin (oxyHb) were measured. The results showed significant effects of the number of targets on accuracy and of sound type on oxyHb. A further analysis showed prefrontal asymmetry, with pitch being processed by the right hemisphere (RH) and timbre by the left hemisphere (LH). These findings suggest a potential for employing musical sounds (i.e., pitch and timbre) as complementary stimuli for conventional nonverbal WM tests, which can additionally examine their asymmetrical roles in the prefrontal regions. Copyright © 2016 Elsevier Inc. All rights reserved.
Possibilities of psychoacoustics to determine sound quality
NASA Astrophysics Data System (ADS)
Genuit, Klaus
For some years, acoustic engineers have increasingly become aware of the importance of analyzing and minimizing noise problems not only with regard to the A-weighted sound pressure level, but also with regard to designing sound quality. It is relatively easy to determine the A-weighted SPL according to international standards. However, the objective evaluation needed to describe subjectively perceived sound quality, taking into account psychoacoustic parameters such as loudness, roughness, fluctuation strength, sharpness and so forth, is more difficult. On the one hand, the psychoacoustic measurement procedures known so far have not yet been standardized. On the other hand, they have only been tested in laboratories by means of listening tests in the free field, with one sound source and simple signals. Therefore, the results achieved cannot be transferred without difficulty to complex sound situations with several spatially distributed sound sources. Due to the directional hearing and selectivity of human hearing, individual sound events can be selected among many. Already in the late seventies a new binaural Artificial Head Measurement System was developed which met the requirements of the automobile industry in terms of measurement technology. The first industrial application of the Artificial Head Measurement System was in 1981. Since that time the system has been further developed, particularly through the cooperation between HEAD acoustics and Mercedes-Benz. In addition to a calibratable Artificial Head Measurement System which is compatible with standard measurement technologies and has transfer characteristics comparable to human hearing, a Binaural Analysis System is now also available. This system permits the analysis of binaural signals using physical and psychoacoustic procedures.
Moreover, the signals to be analyzed can be simultaneously monitored through headphones and manipulated in the time and frequency domains, so that the signal components responsible for noise annoyance can be found. Especially in complex sound situations with several spatially distributed sound sources, standard one-channel measurement methods cannot adequately determine sound quality, acoustic comfort, or the annoyance of sound events.
A closed-loop automatic control system for high-intensity acoustic test systems.
NASA Technical Reports Server (NTRS)
Slusser, R. A.
1973-01-01
Sound at sound pressure levels in the range from 130 to 160 dB is used in the investigation. Random noise is passed through a series of parallel filters, generally 1/3-octave wide. A basic automatic system is investigated because of preadjustment inaccuracies and high costs found in a study of a typical manually controlled acoustic testing system. The unit described has been successfully used in automatic acoustic tests in connection with the spacecraft tests for the Mariner 1971 program.
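The closed-loop adjustment such a system automates can be sketched per band: each 1/3-octave drive level is nudged by a fraction of the error between the measured and the specified SPL. The plant model (measured level equals drive plus a fixed offset) and the loop gain are illustrative assumptions, not the Mariner test hardware's behavior.

```python
def servo_levels(plant_offsets_db, target_db, rate=0.5, steps=25):
    """Closed-loop drive-level control for a bank of 1/3-octave bands.

    Assumes each band's measured SPL equals its drive level plus a fixed,
    unknown plant offset; every step nudges each drive by `rate` times the
    error between target and measurement.  Offsets, loop gain and step
    count are illustrative.  Returns the final measured SPL per band.
    """
    drive = [0.0] * len(target_db)
    for _ in range(steps):
        measured = [d + o for d, o in zip(drive, plant_offsets_db)]
        drive = [d + rate * (t - m)
                 for d, m, t in zip(drive, measured, target_db)]
    return [d + o for d, o in zip(drive, plant_offsets_db)]

# Three bands with different plant gains converge onto the specified SPLs.
final = servo_levels([12.0, 8.5, 15.0], [150.0, 145.0, 155.0])
print([round(x, 3) for x in final])  # [150.0, 145.0, 155.0]
```

With this proportional update the per-band error shrinks by the factor (1 - rate) each step, which is why a modest number of iterations suffices.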
NASA Technical Reports Server (NTRS)
Platt, R.
1999-01-01
This is the Performance Verification Report, Final Comprehensive Performance Test (CPT) Report, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A). This specification establishes the requirements for the CPT and Limited Performance Test (LPT) of the AMSU-1A, referred to herein as the unit. The sequence in which the several phases of this test procedure shall take place is shown.
Feldmann, H
1997-05-01
Since the 17th century it has been known that sounds can be perceived via air conduction and bone conduction and that this provides a means of differentiating between hearing disorders located in the middle ear and those located in the acoustic nerve. For a long time to come, however, there was no need for such a differential diagnosis. After the invention of the tuning fork in 1711 this instrument soon became widely used in music, but it took well over 100 years until it was introduced into physiology and otology. FROM DIRECTIONAL HEARING TO WEBER'S TEST: J. B. Venturi, a physicist in Modena, Italy, in 1802 had shown that the perception of the direction from which a sound is coming is governed by the fact that one ear is hit by the sound more intensely than the other. C. T. Tourtual, a physician in Münster, Germany, demonstrated in 1827 that this also holds true for sound conducted via the skull bones. He used a watch as a sound source. He found that occlusion of both ear canals would increase the sensation in both ears equally, but that occlusion of only one ear would increase the sensation only in the occluded ear, thus giving the impression that the sound was coming from that side. He was interested in a comparison between vision and audition, and he concluded that, with regard to recognizing the direction of a sensory signal, vision is superior to audition. In the same year, 1827, C. Wheatstone, a physicist in London, investigating the mode of vibration of the tympanic membrane and using a tuning fork, found the same phenomena as Tourtual, and some further effects. E. H. Weber, an anatomist and physiologist in Leipzig, Germany, described the very same phenomena as Tourtual and Wheatstone once more in 1834. He wanted to prove that airborne sound is perceived by the vestibulum and the semicircular canals, and bone-conducted sound by the cochlea. None of these investigators was thinking of a clinical use of their findings, and none made any such suggestion. E.
Schmalz, an otologist in Dresden, Germany, in 1845 introduced the tuning fork, and the test later named after Weber, into otology and explained in great detail all the possibilities for diagnostic evaluation of the test. His grand achievement, however, passed unnoticed in his time. A. Rinne, a physician in Göttingen, Germany, in 1855 described the test which was later named after him, in an elaborate treatise on the physiology of the ear. He wanted to demonstrate that in man and animals living in air, as opposed to those living in water, the conduction of sound via the bones of the skull is just an unavoidable side effect of sound perception. He mentioned a clinical application of his test only in a footnote and evidently never used it himself in a systematic way. His test was made generally known only after 1880, by Lucae in Berlin. The value of Weber's and Rinne's tuning fork tests was much disputed even at the turn of the century and only gradually became generally accepted.
2017-02-01
difference from the climate-based METCM. The Tv changes are shown in Fig. 5, but given the smaller relative changes only the ±2 SD curves are presented... planning and in field tests when sounding data are not available. However, the use of climate mean profiles may lead to wide differences from actual... individual atmospheric profiles. This brief report investigates the variation of a series of soundings as compared to climate mean soundings and
Simulation and testing of a multichannel system for 3D sound localization
NASA Astrophysics Data System (ADS)
Matthews, Edward Albert
Three-dimensional (3D) audio involves the ability to localize sound anywhere in a three-dimensional space. 3D audio can be used to provide the listener with the perception of moving sounds and can provide a realistic listening experience for applications such as gaming, video conferencing, movies, and concerts. The purpose of this research is to simulate and test 3D audio by incorporating auditory localization techniques in a multi-channel speaker system. The objective is to develop an algorithm that can place an audio event in a desired location by calculating and controlling the gain factors of each speaker. A MATLAB simulation displays the location of the speakers and perceived sound, which is verified through experimentation. The scenario in which the listener is not equidistant from each of the speakers is also investigated and simulated. This research is envisioned to lead to a better understanding of human localization of sound, and will contribute to a more realistic listening experience.
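The abstract above describes placing an audio event by calculating speaker gain factors, but the exact algorithm is not given. The following is only an illustrative sketch of the common constant-power pairwise amplitude-panning rule; the `pan_gains` helper and its parameters are assumptions, not the thesis's method:

```python
import math

def pan_gains(azimuth_deg, span_deg=60.0):
    """Constant-power gains for a speaker pair spanning `span_deg` degrees.

    azimuth_deg: desired source angle, 0 = left speaker, span_deg = right.
    Returns (g_left, g_right) with g_left**2 + g_right**2 == 1,
    so the perceived loudness stays constant as the source moves.
    """
    theta = (azimuth_deg / span_deg) * (math.pi / 2.0)
    return math.cos(theta), math.sin(theta)
```

A source midway between the pair gets equal gains of about 0.707 in each speaker, which is the classic -3 dB pan-law center.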
Nanocellulose based polymer composite for acoustical materials
NASA Astrophysics Data System (ADS)
Farid, Mohammad; Purniawan, Agung; Susanti, Diah; Priyono, Slamet; Ardhyananta, Hosta; Rahmasita, Mutia E.
2018-04-01
Natural fibers are biodegradable materials that are used widely and innovatively for composite reinforcement in automotive components. Nanocellulose derived from the natural fibers of oil palm empty fruit bunches has remarkable properties for use as a composite reinforcement. However, there have not been many investigations of nanocellulose-based composites as wideband sound-absorption materials. Specimens of nanocellulose-based polyester composite were prepared using a spray method. An impedance tube method was used to measure the sound absorption coefficient of this composite material. To characterize the nanocellulose-based polyester composite, SEM (scanning electron microscopy), TEM (transmission electron microscopy), FTIR (Fourier transform infrared spectroscopy), TGA (thermogravimetric analysis), and density tests were performed. Sound absorption tests showed an average sound absorption coefficient of 0.36 to 0.46 for frequencies between 500 and 4000 Hz, indicating that this nanocellulose-based polyester composite tends toward wideband sound absorption and could potentially be used as an automotive interior material.
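The abstract reports absorption coefficients from an impedance tube but does not detail the procedure. As a hedged illustration, here is a sketch of the two-microphone transfer-function method in the style of ISO 10534-2, where the absorption coefficient is 1 minus the squared magnitude of the reflection coefficient; the function name and argument layout are assumptions:

```python
import cmath

def absorption_coefficient(h12, freq, mic_spacing, x1, c=343.0):
    """Sound absorption coefficient via the two-microphone
    transfer-function method (ISO 10534-2 style).

    h12         : complex transfer function p2/p1 between the two microphones
    freq        : frequency in Hz
    mic_spacing : distance s between the microphones (m)
    x1          : distance from the sample face to the farther microphone (m)
    c           : speed of sound (m/s)
    """
    k = 2.0 * cmath.pi * freq / c           # wavenumber
    h_i = cmath.exp(-1j * k * mic_spacing)  # incident-wave transfer function
    h_r = cmath.exp(1j * k * mic_spacing)   # reflected-wave transfer function
    r = (h12 - h_i) / (h_r - h12) * cmath.exp(2j * k * x1)
    return 1.0 - abs(r) ** 2                # fraction of energy absorbed
```

Sweeping `freq` over the tube's valid band yields the absorption curve that such papers report.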
Laboratory studies of scales for measuring helicopter noise
NASA Technical Reports Server (NTRS)
Ollerhead, J. B.
1982-01-01
The adequacy of the effective perceived noise level (EPNL) procedure for rating helicopter noise annoyance was investigated. Recordings of 89 helicopter and 30 fixed-wing aircraft (CTOL) flyover sounds were rated with respect to annoyance by groups of approximately 40 subjects. The average annoyance scores were transformed to annoyance levels, defined as the equally annoying sound levels of a fixed reference sound. The sound levels of the test sounds were measured on various scales, with and without corrections for duration, tones, and impulsiveness. On average, the helicopter sounds were judged equally annoying to the CTOL sounds when their duration-corrected levels were approximately 2 dB higher. Multiple regression analysis indicated that, provided the helicopter/CTOL difference of about 2 dB is taken into account, the particular linear combination of level, duration, and tone corrections inherent in EPNL is close to optimum. The results reveal no general requirement for special EPNL correction terms to penalize helicopter sounds that are particularly impulsive; impulsiveness causes spectral and temporal changes which themselves adequately amplify conventionally measured sound levels.
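The EPNL rating evaluated above folds level, duration, and tone corrections into one number. A minimal sketch of the duration-integration step in the style of FAR Part 36, assuming a 0.5 s sampling interval and the 10 s normalizing time (this is an illustration, not the authors' code):

```python
import math

def epnl(pnlt_series, dt=0.5, t_ref=10.0):
    """Effective perceived noise level from a tone-corrected PNLT time
    history (dB): integrate the samples within 10 dB of the maximum on an
    energy basis, normalized to a 10 s reference duration.
    """
    pnltm = max(pnlt_series)
    window = [p for p in pnlt_series if p >= pnltm - 10.0]
    energy = sum(10.0 ** (p / 10.0) for p in window) * dt / t_ref
    return 10.0 * math.log10(energy)
```

Doubling the duration of an otherwise identical flyover raises EPNL by about 3 dB, which is why the duration correction matters when comparing helicopter and CTOL sounds.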
Comparison of snoring sounds between natural and drug-induced sleep recorded using a smartphone.
Koo, Soo Kweon; Kwon, Soon Bok; Moon, Ji Seung; Lee, Sang Hoon; Lee, Ho Byung; Lee, Sang Jun
2018-08-01
Snoring is an important clinical feature of obstructive sleep apnea (OSA), and recent studies suggest that the acoustic quality of snoring sounds is markedly different in drug-induced sleep compared with natural sleep. However, considering differences in sound recording methods and analysis parameters, further studies are required. This study explored whether acoustic analysis of drug-induced sleep is useful as a screening test that reflects the characteristics of natural sleep in snoring patients. The snoring sounds of 30 male subjects (mean age = 41.8 years) were recorded using a smartphone during natural and induced sleep, with the site of vibration noted during drug-induced sleep endoscopy (DISE); then, we compared the sound intensity (dB), formant frequencies, and spectrograms of snoring sounds. Regarding the intensity of snoring sounds, there were minor differences within the retrolingual level obstruction group, but there was no significant difference between natural and induced sleep at either obstruction site. There was no significant difference in the F1 and F2 formant frequencies of snoring sounds between natural sleep and induced sleep at either obstruction site. Compared with natural sleep, induced sleep was slightly more irregular, with a stronger intensity on the spectrogram, but the spectrograms showed the same pattern at both obstruction sites. Although further studies are required, the spectrograms and formant frequencies of the snoring sounds of induced sleep did not differ significantly from those of natural sleep, and may be used as a screening test that reflects the characteristics of natural sleep according to the obstruction site. Copyright © 2017 Elsevier B.V. All rights reserved.
Design and evaluation of a parametric model for cardiac sounds.
Ibarra-Hernández, Roilhi F; Alonso-Arévalo, Miguel A; Cruz-Gutiérrez, Alejandro; Licona-Chávez, Ana L; Villarreal-Reyes, Salvador
2017-10-01
Heart sound analysis plays an important role in the auscultative diagnosis process to detect the presence of cardiovascular diseases. In this paper we propose a novel parametric heart sound model that accurately represents normal and pathological cardiac audio signals, also known as phonocardiograms (PCG). The proposed model considers that the PCG signal is formed by the sum of two parts: one of them is deterministic and the other one is stochastic. The first part contains most of the acoustic energy. This part is modeled by the Matching Pursuit (MP) algorithm, which performs an analysis-synthesis procedure to represent the PCG signal as a linear combination of elementary waveforms. The second part, also called the residual, is obtained after subtracting the deterministic signal from the original heart sound recording and can be accurately represented as an autoregressive process using the Linear Predictive Coding (LPC) technique. We evaluate the proposed heart sound model by performing subjective and objective tests using signals corresponding to different pathological cardiac sounds. The results of the objective evaluation show an average Percentage of Root-Mean-Square Difference of approximately 5% between the original heart sound and the reconstructed signal. For the subjective test we conducted a formal methodology for perceptual evaluation of audio quality with the assistance of medical experts. Statistical results of the subjective evaluation show that our model provides a highly accurate approximation of real heart sound signals. We are not aware of any previous heart sound model as rigorously evaluated as ours. Copyright © 2017 Elsevier Ltd. All rights reserved.
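The model above splits the PCG into a deterministic part fitted by Matching Pursuit and a stochastic residual fitted by LPC. A toy sketch of the MP stage follows; the dictionary, atom count, and function name are illustrative assumptions, not the paper's implementation, and the returned residual is what the LPC stage would then model:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy MP: approximate `signal` as a sum of `n_atoms` columns of
    `dictionary` (columns assumed unit-norm). Returns the deterministic
    approximation and the residual left for autoregressive (LPC) modeling.
    """
    residual = signal.astype(float).copy()
    approx = np.zeros_like(residual)
    for _ in range(n_atoms):
        corr = dictionary.T @ residual    # inner products with every atom
        k = int(np.argmax(np.abs(corr)))  # best-matching atom this round
        approx += corr[k] * dictionary[:, k]
        residual -= corr[k] * dictionary[:, k]
    return approx, residual
```

With a redundant dictionary of elementary waveforms (e.g. Gabor atoms), a few iterations capture most of the acoustic energy, matching the abstract's description of the deterministic part.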
1944-08-02
resistance to penetration characteristics. However, it is one of the most important factors involved, and when the type of steel, its soundness and its... this test and not to use it in comparing radically different types of plates, such as steel and Dural. Such a comparison can be made only by examining... in many cases. For this reason it became necessary to develop an additional test for this characteristic in rolled armor (steel soundness) and cast
Gender Gaps in Letter-Sound Knowledge Persist Across the First School Year
Sigmundsson, Hermundur; Dybfest Eriksen, Adrian; Ofteland, Greta S.; Haga, Monika
2018-01-01
Literacy is the cornerstone of a primary school education and enables the intellectual and social development of young children. Letter-sound knowledge has been identified as critical for developing proficiency in reading. This study explored the development of letter-sound knowledge in relation to gender during the first year of primary school. 485 Norwegian children aged 5–6 years completed assessments of letter-sound knowledge: uppercase letter name; uppercase letter sound; lowercase letter name; lowercase letter sound. The children were tested at the beginning, middle, and end of their first school year. The results revealed a clear gender difference in all four variables, in favor of the girls, that was relatively constant over time. Implications for understanding the role of gender and letter-sound knowledge in later reading performance are discussed. PMID:29662461
NASA Technical Reports Server (NTRS)
Marshburn, Thomas; Cole, Richard; Ebert, Doug; Bauer, Pete
2014-01-01
Introduction: Evaluation of heart, lung, and bowel sounds is routinely performed with the use of a stethoscope to help detect a broad range of medical conditions. Stethoscope-acquired information is even more valuable in a resource-limited environment such as the International Space Station (ISS), where additional testing is not available. The high ambient noise level aboard the ISS poses a specific challenge to auscultation by stethoscope. An electronic stethoscope's ambient-noise reduction, greater sound amplification, recording capabilities, and sound visualization software may offer advantages over a conventional stethoscope in this environment. Methods: A single operator rated signal-to-noise quality from a conventional stethoscope (Littmann 2218BE) and an electronic stethoscope (Littmann 3200). Borborygmi, pulmonic, and cardiac sound quality was ranked with both stethoscopes. Signal-to-noise rankings were performed on a 1-to-10 subjective scale, with 1 being inaudible, 6 the expected quality in an emergency department, 8 the expected quality in a clinic, and 10 the clearest possible quality. Testing took place in the Japanese Pressurized Module (JPM), Unity (Node 2), Destiny (US Lab), Tranquility (Node 3), and the Cupola of the International Space Station. All examinations were conducted at a single point in time. Results: The electronic stethoscope's performance ranked higher than the conventional stethoscope's for each body sound in all modules tested. The electronic stethoscope's sound quality was rated between 7 and 10 in all modules tested. In comparison, the traditional stethoscope's sound quality was rated between 4 and 7. The signal-to-noise ratio of borborygmi showed the biggest difference between stethoscopes. In the modules tested, the auscultation of borborygmi was rated between 5 and 7 with the conventional stethoscope and consistently 10 with the electronic stethoscope. Discussion: This stethoscope comparison was limited to a single operator.
However, we believe the results are noteworthy. The electronic stethoscope outperformed the traditional stethoscope in each direct comparison. Consideration should be given to incorporating an electronic stethoscope into current and future space vehicle medical kits.
Kastelein, R A; Verboom, W C; Muijsers, M; Jennings, N V; van der Heul, S
2005-05-01
To prevent grounding of ships and collisions between ships in shallow coastal waters, an underwater data collection and communication network is currently under development: Acoustic Communication network for Monitoring of underwater Environment in coastal areas (ACME). Marine mammals might be affected by ACME sounds since they use sounds of similar frequencies (around 12 kHz) for communication, orientation, and prey location. If marine mammals tend to avoid the vicinity of the transmitters, they may be kept away from ecologically important areas by ACME sounds. One marine mammal species that may be affected in the North Sea is the harbour porpoise. Therefore, as part of an environmental impact assessment program, two captive harbour porpoises were subjected to four sounds, three of which may be used in the underwater acoustic data communication network. The effect of each sound was judged by comparing the animals' positions and respiration rates during a test period with those during a baseline period. Each of the four sounds could be made a deterrent by increasing the amplitude of the sound. The porpoises reacted by swimming away from the sounds and by slightly, but significantly, increasing their respiration rate. From the sound pressure level distribution in the pen, and the distribution of the animals during test sessions, discomfort sound level thresholds were determined for each sound. In combination with information on sound propagation in the areas where the communication system may be deployed, the extent of the 'discomfort zone' can be estimated for several source levels (SLs). The discomfort zone is defined as the area around a sound source that harbour porpoises are expected to avoid. Based on these results, SLs can be selected that have an acceptable effect on harbour porpoises in particular areas. 
The discomfort zone of a communication sound depends on the selected sound, the selected SL, and the propagation characteristics of the area in which the sound system is operational. In shallow, winding coastal water courses, with sandbanks, etc., the type of habitat in which the ACME sounds will be produced, propagation loss cannot be accurately estimated by using a simple propagation model, but should be measured on site. The SL of the communication system should be adapted to each area (taking into account bounding conditions created by narrow channels, sound propagation variability due to environmental factors, and the importance of an area to the affected species). The discomfort zone should not prevent harbour porpoises from spending sufficient time in ecologically important areas (for instance feeding areas), or routes towards these areas.
Laboratory Assessment of Commercially Available Ultrasonic Rangefinders
2015-11-01
how the room was designed to prevent sound reflections (a combination of the wedges absorbing the waveforms and the absence of a flat wall). When testing... sound booth at 0.5 m. ...environments for sound measurements using a tape measure. This mapping method can be time-consuming and unreliable, as objects frequently move around in
NASA Technical Reports Server (NTRS)
Becher, J.; Meredith, R. W.; Zuckerwar, A. J.
1981-01-01
The fabrication of parts for the acoustic ground impedance meter was completed, and the instrument tested. Acoustic ground impedance meter, automatic data processing system, cooling system for the resonant tube, and final results of sound absorption in N2-H2O gas mixtures at elevated temperatures are described.
ERIC Educational Resources Information Center
Macrae, Toby; Tyler, Ann A.
2014-01-01
Purpose: The authors compared preschool children with co-occurring speech sound disorder (SSD) and language impairment (LI) to children with SSD only in their numbers and types of speech sound errors. Method: In this post hoc quasi-experimental study, independent samples t tests were used to compare the groups in the standard score from different…
NASA Astrophysics Data System (ADS)
Zhang, Dashan; Guo, Jie; Jin, Yi; Zhu, Chang'an
2017-09-01
High-speed cameras provide full-field measurement of structural motion and have been applied in nondestructive testing and noncontact structure monitoring. Recently, a phase-based method was proposed to extract sound-induced vibrations from phase variations in videos; this method provides insights into remote sound surveillance and material analysis. Here, an efficient singular value decomposition (SVD)-based approach is introduced to detect sound-induced subtle motions from pixel intensities in silent high-speed videos. A high-speed camera first captures a video of the objects vibrating under sound excitation. Subimages collected from a small region of the captured video are then reshaped into vectors and assembled into a matrix. Orthonormal image bases (OIBs) are obtained from the SVD of this matrix; the vibration signal is then obtained by projecting subsequent subimages onto specific OIBs. A simulation test validates the effectiveness and efficiency of the proposed method, and two experiments demonstrate potential applications in sound recovery and material analysis. Results show that the proposed method efficiently detects subtle motions from the video.
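The pipeline described above (reshape subimages into vectors, form a matrix, take its SVD, project onto orthonormal image bases) can be sketched as follows. The array shapes, function name, and choice of basis index are assumptions for illustration, not the paper's exact implementation:

```python
import numpy as np

def vibration_signal_from_frames(frames, basis_index=1):
    """Recover a 1-D vibration time series from a stack of video subimages.

    frames: array of shape (n_frames, h, w) cropped from a fixed region.
    Each frame is flattened into one column of a matrix; the left singular
    vectors of that matrix are the orthonormal image bases (OIBs), and
    projecting every frame onto one OIB yields the motion signal.
    basis_index=0 is dominated by the static mean image, so the first
    dynamic component is typically index 1.
    """
    n, h, w = frames.shape
    m = frames.reshape(n, h * w).T.astype(float)  # columns = flattened frames
    u, _, _ = np.linalg.svd(m, full_matrices=False)
    return m.T @ u[:, basis_index]                # per-frame projection
```

On synthetic frames consisting of a static background plus a sinusoidally modulated pattern, the projection onto the first dynamic OIB reproduces the sinusoid, which is the effect the paper exploits for sound recovery.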
Issues Related to Large Flight Hardware Acoustic Qualification Testing
NASA Technical Reports Server (NTRS)
Kolaini, Ali R.; Perry, Douglas C.; Kern, Dennis L.
2011-01-01
The characteristics of the acoustic testing volumes generated by reverberant chambers or a circle of loudspeakers, with and without large flight hardware within the testing volume, are significantly different. The parameters contributing to these differences are normally not accounted for by analysis or by acoustic tests performed before qualification testing without the test hardware present. In most cases the control microphones are kept at least 2 ft away from hardware surfaces, chamber walls, and speaker surfaces to minimize the impact of the hardware on controlling the sound field. However, the acoustic absorption and radiation of sound by hardware surfaces may significantly alter the sound pressure field controlled within the chamber/speaker volume to a given specification. These parameters often result in an acoustic field that may under- or over-test the flight hardware. In this paper the acoustic absorption by hardware surfaces is discussed in some detail. A simple model is provided to account for some of the observations made on the Mars Science Laboratory spacecraft, which recently underwent acoustic qualification tests in a reverberant chamber.
Automated audiometry using apple iOS-based application technology.
Foulad, Allen; Bui, Peggy; Djalilian, Hamid
2013-11-01
The aim of this study is to determine the feasibility of an Apple iOS-based automated hearing testing application and to compare its accuracy with conventional audiometry, in a prospective diagnostic study at an academic medical center. An iOS-based software application was developed to perform automated pure-tone hearing testing on the iPhone, iPod touch, and iPad. To assess device variation and compatibility, preliminary work was performed to compare the standardized sound output (dB) of various Apple device and headset combinations. Forty-two subjects underwent automated iOS-based hearing testing in a sound booth, automated iOS-based hearing testing in a quiet room, and conventional manual audiometry. The maximum difference in sound intensity between the various Apple device and headset combinations was 4 dB. On average, 96% (95% confidence interval [CI], 91%-100%) of the threshold values obtained using the automated test in a sound booth were within 10 dB of the corresponding threshold values obtained using conventional audiometry. When the automated test was performed in a quiet room, 94% (95% CI, 87%-100%) of the threshold values were within 10 dB of the threshold values obtained using conventional audiometry. Under standardized testing conditions, 90% of the subjects preferred iOS-based audiometry over conventional audiometry. Apple iOS-based devices provide a platform for automated air-conduction audiometry without requiring extra equipment and yield hearing test results that approach those of conventional audiometry.
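The agreement figures above count automated thresholds falling within 10 dB of conventional audiometry. A trivial sketch of that metric (a hypothetical helper, not code from the study):

```python
def within_tolerance(auto_db, manual_db, tol=10):
    """Percent of automated thresholds within `tol` dB of the manual
    audiometry thresholds measured at the same frequencies."""
    pairs = list(zip(auto_db, manual_db))
    hits = sum(1 for a, m in pairs if abs(a - m) <= tol)
    return 100.0 * hits / len(pairs)
```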
Bhattacharyya, Parthasarathi; Mondal, Ashok; Dey, Rana; Saha, Dipanjan; Saha, Goutam
2015-05-01
Auscultation is an important part of the clinical examination in different lung diseases. Objective analysis of lung sounds based on their underlying characteristics, and its subsequent automatic interpretation, may help clinical practice. We collected breath sounds from 8 normal subjects and 20 diffuse parenchymal lung disease (DPLD) patients using a newly developed instrument and then filtered out the heart sounds using a novel technology. The collected sounds were then analysed digitally for several characteristics, such as dynamical complexity, texture information and regularity index, to find and define unique digital signatures differentiating normality from abnormality. For convenience of testing, these characteristic signatures of normal and DPLD lung sounds were transformed into coloured visual representations. The predictive power of these images was validated by six independent observers, including three physicians. The proposed method gives a classification accuracy of 100% on composite features for both the normal and the DPLD lung sound signals. When tested by independent observers on the visually transformed images, the positive predictive value for diagnosing normality and DPLD remained 100%. The lung sounds of the normal and DPLD subjects could thus be differentiated and expressed according to their digital signatures. On visual transformation to coloured images, they retain 100% predictive power. This technique may assist physicians in diagnosing DPLD from visual images bearing the digital signature of the condition. © 2015 Asian Pacific Society of Respirology.
Effects of Soundscape on the Environmental Restoration in Urban Natural Environments.
Zhang, Yuan; Kang, Jian; Kang, Joe
2017-01-01
According to the attention restoration theory, directed attention is a limited physiological resource and is susceptible to fatigue by overuse. Natural environments are a healthy resource, which allows and promotes the restoration of individuals within it from their state of directed attention fatigue. This process is called the environmental restoration on individuals, and it is affected both positively and negatively by environmental factors. By considering the relationship among the three components of soundscape, that is, people, sound and the environment, this study aims to explore the effects of soundscape on the environmental restoration in urban natural environments. A field experiment was conducted with 70 participants (four groups) in an urban natural environment (Shenyang, China). Directed attention was first depleted with a 50-min 'consumption' phase, followed by a baseline measurement of attention level. Three groups then engaged in 40 min of restoration in the respective environments with similar visual surroundings but with different sounds present, after which attention levels were re-tested. The fourth group did not undergo restoration and was immediately re-tested. The difference between the two test scores, corrected for the practice effect, represents the attention restoration of individuals exposed to the respective environments. An analysis of variance was performed, demonstrating that the differences between the mean values for each group were statistically significant [sig. = 0.027 (<0.050)]. The results showed that the mean values (confidence interval of 95%) of each group are as follows: 'natural sounds group' (8.4), 'traffic sounds group' (2.4) and 'machine sounds group' (-1.8). 
It can be concluded that (1) urban natural environments with natural sounds have a positive effect on the restoration of an individual's attention, and (2) the presence of different types of sounds has significantly divergent effects on environmental restoration.
High-pitched breath sounds indicate airflow limitation in asymptomatic asthmatic children.
Habukawa, Chizu; Nagasaka, Yukio; Murakami, Katsumi; Takemura, Tsukasa
2009-04-01
Asthmatic children may have airway dysfunction even when asymptomatic, indicating that their long-term treatment is less than optimal. Although airway dysfunction can be identified on lung function testing, performing these tests can be difficult in infants. We studied whether breath sounds reflect subtle airway dysfunction in asthmatic children. The highest frequency of inspiratory breath sounds (HFI) and the highest frequency of expiratory breath sounds (HFE) were measured in 131 asthmatic children while asymptomatic, with no wheezing for more than 2 weeks. No child was being treated with inhaled corticosteroids (ICS). Breath sounds were recorded and analysed by sound spectrography and compared with spirometric parameters. After the initial evaluation, children with asthma of step 2 (mild persistent) or greater were treated with inhaled fluticasone (100-200 microg/day) for 1 month, and then breath sound analysis and pulmonary function testing were repeated. On initial evaluation, HFI correlated with the percentage of predicted FEF(50) (%FEF(50)) (r = -0.45, P < 0.001), the percentage of predicted FEF(75) (%FEF(75)) (r = -0.456, P < 0.001), and FEV(1) as a percentage of FVC (FEV(1)/FVC (%)) (r = -0.32, P < 0.001). HFI did not correlate with the percentage of predicted PEF (%PEF). The 69 children with lower than normal %FEF(50) were then treated with ICS. The %FEF(50) and %FEF(75) improved after ICS treatment, and increases in %FEF(50) (P < 0.005) correlated with decreases in HFI (P < 0.001). Higher HFI in asymptomatic asthmatic children may indicate small airway obstruction. Additional ICS treatment may improve the pulmonary function indices representing small airway function, with simultaneous decreases in HFI in such patients.
A research program to reduce interior noise in general aviation airplanes [test methods and results]
NASA Technical Reports Server (NTRS)
Roskam, J.; Muirhead, V. U.; Smith, H. W.; Peschier, T. D.; Durenberger, D.; Vandam, K.; Shu, T. C.
1977-01-01
Analytical and semi-empirical methods for determining the transmission of sound through isolated panels and predicting panel transmission loss are described. Test results presented include the influence of plate stiffness and mass and the effects of pressurization and vibration damping materials on sound transmission characteristics. Measured and predicted results are presented in tables and graphs.
Shift and Scale Invariant Preprocessor.
1981-12-01
perception. Although many transducers are available for converting light, sound, temperature, reflected radar signals, etc., to electrical signals, the... the required tests, introduce the testing approach, and finally interpret the results. The functional block diagram of the preprocessor, figure 5, is... position with respect to the observation field is immaterial. In theory the principle is sound, but some practical limitations may degrade predicted
Results from Field Testing the RIMFAX GPR on Svalbard.
NASA Astrophysics Data System (ADS)
Hamran, S. E.; Amundsen, H. E. F.; Berger, T.; Carter, L. M.; Dypvik, H.; Ghent, R. R.; Kohler, J.; Mellon, M. T.; Nunes, D. C.; Paige, D. A.; Plettemeier, D.; Russell, P.
2017-12-01
The Radar Imager for Mars' Subsurface Experiment (RIMFAX) is a ground-penetrating radar being developed for NASA's Mars 2020 rover mission. The principal goals of the RIMFAX investigation are to image subsurface structures, provide context for sample sites, derive information regarding subsurface composition, and search for ice or brines. In meeting these goals, RIMFAX will provide a view of the stratigraphic section and a window into the geological and environmental history of Mars. To verify the design, an Engineering Model (EM) of the radar was tested in the field in spring 2017. Different sounding modes of the EM were tested in different types of subsurface geology on Svalbard. Deep soundings were performed on polythermal glaciers down to a couple of hundred meters. Shallow soundings were used to map a groundwater table in the firn area of a glacier. A combination of deep and shallow soundings was used to image buried ice under a sedimentary layer a couple of meters thick. Subsurface sedimentary layers were imaged down to more than 20 meters in sandstone permafrost. This presentation will give an overview of the RIMFAX investigation, describe the development of the radar system, and show results from field tests of the radar.
Kim, Dae Shik; Emerson, Robert Wall; Naghshineh, Koorosh; Pliskow, Jay; Myers, Kyle
2012-01-01
A repeated-measures design with block randomization was used for the study, in which 14 adults with visual impairments attempted to detect three different vehicles: a hybrid electric vehicle (HEV) with an artificially generated sound (Vehicle Sound for Pedestrians [VSP]), an HEV without the VSP, and a comparable internal combustion engine (ICE) vehicle. The VSP vehicle (mean +/- standard deviation [SD] = 38.3 +/- 14.8 m) was detected at a significantly farther distance than the HEV (mean +/- SD = 27.5 +/- 11.5 m), t = 4.823, p < 0.001, but no significant difference existed between the VSP and ICE vehicles (mean +/- SD = 34.5 +/- 14.3 m), t = 1.787, p = 0.10. Despite the overall sound level difference between the two test sites (parking lot = 48.7 dBA, roadway = 55.1 dBA), no significant difference in detection distance between the test sites was observed, F(1, 13) = 0.025, p = 0.88. No significant interaction was found between the vehicle type and test site, F(1.31, 16.98) = 0.272, p = 0.67. The findings of the study may help us understand how adding an artificially generated sound to an HEV could affect some of the orientation and mobility tasks performed by blind pedestrians.
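The significance tests reported above come from a repeated-measures design, where each listener detects every vehicle type and the statistic is built from within-listener paired differences. A minimal paired t-test sketch with invented numbers (not the study's raw data, which used 14 participants), assuming only the standard library:

```python
import math

# Illustrative paired detection distances in meters for six hypothetical
# listeners; NOT the study's raw data.
vsp = [52.0, 41.5, 38.0, 29.5, 44.0, 35.0]  # HEV with pedestrian sound (VSP)
hev = [39.0, 30.0, 27.5, 20.0, 33.5, 26.0]  # same HEV without the VSP

# Repeated-measures logic: compare conditions within listeners, so the
# statistic is built from each listener's paired difference.
diffs = [a - b for a, b in zip(vsp, hev)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t_stat = mean_d / (sd_d / math.sqrt(n))  # compare against t(n-1) critical value
print(f"t({n - 1}) = {t_stat:.2f}")
```

Pairing removes between-listener variability from the error term, which is why the study could detect an 11 m difference with only 14 participants.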
Utility of genetic testing for the detection of late-onset hearing loss in neonates.
Lim, B Gail; Clark, Reese H; Kelleher, Amy S; Lin, Zhili; Spitzer, Alan R
2013-12-01
The purpose of this study was to demonstrate the utility of molecular testing in the detection of potentially important causes of delayed hearing loss missed by current audiometric screening at birth. We enrolled infants who had received a newborn audiometric hearing screen and a filter paper blood collection for state newborn screening. A central laboratory ran the SoundGene® panel. Of 3,681 infants studied, 35 (0.95%) had a positive SoundGene panel, 16 had mitochondrial mutations, 9 had Pendred mutations, 5 were cytomegalovirus (CMV) DNA positive, 2 had connexin mutations, and 3 had a combination of different mutations. Infants with an abnormal SoundGene panel were at increased risk for hearing loss compared to neonates without mutations. Three (8.6%) of the 35 subjects had persistent hearing loss compared to 5 (0.21%) of 2,398 subjects with no report of mutation (p < .01). Of 3,681 infants studied, 8 (0.22%) had persistent hearing loss: 5 (62.5%) had abnormal newborn audiometric screens, 2 (25%) had an abnormal SoundGene panel (1 was CMV positive, 1 had a mitochondrial mutation), and 1 (12.5%) had no identifiable risk factors. A positive SoundGene panel identifies infants who are not identified by audiometric testing and may be at risk for hearing loss.
Design and testing of a novel audio transducer to train string musical instruments
NASA Astrophysics Data System (ADS)
Cinquemani, Simone; Giberti, Hermes
2018-03-01
Stringed wooden instruments, like violins or double basses, experience a decrease in performance if they are not played for a long time. For this reason, top-class instruments are usually given to musicians and played every day to preserve sound quality. The paper deals with the design, construction and testing of a device to be inserted in the bridge of a stringed wooden instrument to simulate the stresses experienced by the instrument during normal playing. The device could provide a simple, fast and inexpensive way to recover the sound of an instrument that has not been played for a period of time, or even to enhance the instrument's sound. The device is based on two magnetostrictive actuators that can exert suitable forces on the body of the violin. The device has been designed and tested to exert forces as constant as possible in the frequency range between 10 Hz and 15 kHz. Experimental tests were carried out to evaluate the effect of the device on the sound produced by the violin during a three-week training period. Two high-quality microphones were used to measure the principal harmonics and their changes during the test. Results show that in the first part of the test (approximately 100 hours) the amplitudes of the main harmonics change widely, while thereafter their values remain constant. This behavior demonstrates the violin has reached its "nominal" status.
NASA Technical Reports Server (NTRS)
Conner, David A.; Page, Juliet A.
2002-01-01
To improve aircraft noise impact modeling capabilities and to provide a tool to aid in the development of low noise terminal area operations for rotorcraft and tiltrotors, the Rotorcraft Noise Model (RNM) was developed by the NASA Langley Research Center and Wyle Laboratories. RNM is a simulation program that predicts how sound will propagate through the atmosphere and accumulate at receiver locations located on flat ground or varying terrain, for single and multiple vehicle flight operations. At the core of RNM are the vehicle noise sources, input as sound hemispheres. As the vehicle "flies" along its prescribed flight trajectory, the source sound propagation is simulated and accumulated at the receiver locations (single points of interest or multiple grid points) in a systematic time-based manner. These sound signals at the receiver locations may then be analyzed to obtain single event footprints, integrated noise contours, time histories, or numerous other features. RNM may also be used to generate spectral time history data over a ground mesh for the creation of single event sound animation videos. Acoustic properties of the noise source(s) are defined in terms of sound hemispheres that may be obtained from theoretical predictions, wind tunnel experimental results, flight test measurements, or a combination of the three. The sound hemispheres may contain broadband data (source levels as a function of one-third octave band) and pure-tone data (in the form of specific frequency sound pressure levels and phase). A PC executable version of RNM is publicly available and has been adopted by a number of organizations for Environmental Impact Assessment studies of rotorcraft noise. This paper provides a review of the required input data, the theoretical framework of RNM's propagation model and the output results. 
Code validation results are provided from a NATO helicopter noise flight test as well as a tiltrotor flight test program that used the RNM as a tool to aid in the development of low noise approach profiles.
Giordano, Bruno L.; Egermann, Hauke; Bresin, Roberto
2014-01-01
Several studies have investigated the encoding and perception of emotional expressivity in music performance. A relevant question concerns how the ability to communicate emotions in music performance is acquired. In accordance with recent theories on the embodiment of emotion, we suggest here that both the expression and recognition of emotion in music might at least in part rely on knowledge about the sounds of expressive body movements. We test this hypothesis by drawing parallels between musical expression of emotions and expression of emotions in sounds associated with a non-musical motor activity: walking. In a combined production-perception design, two experiments were conducted, and expressive acoustical features were compared across modalities. An initial performance experiment tested for similar feature use in walking sounds and music performance, and revealed that strong similarities exist. Features related to sound intensity, tempo and tempo regularity were identified as being used similarly in both domains. Participants in a subsequent perception experiment were able to recognize both non-emotional and emotional properties of the sound-generating walkers. An analysis of the acoustical correlates of behavioral data revealed that variations in sound intensity, tempo, and tempo regularity were likely used to recognize expressed emotions. Taken together, these results lend support to the motor-origin hypothesis for the musical expression of emotions. PMID:25551392
Frog sound identification using extended k-nearest neighbor classifier
NASA Astrophysics Data System (ADS)
Mukahar, Nordiana; Affendi Rosdi, Bakhtiar; Athiar Ramli, Dzati; Jaafar, Haryati
2017-09-01
Frog sound identification based on vocalization is important for biological research and environmental monitoring. As a result, different types of feature extraction and classifiers have been employed to evaluate the accuracy of frog sound identification. This paper presents frog sound identification with an Extended k-Nearest Neighbor (EKNN) classifier. The EKNN classifier integrates the nearest-neighbor and mutual-neighborhood concepts, with the aim of improving classification performance. It makes a prediction based on which training samples are the nearest neighbors of the testing sample and which training samples consider the testing sample as their nearest neighbor. To evaluate classification performance in frog sound identification, the EKNN classifier is compared with competing classifiers, k-Nearest Neighbor (KNN), Fuzzy k-Nearest Neighbor (FKNN), k-General Nearest Neighbor (KGNN) and Mutual k-Nearest Neighbor (MKNN), on the recorded sounds of 15 frog species obtained in Malaysian forests. The recorded sounds have been segmented using Short Time Energy and Short Time Average Zero Crossing Rate (STE+STAZCR), sinusoidal modeling (SM), manual segmentation, and the combination of Energy (E) and Zero Crossing Rate (ZCR) (E+ZCR), while the features are extracted by Mel Frequency Cepstrum Coefficients (MFCC). The experimental results show that the EKNN classifier exhibits the best performance in terms of accuracy compared to the competing classifiers, KNN, FKNN, KGNN and MKNN, for all cases.
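The decision rule described in the abstract above (vote over the test sample's nearest neighbors plus every training point that would count the test sample among its own neighbors) can be sketched as follows. The 2-D points and generic species labels are purely illustrative stand-ins for the study's MFCC features, not its data:

```python
import math
from collections import Counter

def knn_indices(point, pts, k, exclude=None):
    """Indices of the k points in `pts` nearest to `point` (optionally
    skipping one index, e.g. the query point itself)."""
    order = sorted((i for i in range(len(pts)) if i != exclude),
                   key=lambda i: math.dist(point, pts[i]))
    return set(order[:k])

def eknn_predict(x, data, labels, k):
    """Minimal sketch of the extended k-NN idea: vote over x's k nearest
    neighbors PLUS every training point that would count x among its own
    k nearest neighbors (the mutual-neighborhood term)."""
    forward = knn_indices(x, data, k)
    extended = data + [x]          # let training points "see" x
    x_idx = len(data)
    reverse = {i for i in range(len(data))
               if x_idx in knn_indices(extended[i], extended, k, exclude=i)}
    votes = Counter(labels[i] for i in forward | reverse)
    return votes.most_common(1)[0][0]

# Toy 2-D "feature vectors" for two hypothetical species clusters.
data = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0),
        (5.0, 5.0), (5.0, 6.0), (6.0, 5.0)]
labels = ["species_A"] * 3 + ["species_B"] * 3
print(eknn_predict((0.5, 0.5), data, labels, 3))
```

The reverse-neighbor set can pull in points the plain forward search misses, which is the mechanism the paper credits for EKNN's accuracy gain over KNN.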
Descovich, K A; Reints Bok, T E; Lisle, A T; Phillips, C J C
2013-01-01
Behavioural lateralisation is evident across most animal taxa, although few marsupial and no fossorial species have been studied. Twelve wombats (Lasiorhinus latifrons) were bilaterally presented with eight sounds from different contexts (threat, neutral, food) to test for auditory laterality. Head turns were recorded prior to and immediately following sound presentation. Behaviour was recorded for 150 seconds after presentation. Although sound differentiation was evident by the amount of exploration, vigilance, and grooming performed after different sound types, this did not result in different patterns of head turn direction. Similarly, left-right proportions of head turns, walking events, and food approaches in the post-sound period were comparable across sound types. A comparison of head turns performed before and after sound showed a significant change in turn direction (χ²(1) = 10.65, p = .001) from a left preference during the pre-sound period (mean 58% left head turns, CI 49-66%) to a right preference in the post-sound period (mean 43% left head turns, CI 40-45%). This provides evidence of a right auditory bias in response to the presentation of the sound. This study therefore demonstrates that laterality is evident in southern hairy-nosed wombats in response to a sound stimulus, although side biases were not altered by sounds of varying context.
A SOUND SOURCE LOCALIZATION TECHNIQUE TO SUPPORT SEARCH AND RESCUE IN LOUD NOISE ENVIRONMENTS
NASA Astrophysics Data System (ADS)
Yoshinaga, Hiroshi; Mizutani, Koichi; Wakatsuki, Naoto
At some sites of earthquakes and other disasters, rescuers search for people buried under rubble by listening for the sounds they make. Developing a technique to localize sound sources amidst loud noise would therefore support such search and rescue operations. In this paper, we discuss an experiment performed to test an array signal processing technique that searches for otherwise imperceptible sounds in loud noise environments. Two speakers simultaneously played generator noise and a voice 20 dB (1/100 of the power) below the generator noise at an outdoor space where cicadas were making noise. The sound signal was received by a horizontally oriented linear microphone array, 1.05 m in length and consisting of 15 microphones. The direction and distance of the voice were computed by array signal processing, and the voice was extracted and played back as an audible sound.
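The direction-finding step in the abstract above can be illustrated with a narrowband delay-and-sum (phase-shift) beamformer. The geometry below matches the paper's 15-microphone, 1.05 m linear array, but the tone frequency and arrival angle are assumed for illustration, and the paper's actual processing may differ:

```python
import cmath
import math

C = 343.0                  # speed of sound in air, m/s
N_MICS, LENGTH = 15, 1.05  # array geometry from the paper
SPACING = LENGTH / (N_MICS - 1)
FREQ = 1000.0              # assumed narrowband tone, Hz (illustrative)
TRUE_ANGLE = 30.0          # assumed arrival angle, degrees from broadside

def geom_delay(m, angle_deg):
    """Extra travel time to microphone m for a plane wave from angle_deg."""
    return m * SPACING * math.sin(math.radians(angle_deg)) / C

# Narrowband snapshot: each microphone observes the tone with a phase
# lag determined by the array geometry.
snapshot = [cmath.exp(-2j * math.pi * FREQ * geom_delay(m, TRUE_ANGLE))
            for m in range(N_MICS)]

def steered_power(angle_deg):
    # Delay-and-sum: undo the candidate angle's phase lags, sum coherently.
    s = sum(x * cmath.exp(2j * math.pi * FREQ * geom_delay(m, angle_deg))
            for m, x in enumerate(snapshot))
    return abs(s) ** 2

angles = [a / 2 for a in range(-180, 181)]  # scan -90 to 90 degrees
best = max(angles, key=steered_power)
print(best)  # power peaks at the true arrival angle
```

Coherent summation boosts a source aligned with the steering angle by the number of microphones while uncorrelated noise adds incoherently, which is how a voice 20 dB below the noise can still be localized.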
Modeling and Sound Insulation Performance Analysis of Two Honeycomb-hole Coatings
NASA Astrophysics Data System (ADS)
Ye, H. F.; Tao, M.; Zhang, W. Z.
2018-05-01
During sound transmission loss tests in a standing-wave tube, the unavoidable reflected wave from the termination of the downstream tube affects precise measurement of the sound transmission loss (TL). This can be addressed by defining non-reflecting boundary conditions when modeling with the finite element method. The model has been validated by comparison with the analytical method. Based on the present model, the sound insulation performance of two types of honeycomb-hole coatings has been analyzed and discussed. Parameter changes play an important role in the sound insulation performance of the honeycomb-hole coatings, and the negative-Poisson's-ratio honeycomb-hole coating has better sound insulation performance at specific frequencies. Finally, it is concluded that sound insulation performance is the result of various factors, including impedance changes and waveform transformation.
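As background to the abstract above, sound transmission loss is conventionally defined from the ratio of incident to transmitted acoustic power; a one-function sketch (the 0.1% transmission figure is an invented example, not a value from the paper):

```python
import math

def transmission_loss_db(incident_power, transmitted_power):
    """Sound transmission loss in dB: TL = 10 * log10(W_incident / W_transmitted)."""
    return 10 * math.log10(incident_power / transmitted_power)

# A coating that transmits 0.1% of the incident acoustic power:
print(round(transmission_loss_db(1.0, 0.001), 1))  # 30.0 dB
```

Every factor-of-ten drop in transmitted power adds 10 dB of TL, which is why small impedance or waveform changes can shift the insulation curve noticeably.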
Discriminating between auditory and motor cortical responses to speech and non-speech mouth sounds
Agnew, Z.K.; McGettigan, C.; Scott, S.K.
2012-01-01
Several perspectives on speech perception posit a central role for the representation of articulations in speech comprehension, supported by evidence for premotor activation when participants listen to speech. However no experiments have directly tested whether motor responses mirror the profile of selective auditory cortical responses to native speech sounds, or whether motor and auditory areas respond in different ways to sounds. We used fMRI to investigate cortical responses to speech and non-speech mouth (ingressive click) sounds. Speech sounds activated bilateral superior temporal gyri more than other sounds, a profile not seen in motor and premotor cortices. These results suggest that there are qualitative differences in the ways that temporal and motor areas are activated by speech and click sounds: anterior temporal lobe areas are sensitive to the acoustic/phonetic properties while motor responses may show more generalised responses to the acoustic stimuli. PMID:21812557
Informational Webinar on Dredging and Dredged Material Management in Long Island Sound
EPA Region 1 and Region 2 informational webinar on dredging and dredged material management in Long Island Sound. Topics include: dredging permit process, dredged material testing, and dredged material disposal.
Tervaniemi, M; Kruck, S; De Baene, W; Schröger, E; Alter, K; Friederici, A D
2009-10-01
By recording auditory electrical brain potentials, we investigated whether the basic sound parameters (frequency, duration and intensity) are differentially encoded among speech vs. music sounds by musicians and non-musicians during different attentional demands. To this end, a pseudoword and an instrumental sound of comparable frequency and duration were presented. The accuracy of neural discrimination was tested by manipulations of frequency, duration and intensity. Additionally, the subjects' attentional focus was manipulated by instructions to ignore the sounds while watching a silent movie or to attentively discriminate the different sounds. In both musicians and non-musicians, the pre-attentively evoked mismatch negativity (MMN) component was larger to slight changes in music than in speech sounds. The MMN was also larger to intensity changes in music sounds and to duration changes in speech sounds. During attentional listening, all subjects more readily discriminated changes among speech sounds than among music sounds as indexed by the N2b response strength. Furthermore, during attentional listening, musicians displayed larger MMN and N2b than non-musicians for both music and speech sounds. Taken together, the data indicate that the discriminative abilities in human audition differ between music and speech sounds as a function of the sound-change context and the subjective familiarity of the sound parameters. These findings provide clear evidence for top-down modulatory effects in audition. In other words, the processing of sounds is realized by a dynamically adapting network considering type of sound, expertise and attentional demands, rather than by a strictly modularly organized stimulus-driven system.
Development of rotorcraft interior noise control concepts. Phase 2: Full scale testing, revision 1
NASA Technical Reports Server (NTRS)
Yoerkie, C. A.; Gintoli, P. J.; Moore, J. A.
1986-01-01
The phase 2 effort consisted of a series of ground and flight test measurements to obtain data for validation of the Statistical Energy Analysis (SEA) model. Included in the gound tests were various transfer function measurements between vibratory and acoustic subsystems, vibration and acoustic decay rate measurements, and coherent source measurements. The bulk of these, the vibration transfer functions, were used for SEA model validation, while the others provided information for characterization of damping and reverberation time of the subsystems. The flight test program included measurements of cabin and cockpit sound pressure level, frame and panel vibration level, and vibration levels at the main transmission attachment locations. Comparisons between measured and predicted subsystem excitation levels from both ground and flight testing were evaluated. The ground test data show good correlation with predictions of vibration levels throughout the cabin overhead for all excitations. The flight test results also indicate excellent correlation of inflight sound pressure measurements to sound pressure levels predicted by the SEA model, where the average aircraft speech interference level is predicted within 0.2 dB.
Long-range sound propagation: A review of some experimental data
NASA Technical Reports Server (NTRS)
Sutherland, Louis C.
1990-01-01
Three experimental studies of long range sound propagation carried out or sponsored in the past by NASA are briefly reviewed to provide a partial perspective for some of the analytical studies presented in this symposium. The three studies reviewed cover (1) a unique test of two large rocket engines conducted in such a way as to provide an indication of possible atmospheric scattering loss from a large low-frequency directive sound source, (2) a year-long measurement of low frequency sound propagation which clearly demonstrated the dominant influence of the vertical gradient in the vector sound velocity towards the receiver in defining excess sound attenuation due to refraction, and (3) a series of excess ground attenuation measurements over grass and asphalt surfaces replicated several times under very similar inversion weather conditions.
Estimating surface acoustic impedance with the inverse method.
Piechowicz, Janusz
2011-01-01
Sound field parameters are predicted with numerical methods in sound control systems, in acoustic designs of building and in sound field simulations. Those methods define the acoustic properties of surfaces, such as sound absorption coefficients or acoustic impedance, to determine boundary conditions. Several in situ measurement techniques were developed; one of them uses 2 microphones to measure direct and reflected sound over a planar test surface. Another approach is used in the inverse boundary elements method, in which estimating acoustic impedance of a surface is expressed as an inverse boundary problem. The boundary values can be found from multipoint sound pressure measurements in the interior of a room. This method can be applied to arbitrarily-shaped surfaces. This investigation is part of a research programme on using inverse methods in industrial room acoustics.
Oceanographic Measurements Program Review.
1982-03-01
A prototype Advanced Microstructure Profiler (AMP) was completed and the unit was operationally tested in local waters (Lake Washington and Puget Sound). The review also includes "Expendables" (A.W. Green) and "The Development of an Air-Launched Expendable Sound Velocimeter (AXSV)" (R. Bixby).
Sound texture perception via statistics of the auditory periphery: Evidence from sound synthesis
McDermott, Josh H.; Simoncelli, Eero P.
2014-01-01
Rainstorms, insect swarms, and galloping horses produce “sound textures” – the collective result of many similar acoustic events. Sound textures are distinguished by temporal homogeneity, suggesting they could be recognized with time-averaged statistics. To test this hypothesis, we processed real-world textures with an auditory model containing filters tuned for sound frequencies and their modulations, and measured statistics of the resulting decomposition. We then assessed the realism and recognizability of novel sounds synthesized to have matching statistics. Statistics of individual frequency channels, capturing spectral power and sparsity, generally failed to produce compelling synthetic textures. However, combining them with correlations between channels produced identifiable and natural-sounding textures. Synthesis quality declined if statistics were computed from biologically implausible auditory models. The results suggest that sound texture perception is mediated by relatively simple statistics of early auditory representations, presumably computed by downstream neural populations. The synthesis methodology offers a powerful tool for their further investigation. PMID:21903084
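The statistics matched in the study above are time averages of filterbank outputs: per-channel marginal moments plus cross-channel correlations. A toy sketch of computing such statistics, where the envelopes are illustrative stand-ins for the auditory model's cochlear-channel outputs rather than real filterbank data:

```python
import math
import random

random.seed(1)

# Toy amplitude envelopes for two cochlear-style frequency channels.
env1 = [abs(math.sin(0.01 * t)) + 0.1 * random.random() for t in range(2000)]
env2 = [0.8 * e + 0.1 * random.random() for e in env1]  # correlated channel

def mean(x):
    return sum(x) / len(x)

def var(x):
    m = mean(x)
    return sum((v - m) ** 2 for v in x) / len(x)

def corr(x, y):
    # Pearson correlation between two channel envelopes.
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / math.sqrt(var(x) * var(y))

# Time-averaged texture statistics of the kind the synthesis matches:
# per-channel marginal moments plus cross-channel correlations.
texture_stats = {"mean_1": mean(env1), "var_1": var(env1),
                 "corr_12": corr(env1, env2)}
print({k: round(v, 3) for k, v in texture_stats.items()})
```

Because these statistics are averages over time, any sufficiently long excerpt of a homogeneous texture yields nearly the same values, which is the property the synthesis experiments exploit.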
Prediction of far-field wind turbine noise propagation with parabolic equation.
Lee, Seongkyu; Lee, Dongjai; Honhoff, Saskia
2016-08-01
Sound propagation of wind farms is typically simulated by the use of engineering tools that are neglecting some atmospheric conditions and terrain effects. Wind and temperature profiles, however, can affect the propagation of sound and thus the perceived sound in the far field. A better understanding and application of those effects would allow a more optimized farm operation towards meeting noise regulations and optimizing energy yield. This paper presents the parabolic equation (PE) model development for accurate wind turbine noise propagation. The model is validated against analytic solutions for a uniform sound speed profile, benchmark problems for nonuniform sound speed profiles, and field sound test data for real environmental acoustics. It is shown that PE provides good agreement with the measured data, except upwind propagation cases in which turbulence scattering is important. Finally, the PE model uses computational fluid dynamics results as input to accurately predict sound propagation for complex flows such as wake flows. It is demonstrated that wake flows significantly modify the sound propagation characteristics.
NASA Astrophysics Data System (ADS)
KAWAI, K.; YANO, T.
2002-02-01
This paper reports an experimental study determining the effects of the type and loudness of individual sounds on the overall impression of the sound environment. Field and laboratory experiments were carried out. In each experiment, subjects evaluated the sound environment presented, which consisted of combinations of three individual sounds (road traffic, singing crickets and the murmuring of a river), using five bipolar adjective scales such as Good-Bad, Active-Calm and Natural-Artificial. Overall loudness had the strongest effect on most types of evaluations; relative SPL had a greater effect than overall loudness on one particular evaluation, the Natural-Artificial scale. The test sounds in the field experiment were generally evaluated as more "good" and more "natural" than those in the laboratory. The results of comparisons between laboratory and field sounds indicate a difference in the trend between them. This difference may be explained in terms of selective listening, but that needs further investigation.
Békésy's contributions to our present understanding of sound conduction to the inner ear.
Puria, Sunil; Rosowski, John J
2012-11-01
In our daily lives we hear airborne sounds that travel primarily through the external and middle ear to the cochlear sensory epithelium. We also hear sounds that travel to the cochlea via a second sound-conduction route, bone conduction. This second pathway is excited by vibrations of the head and body that result from substrate vibrations, direct application of vibrational stimuli to the head or body, or vibrations induced by airborne sound. The sensation of bone-conducted sound is affected by the presence of the external and middle ear, but is not completely dependent upon their function. Measurements of the differential sensitivity of patients to airborne sound and direct vibration of the head are part of the routine battery of clinical tests used to separate conductive and sensorineural hearing losses. Georg von Békésy designed a careful set of experiments and pioneered many measurement techniques on human cadaver temporal bones, in physical models, and in human subjects to elucidate the basic mechanisms of air- and bone-conducted sound. Looking back one marvels at the sheer number of experiments he performed on sound conduction, mostly by himself without the aid of students or research associates. Békésy's work had a profound impact on the field of middle-ear mechanics and bone conduction fifty years ago when he received his Nobel Prize. Today many of Békésy's ideas continue to be investigated and extended, some have been supported by new evidence, some have been refuted, while others remain to be tested. Copyright © 2012 Elsevier B.V. All rights reserved.
Imaging of sound speed using reflection ultrasound tomography.
Nebeker, Jakob; Nelson, Thomas R
2012-09-01
The goal of this work was to obtain and evaluate measurements of tissue sound speed in the breast, particularly dense breasts, using backscatter ultrasound tomography. An automated volumetric breast ultrasound scanner was constructed for imaging the prone patient. A 5- to 7-MHz linear array transducer acquired 17,920 radiofrequency pulse echo A-lines from the breast, and a back-wall reflector rotated over 360° in 25 seconds. Sound speed images used reflector echoes that after preprocessing were uploaded into a graphics processing unit for filtered back-projection reconstruction. A velocimeter also was constructed to measure the sound speed and attenuation for comparison to scanner performance. Measurements were made using the following: (1) deionized water from 22°C to 90°C; (2) various fluids with sound speeds from 1240 to 1904 m/s; (3) acrylamide gel test objects with features from 1 to 15 mm in diameter; and (4) healthy volunteers. The mean error ± SD between sound speed reference and image data was -0.48% ± 9.1%, and the error between reference and velocimeter measurements was -1.78% ± 6.50%. Sound speed image and velocimeter measurements showed a difference of 0.10% ± 4.04%. Temperature data showed a difference between theory and imaging performance of -0.28% ± 0.22%. Images of polyacrylamide test objects showed detectability of an approximately 1% sound speed difference in a 2.4-mm cylindrical inclusion with a contrast to noise ratio of 7.9 dB. An automated breast scanner offers the potential to make consistent automated tomographic images of breast backscatter, sound speed, and attenuation, potentially improving diagnosis, particularly in dense breasts.
Captive Bottlenose Dolphins Do Discriminate Human-Made Sounds Both Underwater and in the Air.
Lima, Alice; Sébilleau, Mélissa; Boye, Martin; Durand, Candice; Hausberger, Martine; Lemasson, Alban
2018-01-01
Bottlenose dolphins ( Tursiops truncatus ) spontaneously emit individual acoustic signals that identify them to group members. We tested whether these cetaceans could learn artificial individual sound cues played underwater and whether they would generalize this learning to airborne sounds. Dolphins are thought to perceive only underwater sounds and their training depends largely on visual signals. We investigated the behavioral responses of seven dolphins in a group to learned human-made individual sound cues, played underwater and in the air. Dolphins recognized their own sound cue after hearing it underwater as they immediately moved toward the source, whereas when it was airborne they gazed more at the source of their own sound cue but did not approach it. We hypothesize that they perhaps detected modifications of the sound induced by air or were confused by the novelty of the situation, but nevertheless recognized they were being "targeted." They did not respond when hearing another group member's cue in either situation. This study provides further evidence that dolphins respond to individual-specific sounds and that these marine mammals possess some capacity for processing airborne acoustic signals.
Loebach, Jeremy L; Pisoni, David B; Svirsky, Mario A
2009-12-01
The objective of this study was to assess whether training on speech processed with an eight-channel noise vocoder to simulate the output of a cochlear implant would produce transfer of auditory perceptual learning to the recognition of nonspeech environmental sounds, the identification of speaker gender, and the discrimination of talkers by voice. Twenty-four normal-hearing subjects were trained to transcribe meaningful English sentences processed with a noise vocoder simulation of a cochlear implant. An additional 24 subjects served as an untrained control group and transcribed the same sentences in their unprocessed form. All subjects completed pre- and post-test sessions in which they transcribed vocoded sentences to provide an assessment of training efficacy. Transfer of perceptual learning was assessed using a series of closed set, nonlinguistic tasks: subjects identified talker gender, discriminated the identity of pairs of talkers, and identified ecologically significant environmental sounds from a closed set of alternatives. Although both groups of subjects showed significant pre- to post-test improvements, subjects who transcribed vocoded sentences during training performed significantly better at post-test than those in the control group. Both groups performed equally well on gender identification and talker discrimination. Subjects who received explicit training on the vocoded sentences, however, performed significantly better on environmental sound identification than the untrained subjects. Moreover, across both groups, pre-test speech performance and, to a higher degree, post-test speech performance, were significantly correlated with environmental sound identification. For both groups, environmental sounds that were characterized as having more salient temporal information were identified more often than environmental sounds that were characterized as having more salient spectral information. 
Listeners trained to identify noise-vocoded sentences showed evidence of transfer of perceptual learning to the identification of environmental sounds. In addition, the correlation between environmental sound identification and sentence transcription indicates that subjects who were better able to use the degraded acoustic information to identify the environmental sounds were also better able to transcribe the linguistic content of novel sentences. Both trained and untrained groups performed equally well (approximately 75% correct) on the gender-identification task, indicating that training did not have an effect on the ability to identify the gender of talkers. Although better than chance, performance on the talker discrimination task was poor overall (approximately 55%), suggesting that either explicit training is required to discriminate talkers' voices reliably or that additional information (perhaps spectral in nature) not present in the vocoded speech is required to excel in such tasks. Taken together, the results suggest that although transfer of auditory perceptual learning with spectrally degraded speech does occur, explicit task-specific training may be necessary for tasks that cannot rely on temporal information alone.
Rinne test: does the tuning fork position affect the sound amplitude at the ear?
Butskiy, Oleksandr; Ng, Denny; Hodgson, Murray; Nunez, Desmond A
2016-03-24
Guidelines and text-book descriptions of the Rinne test advise orienting the tuning fork tines in parallel with the longitudinal axis of the external auditory canal (EAC), presumably to maximise the amplitude of the air conducted sound signal at the ear. Whether the orientation of the tuning fork tines affects the amplitude of the sound signal at the ear in clinical practice has not been previously reported. The present study had two goals: determine if (1) there is clinician variability in tuning fork placement when presenting the air-conduction stimulus during the Rinne test; (2) the orientation of the tuning fork tines, parallel versus perpendicular to the EAC, affects the sound amplitude at the ear. To assess the variability in performing the Rinne test, the Canadian Society of Otolaryngology - Head and Neck Surgery members were surveyed. The amplitudes of the sound delivered to the tympanic membrane with the activated tuning fork tines held in parallel, and perpendicular to, the longitudinal axis of the EAC were measured using a Knowles Electronics Mannequin for Acoustic Research (KEMAR) with the microphone of a sound level meter inserted in the pinna insert. 47.4 and 44.8% of 116 survey responders reported placing the fork parallel and perpendicular to the EAC respectively. The sound intensity (sound-pressure level) recorded at the tympanic membrane with the 512 Hz tuning fork tines in parallel with as opposed to perpendicular to the EAC was louder by 2.5 dB (95% CI: 1.35, 3.65 dB; p < 0.0001) for the fundamental frequency (512 Hz), and by 4.94 dB (95% CI: 3.10, 6.78 dB; p < 0.0001) and 3.70 dB (95% CI: 1.62, 5.78 dB; p = .001) for the two harmonic (non-fundamental) frequencies (1 and 3.15 kHz), respectively. 
The 256 Hz tuning fork in parallel with the EAC, as opposed to perpendicular to it, was louder by 0.83 dB (95% CI: -0.26, 1.93 dB; p = 0.14) for the fundamental frequency (256 Hz), and by 4.28 dB (95% CI: 2.65, 5.90 dB; p < 0.001) and 1.93 dB (95% CI: 0.26, 3.61 dB; p = 0.02) for the two harmonic frequencies (500 Hz and 4 kHz), respectively. Clinicians vary in the orientation of the tuning fork tines in relation to the EAC when performing the Rinne test. Placement of the tuning fork tines in parallel with, as opposed to perpendicular to, the EAC results in a higher sound amplitude at the level of the tympanic membrane.
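For readers gauging the practical size of these level differences, a decibel difference converts to a linear sound-pressure ratio via 10^(ΔdB/20). A minimal sketch (the 2.5 dB figure is taken from the abstract above; the helper name is mine, not the paper's):

```python
import math

def db_to_pressure_ratio(delta_db):
    """Convert a sound-pressure-level difference in dB to a linear pressure ratio."""
    return 10 ** (delta_db / 20.0)

# The 2.5 dB parallel-vs-perpendicular difference at the 512 Hz fundamental
ratio = db_to_pressure_ratio(2.5)
print(round(ratio, 2))  # a 2.5 dB gain is roughly a 33% increase in sound pressure
```

Note the divisor of 20 rather than 10: sound pressure is an amplitude quantity, so level in dB is 20·log10 of the pressure ratio (10·log10 applies to power or energy ratios).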
Improved Calibration Of Acoustic Plethysmographic Sensors
NASA Technical Reports Server (NTRS)
Zuckerwar, Allan J.; Davis, David C.
1993-01-01
Improved method of calibration of acoustic plethysmographic sensors involves acoustic-impedance test conditions like those encountered in use. Clamped aluminum tube holds source of sound (hydrophone) inside balloon. Test and reference sensors attached to outside of balloon. Sensors used to measure blood flow, blood pressure, heart rate, breathing sounds, and other vital signs from surfaces of human bodies. Attached to torsos or limbs by straps or adhesives.
40 CFR 204.54 - Test procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., left, front, and back sides and top of the test unit. The microphone position to the right, left, front... calculated by the following method: L = 10 log10{(1/5)[antilog(L1/10) + antilog(L2/10) + antilog(L3/10) + antilog(L4/10) + antilog(L5/10)]} Where: L = the average A-weighted sound level (in decibels); L1 = the A-weighted sound level...
40 CFR 204.54 - Test procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., left, front, and back sides and top of the test unit. The microphone position to the right, left, front... calculated by the following method: L = 10 log10{(1/5)[antilog(L1/10) + antilog(L2/10) + antilog(L3/10) + antilog(L4/10) + antilog(L5/10)]} Where: L = the average A-weighted sound level (in decibels); L1 = the A-weighted sound level...
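The averaging formula quoted in the regulation is an energy (decibel) average over the five microphone positions, not an arithmetic mean of the readings. A minimal sketch of the computation (the five level values are invented for illustration):

```python
import math

def average_sound_level(levels_db):
    """Energy-average of A-weighted levels: L = 10 log10((1/5) * sum of antilog(Li/10))."""
    mean_energy = sum(10 ** (L / 10.0) for L in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

# Hypothetical readings at the right, left, front, back, and top positions
levels = [78.0, 80.0, 79.0, 81.0, 90.0]
print(round(average_sound_level(levels), 1))
```

Because the average is taken over energies, a single loud position dominates: the result here sits well above the arithmetic mean of the five readings.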
77 FR 19413 - Petition for Waiver of Compliance
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-30
...-0023. UP seeks to use an automated sound measurement system (ASMS) to test locomotive horns as required in 49 CFR 229.129(b). The ASMS uses a Class 1 sound-level measuring instrument that is permanently...
NASA Astrophysics Data System (ADS)
Kauahikaua, J.
A controlled source, time domain electromagnetic (TDEM) sounding survey was conducted in the Calico Hills area of the Nevada Test Site (NTS). The geoelectric structure was determined as an aid in the evaluation of the site for possible future storage of spent nuclear fuel or high-level nuclear waste. The data were initially interpreted with a simple scheme that produces an apparent resistivity versus depth curve from the vertical magnetic field data. These curves are qualitatively interpreted much like standard Schlumberger resistivity sounding curves. Final interpretation made use of a layered-earth Marquardt inversion computer program. The results, combined with those from a set of Schlumberger soundings in the area, show that there is a moderately resistive basement at a depth no greater than 800 meters. The basement resistivity is greater than 100 ohm-meters.
Enhancement of acoustical performance of hollow tube sound absorber
NASA Astrophysics Data System (ADS)
Putra, Azma; Khair, Fazlin Abd; Nor, Mohd Jailani Mohd
2016-03-01
This paper presents the acoustical performance of hollow structures that utilize recycled lollipop sticks as acoustic absorbers. The hollow cross section of the structures is arranged facing the sound incidence. The effects of different stick lengths and air gaps on the acoustical performance are studied. The absorption coefficient was measured using the impedance tube method. It is found that the sound absorption performance improves when natural kapok fiber is inserted into the voids between the hollow structures. Results reveal that inserting the kapok fibers increases both the absorption bandwidth and the absorption coefficient. For a test sample backed by a rigid surface, the best sound absorption is obtained with fibers inserted at both the front and back sides of the absorber; for a test sample with an air gap, it is achieved with fibers introduced only at the back side of the absorber.
Knobel, Keila Alessandra Baraldi; Lima, Maria Cecília Marconi Pinheiro
2014-01-01
Exposure to loud sound during leisure activities for long periods of time is an important target for preventive health education, especially among young people. The aim was to identify the relations of awareness about the damaging effects of loud sounds, previous exposure to loud sounds, preferences related to sound levels, and knowledge about hearing protection with age, gender, and parents' educational level among children. Prospective cross-sectional. Seven hundred and forty students (5-16 years old) and 610 parents participated in the study. Chi-square test, Fisher exact test, and linear regression. About 86.5% of the children consider that loud sounds damage the ears, and 53.7% dislike noisy places. Children were previously exposed to parties and concerts with loud music, Mardi Gras, firecrackers, loud music at home or in the car, and loud music with earphones. About 18.4% of the younger children could select the volume of the music, versus 65.3% of the older ones. Children have poor information about hearing protection and do not have hearing protection devices. Knowledge about the risks of exposure to loud sounds and about strategies to protect their hearing increases with age, but preference for loud sounds and exposure to them increase too. Gender and parents' educational level have little influence on the studied variables. Many of the children's recreational activities are noisy. It is possible that the tendency of increasing preference for loud sounds with age is the result of a learned behavior.
Improved auscultation skills in paramedic students using a modified stethoscope.
Simon, Erin L; Lecat, Paul J; Haller, Nairmeen A; Williams, Carolyn J; Martin, Scott W; Carney, John A; Pakiela, John A
2012-12-01
The Ventriloscope® (Lecat's SimplySim, Tallmadge, OH) is a modified stethoscope used as a simulation training device for auscultation. To test the effectiveness of the Ventriloscope as a training device in teaching heart and lung auscultatory findings to paramedic students. A prospective, single-hospital study conducted in a paramedic-teaching program. The standard teaching group learned heart and lung sounds via audiocassette recordings and lecture, whereas the intervention group utilized the modified stethoscope in conjunction with patient volunteers. Study subjects took a pre-test, post-test, and a follow-up test to measure recognition of heart and lung sounds. The intervention group included 22 paramedic students and the standard group included 18 paramedic students. Pre-test scores did not differ between the groups on two-sample t-tests (standard group: t[16]=-1.63, p=0.12; intervention group: t[20]=-1.17, p=0.26). Improvement from pre-test to post-test scores was noted within each group (standard: t[17]=2.43, p=0.03; intervention: t[21]=4.81, p<0.0001). Follow-up scores for the standard group were not different from their pre-test scores of 16.06 (t[17]=0.94, p=0.36). However, follow-up scores for the intervention group significantly improved from their respective pre-test score of 16.05 (t[21]=2.63, p=0.02). Simulation training using a modified stethoscope in conjunction with standardized patients allows for realistic learning of heart and lung sounds. This technique of simulation training achieved proficiency and better retention of heart and lung sounds in a safe teaching environment. Copyright © 2012 Elsevier Inc. All rights reserved.
SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization
Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah
2014-01-01
Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 Microelectromechanical systems (MEMS) microphones, an inertial measuring unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass’s hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25-m2 anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000-m2 open field. PMID:24463431
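Microphone-array localization of the kind described here rests on estimating time differences of arrival between microphone pairs. A toy two-microphone sketch of the idea (the SoundCompass itself uses 52 MEMS microphones with FPGA beamforming; the sample rate, spacing, and 30-degree source bearing below are illustrative assumptions):

```python
import numpy as np

fs = 48_000   # sample rate (Hz)
c = 343.0     # speed of sound (m/s)
d = 0.10      # microphone spacing (m)

# Simulate a broadband source at a 30-degree bearing: mic 2 hears the
# signal about 7 samples later than mic 1.
rng = np.random.default_rng(0)
s = rng.standard_normal(4096)
true_delay = round(d * np.sin(np.radians(30.0)) / c * fs)  # ~7 samples
x1 = s
x2 = np.concatenate([np.zeros(true_delay), s[:-true_delay]])

# Cross-correlate to recover the inter-microphone lag, then invert for the angle.
lag = np.argmax(np.correlate(x2, x1, mode="full")) - (len(x1) - 1)
angle = np.degrees(np.arcsin(lag / fs * c / d))
print(round(angle, 1))  # close to the true 30-degree bearing
```

With many microphones arranged in a circle, the same principle generalizes to delay-and-sum beamforming: steer candidate directions, sum the delayed channels, and pick the direction of maximum output power.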
Branstetter, Brian K; DeLong, Caroline M; Dziedzic, Brandon; Black, Amy; Bakhtiari, Kimberly
2016-01-01
Bottlenose dolphins (Tursiops truncatus) use the frequency contour of whistles produced by conspecifics for individual recognition. Here we tested a bottlenose dolphin's (Tursiops truncatus) ability to recognize frequency modulated whistle-like sounds using a three alternative matching-to-sample paradigm. The dolphin was first trained to select a specific object (object A) in response to a specific sound (sound A) for a total of three object-sound associations. The sounds were then transformed by amplitude, duration, or frequency transposition while still preserving the frequency contour of each sound. For comparison purposes, 30 human participants completed an identical task with the same sounds, objects, and training procedure. The dolphin's ability to correctly match objects to sounds was robust to changes in amplitude with only a minor decrement in performance for short durations. The dolphin failed to recognize sounds that were frequency transposed by plus or minus ½ octaves. Human participants demonstrated robust recognition with all acoustic transformations. The results indicate that this dolphin's acoustic recognition of whistle-like sounds was constrained by absolute pitch. Unlike human speech, which varies considerably in average frequency, signature whistles are relatively stable in frequency, which may have selected for a whistle recognition system invariant to frequency transposition.
Preference test of sound among multiple alternatives in rats.
Soga, Ryo; Shiramatsu, Tomoyo Isoguchi; Takahashi, Hirokazu
2018-01-01
Conditioned place preference (CPP) tests in rodents have been well established to measure preference induced by secondary reinforcing properties, but conventional assays are not sensitive enough to measure innate, weak preference, or the primary reinforcing property of a conditioned stimulus. We designed a novel CPP assay with better sensitivity and efficiency in quantifying and ranking preference of particular sounds among multiple alternatives. Each test tone was presented according to the location of free-moving rats in the arena, where the assignment of location to each tone changed in every 20-s session. We demonstrated that our assay was able to rank tone preference among 4 alternatives within 12.5 min (125 s (habituation) + 25 s/session × 25 sessions). In order to measure and rank sound preference, we used the sojourn times with each test sound and a preference index (PI) based on transition matrices of the initial and end sounds in every session. Both the sojourn times and the PI revealed similar trends of innate preference, in which rats preferred test conditions in the following order: silence, 40-, 20-, then 10-kHz tones. Further, rats exhibited a change in preference after classical conditioning of the 20-kHz tone with a rewarding microstimulation of the dopaminergic system. We also demonstrated that the PI was a more robust and sensitive indicator than the sojourn times when the locomotion activity level of rats became low due to habituation to the assay repeated over sessions. Thus, our assay offers a novel method of evaluating auditory preference that is superior to conventional CPP assays, offering promising prospects in the field of sensory neuroscience.
Doksaeter, Lise; Rune Godo, Olav; Olav Handegard, Nils; Kvadsheim, Petter H; Lam, Frans-Peter A; Donovan, Carl; Miller, Patrick J O
2009-01-01
Military antisubmarine sonars produce intense sounds within the hearing range of most clupeid fish. The behavioral reactions of overwintering herring (Clupea harengus) to sonar signals in two different frequency ranges (1-2 and 6-7 kHz), and to playback of killer whale feeding sounds, were tested in controlled exposure experiments in Vestfjorden, Norway, in November 2006. The behavior of free-ranging herring was monitored by two upward-looking echosounders. A vessel towing an operational naval sonar source approached and passed over one of them in a block-design setup. No significant escape reactions, either vertical or horizontal, were detected in response to sonar transmissions. Killer whale feeding sounds induced vertical and horizontal movements of herring. The results indicate that transmissions at neither 1-2 kHz nor 6-7 kHz have a significant negative influence on herring at the received sound pressure levels tested (127-197 and 139-209 dB(rms) re 1 microPa, respectively). Military sonars of such frequencies and source levels may thus be operated in areas of overwintering herring without substantially affecting herring behavior or the herring fishery. The avoidance during playback of killer whale sounds demonstrates the nature of an avoidance reaction and the ability of the experimental design to reveal it.
The influence of (central) auditory processing disorder in speech sound disorders.
Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Vilela, Nadia; Carvallo, Renata Mota Mamede; Wertzner, Haydée Fiszbein
2016-01-01
Considering the importance of auditory information for the acquisition and organization of phonological rules, the assessment of (central) auditory processing contributes to both the diagnosis and the targeting of speech therapy in children with speech sound disorders. To study phonological measures and (central) auditory processing in children with speech sound disorder. Clinical and experimental study, with 21 subjects with speech sound disorder aged between 7.0 and 9.11 years, divided into two groups according to the presence or absence of (central) auditory processing disorder. The assessment comprised tests of phonology, speech inconsistency, and metalinguistic abilities. The group with (central) auditory processing disorder demonstrated greater severity of speech sound disorder. The cutoff value obtained for the process density index was the one that best characterized the occurrence of phonological processes for children above 7 years of age. The comparison of the evaluated tests between the two groups showed differences in some phonological and metalinguistic abilities. Children with an index value above 0.54 demonstrated strong tendencies towards presenting a (central) auditory processing disorder, and this measure was effective in indicating the need for evaluation in children with speech sound disorder. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Using listening difficulty ratings of conditions for speech communication in rooms
NASA Astrophysics Data System (ADS)
Sato, Hiroshi; Bradley, John S.; Morimoto, Masayuki
2005-03-01
The use of listening difficulty ratings of speech communication in rooms is explored because, in common situations, word recognition scores do not discriminate well among conditions that are near to acceptable. In particular, the benefits of early reflections of speech sounds on listening difficulty were investigated and compared to the known benefits to word intelligibility scores. Listening tests were used to assess word intelligibility and perceived listening difficulty of speech in simulated sound fields. The experiments were conducted in three types of sound fields with constant levels of ambient noise: direct sound only, direct sound with early reflections, and direct sound with early reflections and reverberation. The results demonstrate that (1) listening difficulty can better discriminate among these conditions than can word recognition scores; (2) added early reflections increase the effective signal-to-noise ratio equivalent to the added energy in the conditions without reverberation; (3) the benefit of early reflections on difficulty scores is greater than expected from the simple increase in early-arriving speech energy with reverberation; (4) word intelligibility tests are most appropriate for conditions with signal-to-noise (S/N) ratios less than 0 dBA, and where S/N is between 0 and 15 dBA, listening difficulty is a more appropriate evaluation tool.
NASA Astrophysics Data System (ADS)
Sugimoto, Tsuneyoshi; Uechi, Itsuki; Sugimoto, Kazuko; Utagawa, Noriyuki; Katakura, Kageyoshi
The hammering test is widely used to inspect for defects in concrete structures. However, this method is difficult to apply at high places, such as a tunnel ceiling or a bridge girder, and its detection accuracy depends on the tester's experience. We therefore study a non-contact acoustic inspection method for concrete structures using airborne sound waves and a laser Doppler vibrometer. In this method, the concrete surface is excited by an airborne sound wave emitted from a long range acoustic device (LRAD), and the vibration velocity on the concrete surface is measured by a laser Doppler vibrometer. A defective part is detected by the same flexural resonance exploited in the hammering method. It has already been shown that a defect can be detected from a distance of 5 m or more using a concrete test object, and that the method can also be applied to a real concrete structure. However, when a conventional LRAD was used as the sound source, there were problems such as restrictions on the measurement angle and the surrounding noise. In order to solve these problems, a basic examination using a strong ultrasonic sound source was carried out. In the experiment, a concrete test object that includes an imitation defect was measured from a 5-m distance. The experimental results show that, when the ultrasonic sound source is used, restrictions on the measurement angle become less severe and the noise heard in the surroundings also falls dramatically.
Noise in the operating rooms of Greek hospitals.
Tsiou, Chrisoula; Efthymiatos, Gerasimos; Katostaras, Theophanis
2008-02-01
This study is an evaluation of the problem of noise pollution in operating rooms. The high sound pressure level of noise in the operating theatre has a negative impact on communication between operating room personnel. The research took place at nine Greek public hospitals with more than 400 beds. The objective evaluation consisted of sound pressure level measurements in terms of L(eq), as well as peak sound pressure levels in recordings during 43 surgeries in order to identify sources of noise. The subjective evaluation consisted of a questionnaire answered by 684 operating room personnel. The views of operating room personnel were studied using Pearson's chi-squared test and Fisher's exact test (SPSS Version 10.00), a t-test comparison was made of mean sound pressure levels, and the relationship of measurement duration and sound pressure level was examined using linear regression analysis (SPSS Version 13.00). The sound pressure levels of noise per operation and the sources of noise varied. The maximum measured level of noise during the main procedure of an operation was measured at L(eq)=71.9 dB(A), L(1)=84.7 dB(A), L(10)=76.2 dB(A), and L(99)=56.7 dB(A). The hospital building, machinery, tools, and people in the operating room were the main noise factors. In order to eliminate excess noise in the operating room it may be necessary to adopt a multidisciplinary approach. An improvement in environment (background noise levels), the implementation of effective standards, and the focusing of the surgical team on noise matters are considered necessary changes.
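The descriptors reported in this study can be computed from a series of short-interval level samples: L(eq) is the energy average, and L(N) is the level exceeded N% of the time, i.e. the (100-N)th percentile of the sampled levels. A hedged sketch (the sample values are invented, not the study's data):

```python
import numpy as np

def leq(levels_db):
    """Equivalent continuous level: energy average of the sampled levels."""
    return 10 * np.log10(np.mean(10 ** (np.asarray(levels_db) / 10.0)))

def exceedance_level(levels_db, n_percent):
    """L(N): the level exceeded N% of the time, i.e. the (100-N)th percentile."""
    return np.percentile(levels_db, 100 - n_percent)

# Hypothetical 1-s A-weighted level samples
samples = [58.0, 60.0, 62.0, 65.0, 70.0, 72.0, 68.0, 63.0, 61.0, 59.0]
print(round(leq(samples), 1), exceedance_level(samples, 10), exceedance_level(samples, 99))
```

Because L(eq) averages energies, brief loud events pull it above the arithmetic mean of the samples, which is why it is paired with percentile levels such as L(10) and L(99) when characterizing fluctuating noise.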
Measurement of sound emitted by flying projectiles with aeroacoustic sources
NASA Technical Reports Server (NTRS)
Cho, Y. I.; Shakkottai, P.; Harstad, K. G.; Back, L. H.
1988-01-01
Training projectiles with axisymmetric ring cavities that produce intense tones in an airstream were shot in a straight-line trajectory. A ground-based microphone was used to obtain the angular distribution of sound intensity produced from the flying projectile. Data reduction required calculation of Doppler and attenuation factors. Also, the directional sensitivity of the ground-mounted microphone was measured and used in the data reduction. A rapid angular variation of sound intensity produced from the projectile was found that can be used to plot an intensity contour map on the ground. A full-scale field test confirmed the validity of the aeroacoustic concept of producing a relatively intense whistle from the projectile, and the usefulness of short-range flight tests that yield acoustic data free of uncertainties associated with diffraction, reflection, and refraction at jet boundaries in free-jet tests.
How Do Honeybees Attract Nestmates Using Waggle Dances in Dark and Noisy Hives?
Hasegawa, Yuji; Ikeno, Hidetoshi
2011-01-01
It is well known that honeybees share information related to food sources with nestmates using a dance language that is representative of symbolic communication among non-primates. Some honeybee species engage in visually apparent behavior, walking in a figure-eight pattern inside their dark hives. It has been suggested that sounds play an important role in this dance language, even though a variety of wing vibration sounds are produced by honeybee behaviors in hives. It has been shown that dancing bees emit sounds primarily at about 250–300 Hz, which is in the same frequency range as honeybees' flight sounds. Thus the exact mechanism whereby honeybees attract nestmates using waggle dances in such a dark and noisy hive is as yet unclear. In this study, we used a flight simulator in which honeybees were attached to a torque meter in order to analyze the component of bees' orienting response caused only by sounds, and not by odor or by vibrations sensed by their legs. Using single-sound localization tests, we showed that honeybees preferred sounds around 265 Hz. Furthermore, in sound discrimination tests using sounds of the same frequency, honeybees preferred rhythmic sounds. Our results demonstrate that frequency and rhythmic components play complementary roles in localizing dance sounds. Dance sounds were presumably developed to share information in a dark and noisy environment. PMID:21603608
A Brief Historical Survey of Rocket Testing Induced Acoustic Environments at NASA SSC
NASA Technical Reports Server (NTRS)
Allgood, Daniel C.
2012-01-01
A survey was conducted of all the various rocket test programs that have been performed since the establishment of NASA Stennis Space Center. The relevant information from each of these programs was compiled and used to quantify the theoretical noise source levels using the NASA approved methodology for computing "acoustic loads generated by a propulsion system" (NASA SP-8072). This methodology, which is outlined in Reference 1, has been verified as a reliable means of determining the noise source characteristics of rocket engines. This information is being provided to establish reference environments for new government/business residents to ascertain whether or not their activities will generate acoustic environments that are more "encroaching" in the NASA Fee Area. In this report, the designation of sound power level refers to the acoustic power of the rocket engine at the engine itself. This is in contrast to the sound pressure level associated with the propagation of the acoustic energy in the surrounding air. The first part of the survey documents the "at source" sound power levels and their dominant frequency bands for the range of engines tested at Stennis. The second part of the survey discusses how the acoustic energy levels will propagate non-uniformly from the test stands. To demonstrate this, representative acoustic sound pressure mappings in the NASA Stennis Fee Area were computed for typical engine tests on the B-1 and E-1 test stands.
Digital servo control of random sound test excitation. [in reverberant acoustic chamber
NASA Technical Reports Server (NTRS)
Nakich, R. B. (Inventor)
1974-01-01
A digital servocontrol system for random noise excitation of a test object in a reverberant acoustic chamber employs a plurality of sensors spaced in the sound field to produce signals in separate channels which are decorrelated and averaged. The average signal is divided into a plurality of adjacent frequency bands cyclically sampled by a time division multiplex system, converted into digital form, and compared to a predetermined spectrum value stored in digital form. The results of the comparisons are used to control a time-shared up-down counter to develop gain control signals for the respective frequency bands in the spectrum of random sound energy picked up by the microphones.
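The control principle described (a time-shared up-down counter per frequency band, stepped toward a stored target spectrum) can be sketched as a simple integrating loop. This is a schematic reconstruction under assumed step sizes and a trivial plant model, not the patented implementation:

```python
def servo_band(measured_offset_db, target_db, steps=100, step_db=0.5):
    """Step an up-down counter so that the band level (offset + gain) approaches the target."""
    gain = 0.0
    for _ in range(steps):
        level = measured_offset_db + gain  # simulated band level picked up by the microphones
        # Compare to the stored spectrum value and count up or down by one step
        gain += step_db if level < target_db else -step_db
    return gain

# Hypothetical band: uncontrolled level 60 dB, stored target 72 dB
gain = servo_band(60.0, 72.0)
print(abs(60.0 + gain - 72.0) <= 0.5)  # settles within one counter step of the target
```

In the actual system this loop would run time-multiplexed across all frequency bands, with the averaged, decorrelated microphone signal providing the measured level for each band.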
Improvement of the predicted aural detection code ICHIN (I Can Hear It Now)
NASA Technical Reports Server (NTRS)
Mueller, Arnold W.; Smith, Charles D.; Lemasurier, Phillip
1993-01-01
Acoustic tests were conducted to study the far-field sound pressure levels and aural detection ranges associated with a Sikorsky S-76A helicopter in straight and level flight at various advancing blade tip Mach numbers. The flight altitude was nominally 150 meters above ground level. This paper compares the normalized predicted aural detection distances, based on the measured far-field sound pressure levels, to the normalized measured aural detection distances obtained from sound jury response measurements obtained during the same test. Both unmodified and modified versions of the prediction code ICHIN-6 (I Can Hear It Now) were used to produce the results for this study.
Experimental investigation of sound absorption of acoustic wedges for anechoic chambers
NASA Astrophysics Data System (ADS)
Belyaev, I. V.; Golubev, A. Yu.; Zverev, A. Ya.; Makashov, S. Yu.; Palchikovskiy, V. V.; Sobolev, A. F.; Chernykh, V. V.
2015-09-01
The results of measuring the sound absorption by acoustic wedges, which were performed in AC-3 and AC-11 reverberation chambers at the Central Aerohydrodynamic Institute (TsAGI), are presented. Wedges of different densities manufactured from superfine basaltic and thin mineral fibers were investigated. The results of tests of these wedges were compared to the sound absorption of wedges of the operating AC-2 anechoic facility at TsAGI. It is shown that basaltic-fiber wedges have better sound-absorption characteristics than the investigated analogs and can be recommended for facing anechoic facilities under construction.
GPS Sounding Rocket Developments
NASA Technical Reports Server (NTRS)
Bull, Barton
1999-01-01
Sounding rockets are suborbital launch vehicles capable of carrying scientific payloads several hundred miles in altitude. These missions return a variety of scientific data, including the chemical makeup and physical processes taking place in the atmosphere, the natural radiation surrounding the Earth, and data on the Sun, stars, galaxies, and many other phenomena. In addition, sounding rockets provide a reasonably economical means of conducting engineering tests for instruments and devices used on satellites and other spacecraft prior to their use in more expensive activities. This paper addresses the history of NASA Wallops Island's GPS sounding rocket experience since 1994 and the development of a highly accurate and useful system.
Prediction of transmission loss through an aircraft sidewall using statistical energy analysis
NASA Astrophysics Data System (ADS)
Ming, Ruisen; Sun, Jincai
1989-06-01
The transmission loss of randomly incident sound through an aircraft sidewall is investigated using statistical energy analysis. Formulas are also obtained for the simple calculation of sound transmission loss through single- and double-leaf panels. Both resonant and nonresonant sound transmissions can be easily calculated using the formulas. The formulas are used to predict sound transmission losses through a Y-7 propeller airplane panel. The panel measures 2.56 m x 1.38 m and has two windows. The agreement between predicted and measured values through most of the frequency ranges tested is quite good.
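The paper's SEA formulas are not reproduced in the abstract. As a reference point for the nonresonant (mass-controlled) path it mentions, single-panel transmission loss is often approximated by the textbook field-incidence mass law; the sketch below is that standard approximation, not the authors' formulas:

```python
import math

def mass_law_tl_db(surface_density_kg_m2, frequency_hz):
    """Field-incidence mass law for a single limp panel:
    TL ~= 20*log10(m * f) - 47 dB, with m in kg/m^2 and f in Hz."""
    return 20.0 * math.log10(surface_density_kg_m2 * frequency_hz) - 47.0
```

The law predicts roughly a 6 dB rise in transmission loss per doubling of frequency or of panel surface density, which is why lightweight aircraft sidewalls transmit low-frequency propeller noise so readily.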
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Shiyuan, E-mail: redaple@bit.edu.cn; Sun, Haoyu, E-mail: redaple@bit.edu.cn; Xu, Chunguang, E-mail: redaple@bit.edu.cn
The echo signal energy is directly affected by the eccentricity or angle of the incident sound beam when detecting inner longitudinal cracks in thick-walled pipes. A method for analyzing the relationship between echo signal energy and incident eccentricity is put forward. It can be used to estimate the echo signal energy when testing inside-wall longitudinal cracks of a pipe with the water-immersion method, using shear waves mode-converted from compression waves, by performing a two-dimensional integration of an “energy coefficient” in both the circumferential and axial directions. The calculation model is established for the cylindrical sound beam case, in which the refraction and reflection energy coefficients of different rays within the sound beam are treated as distinct. The echo signal energy is calculated for a particular cylindrical sound beam testing different pipes: a beam with a diameter of 0.5 inch (12.7 mm) testing a φ279.4 mm pipe and a φ79.4 mm one. As a comparison, the results of both the two-dimensional integration and a one-dimensional (circumferential) integration are listed, and only the former agrees well with experimental results. The estimation method proves to be valid and shows that the usual practice of simplifying the sound beam as a single ray for estimating echo signal energy and choosing the optimal incident eccentricity is not appropriate.
NASA Astrophysics Data System (ADS)
Zhou, Shiyuan; Sun, Haoyu; Xu, Chunguang; Cao, Xiandong; Cui, Liming; Xiao, Dingguo
2015-03-01
The echo signal energy is directly affected by the eccentricity or angle of the incident sound beam when detecting inner longitudinal cracks in thick-walled pipes. A method for analyzing the relationship between echo signal energy and incident eccentricity is put forward. It can be used to estimate the echo signal energy when testing inside-wall longitudinal cracks of a pipe with the water-immersion method, using shear waves mode-converted from compression waves, by performing a two-dimensional integration of an "energy coefficient" in both the circumferential and axial directions. The calculation model is established for the cylindrical sound beam case, in which the refraction and reflection energy coefficients of different rays within the sound beam are treated as distinct. The echo signal energy is calculated for a particular cylindrical sound beam testing different pipes: a beam with a diameter of 0.5 inch (12.7 mm) testing a φ279.4 mm pipe and a φ79.4 mm one. As a comparison, the results of both the two-dimensional integration and a one-dimensional (circumferential) integration are listed, and only the former agrees well with experimental results. The estimation method proves to be valid and shows that the usual practice of simplifying the sound beam as a single ray for estimating echo signal energy and choosing the optimal incident eccentricity is not appropriate.
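The two-dimensional integration of an energy coefficient over the circumferential and axial directions can be sketched numerically. The coefficient function below is a hypothetical placeholder (the paper's ray-dependent refraction/reflection coefficients are not given in the abstract); the integration scheme itself is a plain tensor-product midpoint rule:

```python
def echo_energy(energy_coeff, phi_range, z_range, n_phi=200, n_z=200):
    """Two-dimensional midpoint-rule integration of an energy coefficient
    over the circumferential (phi) and axial (z) directions."""
    phi0, phi1 = phi_range
    z0, z1 = z_range
    dphi = (phi1 - phi0) / n_phi
    dz = (z1 - z0) / n_z
    total = 0.0
    for i in range(n_phi):
        phi = phi0 + (i + 0.5) * dphi        # cell midpoint, circumferential
        for j in range(n_z):
            z = z0 + (j + 0.5) * dz          # cell midpoint, axial
            total += energy_coeff(phi, z) * dphi * dz
    return total
```

A one-dimensional (circumferential-only) estimate corresponds to collapsing the inner loop, which is the simplification the paper found inadequate.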
Kim, Dae Shik; Emerson, Robert Wall; Naghshineh, Koorosh; Pliskow, Jay; Myers, Kyle
2012-01-01
A repeated-measures design with block randomization was used for the study, in which 14 adults with visual impairments attempted to detect three different vehicles: a hybrid electric vehicle (HEV) with an artificially generated sound (Vehicle Sound for Pedestrians [VSP]), an HEV without the VSP, and a comparable internal combustion engine (ICE) vehicle. The VSP vehicle (mean ± standard deviation [SD] = 38.3 ± 14.8 m) was detected at a significantly farther distance than the HEV (mean ± SD = 27.5 ± 11.5 m), t = 4.823, p < 0.001, but no significant difference existed between the VSP and ICE vehicles (mean ± SD = 34.5 ± 14.3 m), t = 1.787, p = 0.10. Despite the overall sound level difference between the two test sites (parking lot = 48.7 dBA, roadway = 55.1 dBA), no significant difference in detection distance between the test sites was observed, F(1, 13) = 0.025, p = 0.88. No significant interaction was found between the vehicle type and test site, F(1.31, 16.98) = 0.272, p = 0.67. The findings of the study may help us understand how adding an artificially generated sound to an HEV could affect some of the orientation and mobility tasks performed by blind pedestrians. PMID:22773198
Testing a Mobile Version of a Cross-Chain Loran Atmospheric (M-CLASS) Sounding System.
NASA Astrophysics Data System (ADS)
Rust, W. David; Burgess, Donald W.; Maddox, Robert A.; Showell, Lester C.; Marshall, Thomas C.; Lauritsen, Dean K.
1990-02-01
We have tested the NCAR Cross-Chain LORAN Atmospheric Sounding System (CLASS) in a fully mobile configuration, which we call M-CLASS. The sondes use LORAN-C navigation signals to allow calculation of balloon position and horizontal winds. In nonstormy environments, thermodynamic and wind data were almost always of high quality. Besides providing special soundings for operational forecasts and research programs, a major feature of mobile ballooning with M-CLASS is the ability to obtain additional data by flying other instruments on the balloons. We flew an electric field meter, along with a sonde, into storms on 8 of the initial 47 test flights in the spring of 1987. In storms, pressure, temperature, humidity, and wind data were of good quality about 80%, 75%, 60%, and 40% of the time, respectively. In a flight into a mesocyclone, we measured electric fields as high as 135 kV/m (at 10 km MSL) in a region of negative charge. The electric field data from several storms allow a quantitative assessment of conditions that accompany loss of LORAN data. LORAN tracking was lost at a median field of about 16 kV/m, and it returned at a median field of about 7 kV/m. Corona discharge from the LORAN antenna on the sonde was a cause of the loss of LORAN. We provided our early-afternoon M-CLASS test soundings to the National Weather Service Forecast Office in Norman, Oklahoma, in near real-time via amateur packet radio and also to the National Severe Storms Forecast Center. These soundings illustrate the potential for improving operational forecasts. Other test flights showed that M-CLASS data can provide high-resolution information on the evolution of the Great Plains low-level jet stream. Our intercept of Hurricane Gilbert provided M-CLASS soundings in the right quadrant of the storm. We observed substantial wind shear in the lowest levels of the soundings around the time tornadoes were reported in south Texas.
This intercept demonstrated the feasibility of taking M-CLASS data during the landfall phase of hurricanes and tropical storms.
Rodriguez, Amanda I; Thomas, Megan L A; Fitzpatrick, Denis; Janky, Kristen L
Vestibular evoked myogenic potential (VEMP) testing is increasingly utilized in pediatric vestibular evaluations due to its diagnostic capability to identify otolith dysfunction and its feasibility of testing. However, there is evidence demonstrating that the high-intensity stimulation level required to elicit a reliable VEMP response causes acoustic trauma in adults. Despite the utility of VEMP testing in children, similar findings are unknown. It is hypothesized that increased sound exposure may occur in children because of differences in ear-canal volume (ECV) compared with adults, and that stimulus parameters (e.g., signal duration and intensity) will alter the exposure levels delivered to a child's ear. The objectives of this study are to (1) measure peak-to-peak equivalent sound pressure levels (peSPL) in children with normal hearing (CNH) and young adults with normal hearing (ANH) using high-intensity VEMP stimuli, (2) determine the effect of ECV on peSPL and calculate a safe exposure level for VEMP, and (3) assess whether cochlear changes exist after VEMP exposure. This was a 2-phase approach. Fifteen CNH and 12 ANH participated in phase I. Equivalent ECV was measured. In 1 ear, peSPL was recorded for 5 seconds at 105 to 125 dB SPL, in 5-dB increments, for 500- and 750-Hz tone bursts. Recorded peSPL values (accounting for stimulus duration) were then used to calculate safe sound energy exposure values for VEMP testing using the 132-dB recommended energy allowance from the 2003 European Union Guidelines. Fifteen CNH and 10 ANH received cervical and ocular VEMP testing in 1 ear in phase II. Subjects completed tympanometry, pre- and postaudiometric threshold testing, distortion product otoacoustic emissions, and a questionnaire addressing subjective otologic symptoms to study the effect of VEMP exposure on cochlear function.
(1) In response to high-intensity stimulation levels (e.g., 125 dB SPL), CNH had significantly higher peSPL measurements and smaller ECVs compared with ANH. (2) A significant linear relationship between equivalent ECV (as measured by diagnostic tympanometry) and peSPL exists and has an effect on total sound energy exposure level; based on data from phase I, 120 dB SPL was determined to be an acoustically safe stimulation level for testing in children. (3) Using the calculated safe stimulation level for VEMP testing, there was no significant effect of VEMP exposure on cochlear function (as measured by audiometric thresholds, distortion product otoacoustic emission amplitude levels, or subjective symptoms) in CNH and ANH. peSPL recordings in children's ears are significantly higher (~3 dB) than in adults in response to the high-intensity VEMP stimuli that are commonly used in practice. Equivalent ECV contributes to the peSPL delivered to the ear during VEMP testing and should be considered to determine safe acoustic VEMP stimulus parameters; children with smaller ECVs are at risk for unsafe sound exposure during routine VEMP testing, and stimuli should not exceed 120 dB SPL. Using a 120 dB SPL stimulus level for children during VEMP testing yields no change to cochlear function and reliable VEMP responses.
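The energy-allowance reasoning above can be made concrete. Converting a steady stimulus SPL and duration to a sound exposure level (SEL, normalized to 1 s) and comparing the summed energy of repeated stimuli against a fixed allowance is standard dB arithmetic; the sketch below assumes this simple model (the study's exact duration accounting is not given in the abstract), with the 132-dB figure taken from the text:

```python
import math

def sound_exposure_level(spl_db, duration_s):
    """SEL of a steady level held for duration_s, normalized to 1 second."""
    return spl_db + 10.0 * math.log10(duration_s)

def max_safe_stimuli(stimulus_spl_db, stimulus_duration_s, allowance_db=132.0):
    """Number of identical stimuli whose summed sound energy stays
    within the total exposure allowance (132 dB per the abstract)."""
    sel_single = sound_exposure_level(stimulus_spl_db, stimulus_duration_s)
    if sel_single >= allowance_db:
        return 0
    # Energies add linearly: n stimuli raise the SEL by 10*log10(n).
    return int(10.0 ** ((allowance_db - sel_single) / 10.0))
```

For example, halving the stimulus level by 5 dB roughly triples the permissible number of stimuli, which is why a 120 dB SPL ceiling matters for small ear canals.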
2010-02-01
This report details the preliminary testing of advanced technology development for clinical auscultation in high-noise environments. The Noise... represents a viable answer to the need for clinical auscultation in high-noise environments across the spectrum of casualty care. Keywords: stethoscope, auscultation. ...sounds, and these sounds prove very useful for clinical assessment and diagnosis (Bloch, 1993). Since that time, clinical auscultation by way of a
Evaluative Conditioning Induces Changes in Sound Valence
Bolders, Anna C.; Band, Guido P. H.; Stallen, Pieter Jan
2012-01-01
Through evaluative conditioning (EC) a stimulus can acquire an affective value by pairing it with another affective stimulus. While many sounds we encounter daily have acquired an affective value over life, EC has hardly been tested in the auditory domain. To get a more complete understanding of affective processing in the auditory domain, we examined EC of sound. In Experiment 1 we investigated whether the affective evaluation of short environmental sounds can be changed using affective words as unconditioned stimuli (US). Congruency effects on an affective priming task for conditioned sounds demonstrated successful EC. Subjective ratings for sounds paired with negative words changed accordingly. In Experiment 2 we investigated whether extinction occurs, i.e., whether the acquired valence remains stable after repeated presentation of the conditioned sound without the US. The acquired affective value remained present, albeit weaker, even after 40 extinction trials. These results provide clear evidence for EC effects in the auditory domain. We will argue that both associative and propositional processes are likely to underlie these effects. PMID:22514545
Multichannel sound reinforcement systems at work in a learning environment
NASA Astrophysics Data System (ADS)
Malek, John; Campbell, Colin
2003-04-01
Many people have experienced the entertaining benefits of a surround sound system, either in their own home or in a movie theater, but another application exists for multichannel sound that has for the most part gone unused. This is the application of multichannel sound systems to the learning environment. By incorporating a 7.1 surround processor and a touch panel interface programmable control system, the main lecture hall at the University of Michigan Taubman College of Architecture and Urban Planning has been converted from an ordinary lecture hall to a working audiovisual laboratory. The multichannel sound system is used in a wide variety of experiments, including exposure to sounds to test listeners' aural perception of the tonal characteristics of varying pitch, reverberation, speech transmission index, and sound-pressure level. The touch panel's custom interface allows a variety of user groups to control different parts of the AV system and provides preset capability that allows for numerous system configurations.
Introduction to the Special Issue on Sounding Rockets and Instrumentation
NASA Astrophysics Data System (ADS)
Christe, Steven; Zeiger, Ben; Pfaff, Rob; Garcia, Michael
2016-03-01
Rocket technology, originally developed for military applications, has provided a low-cost observing platform to carry critical and rapid-response scientific investigations for over 70 years. Even with the development of launch vehicles that could put satellites into orbit, high altitude sounding rockets have remained relevant. In addition to science observations, sounding rockets provide a unique technology test platform and a valuable training ground for scientists and engineers. Most importantly, sounding rockets remain the only way to explore the tenuous regions of the Earth's atmosphere (the upper stratosphere, mesosphere, and lower ionosphere/thermosphere) above balloon altitudes (~40 km) and below satellite orbits (~160 km). They can lift remote sensing telescope payloads with masses up to 400 kg to altitudes of 350 km, providing observing times of up to 6 min above the blocking influence of Earth's atmosphere. Though a number of sounding rocket research programs exist around the world, this article focuses on the NASA Sounding Rocket Program, and particularly on the astrophysical and solar sounding rocket payloads.
Captive Bottlenose Dolphins Do Discriminate Human-Made Sounds Both Underwater and in the Air
Lima, Alice; Sébilleau, Mélissa; Boye, Martin; Durand, Candice; Hausberger, Martine; Lemasson, Alban
2018-01-01
Bottlenose dolphins (Tursiops truncatus) spontaneously emit individual acoustic signals that identify them to group members. We tested whether these cetaceans could learn artificial individual sound cues played underwater and whether they would generalize this learning to airborne sounds. Dolphins are thought to perceive only underwater sounds and their training depends largely on visual signals. We investigated the behavioral responses of seven dolphins in a group to learned human-made individual sound cues, played underwater and in the air. Dolphins recognized their own sound cue after hearing it underwater as they immediately moved toward the source, whereas when it was airborne they gazed more at the source of their own sound cue but did not approach it. We hypothesize that they perhaps detected modifications of the sound induced by air or were confused by the novelty of the situation, but nevertheless recognized they were being “targeted.” They did not respond when hearing another group member’s cue in either situation. This study provides further evidence that dolphins respond to individual-specific sounds and that these marine mammals possess some capacity for processing airborne acoustic signals. PMID:29445350
NASA Astrophysics Data System (ADS)
Chen, Xiaol; Guo, Bei; Tuo, Jinliang; Zhou, Ruixin; Lu, Yang
2017-08-01
Nowadays, people are paying more and more attention to noise reduction in household refrigerator compressors. This paper established a sound field bounded by the compressor shell and the ISO 3744 standard field points. The acoustic transfer vectors (ATVs) in the sound field radiated by a refrigerator compressor shell were calculated and agree well with test results. The compressor shell surface was then divided into several parts. Based on the acoustic transfer vector approach, the sound pressure contribution of each part to the field points and its sound power contribution to the sound field were calculated. To characterize the noise radiation in the sound field, sound pressure cloud charts were analyzed, and the contribution curves of each part at different frequencies were obtained. Meanwhile, the sound power contribution of each part at different frequencies was analyzed to identify the parts that contribute the most sound power. Through this acoustic contribution analysis, the parts of the compressor shell that radiate the most noise were determined. This paper provides a credible and effective approach to the structural optimization of refrigerator compressor shells, which is meaningful for noise and vibration reduction.
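The core of the ATV approach is linearity: at a single frequency, the complex pressure at a field point is the inner product of the acoustic transfer vector with the vector of panel surface velocities, so each panel's contribution can be read off term by term. A minimal sketch with hypothetical values (the paper's shell partitioning and ATV data are not given in the abstract):

```python
def field_point_pressure(atv, panel_velocities):
    """Complex sound pressure at one field point and one frequency:
    inner product of the acoustic transfer vector (ATV) with the
    panel surface-velocity vector."""
    if len(atv) != len(panel_velocities):
        raise ValueError("ATV and velocity vectors must have equal length")
    return sum(a * v for a, v in zip(atv, panel_velocities))

def panel_contributions(atv, panel_velocities):
    """Per-panel contribution of each shell part to that pressure."""
    return [a * v for a, v in zip(atv, panel_velocities)]
```

Ranking the contribution magnitudes per frequency band is what identifies the shell parts worth stiffening or damping first.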
Why 'piss' is ruder than 'pee'? The role of sound in affective meaning making
Aryani, Arash; Conrad, Markus; Schmidtke, David; Jacobs, Arthur
2018-01-01
Most language users agree that some words sound harsh (e.g. grotesque) whereas others sound soft and pleasing (e.g. lagoon). While this prominent feature of human language has always been creatively deployed in art and poetry, it is still largely unknown whether the sound of a word in itself makes any contribution to the word’s meaning as perceived and interpreted by the listener. In a large-scale lexicon analysis, we focused on the affective substrates of words’ meaning (i.e. affective meaning) and words’ sound (i.e. affective sound); both being measured on a two-dimensional space of valence (ranging from pleasant to unpleasant) and arousal (ranging from calm to excited). We tested the hypothesis that the sound of a word possesses affective iconic characteristics that can implicitly influence listeners when evaluating the affective meaning of that word. The results show that a significant portion of the variance in affective meaning ratings of printed words depends on a number of spectral and temporal acoustic features extracted from these words after converting them to their spoken form (study1). In order to test the affective nature of this effect, we independently assessed the affective sound of these words using two different methods: through direct rating (study2a), and through acoustic models that we implemented based on pseudoword materials (study2b). In line with our hypothesis, the estimated contribution of words’ sound to ratings of words’ affective meaning was indeed associated with the affective sound of these words; with a stronger effect for arousal than for valence. Further analyses revealed crucial phonetic features potentially causing the effect of sound on meaning: For instance, words with short vowels, voiceless consonants, and hissing sibilants (as in ‘piss’) feel more arousing and negative. 
Our findings suggest that the process of meaning making is not solely determined by arbitrary mappings between formal aspects of words and concepts they refer to. Rather, even in silent reading, words’ acoustic profiles provide affective perceptual cues that language users may implicitly use to construct words’ overall meaning. PMID:29874293
Why 'piss' is ruder than 'pee'? The role of sound in affective meaning making.
Aryani, Arash; Conrad, Markus; Schmidtke, David; Jacobs, Arthur
2018-01-01
Most language users agree that some words sound harsh (e.g. grotesque) whereas others sound soft and pleasing (e.g. lagoon). While this prominent feature of human language has always been creatively deployed in art and poetry, it is still largely unknown whether the sound of a word in itself makes any contribution to the word's meaning as perceived and interpreted by the listener. In a large-scale lexicon analysis, we focused on the affective substrates of words' meaning (i.e. affective meaning) and words' sound (i.e. affective sound); both being measured on a two-dimensional space of valence (ranging from pleasant to unpleasant) and arousal (ranging from calm to excited). We tested the hypothesis that the sound of a word possesses affective iconic characteristics that can implicitly influence listeners when evaluating the affective meaning of that word. The results show that a significant portion of the variance in affective meaning ratings of printed words depends on a number of spectral and temporal acoustic features extracted from these words after converting them to their spoken form (study1). In order to test the affective nature of this effect, we independently assessed the affective sound of these words using two different methods: through direct rating (study2a), and through acoustic models that we implemented based on pseudoword materials (study2b). In line with our hypothesis, the estimated contribution of words' sound to ratings of words' affective meaning was indeed associated with the affective sound of these words; with a stronger effect for arousal than for valence. Further analyses revealed crucial phonetic features potentially causing the effect of sound on meaning: For instance, words with short vowels, voiceless consonants, and hissing sibilants (as in 'piss') feel more arousing and negative. 
Our findings suggest that the process of meaning making is not solely determined by arbitrary mappings between formal aspects of words and concepts they refer to. Rather, even in silent reading, words' acoustic profiles provide affective perceptual cues that language users may implicitly use to construct words' overall meaning.
Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L
2018-01-01
Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound specificity effects. In Experiment 1, we examined two conditions where integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: A change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.
Kastelein, Ronald A; van Heerden, Dorianne; Gransier, Robin; Hoek, Lean
2013-12-01
The high underwater sound pressure levels (SPLs) produced during pile driving to build offshore wind turbines may affect harbor porpoises. To estimate the discomfort threshold of pile driving sounds, a porpoise in a quiet pool was exposed to playbacks (46 strikes/min) at five SPLs (6 dB steps: 130-154 dB re 1 μPa). The spectrum of the impulsive sound resembled the spectrum of pile driving sound at tens of kilometers from the pile driving location in shallow water such as that found in the North Sea. The animal's behavior during test and baseline periods was compared. At and above a received broadband SPL of 136 dB re 1 μPa [zero-to-peak sound pressure level: 151 dB re 1 μPa; t90: 126 ms; sound exposure level of a single strike (SELss): 127 dB re 1 μPa² s] the porpoise's respiration rate increased in response to the pile driving sounds. At higher levels, he also jumped out of the water more often. Wild porpoises are expected to move tens of kilometers away from offshore pile driving locations; response distances will vary with context, the sounds' source level, parameters influencing sound propagation, and background noise levels. Copyright © 2013 Elsevier Ltd. All rights reserved.
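The single-strike numbers quoted above are internally consistent: for a roughly steady strike, SELss ≈ SPL + 10·log10(duration), and 136 dB over 126 ms gives about 127 dB re 1 μPa² s. A quick check of that arithmetic (ignoring the small correction for the 90%-energy window):

```python
import math

def sel_from_spl(spl_db, duration_s):
    """Approximate single-strike sound exposure level from a broadband SPL
    measured over duration_s (ignores the small 90%-energy correction)."""
    return spl_db + 10.0 * math.log10(duration_s)
```

Because the strike is shorter than one second, the SEL sits below the SPL, here by roughly 9 dB.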
Exposure to impulse noise at an explosives company: a case study.
Kulik, Aleksandra; Malinowska-Borowska, Jolanta
2018-02-15
Impulse noise encountered in workplaces is a threat to hearing. The aim of this study was to assess the occupational exposure to impulse noise produced by detonation of dynamite on the premises of an explosives company. Test points were located on the blast test area (inside and outside the bunker) and in work buildings across the site. Noise propagation measurement was performed during 130 blast tests at nine measurement points. At every point, at least 10 separate measurements of A-weighted equivalent sound pressure level (LAeq), maximum A-weighted sound pressure level (LAmax) and C-weighted peak sound pressure level (LCpeak) were made. Noise recorded in the blast test area exceeded occupational exposure limits (OELs). Noise levels measured in buildings did not exceed OELs. Results of the survey showed that for 62% of respondents, impulse noise causes difficulties in performing work. The most commonly reported symptoms include headaches, nervousness and irritability.
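The equivalent continuous level reported above is an energy average, not an arithmetic one: each dB sample is converted to linear energy, averaged, and converted back. A minimal sketch of LAeq for equal-duration samples (the survey's actual integration intervals are not given in the abstract):

```python
import math

def l_eq(levels_db):
    """Energy-average (equivalent continuous) level of equal-duration
    dB samples: 10*log10(mean of 10^(L/10))."""
    mean_energy = sum(10.0 ** (l / 10.0) for l in levels_db) / len(levels_db)
    return 10.0 * math.log10(mean_energy)
```

Note how a single loud sample dominates: averaging 80 dB and 90 dB gives about 87.4 dB, not 85, which is why brief blasts drive occupational exposure figures.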
National Report on the NASA Sounding Rocket and Balloon Programs
NASA Technical Reports Server (NTRS)
Eberspeaker, Philip; Fairbrother, Debora
2013-01-01
The U. S. National Aeronautics and Space Administration (NASA) Sounding Rockets and Balloon Programs conduct a total of 30 to 40 missions per year in support of the NASA scientific community and other users. The NASA Sounding Rockets Program supports the science community by integrating their experiments into the sounding rocket payloads, and providing both the rocket vehicle and launch operations services. Activities since 2011 have included two flights from Andoya Rocket Range, more than eight flights from White Sands Missile Range, approximately sixteen flights from Wallops Flight Facility, two flights from Poker Flat Research Range, and four flights from Kwajalein Atoll. Other activities included the final developmental flight of the Terrier-Improved Malemute launch vehicle, a test flight of the Talos-Terrier-Oriole launch vehicle, and a host of smaller activities to improve program support capabilities. Several operational missions have utilized the new Terrier-Malemute vehicle. The NASA Sounding Rockets Program is currently engaged in the development of a new sustainer motor known as the Peregrine. The Peregrine development effort will involve one static firing and three flight tests with a target completion date of August 2014. The NASA Balloon Program supported numerous scientific and developmental missions since its last report. The program conducted flights from the U.S., Sweden, Australia, and Antarctica utilizing standard and experimental vehicles. Of particular note are the successful test flights of the Wallops Arc Second Pointer (WASP), the successful demonstration of a medium-size Super Pressure Balloon (SPB), and most recently, three simultaneous missions aloft over Antarctica. NASA continues its successful incremental design qualification program and will support a science mission aboard WASP in late 2013 and a science mission aboard the SPB in early 2015.
NASA has also embarked on an intra-agency collaboration to launch a rocket from a balloon to conduct supersonic decelerator tests. An overview of NASA's Sounding Rockets and Balloon Operations, Technology Development and Science support activities will be presented.
Acoustic agglomeration of fine particles based on a high intensity acoustical resonator
NASA Astrophysics Data System (ADS)
Zhao, Yun; Zeng, Xinwu; Tian, Zhangfu
2015-10-01
Acoustic agglomeration (AA) is considered a promising method for reducing the air pollution caused by fine aerosol particles. Removal efficiency and energy consumption are the primary parameters for industrial applications and generally conflict with each other. It has been shown that removal efficiency increases with sound intensity and that an optimal frequency exists for a given polydisperse aerosol. Accordingly, a high-efficiency, low-energy-cost removal system was constructed using acoustical resonance. A high-intensity standing wave is generated by a tube system with an abrupt section change, driven by four loudspeakers. A numerical model of the tube system was built based on the finite element method, and the resonance condition and SPL increase were confirmed. Extensive tests were carried out to investigate the acoustic field in the agglomeration chamber. The removal efficiency for fine particles was tested by comparing filter-paper mass and particle size distributions at different operating conditions, including sound pressure level (SPL) and frequency. The experimental study demonstrated that agglomeration increases with sound pressure level. The sound pressure level in the agglomeration chamber is between 145 dB and 165 dB from 500 Hz to 2 kHz. The resonance frequency can be predicted with quarter-wave tube theory, and a sound pressure level gain of more than 10 dB is obtained at resonance. With the help of high-intensity sound waves, fine particles are greatly reduced, and the AA effect is enhanced at high SPL. The optimal frequency is 1.1 kHz for aerosol generated from coal ash. In the resonance tube, the higher resonance frequencies are not integer multiples of the first one; as a result, strong nonlinearity is avoided by this dissonant characteristic, and shock waves were not found in the test results. The mechanism and testing system can be applied effectively to industrial processes in the future.
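For an ideal uniform quarter-wave tube, the fundamental resonance is f1 = c/(4L) and higher modes fall at odd multiples of f1; the abstract notes that the abrupt-section tube deviates from this (its higher resonances are not integer multiples of the first), so the sketch below covers only the idealized prediction used for the fundamental:

```python
def quarter_wave_resonance(length_m, c_m_s=343.0, n=1):
    """n-th resonance of an ideal quarter-wave tube closed at one end:
    f_n = (2n - 1) * c / (4L), i.e. odd harmonics only."""
    return (2 * n - 1) * c_m_s / (4.0 * length_m)
```

For example, hitting the reported 1.1 kHz optimum as the fundamental would require a tube length of roughly c/(4·1100) ≈ 78 mm under these idealized assumptions.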
The impact of sound-field systems on learning and attention in elementary school classrooms.
Dockrell, Julie E; Shield, Bridget
2012-08-01
The authors evaluated the installation and use of sound-field systems to investigate their impact on teaching and learning in elementary school classrooms. The evaluation included acoustic surveys of classrooms, questionnaire surveys of students and teachers, and experimental testing of students with and without the use of sound-field systems. In this article, the authors report students' perceptions of classroom environments and objective data evaluating change in performance on cognitive and academic assessments with amplification over a 6-month period. Teachers were positive about the use of sound-field systems in improving children's listening and attention to verbal instructions. Over time, students in amplified classrooms did not differ from those in nonamplified classrooms in their reports of listening conditions, nor did their performance differ in measures of numeracy, reading, or spelling. Use of sound-field systems in the classrooms resulted in significantly larger gains in the number of correct items on the nonverbal measure of speed of processing and on the measure of listening comprehension. Analysis controlling for classroom acoustics indicated that students' listening comprehension scores improved significantly in amplified classrooms with poorer acoustics but not in amplified classrooms with better acoustics. Both teacher ratings and student performance on standardized tests indicated that sound-field systems improved children's understanding of spoken language. However, academic attainments showed no benefits from the use of sound-field systems. Classroom acoustics were a significant factor influencing the efficacy of sound-field systems; children in classes with poorer acoustics benefited in listening comprehension, whereas there was no additional benefit for children in classrooms with better acoustics.
NASA Astrophysics Data System (ADS)
Fan, C.; Tian, Y.; Wang, Z. Q.; Nie, J. K.; Wang, G. K.; Liu, X. S.
2017-06-01
In view of the noise characteristics and service environment of urban power substations, this paper explores the idea of compound impedance: porous sound-absorbing material is filled into the first resonance cavity of a double-resonance sound-absorbing structure, producing a new type of composite acoustic board. Acoustic characterization was conducted using the standard impedance-tube test, and the influence of assembly order, the thickness and area density of the filling material, and the back cavity on sound-absorption performance was investigated. The results show that the new acoustic board, consisting of aluminum fibrous material as the inner structure, a micro-porous board as the outer structure, and a polyester-filled space between them, has good sound-absorption performance for low-frequency and full-frequency noise. As the thickness and area density of the filling material and the thickness of the back cavity increase, the peak of the sound-absorption coefficient curve moves toward lower frequencies.
Annerstedt, Matilda; Jönsson, Peter; Wallergård, Mattias; Johansson, Gerd; Karlson, Björn; Grahn, Patrik; Hansen, Ase Marie; Währborg, Peter
2013-06-13
Experimental research on stress recovery in natural environments is limited, as is study of the effect of sounds of nature. After inducing stress by means of a virtual stress test, we explored physiological recovery in two different virtual natural environments (with and without exposure to sounds of nature) and in one control condition. Cardiovascular data and saliva cortisol were collected. Repeated-measures ANOVA indicated parasympathetic activation in the group subjected to sounds of nature in a virtual natural environment, suggesting enhanced stress recovery may occur in such surroundings. The group that recovered in virtual nature without sound and the control group displayed no particular autonomic activation or deactivation. The results demonstrate a potential mechanistic link between nature, the sounds of nature, and stress recovery, and suggest the potential importance of virtual reality as a tool in this research field.
Schafer, Erin C; Romine, Denise; Musgrave, Elizabeth; Momin, Sadaf; Huynh, Christy
2013-01-01
Previous research has suggested that electrically coupled frequency modulation (FM) systems substantially improved speech-recognition performance in noise in individuals with cochlear implants (CIs). However, there is limited evidence to support the use of electromagnetically coupled (neck loop) FM receivers with contemporary CI sound processors containing telecoils. The primary goal of this study was to compare speech-recognition performance in noise and subjective ratings of adolescents and adults using one of three contemporary CI sound processors coupled to electromagnetically and electrically coupled FM receivers from Oticon. A repeated-measures design was used to compare speech-recognition performance in noise and subjective ratings without and with the FM systems across three test sessions (Experiment 1) and to compare performance at different FM-gain settings (Experiment 2). Descriptive statistics were used in Experiment 3 to describe output differences measured through a CI sound processor. Experiment 1 included nine adolescents or adults with unilateral or bilateral Advanced Bionics Harmony (n = 3), Cochlear Nucleus 5 (n = 3), and MED-EL OPUS 2 (n = 3) CI sound processors. In Experiment 2, seven of the original nine participants were tested. In Experiment 3, electroacoustic output was measured from a Nucleus 5 sound processor when coupled to the electromagnetically coupled Oticon Arc neck loop and electrically coupled Oticon R2. In Experiment 1, participants completed a field trial with each FM receiver and three test sessions that included speech-recognition performance in noise and a subjective rating scale. In Experiment 2, participants were tested in three receiver-gain conditions. Results in both experiments were analyzed using repeated-measures analysis of variance. Experiment 3 involved electroacoustic-test measures to determine the monitor-earphone output of the CI alone and CI coupled to the two FM receivers. 
The results in Experiment 1 suggested that both FM receivers provided significantly better speech-recognition performance in noise than the CI alone; however, the electromagnetically coupled receiver provided significantly better speech-recognition performance in noise and better ratings in some situations than the electrically coupled receiver when set to the same gain. In Experiment 2, the primary analysis suggested significantly better speech-recognition performance in noise for the neck-loop versus electrically coupled receiver, but a second analysis, using the best performance across gain settings for each device, revealed no significant differences between the two FM receivers. Experiment 3 revealed monitor-earphone output differences in the Nucleus 5 sound processor for the two FM receivers when set to the +8 setting used in Experiment 1 but equal output when the electrically coupled device was set to a +16 gain setting and the electromagnetically coupled device was set to the +8 gain setting. Individuals with contemporary sound processors may show more favorable speech-recognition performance in noise with electromagnetically coupled FM systems (i.e., Oticon Arc), which is most likely related to the input processing and signal processing pathway within the CI sound processor for direct input versus telecoil input. Further research is warranted to replicate these findings with a larger sample size and to develop and validate a more objective approach to fitting FM systems to CI sound processors.
Hernandé-Gatón, Patrícia; Palma-Dibb, Regina Guenka; Silva, Léa Assed Bezerra da; Faraoni, Juliana Jendiroba; de Queiroz, Alexandra Mussolino; Lucisano, Marília Pacífico; Silva, Raquel Assed Bezerra da; Nelson Filho, Paulo
2018-04-01
To evaluate the effect of ultrasonic, sonic and rotating-oscillating powered toothbrushing systems on surface roughness and wear of white spot lesions and sound enamel. 40 tooth segments obtained from third molar crowns had the enamel surface divided into thirds, one of which was not subjected to toothbrushing. In the other two thirds, sound enamel and enamel with artificially induced white spot lesions were randomly assigned to four groups (n=10): UT: ultrasonic toothbrush (Emmi-dental); ST1: sonic toothbrush (Colgate ProClinical Omron); ST2: sonic toothbrush (Sonicare Philips); and ROT: rotating-oscillating toothbrush (control) (Oral-B Professional Care Triumph 5000 with SmartGuide). The specimens were analyzed by confocal laser microscopy for surface roughness and wear. Data were analyzed statistically by paired t-tests, Kruskal-Wallis, two-way ANOVA and Tukey's post-test (α=0.05). The different powered toothbrushing systems did not cause a significant increase in the surface roughness of sound enamel (P>0.05). In the ROT group, the roughness of the white spot lesion surface increased significantly after toothbrushing and differed from the UT group (P<0.05). In the ROT group, brushing promoted significantly greater wear of the white spot lesion compared with sound enamel, and this group differed significantly from the ST1 group (P<0.05). None of the powered toothbrushing systems (ultrasonic, sonic and rotating-oscillating) tested caused significant alterations on sound dental enamel. However, conventional rotating-oscillating toothbrushing on enamel with white spot lesions increased surface roughness and wear.
ERIC Educational Resources Information Center
Knight, Marcia S.; Rosenblatt, Laurence
1983-01-01
Fourteen severely multiply handicapped children with rubella syndrome, six to 16 years of age, were examined with the PLAYTEST system, an operant test procedure using sound and light as stimuli and reinforcers. (Author/MC)
Sound Naming in Neurodegenerative Disease
ERIC Educational Resources Information Center
Chow, Maggie L.; Brambati, Simona M.; Gorno-Tempini, Maria Luisa; Miller, Bruce L.; Johnson, Julene K.
2010-01-01
Modern cognitive neuroscientific theories and empirical evidence suggest that brain structures involved in movement may be related to action-related semantic knowledge. To test this hypothesis, we examined the naming of environmental sounds in patients with corticobasal degeneration (CBD) and progressive supranuclear palsy (PSP), two…
Newborn infants detect cues of concurrent sound segregation.
Bendixen, Alexandra; Háden, Gábor P; Németh, Renáta; Farkas, Dávid; Török, Miklós; Winkler, István
2015-01-01
Separating concurrent sounds is fundamental for a veridical perception of one's auditory surroundings. Sound components that are harmonically related and start at the same time are usually grouped into a common perceptual object, whereas components that are not in harmonic relation or have different onset times are more likely to be perceived in terms of separate objects. Here we tested whether neonates are able to pick up the cues supporting this sound organization principle. We presented newborn infants with a series of complex tones with their harmonics in tune (creating the percept of a unitary sound object) and with manipulated variants, which gave the impression of two concurrently active sound sources. The manipulated variant had either one mistuned partial (single-cue condition) or the onset of this mistuned partial was also delayed (double-cue condition). Tuned and manipulated sounds were presented in random order with equal probabilities. Recording the neonates' electroencephalographic responses allowed us to evaluate their processing of the sounds. Results show that, in both conditions, mistuned sounds elicited a negative displacement of the event-related potential (ERP) relative to tuned sounds from 360 to 400 ms after sound onset. The mistuning-related ERP component resembles the object-related negativity (ORN) component in adults, which is associated with concurrent sound segregation. Delayed onset additionally led to a negative displacement from 160 to 200 ms, which was probably more related to the physical parameters of the sounds than to their perceptual segregation. The elicitation of an ORN-like response in newborn infants suggests that neonates possess the basic capabilities of segregating concurrent sounds by detecting inharmonic relations between the co-occurring sounds.
NASA Technical Reports Server (NTRS)
Grosveld, F.; Vanaken, J.
1978-01-01
Sound pressure levels in the test facility were studied as affected by varying: (1) microphone positions; (2) equalizer settings; and (3) panel clamping forces. Measurements were made using a Beranek tube, or the Beranek tube in combination with an extension tube and a special test section. In all configurations, tests were executed with and without a test panel installed. The influence of the speaker back panel and the back panel of the Beranek tube on the sound pressure levels inside the test tube was also investigated. It is shown that the definition of noise reduction is more useful in relation to this test facility than transmission loss.
Performance on Tests of Central Auditory Processing by Individuals Exposed to High-Intensity Blasts
2012-07-01
...percent (gap detected on at least four of the six presentations), with all longer durations receiving a score greater than 50 percent. ... Binaural Processing and Sound Localization: Temporal precision of neural firing is also involved in binaural processing and localization of sound in space. ... The Masking Level Difference (MLD) test evaluates the integrity of the earliest sites of binaural comparison and sensitivity to interaural phase in the ...
CHF - tests; Congestive heart failure - tests; Cardiomyopathy - tests; HF - tests ... An echocardiogram (echo) is a test that uses sound waves to create a moving picture of the heart. The picture is much more detailed than a plain ...
Hearing aid fitting for visual and hearing impaired patients with Usher syndrome type IIa.
Hartel, B P; Agterberg, M J H; Snik, A F; Kunst, H P M; van Opstal, A J; Bosman, A J; Pennings, R J E
2017-08-01
Usher syndrome is the leading cause of hereditary deaf-blindness. Most patients with Usher syndrome type IIa start using hearing aids from a young age. A serious complaint concerns interference between sound localisation abilities and adaptive sound processing (compression), as present in today's hearing aids. The aim of this study was to investigate the effect of advanced signal processing on binaural hearing, including sound localisation. In this prospective study, patients were fitted with hearing aids offering both nonlinear (compression) and linear amplification programs. Data logging was used to objectively evaluate the use of either program. Performance was evaluated with a speech-in-noise test, a sound localisation test and two questionnaires focussing on self-reported benefit. Data logging confirmed that the reported use of hearing aids was high. The linear program was used significantly more often (average use: 77%) than the nonlinear program (average use: 17%). The results for speech intelligibility in noise and sound localisation did not show a significant difference between types of amplification. However, the self-reported outcomes showed higher scores on 'ease of communication' and overall benefit, and significantly lower scores on disability, for the new hearing aids compared to the previous hearing aids with compression amplification. Patients with Usher syndrome type IIa prefer linear amplification over nonlinear amplification when fitted with novel hearing aids. Apart from a significantly higher logged use, no difference in speech in noise and sound localisation was observed between linear and nonlinear amplification with the currently used tests. Further research is needed to evaluate the reasons behind the preference for the linear settings.
Chalupper, Josef
2017-01-01
The benefits of combining a cochlear implant (CI) and a hearing aid (HA) in opposite ears on speech perception were examined in 15 adult unilateral CI recipients who regularly use a contralateral HA. A within-subjects design was used, comprising speech intelligibility testing, listening effort ratings, and a sound quality questionnaire for the conditions CI alone, CIHA together, and HA alone when applicable. The primary outcome of bimodal benefit, defined as the difference between CIHA and CI, was statistically significant for speech intelligibility in quiet as well as for intelligibility in noise across tested spatial conditions. A reduction in effort beyond the gain in intelligibility was found at the highest tested signal-to-noise ratio. Moreover, the bimodal listening situation was rated to sound more voluminous, less tinny, and less unpleasant than CI alone. Listening effort and sound quality emerged as feasible and relevant measures to demonstrate bimodal benefit across a clinically representative range of bimodal users. These extended dimensions of speech perception can shed more light on the array of benefits provided by complementing a CI with a contralateral HA. PMID:28874096
Lokajíček, T; Kuchařová, A; Petružálek, M; Šachlová, Š; Svitek, T; Přikryl, R
2016-09-01
Semi-continuous ultrasonic sounding of experimental mortar bars used in the accelerated alkali-silica reactivity laboratory test (ASTM C1260) is proposed as a supplementary measurement technique providing data that are highly sensitive to minor changes in the microstructure of hardening/deteriorating concrete mixtures. A newly designed, patent-pending heating chamber was constructed, allowing ultrasonic sounding of mortar bars stored in accelerating solution without the need to remove the test specimens from the bath during measurement. Subsequent automatic analysis of the recorded ultrasonic signals showed a high correlation with the measured length changes (expansion) and a high sensitivity to microstructural changes. The changes in P-wave velocity, and in the energy, amplitude, and frequency of the ultrasonic signal, were in the range of 10-80%, compared to a 0.51% change in length. The results presented in this study thus show that ultrasonic sounding appears more sensitive to the ongoing deterioration of the concrete microstructure by alkali-silica reaction than the dimensional changes are.
[A new medical education using a lung sound auscultation simulator called "Mr. Lung"].
Yoshii, Chiharu; Anzai, Takashi; Yatera, Kazuhiro; Kawajiri, Tatsunori; Nakashima, Yasuhide; Kido, Masamitsu
2002-09-01
We developed a lung sound auscultation simulator, "Mr. Lung", in 2001. To improve auscultation skills for lung sounds, we utilized this new device in our educational training facility. From June 2001 to March 2002, we used "Mr. Lung" for small-group training in which one hundred fifth-year medical students were divided into small groups, one of which was taught every other week. The class consisted of ninety-minute training periods for auscultation of lung sounds. First, we explained the classification of lung sounds, and then auscultation tests were performed: students listened to three cases of abnormal or adventitious lung sounds on "Mr. Lung" through their stethoscopes, then answered questions on the location and quality of the sounds. We then explained the correct answers and how to differentiate lung sounds on "Mr. Lung". Additionally, at the beginning and the end of the lecture, students completed a five-point self-assessment of their lung sound auscultation. The ratios of correct answers for lung sounds were 36.9% for differences between bilateral lung sounds, 52.5% for coarse crackles, 34.1% for fine crackles, 69.2% for wheezes, 62.1% for rhonchi and 22.2% for stridor. Self-assessment scores were significantly higher after the class than before. The ratio of correct lung sound answers was surprisingly low among medical students. We believe repetitive auscultation with the simulator to be extremely helpful in medical education.
English Orthographic Learning in Chinese-L1 Young EFL Beginners.
Cheng, Yu-Lin
2017-12-01
English orthographic learning, among Chinese-L1 children who were beginning to learn English as a foreign language, was documented when: (1) only visual memory was at their disposal, (2) visual memory and either some letter-sound knowledge or some semantic information was available, and (3) visual memory, some letter-sound knowledge and some semantic information were all available. When only visual memory was available, orthographic learning (measured via an orthographic choice test) was meagre. Orthographic learning was significant when either semantic information or letter-sound knowledge supplemented visual memory, with letter-sound knowledge generating greater significance. Although the results suggest that letter-sound knowledge plays a more important role than semantic information, letter-sound knowledge alone does not suffice to achieve perfect orthographic learning, as orthographic learning was greatest when letter-sound knowledge and semantic information were both available. The present findings are congruent with a view that the orthography of a foreign language drives its orthographic learning more than L1 orthographic learning experience, thus extending Share's (Cognition 55:151-218, 1995) self-teaching hypothesis to include non-alphabetic L1 children's orthographic learning of an alphabetic foreign language. The little letter-sound knowledge development observed in the experiment-I control group indicates that very little letter-sound knowledge develops in the absence of dedicated letter-sound training. Given the important role of letter-sound knowledge in English orthographic learning, dedicated letter-sound instruction is highly recommended.
NASA Astrophysics Data System (ADS)
Wang, Y. S.; Shen, G. Q.; Xing, Y. F.
2014-03-01
Based on the artificial neural network (ANN) technique, an objective sound quality evaluation (SQE) model for the synthetical annoyance of vehicle interior noises is presented in this paper. Following the standard GB/T18697, the interior noises of a sample vehicle under different working conditions were first measured and saved in a noise database. Mathematical models for the loudness, sharpness and roughness of the measured vehicle noises were established and implemented in Matlab. Sound qualities of the vehicle interior noises were also rated in jury tests following the anchored semantic differential (ASD) procedure. Using the objective and subjective evaluation results, an ANN-based model for synthetical annoyance evaluation of vehicle noises, called ANN-SAE, was then developed. Finally, the ANN-SAE model was validated by verification tests using the leave-one-out algorithm. The results suggest that the proposed ANN-SAE model is accurate and effective and can be used directly to estimate the sound quality of vehicle interior noises, which is very helpful for vehicle acoustical design and improvement. The ANN-SAE approach may be extended to other sound-related fields for product quality evaluation in SQE engineering.
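The abstract above does not give implementation details for the leave-one-out validation, so the following is only a rough sketch (not the authors' code): a simple nearest-neighbour regressor stands in for the ANN, mapping hypothetical (loudness, sharpness, roughness) triples to annoyance ratings.

```python
import math

def loo_predictions(X, y, predict_fn):
    """Leave-one-out: train on all samples but one, predict the held-out one."""
    preds = []
    for i in range(len(X)):
        X_train = X[:i] + X[i + 1:]
        y_train = y[:i] + y[i + 1:]
        preds.append(predict_fn(X_train, y_train, X[i]))
    return preds

def nn_predict(X_train, y_train, x):
    """1-nearest-neighbour regression in (loudness, sharpness, roughness) space."""
    d = [math.dist(x, xt) for xt in X_train]
    return y_train[d.index(min(d))]

# hypothetical psychoacoustic metrics and jury annoyance ratings
X = [(20.1, 1.2, 0.8), (25.3, 1.5, 1.1), (30.2, 1.9, 1.4),
     (22.4, 1.3, 0.9), (28.7, 1.7, 1.2)]
y = [3.1, 4.2, 5.6, 3.4, 5.0]

preds = loo_predictions(X, y, nn_predict)
rmse = math.sqrt(sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y))
```

Replacing `nn_predict` with a trained network reproduces the ANN-SAE validation scheme; the leave-one-out wrapper itself is model-agnostic.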
A simulator-based study of in-flight auscultation.
Tourtier, Jean-Pierre; Libert, Nicolas; Clapson, Patrick; Dubourdieu, Stéphane; Jost, Daniel; Tazarourte, Karim; Astaud, Cécil-Emmanuel; Debien, Bruno; Auroy, Yves
2014-04-01
The use of a stethoscope is essential to the delivery of continuous, supportive en route care during aeromedical evacuations. We compared the capability of two stethoscopes (electronic, Littmann 3000; conventional, Littmann Cardiology III) at detecting pathologic heart and lung sounds aboard a C135 medical transport aircraft. Sounds were mimicked using a mannequin-based simulator, SimMan. Five practitioners examined the mannequin during a flight, with a variety of abnormalities: crackles, wheezing, right and left lung silence, as well as systolic, diastolic, and Austin Flint murmurs. Diagnoses (correct or incorrect) with the electronic and conventional stethoscopes were compared using McNemar's test. A total of 70 evaluations were performed. For cardiac sounds, the diagnosis was correct in 0/15 and 4/15 auscultations with the conventional and electronic stethoscopes, respectively (McNemar test, P = 0.13). For lung sounds, a correct diagnosis was reached with the conventional stethoscope in 10/20 auscultations versus 18/20 with the electronic stethoscope (P = 0.013). Flight practitioners involved in aeromedical evacuation on the C135 are better able to perform lung auscultation on a mannequin with the amplified stethoscope than with the traditional one. No benefit was found for heart sounds.
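McNemar's test, used in the study above for paired diagnoses, has a simple exact form based only on the discordant pairs. A minimal sketch with hypothetical counts (the abstract does not report the raw discordant-pair breakdown):

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact two-sided McNemar test on the discordant pair counts:
    b = pairs where only method A was correct, c = pairs where only
    method B was correct. Returns the two-sided binomial p-value."""
    n = b + c
    k = min(b, c)
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)  # cap at 1 (the two tails can overlap when b == c)

# hypothetical paired outcomes: of 20 lung auscultations, 8 were correct
# only with the electronic scope and 1 only with the conventional one
p_value = mcnemar_exact(1, 8)
```

With these illustrative counts the p-value is about 0.039, i.e. the electronic stethoscope would be judged significantly better on the paired data.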
Spread Across Liquids: The World's First Microgravity Combustion Experiment on a Sounding Rocket
NASA Technical Reports Server (NTRS)
1995-01-01
The Spread Across Liquids (SAL) experiment characterizes how flames spread over liquid pools in a low-gravity environment in comparison to test data at Earth's gravity and with numerical models. The modeling and experimental data provide a more complete understanding of flame spread, an area of textbook interest, and add to our knowledge about on-orbit and Earthbound fire behavior and fire hazards. The experiment was performed on a sounding rocket to obtain the necessary microgravity period. Such crewless sounding rockets provide a comparatively inexpensive means to fly very complex, and potentially hazardous, experiments and perform reflights at a very low additional cost. SAL was the first sounding-rocket-based, microgravity combustion experiment in the world. It was expected that gravity would affect ignition susceptibility and flame spread through buoyant convection in both the liquid pool and the gas above the pool. Prior to these sounding rocket tests, however, it was not clear whether the fuel would ignite readily and whether a flame would be sustained in microgravity. It also was not clear whether the flame spread rate would be faster or slower than in Earth's gravity.
Effect of gap detection threshold on consistency of speech in children with speech sound disorder.
Sayyahi, Fateme; Soleymani, Zahra; Akbari, Mohammad; Bijankhan, Mahmood; Dolatshahi, Behrooz
2017-02-01
The present study examined the relationship between gap detection threshold and speech error consistency in children with speech sound disorder. The participants were children five to six years of age, categorized into three groups: typical speech, consistent speech disorder (CSD) and inconsistent speech disorder (ISD). The phonetic gap detection threshold test used for this study is a valid test comprising six syllables with inter-stimulus intervals between 20 and 300 ms. The participants were asked to listen to the recorded stimuli three times and indicate whether they heard one or two sounds. There was no significant difference between the typical and CSD groups (p=0.55), but there were significant differences in performance between the ISD and CSD groups and between the ISD and typical groups (p=0.00). The ISD group discriminated between speech sounds at a higher threshold. Children with inconsistent speech errors could not distinguish speech sounds during time-limited phonetic discrimination. It is suggested that inconsistency in speech reflects inconsistency in auditory perception, caused by a high gap detection threshold.
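The abstract does not state the exact scoring rule, so the following is only a hedged sketch: one plausible estimate of the gap detection threshold is the shortest inter-stimulus interval at which "two sounds" is reliably reported (the criterion and response data below are hypothetical).

```python
def gap_threshold(responses, criterion=2 / 3):
    """Estimate a gap detection threshold: the shortest inter-stimulus
    interval (ms) at which the listener reports 'two sounds' on at least
    `criterion` of the repetitions.
    responses: {isi_ms: [True/False per repetition, ...]}"""
    detected = sorted(isi for isi, r in responses.items()
                      if sum(r) / len(r) >= criterion)
    return detected[0] if detected else None

# hypothetical data: three repetitions per ISI, True = heard two sounds
responses = {20: [False, False, False], 50: [False, True, False],
             100: [True, True, True], 200: [True, True, True],
             300: [True, True, True]}
threshold = gap_threshold(responses)
```

Under this assumed rule the hypothetical listener's threshold is 100 ms; a child with an ISD-like profile would show a larger value.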
Clinical Assessment of the Noise Immune Stethoscope aboard a U.S. Navy Carrier
2011-11-01
Participants rated their confidence in the use of this device to detect heart/lung sounds compared to a traditional stethoscope. A Wilcoxon rank... Figure 15. Median ratings of confidence in the use of the device to detect pathologic heart/lung sounds compared to a traditional stethoscope in... intubation versus heart/lung sounds; figure 16). To assess the ease of use compared to a traditional stethoscope, one-sample Wilcoxon signed rank tests...
NASA Astrophysics Data System (ADS)
Stamminger, A.; Turner, J.; Hörschgen, M.; Jung, W.
2005-02-01
This paper describes the potential of sounding rockets to provide a platform for flight experiments in hypersonic conditions as a supplement to wind tunnel tests. Real flight data from measurement durations longer than 30 seconds can be compared with predictions from CFD calculations. The paper reviews projects flown on sounding rockets, focusing on the current efforts at the Mobile Rocket Base, DLR, on the SHarp Edge Flight EXperiment SHEFEX.
Smartphone threshold audiometry in underserved primary health-care contexts.
Sandström, Josefin; Swanepoel, De Wet; Carel Myburgh, Hermanus; Laurent, Claude
2016-01-01
To validate a calibrated smartphone-based hearing test in a sound booth environment and in primary health-care clinics. A repeated-measures within-subject study design was employed whereby air-conduction hearing thresholds determined by smartphone-based audiometry were compared to those from conventional audiometry in a sound booth and in a primary health-care clinic environment. A total of 94 subjects (mean age 41 years ± 17.6 SD, range 18-88; 64% female) were assessed, of whom 64 were tested in the sound booth and 30 in primary health-care clinics without a booth. In the sound booth, 63.4% of conventional and smartphone thresholds indicated normal hearing (≤15 dB HL). Conventional thresholds exceeding 15 dB HL corresponded to smartphone thresholds within ≤10 dB in 80.6% of cases, with an average threshold difference of -1.6 dB ± 9.9 SD. In primary health-care clinics, 13.7% of conventional and smartphone thresholds indicated normal hearing (≤15 dB HL). Conventional thresholds exceeding 15 dB HL corresponded to smartphone thresholds within ≤10 dB in 92.9% of cases, with an average threshold difference of -1.0 dB ± 7.1 SD. Accurate air-conduction audiometry can be conducted in a sound booth, and without a sound booth in an underserved community health-care clinic, using a smartphone.
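The agreement statistics reported above (mean threshold difference, its SD, and the percentage of thresholds within 10 dB) are straightforward to compute. The sketch below uses hypothetical thresholds, not the study's data:

```python
from statistics import mean, stdev

def threshold_agreement(conventional, smartphone, tol=10):
    """Agreement between conventional and smartphone audiometry
    thresholds (dB HL), compared pairwise per frequency.
    Returns (mean difference, SD of differences, fraction within tol dB)."""
    diffs = [s - c for c, s in zip(conventional, smartphone)]
    within = sum(abs(d) <= tol for d in diffs) / len(diffs)
    return mean(diffs), stdev(diffs), within

# hypothetical thresholds for one ear across six test frequencies
conv = [20, 25, 30, 45, 50, 60]
phone = [20, 20, 35, 40, 50, 65]
mean_diff, sd_diff, pct_within = threshold_agreement(conv, phone)
```

For these illustrative values the mean difference is 0 dB and every paired threshold falls within the 10 dB tolerance, mirroring the form (not the values) of the study's reported agreement figures.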
The low-frequency sound power measuring technique for an underwater source in a non-anechoic tank
NASA Astrophysics Data System (ADS)
Zhang, Yi-Ming; Tang, Rui; Li, Qi; Shang, Da-Jing
2018-03-01
In order to determine the radiated sound power of an underwater source below the Schroeder cut-off frequency in a non-anechoic tank, a low-frequency extension measuring technique is proposed. This technique is based on a unique relationship between the transmission characteristics of the enclosed field and those of the free field, which can be obtained as a correction term based on previous measurements of a known simple source. The radiated sound power of an unknown underwater source in the free field can thereby be obtained accurately from measurements in a non-anechoic tank. To verify the validity of the proposed technique, a mathematical model of the enclosed field is established using normal-mode theory, and the relationship between the transmission characteristics of the enclosed and free fields is obtained. The radiated sound power of an underwater transducer source is tested in a glass tank using the proposed low-frequency extension measuring technique. Compared with the free field, the radiated sound power level of the narrowband spectrum deviation is found to be less than 3 dB, and the 1/3 octave spectrum deviation is found to be less than 1 dB. The proposed testing technique can be used not only to extend the low-frequency applications of non-anechoic tanks, but also for measurement of radiated sound power from complicated sources in non-anechoic tanks.
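The abstract above describes the correction in general terms only; in the simplest reading, the known reference source yields a tank-to-free-field correction term that is subtracted from the unknown source's tank measurement. A sketch under that assumption (all level values are hypothetical):

```python
def free_field_sound_power(lw_source_tank, lw_ref_tank, lw_ref_free):
    """Estimate the free-field radiated sound power level (dB) of an
    unknown source from a non-anechoic tank measurement, using a known
    reference source to form the tank-to-free-field correction term."""
    correction = lw_ref_tank - lw_ref_free  # how much the tank inflates levels
    return lw_source_tank - correction

# hypothetical levels: the reference source reads 6 dB higher in the tank
# than in the free field, so 6 dB is removed from the unknown source too
lw_free = free_field_sound_power(lw_source_tank=118.0,
                                 lw_ref_tank=121.0,
                                 lw_ref_free=115.0)
```

In practice the correction would be computed per frequency band (the paper reports narrowband and 1/3-octave deviations), but the band-by-band arithmetic has the same form.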
Hybrid waste filler filled bio-polymer foam composites for sound absorbent materials
NASA Astrophysics Data System (ADS)
Rus, Anika Zafiah M.; Azahari, M. Shafiq M.; Kormin, Shaharuddin; Soon, Leong Bong; Zaliran, M. Taufiq; Ahraz Sadrina M. F., L.
2017-09-01
Sound-absorbing materials are a major requirement in many industries, and the sound insulation developed should be efficient at reducing sound. It is also important to produce sound-absorbing materials economically, so that they are cheaper and user friendly. Thus, in this research, the sound-absorbent properties of bio-polymer foam filled with hybrid fillers of wood dust and waste tire rubber were investigated. Waste cooking oil from the crisp industry was converted into a bio-monomer, filled with different proportions of fillers, and fabricated into bio-polymer foam composites. Two fabrication methods were applied: the Close Mold Method (CMM) and the Open Mold Method (OMM). A total of four bio-polymer foam composite samples were produced for each method. The hybrid filler, a mixture of wood dust and waste tire rubber, was loaded at 2.5%, 5.0%, 7.5% and 10% weight-to-weight ratio with the bio-monomer. The sound absorption of the bio-polymer foam composite samples was tested using the impedance tube test according to ASTM E-1050, and scanning electron microscopy was used to determine the morphology and porosity of the samples. The sound absorption coefficient (α) across the frequency range revealed that the foam with 10.0% hybrid filler shows the highest α, 0.963. The highest hybrid filler loading contributed the smallest pore sizes but the most interconnected pores. When such a highly porous material is exposed to incident sound waves, the air molecules at the surface of the material and within its pores are forced to vibrate and lose some of their original energy. It is concluded that bio-polymer foam filled with hybrid fillers is suitable for acoustic applications in automotive components such as dashboards, door panels, cushions, etc.
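The impedance-tube test cited above (ASTM E1050) derives the normal-incidence absorption coefficient from the transfer function between two microphones. A sketch of that calculation follows; the geometry and values are illustrative, not the paper's.

```python
import cmath

def absorption_coefficient(h12, freq, spacing, gap, c=343.0):
    """Normal-incidence sound absorption coefficient from the two-microphone
    transfer function H12 = p2/p1 (ASTM E1050 method).
    spacing: microphone separation (m); gap: distance from the sample face
    to the nearer microphone (m); c: speed of sound (m/s)."""
    k = 2 * cmath.pi * freq / c
    h_i = cmath.exp(-1j * k * spacing)   # incident-wave transfer function
    h_r = cmath.exp(1j * k * spacing)    # reflected-wave transfer function
    r = (h12 - h_i) / (h_r - h12) * cmath.exp(2j * k * (gap + spacing))
    return 1 - abs(r) ** 2               # alpha = 1 - |R|^2

# synthetic check: build H12 from a known reflection coefficient R = 0.5
f, s, l, c = 500.0, 0.05, 0.10, 343.0
k = 2 * cmath.pi * f / c
R = 0.5

def p(x):  # pressure at distance x from the sample face (incident + reflected)
    return cmath.exp(1j * k * x) + R * cmath.exp(-1j * k * x)

h12 = p(l) / p(l + s)                       # nearer mic over farther mic
alpha = absorption_coefficient(h12, f, s, l)
```

The synthetic check recovers α = 1 - |0.5|² = 0.75; with measured transfer functions the same routine would be evaluated per frequency to build an α curve like the one reported for the 10% filler foam.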
Performance of an open-source heart sound segmentation algorithm on eight independent databases.
Liu, Chengyu; Springer, David; Clifford, Gari D
2017-08-01
Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, using a wider variety of independently acquired data of varying quality. First, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120,000 s of heart sounds recorded from 1297 subjects (both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. The HSMM-based segmentation method was then evaluated on the eight assembled databases using the common metrics of sensitivity, specificity, and accuracy, as well as the F1 measure. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset of 102,306 heart sounds. Average F1 scores of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals were observed. The F1 score was shown to increase with an increase in the tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirms the algorithm's effectiveness.
The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for evaluators who need to test their algorithms with realistic data and share reproducible results.
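The evaluation metrics named above reduce to simple ratios of event counts; a minimal sketch (the function name and count arguments are illustrative, not the challenge's actual scoring code):

```python
def segmentation_scores(tp, fp, fn):
    """Sensitivity (recall), positive predictivity (precision), and the
    F1 measure from true-positive, false-positive, and false-negative
    segmentation event counts."""
    se = tp / (tp + fn)                 # sensitivity
    ppv = tp / (tp + fp)                # positive predictivity
    f1 = 2 * se * ppv / (se + ppv)      # harmonic mean of the two
    return se, ppv, f1
```

For example, 9 correctly detected events with 1 false alarm and 1 miss give Se = PPV = F1 = 0.9.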
Hemispherical breathing mode speaker using a dielectric elastomer actuator.
Hosoya, Naoki; Baba, Shun; Maeda, Shingo
2015-10-01
Although indoor acoustic characteristics should ideally be assessed by measuring the reverberation time using a point sound source, a regular polyhedron loudspeaker, which has multiple loudspeakers on a chassis, is typically used. However, such a configuration is not a point sound source if the size of the loudspeaker is large relative to the target sound field. This study investigates a small lightweight loudspeaker using a dielectric elastomer actuator vibrating in the breathing mode (the pulsating mode such as the expansion and contraction of a balloon). Acoustic testing with regard to repeatability, sound pressure, vibration mode profiles, and acoustic radiation patterns indicate that dielectric elastomer loudspeakers may be feasible.
Lugli, Marco; Romani, Romano; Ponzi, Stefano; Bacciu, Salvatore; Parmigiani, Stefano
2009-01-01
We auditorily stimulated patients affected by subjective tinnitus with broadband noise containing a notch around their tinnitus frequency. We assessed the long-term effects on tinnitus perception in patients listening to notched noise stimuli (referred to as windowed sound therapy [WST]) by measuring the variation of subjects' tinnitus loudness over a period of 2-12 months. We tested the effectiveness of WST using non-notched broadband noise and noise of water as control sound therapies. We found a significant long-term reduction of tinnitus loudness in subjects treated with notched noise but not in those treated with control stimulations. These results point to the importance of the personalized sound treatment of tinnitus sufferers for the development of an effective tinnitus sound therapy.
Embedded System Implementation of Sound Localization in Proximal Region
NASA Astrophysics Data System (ADS)
Iwanaga, Nobuyuki; Matsumura, Tomoya; Yoshida, Akihiro; Kobayashi, Wataru; Onoye, Takao
A sound localization method for the proximal region is proposed, based on a low-cost 3D sound localization algorithm that uses head-related transfer functions (HRTFs). The auditory parallax model is applied to the current algorithm so that more accurate HRTFs can be used for sound localization in the proximal region. In addition, head-shadowing effects based on a rigid-sphere model are reproduced in the proximal region by means of a second-order IIR filter. A subjective listening test demonstrates the effectiveness of the proposed method. An embedded implementation of the proposed method is also described, showing that it improves sound effects in the proximal region with only a 5.1% increase in memory capacity and 8.3% in computational cost.
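A second-order IIR filter of the kind used for head shadowing can be sketched as a standard biquad; the low-pass design below follows the common audio-EQ cookbook form and is only a stand-in for the paper's rigid-sphere fit (all names and coefficient choices are illustrative):

```python
import math

def biquad_lowpass(fc, fs, q=0.707):
    """Illustrative biquad low-pass coefficients (audio-EQ cookbook style),
    normalized so the first feedback coefficient is 1."""
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    cw = math.cos(w0)
    b = [(1 - cw) / 2, 1 - cw, (1 - cw) / 2]
    a = [1 + alpha, -2 * cw, 1 - alpha]
    return [bi / a[0] for bi in b], [1.0, a[1] / a[0], a[2] / a[0]]

def filt(b, a, x):
    """Direct-form I second-order filtering of sequence x."""
    y = []
    x1, x2, y1, y2 = 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y.append(yn)
    return y
```

A quick sanity check on the design: the low-pass has unity DC gain, so a constant input settles to a constant output of the same value.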
Sound absorption and morphology characteristic of porous concrete paving blocks
NASA Astrophysics Data System (ADS)
Halim, N. H. Abd; Nor, H. Md; Ramadhansyah, P. J.; Mohamed, A.; Hassan, N. Abdul; Ibrahim, M. H. Wan; Ramli, N. I.; Nazri, F. Mohamed
2017-11-01
In this study, the sound absorption and morphology characteristics of Porous Concrete Paving Blocks (PCPB) at different coarse aggregate sizes are presented. Three sizes of coarse aggregate were used: passing 10 mm retained 5 mm (Control), passing 8 mm retained 5 mm (8 - 5), and passing 10 mm retained 8 mm (10 - 8). The sound absorption test was conducted in an impedance tube at different frequencies. It was found that the size of the coarse aggregate affects the absorption of the specimens, with PCPB 10 - 8 showing higher sound absorption than the other blocks. In addition, the microstructure morphology of PCPB gives a clearer view of the micro-cracks and voids inside the specimens, which affect the sound absorption results.
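For reference, the quantity an impedance-tube test reports is the normal-incidence absorption coefficient, obtained from the complex pressure reflection coefficient R that the two-microphone transfer-function method yields; the final step is one line:

```python
def absorption_coefficient(r):
    """Normal-incidence sound absorption coefficient from the complex
    pressure reflection coefficient R measured in an impedance tube:
    alpha = 1 - |R|**2 (fraction of incident energy not reflected)."""
    return 1.0 - abs(r) ** 2
```

A specimen reflecting 20% of the incident pressure amplitude absorbs 96% of the incident energy.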
OPO lidar sounding of trace atmospheric gases in the 3-4 μm spectral range
NASA Astrophysics Data System (ADS)
Romanovskii, Oleg A.; Sadovnikov, Sergey A.; Kharchenko, Olga V.; Yakovlev, Semen V.
2018-04-01
The applicability of a KTA crystal-based laser system with optical parametric oscillator (OPO) generation to lidar sounding of the atmosphere in the 3-4 μm spectral range is studied in this work. A technique developed for lidar sounding of trace atmospheric gases (TAGs) is based on the differential absorption lidar (DIAL) method and differential optical absorption spectroscopy (DOAS). The DIAL-DOAS technique is tested to estimate its efficiency for lidar sounding of atmospheric trace gases. The numerical simulation performed shows that a KTA-based OPO laser is a promising radiation source for remote DIAL-DOAS sounding of the TAGs under study along surface tropospheric paths. The possibility of using a PD38-03-PR photodiode for DIAL gas analysis of the atmosphere is also shown.
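The DIAL retrieval underlying the technique can be sketched as follows: the average number density of the absorbing gas over a range cell follows from the ratio of on- and off-line backscatter powers at the cell boundaries (a textbook form of the DIAL equation; the function and its units are illustrative, not the authors' code):

```python
import math

def dial_number_density(p_on, p_off, dsigma, dr):
    """Textbook DIAL equation: average gas number density over a range
    cell [R1, R2] from backscatter powers at the on- and off-absorption
    wavelengths. p_on and p_off are (P(R1), P(R2)) pairs, dsigma is the
    differential absorption cross-section [cm^2], dr the cell length [cm].
    N = ln(P_off(R2)*P_on(R1) / (P_on(R2)*P_off(R1))) / (2*dsigma*dr)."""
    ratio = (p_off[1] * p_on[0]) / (p_on[1] * p_off[0])
    return math.log(ratio) / (2.0 * dsigma * dr)
```

With an on-line return attenuated by exp(-0.2) relative to the off-line return over a 100 m cell and dsigma = 1e-18 cm^2, the retrieved density is 1e13 cm^-3.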
NASA Technical Reports Server (NTRS)
Angerer, James R.; Mccurdy, David A.; Erickson, Richard A.
1991-01-01
The purpose of this investigation was to develop a noise annoyance model, superior to those already in use, for evaluating passenger response to sounds containing tonal components which may be heard within current and future commercial aircraft. The sound spectra investigated ranged from those being experienced by passengers on board turbofan powered aircraft now in service to those cabin noise spectra passengers may experience within advanced propeller-driven aircraft of the future. A total of 240 sounds were tested in this experiment. Sixty-six of these 240 sounds were steady state, while the other 174 varied temporally due to tonal beating. Here, the entire experiment is described, but the analysis is limited to those responses elicited by the 66 steady-state sounds.
The Tire Noise Performance of Nevada Highway Pavements: On-Board Sound Intensity (OBSI) Measurement
DOT National Transportation Integrated Search
2008-06-01
On Board Sound Intensity measurements were conducted on freeway segments in the vicinity of Las Vegas and Reno, Nevada in an effort to document the tire-pavement noise levels of existing pavements. Tested pavements included Portland Cement Concrete (...
DOT National Transportation Integrated Search
2006-01-01
Through analysis of earlier research and some recent on-road testing it is demonstrated that, with adequate precaution, accurate measurement of tire/pavement noise using on-board sound intensity (SI) can be accomplished with two intensity probes ...
An efficient robust sound classification algorithm for hearing aids.
Nordqvist, Peter; Leijon, Arne
2004-06-01
An efficient robust sound classification algorithm based on hidden Markov models is presented. The system would enable a hearing aid to automatically change its behavior for differing listening environments according to the user's preferences. This work attempts to distinguish between three listening environment categories: speech in traffic noise, speech in babble, and clean speech, regardless of the signal-to-noise ratio. The classifier uses only the modulation characteristics of the signal. The classifier ignores the absolute sound pressure level and the absolute spectrum shape, resulting in an algorithm that is robust against irrelevant acoustic variations. The measured classification hit rate was 96.7%-99.5% when the classifier was tested with sounds representing one of the three environment categories included in the classifier. False-alarm rates were 0.2%-1.7% in these tests. The algorithm is robust and efficient and consumes a small amount of instructions and memory. It is fully possible to implement the classifier in a DSP-based hearing instrument.
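A level-invariant modulation feature of the kind the classifier relies on can be sketched by smoothing the rectified signal into an envelope and normalizing the envelope's spread by its mean; dividing by the mean discards the absolute sound pressure level, in the spirit described above (a toy illustration, not the published feature set):

```python
import math

def modulation_depth(x, fs, env_cutoff=50.0):
    """Toy modulation feature: rectify, smooth with a one-pole low-pass to
    get the envelope, then return std/mean of the envelope. The ratio is
    unchanged if the whole signal is scaled, so absolute level is ignored."""
    a = math.exp(-2 * math.pi * env_cutoff / fs)   # one-pole coefficient
    env, e = [], 0.0
    for s in x:
        e = a * e + (1 - a) * abs(s)
        env.append(e)
    env = env[len(env) // 2:]                      # drop filter warm-up
    mean = sum(env) / len(env)
    var = sum((v - mean) ** 2 for v in env) / len(env)
    return math.sqrt(var) / mean
```

An amplitude-modulated tone yields a much larger value than a steady tone, and scaling the input by any factor leaves the feature unchanged.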
NASA Technical Reports Server (NTRS)
Rentz, P. E.
1976-01-01
Experimental evaluations of the acoustical characteristics and source sound power and directionality measurement capabilities of the NASA Lewis 9 x 15 foot low speed wind tunnel in the untreated or hardwall configuration were performed. The results indicate that source sound power estimates can be made using only settling chamber sound pressure measurements. The accuracy of these estimates, expressed as one standard deviation, can be improved from ±4 dB to ±1 dB if sound pressure measurements in the preparation room and diffuser are also used and source directivity information is utilized. A simple procedure is presented. Acceptably accurate measurements of source direct-field acoustic radiation were found to be limited by the test section reverberant characteristics to 3.0 feet for omnidirectional and highly directional sources. Wind-on noise measurements in the test section, settling chamber, and preparation room were found to depend on the sixth power of tunnel velocity. The levels were compared with various analytic models. Results are presented and discussed.
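The sixth-power velocity dependence translates directly into decibels, which is a handy check against measured levels:

```python
import math

def wind_noise_delta_db(v1, v2):
    """Level change implied by a sixth-power dependence of wind-on noise
    on tunnel velocity: L2 - L1 = 10*log10((v2/v1)**6) = 60*log10(v2/v1)."""
    return 60.0 * math.log10(v2 / v1)
```

Doubling the tunnel velocity therefore raises wind-on noise levels by about 18 dB.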
The GISS sounding temperature impact test
NASA Technical Reports Server (NTRS)
Halem, M.; Ghil, M.; Atlas, R.; Susskind, J.; Quirk, W. J.
1978-01-01
The impact of DST 5 and DST 6 satellite sounding data on mid-range forecasting was studied. The GISS temperature sounding technique, the GISS time-continuous four-dimensional assimilation procedure based on optimal statistical analysis, the GISS forecast model, and the verification techniques developed, including impact on local precipitation forecasts, are described. It is found that the impact of sounding data was substantial and beneficial for the winter test period, Jan. 29 - Feb. 21, 1976. Forecasts started from initial states obtained with the aid of satellite data showed a mean improvement of about 4 points in the 48- and 72-hour S1 scores as verified over North America and Europe. This corresponds to an 8 to 12 hour improvement in the forecast range at 48 hours. An automated local precipitation forecast model applied to 128 cities in the United States showed on average a 15% improvement when satellite data were used for numerical forecasts. The improvement was 75% in the Midwest.
NASA Astrophysics Data System (ADS)
Coleman, Seth W.
2008-10-01
Distinct acoustic whistles are associated with the wing-beats of many doves, and are especially noticeable when doves ascend from the ground when startled. I thus hypothesized that these sounds may be used by flock-mates as cues of potential danger. To test this hypothesis, I compared the responses of mourning doves ( Zenaida macroura), northern cardinals ( Cardinalis cardinalis), and house sparrows ( Passer domesticus) to audio playbacks of dove ‘startle wing-whistles’, cardinal alarm calls, dove ‘nonstartle wing-whistles’, and sparrow ‘social chatter’. Following playbacks of startle wing-whistles and alarm calls, conspecifics and heterospecifics startled and increased vigilance more than after playbacks of other sounds. Also, the latency to return to feeding was greater following playbacks of startle wing-whistles and alarm calls than following playbacks of other sounds. These results suggest that both conspecifics and heterospecifics may attend to dove wing-whistles in decisions related to antipredator behaviors. Whether the sounds of dove wing-whistles are intentionally produced signals warrants further testing.
A closed-loop automatic control system for high-intensity acoustic test systems.
NASA Technical Reports Server (NTRS)
Slusser, R. A.
1973-01-01
Description of an automatic control system for high-intensity acoustic tests in reverberation chambers. Working in 14 one-third-octave bands from 50 to 1000 Hz, the desired sound pressure levels are set into the memory in the control system before the test. The control system then increases the sound pressure level in the reverberation chamber gradually in each of the one-third-octave bands until the level set in the memory is reached. This level is then maintained for the duration of the test. Additional features of the system are overtest protection, the capability of 'holding' the spectrum at any time, and the presence of a total test timer.
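The per-band ramp-and-hold logic described above, including overtest protection, can be sketched as a simple loop (the measure/set_gain interface, step size, and limits are hypothetical, not the system's actual implementation):

```python
def ramp_to_setpoint(measure, set_gain, target_db, step_db=1.0,
                     overtest_db=3.0, max_steps=200):
    """Sketch of one band's control logic: raise the drive gain gradually
    until the measured band level reaches the stored target, then hold;
    abort if the level ever overshoots the target by more than overtest_db."""
    gain_db = -60.0                      # start well below the setpoint
    for _ in range(max_steps):
        set_gain(gain_db)
        level = measure()
        if level > target_db + overtest_db:
            raise RuntimeError("overtest protection tripped")
        if level >= target_db:
            return gain_db               # hold this gain for the test duration
        gain_db += step_db
    raise RuntimeError("target level not reached")
```

With a simulated chamber whose band level tracks the drive gain plus a fixed offset, the loop settles at exactly the gain that meets the setpoint.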
Ponnath, Abhilash; Farris, Hamilton E.
2014-01-01
Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3–10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and varied forms of modulation: modulation lasted <2 s and, in different cells, excitability either decreased, increased or shifted in latency. Within cells, the modulatory effect of sound-by-sound electrical stimulation varied between different acoustic stimuli, including for different male calls, suggesting modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene. PMID:25120437
Long, Edward R.; Carr, R. Scott; Biedenbach, James M.; Weakland, Sandra; Partridge, Valerie; Dutch, Margaret
2013-01-01
Data from toxicity tests of the pore water extracted from Puget Sound sediments were compiled from surveys conducted from 1997 to 2009. Tests were performed on 664 samples collected throughout all of the eight monitoring regions in the Sound, an area encompassing 2,294.1 km2. Tests were performed with the gametes of the Pacific purple sea urchin, Strongylocentrotus purpuratus, to measure percent fertilization success as an indicator of relative sediment quality. Data were evaluated to determine the incidence, degree of response, geographic patterns, spatial extent, and temporal changes in toxicity. This is the first survey of this kind and magnitude in Puget Sound. In the initial round of surveys of the eight regions, 40 of 381 samples were toxic for an incidence of 10.5 %. Stations classified as toxic represented an estimated total of 107.1 km2, equivalent to 4.7 % of the total area. Percent sea urchin fertilization ranged from >100 % of the nontoxic, negative controls to 0 %. Toxicity was most prevalent and pervasive in the industrialized harbors and lowest in the deep basins. Conditions were intermediate in deep-water passages, urban bays, and rural bays. A second round of testing in four regions and three selected urban bays was completed 5–10 years following the first round. The incidence and spatial extent of toxicity decreased in two of the regions and two of the bays and increased in the other two regions and the third bay; however, only the latter change was statistically significant. Both the incidence and spatial extent of toxicity were lower in the Sound than in most other US estuaries and marine bays.
Perception of stochastically undersampled sound waveforms: a model of auditory deafferentation
Lopez-Poveda, Enrique A.; Barrios, Pablo
2013-01-01
Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects. PMID:23882176
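The core idea — each afferent as a stochastic digitizer whose aggregate representation improves with fiber count — can be illustrated with a toy sampler in which each "fiber" passes a sample with probability proportional to its magnitude (names and normalization are invented for illustration; this is not the paper's vocoder processing):

```python
import random

def stochastic_undersample(x, n_fibers, seed=0):
    """Toy stochastic digitization: each of n_fibers passes a sample with
    probability proportional to its magnitude, mimicking spikes that occur
    more often for intense waveform features. Averaging across fibers
    approximates the waveform better as n_fibers grows (deafferentation
    corresponds to small n_fibers). Illustrative only."""
    rng = random.Random(seed)
    peak = max(abs(s) for s in x) or 1.0
    out = []
    for s in x:
        p = abs(s) / peak                          # firing probability
        kept = sum(1 for _ in range(n_fibers) if rng.random() < p)
        sign = 1.0 if s >= 0 else -1.0
        out.append(sign * peak * kept / n_fibers)  # aggregate across fibers
    return out
```

On a test waveform, the reconstruction error with many fibers is well below the error with only two, mirroring the hypothesized perceptual degradation under deafferentation.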
Oak Ridge Reservation Public Warning Siren System Annual Test Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. F. Gee
2000-10-01
The full operational test of the Oak Ridge Reservation (ORR) Public Warning Siren System (PWSS) was successfully conducted on September 27, 2000. The annual test is a full-scale sounding of the individual siren systems around each of the three Department of Energy (DOE) sites in Oak Ridge, Tennessee. The purpose of the annual test is to demonstrate and validate the siren systems' ability to alert personnel outdoors in the Immediate Notification Zones (INZ) (approximately two miles) around each site. The success of this test is based on two critical functions of the siren system. The first function is system operability. The system is considered operable if 90% of the sirens are operational. System diagnostics and direct field observations were used to validate the operability of the siren systems. Based on the diagnostic results and field observations, greater than 90% of the sirens were considered operational. The second function is system audibility. The system is considered audible if the siren could be heard in the immediate notification zones around each of the three sites. Direct field observations, along with sound level measurements, were used to validate the audibility of the siren system. Based on the direct field observations and sound level measurements, the siren system was considered audible. The combination of field observations, system diagnostic status reports, and sound level measurements provided a high level of confidence that the system met and would meet operational requirements upon demand. As part of the overall system test, the Tennessee Emergency Management Agency (TEMA) activated the Emergency Alerting System (EAS), which utilized area radio stations to make announcements regarding the test and to remind residents of what to do in the event of an actual emergency.
40 CFR 202.21 - Standard for operation under stationary test.
Code of Federal Regulations, 2012 CFR
2012-07-01
... sound level in excess of 88 dB(A) measured on an open site with fast meter response at 50 feet from the... applicable which generates a sound level in excess of 85 dB(A) measured on an open site with fast meter...
Phylogenetic review of tonal sound production in whales in relation to sociality
May-Collado, Laura J; Agnarsson, Ingi; Wartzok, Douglas
2007-01-01
Background It is widely held that in toothed whales, high frequency tonal sounds called 'whistles' evolved in association with 'sociality' because in delphinids they are used in a social context. Recently, whistles were hypothesized to be an evolutionary innovation of social dolphins (the 'dolphin hypothesis'). However, both 'whistles' and 'sociality' are broad concepts each representing a conglomerate of characters. Many non-delphinids, whether solitary or social, produce tonal sounds that share most of the acoustic characteristics of delphinid whistles. Furthermore, hypotheses of character correlation are best tested in a phylogenetic context, which has hitherto not been done. Here we summarize data from over 300 studies on cetacean tonal sounds and social structure and phylogenetically test existing hypotheses on their co-evolution. Results Whistles are 'complex' tonal sounds of toothed whales that demark a more inclusive clade than the social dolphins. Whistles are also used by some riverine species that live in simple societies, and have been lost twice within the social delphinoids, all observations that are inconsistent with the dolphin hypothesis as stated. However, cetacean tonal sounds and sociality are intertwined: (1) increased tonal sound modulation significantly correlates with group size and social structure; (2) changes in tonal sound complexity are significantly concentrated on social branches. Also, duration and minimum frequency correlate as do group size and mean minimum frequency. Conclusion Studying the evolutionary correlation of broad concepts, rather than that of their component characters, is fraught with difficulty, while limits of available data restrict the detail in which component character correlations can be analyzed in this case. Our results support the hypothesis that sociality influences the evolution of tonal sound complexity. 
The level of social and whistle complexity are correlated, suggesting that complex tonal sounds play an important role in social communication. Minimum frequency is higher in species with large groups, and correlates negatively with duration, which may reflect the increased distances over which non-social species communicate. Our findings are generally stable across a range of alternative phylogenies. Our study points to key species where future studies would be particularly valuable for enriching our understanding of the interplay of acoustic communication and sociality. PMID:17692128
Litovsky, Ruth Y.; Godar, Shelly P.
2010-01-01
The precedence effect refers to the fact that humans are able to localize sound in reverberant environments, because the auditory system assigns greater weight to the direct sound (lead) than the later-arriving sound (lag). In this study, absolute sound localization was studied for single source stimuli and for dual source lead-lag stimuli in 4–5 year old children and adults. Lead-lag delays ranged from 5–100 ms. Testing was conducted in free field, with pink noise bursts emitted from loudspeakers positioned on a horizontal arc in the frontal field. Listeners indicated how many sounds were heard and the perceived location of the first- and second-heard sounds. Results suggest that at short delays (up to 10 ms), the lead dominates sound localization strongly at both ages, and localization errors are similar to those with single-source stimuli. At longer delays errors can be large, stemming from over-integration of the lead and lag, interchanging of perceived locations of the first-heard and second-heard sounds due to temporal order confusion, and dominance of the lead over the lag. The errors are greater for children than adults. Results are discussed in the context of maturation of auditory and non-auditory factors. PMID:20968369
Hermannsen, Line; Beedholm, Kristian
2017-01-01
Acoustic harassment devices (AHD) or ‘seal scarers’ are used extensively, not only to deter seals from fisheries, but also as mitigation tools to deter marine mammals from potentially harmful sound sources, such as offshore pile driving. To test the effectiveness of AHDs, we conducted two studies with similar experimental set-ups on two key species: harbour porpoises and harbour seals. We exposed animals to 500 ms tone bursts at 12 kHz simulating that of an AHD (Lofitech), but with reduced output levels (source peak-to-peak level of 165 dB re 1 µPa). Animals were localized with a theodolite before, during and after sound exposures. In total, 12 sound exposures were conducted to porpoises and 13 exposures to seals. Porpoises were found to exhibit avoidance reactions out to ranges of 525 m from the sound source. Contrary to this, seal observations increased during sound exposure within 100 m of the loudspeaker. We thereby demonstrate that porpoises and seals respond very differently to AHD sounds. This has important implications for application of AHDs in multi-species habitats, as sound levels required to deter less sensitive species (seals) can lead to excessive and unwanted large deterrence ranges on more sensitive species (porpoises). PMID:28791155
Underwater auditory localization by a swimming harbor seal (Phoca vitulina).
Bodson, Anais; Miersch, Lars; Mauck, Bjoern; Dehnhardt, Guido
2006-09-01
The underwater sound localization acuity of a swimming harbor seal (Phoca vitulina) was measured in the horizontal plane at 13 different positions. The stimulus was either a double sound (two 6-kHz pure tones lasting 0.5 s separated by an interval of 0.2 s) or a single continuous sound of 1.2 s. Testing was conducted in a 10-m-diam underwater half circle arena with hidden loudspeakers installed at the exterior perimeter. The animal was trained to swim along the diameter of the half circle and to change its course towards the sound source as soon as the signal was given. The seal indicated the sound source by touching its assumed position at the board of the half circle. The deviation of the seal's choice from the actual sound source was measured by means of video analysis. In trials with the double sound the seal localized the sound sources with a mean deviation of 2.8 degrees, and in trials with the single sound with a mean deviation of 4.5 degrees. In a second experiment, minimum audible angles of the stationary animal were found to be 9.8 degrees in front and 9.7 degrees in the back of the seal's head.
Tang, Jia; Fu, Zi-Ying; Wei, Chen-Xue; Chen, Qi-Cai
2015-08-01
In constant frequency-frequency modulation (CF-FM) bats, the CF-FM echolocation signals include both CF and FM components, yet the role of such complex acoustic signals in frequency resolution by bats remains unknown. Using CF and CF-FM echolocation signals as acoustic stimuli, the responses of inferior collicular (IC) neurons of Hipposideros armiger were obtained by extracellular recordings. We tested the effect of preceding CF or CF-FM sounds on the shape of the frequency tuning curves (FTCs) of IC neurons. Results showed that both CF-FM and CF sounds reduced the number of IC neurons with FTCs having a tailed lower-frequency side. However, more IC neurons underwent such conversion after adding CF-FM sound than after CF sound. We also found that the Q20 value of the FTC of IC neurons showed the largest increase with the addition of CF-FM sound. Moreover, only CF-FM sound could cause an increase in the slope of the neurons' FTCs, and such increase occurred mainly in the lower-frequency edge. These results suggested that CF-FM sound could increase the accuracy of frequency analysis of echoes and cut off low-frequency elements from the habitat of bats more than CF sound.
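For reference, the Q20 sharpness measure used above is the characteristic frequency divided by the tuning-curve bandwidth measured 20 dB above minimum threshold; higher Q20 means sharper frequency tuning:

```python
def q20(cf_khz, f_low_khz, f_high_khz):
    """Sharpness of a frequency tuning curve: Q20 = characteristic
    frequency / bandwidth at 20 dB above the minimum threshold, where
    f_low and f_high bound the curve at that level."""
    return cf_khz / (f_high_khz - f_low_khz)
```

For example, a neuron tuned to 60 kHz whose 20-dB bandwidth spans 57-63 kHz has Q20 = 10.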
The sound intensity and characteristics of variable-pitch pulse oximeters.
Yamanaka, Hiroo; Haruna, Junichi; Mashimo, Takashi; Akita, Takeshi; Kinouchi, Keiko
2008-06-01
Various studies worldwide have found that sound levels in hospitals significantly exceed World Health Organization (WHO) guidelines, and that this noise is associated with audible signals from various medical devices. The pulse oximeter is now widely used in health care; however, the health effects associated with the noise from this equipment remain largely unclarified. Here, we analyzed the sounds of variable-pitch pulse oximeters and discuss the possible associated risks of sleep disturbance, annoyance, and hearing loss. The Nellcor N 595 and Masimo SET Radical pulse oximeters were measured for equivalent continuous A-weighted sound pressure levels (L(Aeq)), loudness levels, and loudness. Pulse beep pitches were also identified using Fast Fourier Transform (FFT) analysis and compared with musical pitches as controls. Almost all alarm sounds and pulse beeps from the instruments tested exceeded 30 dBA, a level that may induce sleep disturbance and annoyance. Several alarm sounds emitted by the pulse oximeters exceeded 70 dBA, a level known to induce hearing loss. The loudness of each pulse oximeter's alarm sound did not change in proportion to the sound volume level. The pitch of each pulse beep did not correspond to musical pitch levels. The results indicate that sounds from pulse oximeters pose a potential risk of not only sleep disturbance and annoyance but also hearing loss, and that these sounds are unnatural for human auditory perception.
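The abstract's comparison of beep pitch against musical pitch can be sketched as a frequency-to-note conversion in equal temperament (A4 = 440 Hz). This is a generic illustration, not the paper's analysis code; the function name is ours.

```python
import math

def nearest_musical_pitch(freq_hz):
    """Return the nearest equal-tempered note (A4 = 440 Hz) and the
    offset in cents, one way to check whether a beep frequency
    matches a musical pitch."""
    names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    midi = 69 + 12 * math.log2(freq_hz / 440.0)   # fractional MIDI number
    nearest = round(midi)
    cents_off = 100 * (midi - nearest)            # deviation from that note
    name = names[nearest % 12] + str(nearest // 12 - 1)
    return name, cents_off

note, cents = nearest_musical_pitch(440.0)   # exactly A4, 0 cents off
```

A beep whose cents offset is far from zero at every nearby note is, in this sense, "unmusical".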
Device for precision measurement of speed of sound in a gas
Kelner, Eric; Minachi, Ali; Owen, Thomas E.; Burzynski, Jr., Marion; Petullo, Steven P.
2004-11-30
A sensor for measuring the speed of sound in a gas. The sensor has a helical coil, through which the gas flows before entering an inner chamber. Flow through the coil brings the gas into thermal equilibrium with the test chamber body. After the gas enters the chamber, a transducer produces an ultrasonic pulse, which is reflected from each of two faces of a target. The time difference between the two reflected signals is used to determine the speed of sound in the gas.
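The described time-of-flight measurement reduces to simple arithmetic: the pulse travels the extra face separation twice (out and back), so c = 2d/Δt. A minimal sketch with illustrative numbers (the patent's actual dimensions are not given in the abstract):

```python
def speed_of_sound(face_separation_m, dt_s):
    """Speed of sound from the arrival-time difference between echoes
    off the near and far faces of a two-faced target: c = 2*d/dt.
    Illustrative sketch; variable names are ours, not the patent's."""
    return 2.0 * face_separation_m / dt_s

# A 0.05 m face separation and a 289.6 microsecond echo spacing imply a
# gas sound speed of roughly 345 m/s.
c = speed_of_sound(0.05, 289.6e-6)
```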
Ultrasound visual feedback treatment and practice variability for residual speech sound errors
Preston, Jonathan L.; McCabe, Patricia; Rivera-Campos, Ahmed; Whittle, Jessica L.; Landry, Erik; Maas, Edwin
2014-01-01
Purpose: The goals were to (1) test the efficacy of a motor-learning-based treatment that includes ultrasound visual feedback for individuals with residual speech sound errors, and (2) explore whether the addition of prosodic cueing facilitates speech sound learning. Method: A multiple-baseline single-subject design was used, replicated across 8 participants. For each participant, one sound context was treated with ultrasound plus prosodic cueing for 7 sessions, and another sound context was treated with ultrasound but without prosodic cueing for 7 sessions. Sessions included ultrasound visual feedback as well as non-ultrasound treatment. Word-level probes assessing untreated words were used to evaluate retention and generalization. Results: For most participants, increases in accuracy of target sound contexts at the word level were observed with the treatment program regardless of whether prosodic cueing was included. Generalization between onset singletons and clusters was observed, as well as generalization to sentence-level accuracy. There was evidence of retention during post-treatment probes, including at a two-month follow-up. Conclusions: A motor-based treatment program that includes ultrasound visual feedback can facilitate learning of speech sounds in individuals with residual speech sound errors. PMID:25087938
Audiovisual Delay as a Novel Cue to Visual Distance.
Jaekl, Philip; Seidlitz, Jakob; Harris, Laurence R; Tadin, Duje
2015-01-01
For audiovisual sensory events, sound arrives with a delay relative to light that increases with event distance. It is unknown, however, whether humans can use these ubiquitous sound delays as an information source for distance computation. Here, we tested the hypothesis that audiovisual delays can both bias and improve human perceptual distance discrimination, such that visual stimuli paired with auditory delays are perceived as more distant and are thereby an ordinal distance cue. In two experiments, participants judged the relative distance of two repetitively displayed three-dimensional dot clusters, both presented with sounds of varying delays. In the first experiment, dot clusters presented with a sound delay were judged to be more distant than dot clusters paired with equivalent sound leads. In the second experiment, we confirmed that the presence of a sound delay was sufficient to cause stimuli to appear as more distant. Additionally, we found that ecologically congruent pairing of more distant events with a sound delay resulted in an increase in the precision of distance judgments. A control experiment determined that the sound delay duration influencing these distance judgments was not detectable, thereby eliminating decision-level influence. In sum, we present evidence that audiovisual delays can be an ordinal cue to visual distance.
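The physical basis of the cue is that light arrives essentially instantly while sound travels at roughly 343 m/s, giving about 3 ms of delay per metre of event distance. A small sketch of this relation (constants and names are ours, not the study's):

```python
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 degrees C
SPEED_OF_LIGHT = 3.0e8   # m/s; the light leg is negligible at these ranges

def av_delay_ms(distance_m):
    """Natural sound-after-light delay, in milliseconds, for an
    audiovisual event at the given distance."""
    return 1000.0 * (distance_m / SPEED_OF_SOUND - distance_m / SPEED_OF_LIGHT)

# Roughly 3 ms per metre: ~2.9 ms at 1 m, ~29 ms at 10 m, ~292 ms at 100 m.
delays = {d: round(av_delay_ms(d), 1) for d in (1, 10, 100)}
```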
Sound propagation in light-modulated carbon nanosponge suspensions
NASA Astrophysics Data System (ADS)
Zhou, W.; Tiwari, R. P.; Annamalai, R.; Sooryakumar, R.; Subramaniam, V.; Stroud, D.
2009-03-01
Single-walled carbon nanotube bundles dispersed in a highly polar fluid are found to agglomerate into a porous structure when exposed to low levels of laser radiation. The phototunable nanoscale porous structures provide an unusual way to control the acoustic properties of the suspension. Despite the high sound speed of the nanotubes, the measured speed of longitudinal-acoustic waves in the suspension decreases sharply with increasing bundle concentration. Two possible explanations for this reduction in sound speed are considered. One is simply that the sound speed decreases because of fluid heating induced by the absorption of laser light by the carbon nanotubes. The second is that this decrease results from the smaller sound velocity of fluid confined in a porous medium. Using a simplified description of convective heat transport, we estimate that the increase in temperature is too small to account for the observed decrease in sound velocity. To test the second possible explanation, we calculate the sound velocity in a porous medium, using a self-consistent effective-medium approximation. The results of this calculation agree qualitatively with experiment. In this case, the observed sound wave would be the analog of the slow compressional mode of porous solids at a structural length scale of order 100 nm.
Evaluation of the impact of noise metrics on tiltrotor aircraft design
NASA Technical Reports Server (NTRS)
Sternfeld, H.; Spencer, R.; Ziegenbein, P.
1995-01-01
A subjective noise evaluation was conducted in which the test participants evaluated the annoyance of simulated sounds representative of future civil tiltrotor aircraft. The subjective responses were correlated with the noise metrics of A-weighted sound pressure level, overall sound pressure level, and perceived level. The results indicated that correlation between subjective response and A-weighted sound pressure level is considerably enhanced by combining it in a multiple regression with overall sound pressure level. As a single metric, perceived level correlated better than A-weighted sound pressure level due to greater emphasis on low frequency noise components. This latter finding was especially true for indoor noise where the mid and high frequency noise components are attenuated by typical building structure. Using the results of the subjective noise evaluation, the impact on tiltrotor aircraft design was also evaluated. While A-weighted sound pressure level can be reduced by reduction in tip speed, an increase in number of rotor blades is required to achieve significant reduction of low frequency noise as measured by overall sound pressure level. Additional research, however, is required to achieve comparable reductions in impulsive noise due to blade-vortex interaction, and also to achieve reduction in broad band noise.
A New Mechanism of Sound Generation in Songbirds
NASA Astrophysics Data System (ADS)
Goller, Franz; Larsen, Ole N.
1997-12-01
Our current understanding of the sound-generating mechanism in the songbird vocal organ, the syrinx, is based on indirect evidence and theoretical treatments. The classical avian model of sound production postulates that the medial tympaniform membranes (MTM) are the principal sound generators. We tested the role of the MTM in sound generation and studied the songbird syrinx more directly by filming it endoscopically. After we surgically incapacitated the MTM as a vibratory source, zebra finches and cardinals were not only able to vocalize, but sang nearly normal song. This result shows clearly that the MTM are not the principal sound source. The endoscopic images of the intact songbird syrinx during spontaneous and brain stimulation-induced vocalizations illustrate the dynamics of syringeal reconfiguration before phonation and suggest a different model for sound production. Phonation is initiated by rostrad movement and stretching of the syrinx. At the same time, the syrinx is closed through movement of two soft tissue masses, the medial and lateral labia, into the bronchial lumen. Sound production always is accompanied by vibratory motions of both labia, indicating that these vibrations may be the sound source. However, because of the low temporal resolution of the imaging system, the frequency and phase of labial vibrations could not be assessed in relation to that of the generated sound. Nevertheless, in contrast to the previous model, these observations show that both labia contribute to aperture control and strongly suggest that they play an important role as principal sound generators.
NASA Technical Reports Server (NTRS)
Valdez, A.
1999-01-01
This document contains the procedure and the test results of the Advanced Microwave Sounding Unit-A (AMSU-A) Electromagnetic Interference (EMI), Electromagnetic Susceptibility, and Electromagnetic Compatibility (EMC) qualification test for the Meteorological Satellite (METSAT) and the Meteorological Operation Platform (METOP) projects. The test was conducted in accordance with the approved EMI/EMC Test Plan/Procedure, Specification number AE-26151/5D. This document describes the EMI/EMC test performed by Aerojet, and it is organized as follows: Section 1 contains introductory material and a brief summary of the test results. Section 2 contains more detailed descriptions of the test plan, test procedure, and test results for each type of EMI/EMC test conducted. Section 3 contains supplementary information, including test data sheets, plots, and calculations collected during the qualification testing.
Corollary discharge provides the sensory content of inner speech.
Scott, Mark
2013-09-01
Inner speech is one of the most common, but least investigated, mental activities humans perform. It is an internal copy of one's external voice and so is similar to a well-established component of motor control: corollary discharge. Corollary discharge is a prediction of the sound of one's voice generated by the motor system. This prediction is normally used to filter self-caused sounds from perception, which segregates them from externally caused sounds and prevents the sensory confusion that would otherwise result. The similarity between inner speech and corollary discharge motivates the theory, tested here, that corollary discharge provides the sensory content of inner speech. The results reported here show that inner speech attenuates the impact of external sounds. This attenuation was measured using a context effect (an influence of contextual speech sounds on the perception of subsequent speech sounds), which weakens in the presence of speech imagery that matches the context sound. Results from a control experiment demonstrated this weakening in external speech as well. Such sensory attenuation is a hallmark of corollary discharge.
An Aquatic Acoustic Metrics Interface Utility for Underwater Sound Monitoring and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Huiying; Halvorsen, Michele B.; Deng, Zhiqun
Fish and other marine animals suffer a range of potential effects from intense sound sources generated by anthropogenic underwater activities such as pile driving, shipping, sonar, and underwater blasting. Several underwater sound recording devices (USRs) have been built to monitor the acoustic pressure waves generated by these activities, so processing software for analyzing the audio files recorded by these USRs is indispensable. However, existing software packages did not meet performance and flexibility requirements. In this paper, we provide a detailed description of a new software package, named Aquatic Acoustic Metrics Interface (AAMI), a Graphical User Interface (GUI) designed for underwater sound monitoring and analysis. In addition to general functions, such as loading and editing audio files recorded by USRs, the software can compute a series of acoustic metrics in physical units, monitor the sound's influence on fish hearing according to audiograms from different species of fish and marine mammals, and batch-process sound files. Detailed applications of the AAMI software are discussed along with several test case scenarios to illustrate its functionality.
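As a rough illustration of the kind of acoustic metrics such a tool computes in physical units, here are minimal implementations of rms sound pressure level and sound exposure level re 1 µPa, the conventional underwater reference. This is a sketch of standard formulas, not AAMI's actual code:

```python
import math

P_REF = 1e-6  # 1 micropascal, the conventional underwater reference pressure

def spl_rms_db(pressures_pa):
    """Root-mean-square sound pressure level, dB re 1 uPa."""
    rms = math.sqrt(sum(p * p for p in pressures_pa) / len(pressures_pa))
    return 20.0 * math.log10(rms / P_REF)

def sel_db(pressures_pa, fs):
    """Sound exposure level: 10*log10 of the time-integrated squared
    pressure, dB re 1 uPa^2*s; fs is the sample rate in Hz."""
    exposure = sum(p * p for p in pressures_pa) / fs
    return 10.0 * math.log10(exposure / P_REF**2)
```

A constant 1 Pa signal lasting exactly one second gives 120 dB on both metrics, which is a convenient sanity check.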
Hearing Tests Based on Biologically Calibrated Mobile Devices: Comparison With Pure-Tone Audiometry
Masalski, Marcin; Grysiński, Tomasz; Kręcicki, Tomasz
2018-01-01
Background: Hearing screening tests based on pure-tone audiometry may be conducted on mobile devices, provided that the devices are specially calibrated for the purpose. Calibration consists of determining the reference sound level and can be performed in relation to the hearing threshold of normal-hearing persons. In the case of devices provided by the manufacturer together with bundled headphones, the reference sound level can be calculated once for all devices of the same model. Objective: This study aimed to compare the hearing threshold measured by a mobile device that was calibrated using a model-specific, biologically determined reference sound level with the hearing threshold obtained in pure-tone audiometry. Methods: Trial participants were recruited offline using face-to-face prompting from among Otolaryngology Clinic patients who own Android-based mobile devices with bundled headphones. The hearing threshold was obtained on a mobile device by means of an open access app, Hearing Test, with incorporated model-specific reference sound levels. These reference sound levels were previously determined in uncontrolled conditions in relation to the hearing threshold of normal-hearing persons. An audiologist-assisted self-measurement was conducted by the participants in a sound booth, and it involved determining the lowest audible sound generated by the device within the frequency range of 250 Hz to 8 kHz. The results were compared with pure-tone audiometry. Results: A total of 70 subjects, 34 men and 36 women, aged 18-71 years (mean 36, standard deviation [SD] 11), participated in the trial. The hearing threshold obtained on mobile devices differed significantly from the one determined by pure-tone audiometry, with a mean difference of 2.6 dB (95% CI 2.0-3.1) and SD of 8.3 dB (95% CI 7.9-8.7). The proportion of differences not greater than 10 dB reached 89% (95% CI 88-91), whereas the mean absolute difference was 6.5 dB (95% CI 6.2-6.9). Sensitivity and specificity for the mobile-based screening method were calculated at 98% (95% CI 93-100.0) and 79% (95% CI 71-87), respectively. Conclusions: The method of hearing self-test carried out on mobile devices with bundled headphones demonstrates high compatibility with pure-tone audiometry, which confirms its potential application in hearing monitoring, screening tests, or epidemiological examinations on a large scale. PMID:29321124
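The reported sensitivity and specificity follow from a standard 2x2 screening table; a minimal sketch with illustrative counts (the study's actual table is not given in the abstract):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts chosen to reproduce the reported 98% and 79%:
sensitivity, specificity = sens_spec(49, 1, 79, 21)   # 0.98, 0.79
```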
Ruhland, Janet L.; Yin, Tom C. T.; Tollin, Daniel J.
2013-01-01
Sound localization accuracy in elevation can be affected by sound spectrum alteration. Correspondingly, any stimulus manipulation that causes a change in the peripheral representation of the spectrum may degrade localization ability in elevation. The present study examined the influence of sound duration and level on localization performance in cats with the head unrestrained. Two cats were trained using operant conditioning to indicate the apparent location of a sound via gaze shift, which was measured with a search-coil technique. Overall, neither sound level nor duration had a notable effect on localization accuracy in azimuth, except at near-threshold levels. In contrast, localization accuracy in elevation improved as sound duration increased, and sound level also had a large effect on localization in elevation. For short-duration noise, the performance peaked at intermediate levels and deteriorated at low and high levels; for long-duration noise, this “negative level effect” at high levels was not observed. Simulations based on an auditory nerve model were used to explain the above observations and to test several hypotheses. Our results indicated that neither the flatness of sound spectrum (before the sound reaches the inner ear) nor the peripheral adaptation influences spectral coding at the periphery for localization in elevation, whereas neural computation that relies on “multiple looks” of the spectral analysis is critical in explaining the effect of sound duration, but not level. The release of negative level effect observed for long-duration sound could not be explained at the periphery and, therefore, is likely a result of processing at higher centers. PMID:23657278
L-type calcium channels refine the neural population code of sound level
Grimsley, Calum Alex; Green, David Brian
2016-01-01
The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. PMID:27605536
Underwater Sound Levels at a Wave Energy Device Testing Facility in Falmouth Bay, UK.
Garrett, Joanne K; Witt, Matthew J; Johanning, Lars
2016-01-01
Passive acoustic monitoring devices were deployed at FaBTest in Falmouth Bay, UK, a marine renewable energy device testing facility, during trials of a wave energy device. The area supports considerable commercial shipping and recreational boating along with diverse marine fauna. Noise monitoring occurred during (1) a baseline period, (2) installation activity, (3) the device in situ with inactive power status, and (4) the device in situ with active power status. This paper discusses the preliminary findings of the sound recordings at FaBTest during these different activity periods of a wave energy device trial.
NASA Technical Reports Server (NTRS)
Dillon, Christina
2013-01-01
The goal of this project was to design, model, build, and test a flat panel speaker and frame for a spherical dome structure being made into a simulator. The simulator will be a test bed for evaluating an immersive environment for human interfaces. This project focused on the loudspeakers and a sound diffuser for the dome. The rest of the team worked on an Ambisonics 3D sound system, a video projection system, and a multi-direction treadmill to create the most realistic scene possible. The main programs utilized in this project were Pro-E and COMSOL. Pro-E was used for creating detailed figures for the fabrication of a frame that held a flat panel loudspeaker. The loudspeaker was made from a thin sheet of Plexiglas and four acoustic exciters. COMSOL, a multiphysics finite element analysis simulator, was used to model and evaluate all stages of the loudspeaker, frame, and sound diffuser. Acoustical testing measurements were used to create polar plots from the working prototype, which were then compared to the COMSOL simulations to select the optimal design for the dome. The final goal of the project was to install the flat panel loudspeaker design, along with a sound diffuser, on the wall of the dome. After running tests in COMSOL on various speaker configurations, including a warped Plexiglas version, the optimal speaker design comprised a flat piece of Plexiglas with a rounded frame to match the curvature of the dome. Eight of these loudspeakers will be mounted in an inch and a half of high-performance acoustic insulation (Thinsulate) that will cover the inside of the dome. The following technical paper discusses these projects and explains the engineering processes used, knowledge gained, and the projected future goals of this project.
Christensen, Christian Bech; Christensen-Dalsgaard, Jakob; Brandt, Christian; Madsen, Peter Teglberg
2012-01-15
Snakes lack both an outer ear and a tympanic middle ear, which in most tetrapods provide impedance matching between the air and inner ear fluids and hence improve pressure hearing in air. Snakes would therefore be expected to have very poor pressure hearing and generally be insensitive to airborne sound, whereas the connection of the middle ear bone to the jaw bones in snakes should confer acute sensitivity to substrate vibrations. Some studies have nevertheless claimed that snakes are quite sensitive to both vibration and sound pressure. Here we test the two hypotheses that: (1) snakes are sensitive to sound pressure and (2) snakes are sensitive to vibrations, but cannot hear the sound pressure per se. Vibration and sound-pressure sensitivities were quantified by measuring brainstem evoked potentials in 11 royal pythons, Python regius. Vibrograms and audiograms showed greatest sensitivity at low frequencies of 80-160 Hz, with sensitivities of -54 dB re. 1 m s(-2) and 78 dB re. 20 μPa, respectively. To investigate whether pythons detect sound pressure or sound-induced head vibrations, we measured the sound-induced head vibrations in three dimensions when snakes were exposed to sound pressure at threshold levels. In general, head vibrations induced by threshold-level sound pressure were equal to or greater than those induced by threshold-level vibrations, and therefore sound-pressure sensitivity can be explained by sound-induced head vibration. From this we conclude that pythons, and possibly all snakes, lost effective pressure hearing with the complete reduction of a functional outer and middle ear, but have an acute vibration sensitivity that may be used for communication and detection of predators and prey.
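The thresholds above are reported on decibel scales with different references (20 µPa for airborne sound pressure, 1 m/s² for vibration, both field quantities, hence the factor of 20). Converting between level and physical units is a one-liner; for example, the reported 78 dB re 20 µPa corresponds to roughly 0.16 Pa. A sketch (names are ours):

```python
import math

def db_re(value, reference):
    """Level in dB of a field quantity (pressure, acceleration)
    relative to a reference value of the same units."""
    return 20.0 * math.log10(value / reference)

# Pressure corresponding to the reported 78 dB re 20 uPa threshold:
p_threshold = 20e-6 * 10 ** (78 / 20.0)   # ~0.159 Pa
```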
40 CFR 205.55-2 - Compliance with standards.
Code of Federal Regulations, 2010 CFR
2010-07-01
... as described in paragraph (b) of this section, the manufacturer may elect to verify the configuration... the highest sound pressure level (dBA) based on his best technical judgment and/or emission test data... section as having the highest sound pressure level (estimated or actual) within the category; and (iv...
40 CFR 205.55-2 - Compliance with standards.
Code of Federal Regulations, 2011 CFR
2011-07-01
... as described in paragraph (b) of this section, the manufacturer may elect to verify the configuration... the highest sound pressure level (dBA) based on his best technical judgment and/or emission test data... section as having the highest sound pressure level (estimated or actual) within the category; and (iv...
University of Maryland-Republic Terrapin Sounding Rocket H121-2681-I (Terrapin) Model on the Launcher
1956-10-21
LAL 95,647 University of Maryland-Republic Terrapin sounding rocket mounted on special launcher, September 21, 1956. Photograph published in A New Dimension Wallops Island Flight Test Range: The First Fifteen Years by Joseph Shortal. A NASA publication. Page 506.
NASA Technical Reports Server (NTRS)
Golden, D. P., Jr.; Wolthuis, R. A.; Hoffler, G. W.; Gowen, R. J.
1974-01-01
Frequency bands that best discriminate the Korotkov sounds at systole and at diastole from the sounds immediately preceding these events are defined. Korotkov sound data were recorded from five normotensive subjects during orthostatic stress (lower body negative pressure) and bicycle ergometry. A spectral analysis of the seven Korotkov sounds centered about the systolic and diastolic auscultatory events revealed that a maximum increase in amplitude at the systolic transition occurred in the 18-26-Hz band, while a maximum decrease in amplitude at the diastolic transition occurred in the 40-60-Hz band. These findings were remarkably consistent across subjects and test conditions. These passbands are included in the design specifications for an automatic blood pressure measuring system used in conjunction with medical experiments during NASA's Skylab program.
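The band-limited amplitude comparison described above can be illustrated with a direct DFT restricted to a passband of interest. This is a generic stand-in for the study's spectral analysis, run on synthetic data: a pure 22 Hz tone registers in the 18-26 Hz "systolic" band but not in the 40-60 Hz "diastolic" band.

```python
import math

def band_amplitude(signal, fs, f_lo, f_hi):
    """Mean single-sided spectral amplitude of `signal` within
    [f_lo, f_hi] Hz, computed with a direct DFT over the band bins."""
    n = len(signal)
    amps = []
    for k in range(n // 2 + 1):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            amps.append(math.hypot(re, im) * 2 / n)
    return sum(amps) / len(amps) if amps else 0.0

fs = 200                                                  # Hz, 1 s of data
x = [math.sin(2 * math.pi * 22 * t / fs) for t in range(fs)]
systolic = band_amplitude(x, fs, 18, 26)    # tone falls in this band
diastolic = band_amplitude(x, fs, 40, 60)   # essentially zero here
```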
Ntoumenopoulos, G; Glickman, Y
2012-09-01
To explore the feasibility of computerised lung sound monitoring to evaluate secretion removal in intubated and mechanically ventilated adult patients. Before and after observational investigation. Intensive care unit. Fifteen intubated and mechanically ventilated adult patients receiving chest physiotherapy. Chest physiotherapy included combinations of standard closed airway suctioning, saline lavage, postural drainage, chest wall vibrations, manual-assisted cough and/or lung hyperinflation, dependent upon clinical indications. Lung sound amplitude at peak inspiration was assessed using computerised lung sound monitoring. Measurements were performed immediately before and after chest physiotherapy. Data are reported as mean [standard deviation (SD)], mean difference and 95% confidence intervals (CI). Significance testing was not performed due to the small sample size and the exploratory nature of the study. Fifteen patients were included in the study [11 males, four females, mean age 65 (SD 14) years]. The mean total lung sound amplitude at peak inspiration decreased two-fold from 38 (SD 59) units before treatment to 17 (SD 19) units after treatment (mean difference 22, 95% CI of difference -3 to 46). The mean total lung sound amplitude from the lungs of patients with a large amount of secretions (n=9) was over four times 'louder' than the lungs of patients with a moderate or small amount of secretions (n=6) [56 (SD 72) units vs 12 (13) units, respectively; mean difference -44, 95% CI of difference -100 to 11]. The mean total lung sound amplitude decreased in the group of 'loud' right and left lungs (n=15) from 37 (SD 36) units before treatment to 15 (SD 13) units after treatment (mean difference 22, 95% CI of difference 6 to 38). Computerised lung sound monitoring in this small group of patients demonstrated a two-fold decrease in lung sound amplitude following chest physiotherapy. 
Subgroup analysis also demonstrated decreasing trends in lung sound amplitude in the group of 'loud' lungs following chest physiotherapy. Due to the small sample size and large SDs with high variability in the lung sound amplitude measurements, significance testing was not reported. Further investigation is needed in a larger sample of patients with more accurate measurement of sputum wet weight in order to distinguish between secretion-related effects and changes due to other factors such as airflow rate and pattern. Copyright © 2012 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
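The summary statistics the study reports (mean difference, SD, 95% CI) follow a standard recipe; a minimal sketch with a normal-approximation interval and synthetic data (the trial's raw amplitudes are not given in the abstract):

```python
import math

def mean_sd_ci(diffs):
    """Mean of paired before-after differences, sample SD, and a
    normal-approximation 95% confidence interval for the mean."""
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    half = 1.96 * sd / math.sqrt(n)
    return mean, sd, (mean - half, mean + half)

# Synthetic amplitude decreases for five hypothetical patients:
mean, sd, ci = mean_sd_ci([20.0, 25.0, 15.0, 30.0, 10.0])
```

With a small n and large SD, the interval is wide, mirroring the study's caution about significance testing.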
Research on Antiphonic Characteristic of AlMg10-SiC Ultralight Composite Materials
NASA Astrophysics Data System (ADS)
Rusu, O.; Rusu, I.
2018-06-01
The paper presents the results of sound absorption testing of an ultralight cellular composite material, AlMg10-SiC, obtained by a sputtering method. We chose this type of material because its microstructure generally comprises open cells (and relatively few semi-open cells), evenly distributed in the material, a structure that, at least theoretically, behaves favorably with respect to sound damping. The tests were performed on three types of samples, namely P11 – AlMg10 – 5%SiC, P12 – AlMg10 – 10%SiC, and P13 – AlMg10 – 15%SiC. The 15% SiC sample (P13) has the best sound-absorbing characteristics and the highest practical absorption degree.
Georgoulas, George; Georgopoulos, Voula C; Stylios, Chrysostomos D
2006-01-01
This paper proposes a novel integrated methodology to extract features from and classify speech sounds in order to detect the possible existence of a speech articulation disorder in a speaker. Articulation, in effect, is the specific and characteristic way that an individual produces speech sounds. A methodology to process the speech signal, extract features, and finally classify the signal and detect articulation problems in a speaker is presented. The use of support vector machines (SVMs) for the classification of speech sounds and detection of articulation disorders is introduced. The proposed method is implemented on a data set where different sets of features and different SVM schemes are tested, leading to satisfactory performance.
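The classification stage described above can be sketched with a minimal linear SVM trained by the Pegasos sub-gradient method; this is an illustrative assumption, since the abstract does not specify the training algorithm, kernel, or feature set, and the two-dimensional clusters below merely stand in for extracted acoustic feature vectors of two phoneme classes:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos sub-gradient training of a linear SVM (hinge loss).
    X: (n, d) features, y: labels in {-1, +1}. Returns the weight
    vector for features augmented with a constant bias column."""
    rng = np.random.default_rng(seed)
    Xa = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w = np.zeros(Xa.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xa)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * (Xa[i] @ w)
            w *= (1 - eta * lam)          # shrink step
            if margin < 1:                # hinge-loss sub-gradient step
                w += eta * y[i] * Xa[i]
    return w

def predict(w, X):
    Xa = np.hstack([X, np.ones((len(X), 1))])
    return np.where(Xa @ w >= 0, 1, -1)

# Toy stand-in for acoustic feature vectors of two speech-sound classes
rng = np.random.default_rng(1)
X0 = rng.normal([0, 0], 0.5, (40, 2))   # class -1 cluster
X1 = rng.normal([3, 3], 0.5, (40, 2))   # class +1 cluster
X = np.vstack([X0, X1])
y = np.array([-1] * 40 + [1] * 40)
w = train_linear_svm(X, y)
acc = np.mean(predict(w, X) == y)
```

On well-separated clusters like these the trained hyperplane classifies essentially all training points correctly; a real articulation-disorder detector would of course use speech-derived features and held-out evaluation.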
Spacecraft Internal Acoustic Environment Modeling
NASA Technical Reports Server (NTRS)
Chu, Shao-sheng R.; Allen, Christopher S.
2009-01-01
Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. In FY09, the physical mockup developed in FY08, with an interior geometric shape similar to the Orion CM (Crew Module) IML (Internal Mold Line), was used to validate SEA (Statistical Energy Analysis) acoustic model development with realistic ventilation fan sources. The sound power levels of these sources were unknown a priori, as opposed to previous studies in which an RSS (Reference Sound Source) with known sound power level was used. The modeling results were evaluated based on comparisons to measurements of sound pressure levels over a wide frequency range, including the frequency range where SEA gives good results. Sound intensity measurement was performed over a rectangular-shaped grid system enclosing the ventilation fan source. Sound intensities were measured at the top, front, back, right, and left surfaces of the grid system. Sound intensity at the bottom surface was not measured, but sound-blocking material was placed under the bottom surface to reflect most of the incident sound energy back to the remaining measured surfaces. Integrating measured sound intensities over the measured surfaces yields the estimated sound power of the source. The reverberation time T60 of the mockup interior had been modified to match reverberation levels of the ISS US Lab interior for the speech frequency bands, i.e., 0.5k, 1k, 2k, and 4 kHz, by attaching appropriately sized Thinsulate sound absorption material to the interior wall of the mockup. Sound absorption of Thinsulate was modeled in three ways: the Sabine equation with measured mockup interior reverberation time T60, a layup model based on past impedance tube testing, and the layup model plus an air absorption correction.
The evaluation/validation was carried out by acquiring octave band microphone data simultaneously at ten fixed locations throughout the mockup. SPLs (Sound Pressure Levels) predicted by our SEA model match measurements well for our CM mockup, despite its more complicated shape. Additionally in FY09, background NC (Noise Criterion) noise simulation and MRT (Modified Rhyme Test) were developed and performed in the mockup to determine the maximum noise level in the CM habitable volume for fair crew voice communications. Numerous demonstrations of the simulated noise environment in the mockup and the associated SIL (Speech Interference Level) via MRT were performed for various communities, including members from NASA and Orion prime-/sub-contractors. Also, a new HSIR (Human-Systems Integration Requirement) for limiting pre- and post-landing SIL was proposed.
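The sound power estimation step in the record above (integrating measured intensities over the five measured faces of the box enclosing the fan) reduces to a surface-weighted sum; a minimal sketch, where the intensity values and panel areas are made-up illustrative numbers, not the study's measurements:

```python
import math

# Hypothetical normal sound intensities (W/m^2) and panel areas (m^2) for
# the five measured faces of a box enclosing the fan; the bottom face was
# not measured and is assumed fully blocked/reflective.
faces = {
    "top":   (2.0e-6, 0.12),
    "front": (1.5e-6, 0.08),
    "back":  (1.2e-6, 0.08),
    "left":  (0.9e-6, 0.10),
    "right": (1.1e-6, 0.10),
}

# Sound power: integrate (here, sum) normal intensity over the surface
W = sum(I * A for I, A in faces.values())        # watts

# Sound power level re 1 pW, in dB
Lw = 10 * math.log10(W / 1e-12)
```

With these numbers W is 6.56e-7 W, i.e. a sound power level of roughly 58 dB re 1 pW; in practice each face is subdivided into a measurement grid and the intensities are band-limited per octave.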
Development of the Astrobee F sounding rocket system.
NASA Technical Reports Server (NTRS)
Jenkins, R. B.; Taylor, J. P.; Honecker, H. J., Jr.
1973-01-01
The development of the Astrobee F sounding rocket vehicle through the first flight test at NASA-Wallops Station is described. Design and development of a 15 in. diameter, dual-thrust, solid propellant motor demonstrating several new technology features provided the basis for the flight vehicle. The 'F' motor test program described demonstrated the following advanced propulsion technology: tandem dual grain configuration, low burning rate HTPB case-bonded propellant, and molded plastic nozzle. The resultant motor, integrated into a flight vehicle, was successfully flown with extensive diagnostic instrumentation.
NASA Technical Reports Server (NTRS)
Thomann, G. C.
1973-01-01
Experiments to remotely determine sea water salinity from measurements of the sea surface radiometric temperature over the Mississippi Sound were conducted. The line was flown six times at an altitude of 244 meters. The radiometric temperature of the sea surface was measured in two spectral intervals. The specifications of the equipment and the conditions under which the tests were conducted are described. Results of the tests are presented in the form of graphs.
Fleury, Sylvain; Jamet, Éric; Roussarie, Vincent; Bosc, Laure; Chamard, Jean-Christophe
2016-12-01
Virtually silent electric vehicles (EVs) may pose a risk for pedestrians. This paper describes two studies that were conducted to assess the influence of different types of external sounds on EV detectability. In the first study, blindfolded participants had to detect an approaching EV with either no warning sounds at all or one of three types of sound we tested. In the second study, designed to replicate the results of the first one in an ecological setting, the EV was driven along a road and the experimenters counted the number of people who turned their heads in its direction. Results of the first study showed that adding external sounds improves EV detection, and that modulating the frequency and increasing the pitch of these sounds makes them more effective. This improvement was confirmed in the ecological context. Consequently, pitch variation and frequency modulation should both be taken into account in future AVAS design. Copyright © 2016 Elsevier Ltd. All rights reserved.
Different Timescales for the Neural Coding of Consonant and Vowel Sounds
Perez, Claudia A.; Engineer, Crystal T.; Jakkamsetti, Vikram; Carraway, Ryan S.; Perry, Matthew S.
2013-01-01
Psychophysical, clinical, and imaging evidence suggests that consonant and vowel sounds have distinct neural representations. This study tests the hypothesis that consonant and vowel sounds are represented on different timescales within the same population of neurons by comparing behavioral discrimination with neural discrimination based on activity recorded in rat inferior colliculus and primary auditory cortex. Performance on 9 vowel discrimination tasks was highly correlated with neural discrimination based on spike count and was not correlated when spike timing was preserved. In contrast, performance on 11 consonant discrimination tasks was highly correlated with neural discrimination when spike timing was preserved and not when spike timing was eliminated. These results suggest that in the early stages of auditory processing, spike count encodes vowel sounds and spike timing encodes consonant sounds. These distinct coding strategies likely contribute to the robust nature of speech sound representations and may help explain some aspects of developmental and acquired speech processing disorders. PMID:22426334
Topal, Taner; Polat, Hüseyin; Güler, Inan
2008-10-01
In this paper, time-frequency spectral analysis software (Heart Sound Analyzer) for the computer-aided analysis of cardiac sounds has been developed with LabVIEW. The software modules reveal important information on cardiovascular disorders and can also assist general physicians in reaching more accurate and reliable diagnoses at early stages. The Heart Sound Analyzer (HSA) software can compensate for the shortage of expert doctors and assist them in rural as well as urban clinics and hospitals. HSA has two main blocks: data acquisition and preprocessing, and time-frequency spectral analysis. The heart sounds are first acquired using a modified stethoscope with an electret microphone in it. Then, the signals are analysed using time-frequency/scale spectral analysis techniques such as the STFT, the Wigner-Ville distribution, and wavelet transforms. The HSA modules have been tested with real heart sounds from 35 volunteers and proved to be quite efficient and robust while dealing with a large variety of pathological conditions.
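The STFT stage of such an analysis can be sketched in a few lines of NumPy; the window length, hop size, and the synthetic two-burst test signal below are arbitrary illustrative choices, not the HSA's settings:

```python
import numpy as np

def stft(x, fs, win_len=256, hop=128):
    """Magnitude STFT with a Hann window.
    Returns (freqs, frame_times, |S|) with shape (freq, time) for |S|."""
    win = np.hanning(win_len)
    n_frames = 1 + (len(x) - win_len) // hop
    frames = np.stack([x[i*hop : i*hop + win_len] * win
                       for i in range(n_frames)])
    S = np.abs(np.fft.rfft(frames, axis=1)).T        # (freq, time)
    freqs = np.fft.rfftfreq(win_len, d=1/fs)
    times = (np.arange(n_frames) * hop + win_len / 2) / fs
    return freqs, times, S

# Synthetic stand-in for a heart sound: a 40 Hz burst then an 80 Hz burst
fs = 2000
t = np.arange(fs) / fs                               # 1 s of signal
x = np.where(t < 0.5, np.sin(2*np.pi*40*t), np.sin(2*np.pi*80*t))

freqs, times, S = stft(x, fs)
# Dominant frequency in the first and second halves of the spectrogram
f_early = freqs[S[:, times < 0.5].sum(axis=1).argmax()]
f_late  = freqs[S[:, times >= 0.5].sum(axis=1).argmax()]
```

The spectrogram localizes each burst in time, with the dominant bin near 40 Hz early and near 80 Hz late; the frequency resolution here (fs/win_len ≈ 7.8 Hz) illustrates the time-frequency trade-off that motivates the paper's additional Wigner-Ville and wavelet analyses.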
Computer-aided auscultation learning system for nursing technique instruction.
Hou, Chun-Ju; Chen, Yen-Ting; Hu, Ling-Chen; Chuang, Chih-Chieh; Chiu, Yu-Hsien; Tsai, Ming-Shih
2008-01-01
Pulmonary auscultation is a physical assessment skill learned by nursing students for examining the respiratory system. Generally, a mannequin equipped with a sound simulator is used to teach auscultation techniques to groups via classroom demonstration. However, nursing students cannot readily duplicate this learning environment for self-study. The advancement of electronic and digital signal processing technologies facilitates simulating this learning environment. This study aims to develop a computer-aided auscultation learning system for assisting teachers and nursing students in auscultation teaching and learning. The system provides teachers with signal recording and processing of lung sounds and immediate playback of lung sounds for students. A graphical user interface allows teachers to control the measuring device, draw lung sound waveforms, highlight lung sound segments of interest, and include descriptive text. Effects on learning lung sound auscultation were evaluated to verify the feasibility of the system. Fifteen nursing students voluntarily participated in the repeated experiment. The results of a paired t test showed that the auscultative abilities of the students were significantly improved by using the computer-aided auscultation learning system.
Brain responses to sound intensity changes dissociate depressed participants and healthy controls.
Ruohonen, Elisa M; Astikainen, Piia
2017-07-01
Depression is associated with bias in emotional information processing, but less is known about the processing of neutral sensory stimuli. Of particular interest is the processing of sound intensity, which is suggested to indicate central serotonergic function. We tested whether event-related brain potentials (ERPs) to occasional changes in sound intensity can dissociate first-episode depressed, recurrent depressed and healthy control participants. The first-episode depressed showed larger N1 amplitude to deviant sounds compared to the recurrent depression group and control participants. In addition, both depression groups, but not the control group, showed larger N1 amplitude to deviant than standard sounds. Whether these manifestations of sensory over-excitability in depression are directly related to serotonergic neurotransmission requires further research. The method based on ERPs to sound intensity change is a fast and low-cost way to objectively measure brain activation and holds promise as a future diagnostic tool. Copyright © 2017 Elsevier B.V. All rights reserved.
Effects of musical training on sound pattern processing in high-school students.
Wang, Wenjung; Staffaroni, Laura; Reid, Errold; Steinschneider, Mitchell; Sussman, Elyse
2009-05-01
Recognizing melody in music involves detection of both the pitch intervals and the silence between sequentially presented sounds. This study tested the hypothesis that active musical training in adolescents facilitates the ability to passively detect sequential sound patterns compared to musically non-trained age-matched peers. Twenty adolescents, aged 15-18 years, were divided into groups according to their musical training and current experience. A fixed-order tone pattern was presented at various stimulus rates while the electroencephalogram was recorded. The influence of musical training on passive auditory processing of the sound patterns was assessed using components of event-related brain potentials (ERPs). The mismatch negativity (MMN) ERP component was elicited under different stimulus onset asynchrony (SOA) conditions in musicians than in non-musicians, indicating that musically active adolescents were able to detect sound patterns across longer time intervals than age-matched peers. Musical training facilitates detection of auditory patterns, conferring the ability to automatically recognize sequential sound patterns over longer time periods than in non-musical counterparts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richardson, W.J.; Greene, C.R.; Koski, W.R.
1991-10-01
The report concerns the effects of underwater noise from simulated oil production operations on the movements and behavior of bowhead and white whales migrating around northern Alaska in spring. An underwater sound projector suspended from pack ice was used to introduce recorded drilling noise and other test sounds into leads through the pack ice. These sounds were received and measured at various distances to determine the rate of sound attenuation with distance and frequency. The movements and behavior of bowhead and white whales approaching the operating projector were studied by aircraft- and ice-based observers. Some individuals of both species were observed to approach well within the ensonified area. However, behavioral changes and avoidance reactions were evident when the received sound level became sufficiently high. Reactions to aircraft are also discussed.
Headphone screening to facilitate web-based auditory experiments
Woods, Kevin J.P.; Siegel, Max; Traer, James; McDermott, Josh H.
2017-01-01
Psychophysical experiments conducted remotely over the internet permit data collection from large numbers of participants, but sacrifice control over sound presentation, and therefore are not widely employed in hearing research. To help standardize online sound presentation, we introduce a brief psychophysical test for determining if online experiment participants are wearing headphones. Listeners judge which of three pure tones is quietest, with one of the tones presented 180° out of phase across the stereo channels. This task is intended to be easy over headphones but difficult over loudspeakers due to phase-cancellation. We validated the test in the lab by testing listeners known to be wearing headphones or listening over loudspeakers. The screening test was effective and efficient, discriminating between the two modes of listening with a small number of trials. When run online, a bimodal distribution of scores was obtained, suggesting that some participants performed the task over loudspeakers despite instructions to use headphones. The ability to detect and screen out these participants mitigates concerns over sound quality for online experiments, a first step toward opening auditory perceptual research to the possibilities afforded by crowdsourcing. PMID:28695541
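The phase-cancellation principle behind the screening test can be illustrated numerically: a stereo tone with one channel 180° out of phase keeps full level at each headphone ear, but the two channels sum acoustically at a loudspeaker and cancel. The sketch below is an idealization (perfect summation, no room acoustics); in a real room the cancellation is partial, which is what makes the task merely difficult rather than impossible over speakers:

```python
import numpy as np

fs = 44100
t = np.arange(int(0.5 * fs)) / fs          # half a second of signal
tone = np.sin(2 * np.pi * 200 * t)

# Stereo tone with the left channel 180 degrees out of phase with the right
left, right = tone, -tone

# Over headphones each ear receives one full-level channel ...
rms_headphone = np.sqrt(np.mean(left ** 2))

# ... but at a loudspeaker the two channels sum acoustically
# and the antiphase components cancel
mono = left + right
rms_speaker = np.sqrt(np.mean(mono ** 2))
```

Here rms_headphone is the usual 1/√2 ≈ 0.707 of a unit sinusoid while rms_speaker is zero, so the antiphase tone that sounds loudest of the three over speakers is not the quietest one over headphones, which is what the test exploits.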
Khajenasiri, Farahnaz; Zamanian, Alireza; Zamanian, Zahra
2016-03-01
Sound is among the significant environmental factors affecting people's health: it plays an important role in both physical and psychological injuries, and it also affects individuals' performance and productivity. The aim of this study was to determine the effect of exposure to high noise levels on performance and the rate of error in manual activities. This was an interventional study conducted on 50 students at Shiraz University of Medical Sciences (25 males and 25 females) in which each person served as his or her own control to assess the effect of noise on performance at sound levels of 70, 90, and 110 dB, using two factors of physical features and the creation of different conditions of the sound source, as well as applying the Two-Arm Coordination Test. The data were analyzed using SPSS version 16. Repeated measurements were used to compare the length of performance as well as the errors measured in the test. Based on the results, we found a direct and significant association between the sound level and the length of performance. Moreover, the participants' performance differed significantly across sound levels (at 110 dB as opposed to 70 and 90 dB, p < 0.05 and p < 0.001, respectively). This study found that a sound level of 110 dB had an important effect on the individuals' performances, i.e., the performances were decreased.
Bizley, Jennifer K; Walker, Kerry M M; King, Andrew J; Schnupp, Jan W H
2013-01-01
Spectral timbre is an acoustic feature that enables human listeners to determine the identity of a spoken vowel. Despite its importance to sound perception, little is known about the neural representation of sound timbre and few psychophysical studies have investigated timbre discrimination in non-human species. In this study, ferrets were positively conditioned to discriminate artificial vowel sounds in a two-alternative-forced-choice paradigm. Animals quickly learned to discriminate the vowel sound /u/ from /ε/ and were immediately able to generalize across a range of voice pitches. They were further tested in a series of experiments designed to assess how well they could discriminate these vowel sounds under different listening conditions. First, a series of morphed vowels was created by systematically shifting the location of the first and second formant frequencies. Second, the ferrets were tested with single formant stimuli designed to assess which spectral cues they could be using to make their decisions. Finally, vowel discrimination thresholds were derived in the presence of noise maskers presented from either the same or a different spatial location. These data indicate that ferrets show robust vowel discrimination behavior across a range of listening conditions and that this ability shares many similarities with human listeners.
Participation of the Classical Speech Areas in Auditory Long-Term Memory
Karabanov, Anke Ninija; Paine, Rainer; Chao, Chi Chao; Schulze, Katrin; Scott, Brian; Hallett, Mark; Mishkin, Mortimer
2015-01-01
Accumulating evidence suggests that storing speech sounds requires transposing rapidly fluctuating sound waves into more easily encoded oromotor sequences. If so, then the classical speech areas in the caudalmost portion of the temporal gyrus (pSTG) and in the inferior frontal gyrus (IFG) may be critical for performing this acoustic-oromotor transposition. We tested this proposal by applying repetitive transcranial magnetic stimulation (rTMS) to each of these left-hemisphere loci, as well as to a nonspeech locus, while participants listened to pseudowords. After 5 minutes these stimuli were re-presented together with new ones in a recognition test. Compared to control-site stimulation, pSTG stimulation produced a highly significant increase in recognition error rate, without affecting reaction time. By contrast, IFG stimulation led only to a weak, non-significant, trend toward recognition memory impairment. Importantly, the impairment after pSTG stimulation was not due to interference with perception, since the same stimulation failed to affect pseudoword discrimination examined with short interstimulus intervals. Our findings suggest that pSTG is essential for transforming speech sounds into stored motor plans for reproducing the sound. Whether or not the IFG also plays a role in speech-sound recognition could not be determined from the present results. PMID:25815813
Claes, Raf; Dirckx, Joris J. J.
2017-01-01
Because the quadrate and the eardrum are connected, the hypothesis was tested that birds attenuate the transmission of sound through their ears by opening the bill, which potentially serves as an additional protective mechanism for self-generated vocalizations. In domestic chickens, it was examined whether a difference exists between hens and roosters, given the difference in vocalization capacity between the sexes. To test the hypothesis, vibrations of the columellar footplate were measured ex vivo with laser Doppler vibrometry (LDV) for closed and maximally opened beak conditions, with sounds introduced at the ear canal. The average attenuation was 3.5 dB in roosters and only 0.5 dB in hens. To demonstrate the importance of a putative protective mechanism, audio recordings were made of a crowing rooster. Sound pressure levels of 133.5 dB were recorded near the ears. The frequency content of the vocalizations was in accordance with the range of highest hearing sensitivity in chickens. The results indicate a small but significant difference in sound attenuation between hens and roosters. However, the amount of attenuation as measured in the experiments on both hens and roosters is small and will provide little effective protection in addition to other mechanisms such as stapedius muscle activity. PMID:29291112
NASA Technical Reports Server (NTRS)
Johnson, Marty E.; Fuller, Chris R.; Jones, Michael G. (Technical Monitor)
2000-01-01
In this report, both a frequency-domain method for creating high-level harmonic excitation and a time-domain inverse method for creating large pulses in a duct are developed. To create controllable, high-level sound, an axial array of six JBL-2485 compression drivers was used. The pressure downstream is considered as input voltages to the sources filtered by the natural dynamics of the sources and the duct. It is shown that this dynamic behavior can be compensated for by filtering the inputs such that both time delays and phase changes are taken into account. The methods developed maximize the sound output while (i) keeping within the power constraints of the sources and (ii) maintaining a suitable level of reproduction accuracy. Harmonic excitation pressure levels of over 155 dB were created experimentally over a wide frequency range (1000-4000 Hz). For pulse excitation there is a tradeoff between accuracy of reproduction and sound level achieved. However, the accurate reproduction of a pulse with a maximum pressure level over 6500 Pa was achieved experimentally. It was also shown that the throat connecting the driver to the duct makes it difficult to inject sound just below the cut-on of each acoustic mode (pre-cut-on loading effect).
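The compensation idea (pre-filtering the drive signal by the inverse of the source/duct response so time delays and phase changes cancel) can be sketched in the frequency domain. The transfer function H below is a made-up first-order roll-off plus a pure delay standing in for the measured driver/duct dynamics, not the report's data:

```python
import numpy as np

n, fs = 1024, 8000
t = np.arange(n) / fs
# Target pressure pulse to reproduce downstream (a narrow Gaussian)
desired = np.exp(-((t - 0.05) ** 2) / (2 * 0.002 ** 2))

# Assumed source/duct frequency response: gain roll-off plus 10 ms delay,
# a stand-in for the measured dynamics of driver, throat and duct
f = np.fft.rfftfreq(n, d=1/fs)
H = (1.0 / (1 + 1j * f / 1000)) * np.exp(-2j * np.pi * f * 0.01)

# Inverse method: pre-filter the drive voltage spectrum by 1/H so that,
# after passing through the system, the desired pulse is reproduced
V = np.fft.rfft(desired) / H
drive = np.fft.irfft(V, n)

# Simulate the system acting on the pre-compensated drive signal
reproduced = np.fft.irfft(np.fft.rfft(drive) * H, n)
err = np.max(np.abs(reproduced - desired))
```

With an invertible H the reproduction error is at numerical precision; the report's harder practical constraints (driver power limits, near-zero response just below modal cut-on) are exactly the places where 1/H blows up and the inversion must be regularized.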
Measurement of the resistivity of porous materials with an alternating air-flow method.
Dragonetti, Raffaele; Ianniello, Carmine; Romano, Rosario A
2011-02-01
Air-flow resistivity is a main parameter governing the acoustic behavior of porous materials for sound absorption. The international standard ISO 9053 specifies two different methods to measure air-flow resistivity, namely a steady-state air-flow method and an alternating air-flow method. The latter is realized by measuring the sound pressure at 2 Hz in a small rigid volume closed partially by the test sample. This cavity is excited with a known volume-velocity sound source, often implemented with a motor-driven piston oscillating with prescribed area and displacement magnitude. Measurements at 2 Hz require special instrumentation and care. The authors suggest an alternating air-flow method based on the ratio of sound pressures measured at frequencies higher than 2 Hz inside two cavities coupled through a conventional loudspeaker. The basic method showed that the imaginary part of the sound pressure ratio is useful for the evaluation of the air-flow resistance. Criteria are discussed for the choice of a frequency range suitable for performing simplified calculations with respect to the basic method. These criteria depend on the sample thickness, its nonacoustic parameters, and the measurement apparatus as well. The proposed measurement method was tested successfully with various types of acoustic materials.
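For contrast with the alternating method the authors propose, the standard's steady-state method reduces to a one-line calculation: resistivity is the pressure drop per unit linear flow velocity per unit sample thickness. A minimal sketch with illustrative numbers (not from the paper):

```python
# Steady-state air-flow resistivity estimate (ISO 9053 method A style),
# a minimal sketch with made-up measurement values
delta_p = 2.0      # measured pressure drop across the sample, Pa
q = 1.0e-4         # volumetric air flow through the sample, m^3/s
area = 0.01        # sample cross-sectional area, m^2
d = 0.05           # sample thickness, m

u = q / area               # linear flow velocity, m/s
sigma = delta_p / u / d    # air-flow resistivity, Pa.s/m^2
```

With these numbers u is 0.01 m/s and sigma is 4000 Pa·s/m², in the range of a fairly open porous absorber; the alternating method in the abstract infers the same quantity acoustically, from the imaginary part of a cavity sound pressure ratio, without a steady air supply.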
Developing a weighted measure of speech sound accuracy.
Preston, Jonathan L; Ramsdell, Heather L; Oller, D Kimbrough; Edwards, Mary Louise; Tobin, Stephen J
2011-02-01
To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound Accuracy (WSSA) score. The authors then evaluate the reliability and validity of this measure. Phonetic transcriptions were analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy was validated against existing measures, was used to discriminate typical and disordered speech production, and was evaluated to examine sensitivity to changes in phonetic accuracy over time. Reliability between transcribers and consistency of scores among different word sets and testing points are compared. Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners' judgments of the severity of a child's speech disorder. The measure separates children with and without speech sound disorders and captures growth in phonetic accuracy in toddlers' speech over time. The measure correlates highly across transcribers, word lists, and testing points. Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children's speech.
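The differential-weighting idea behind the WSSA can be sketched as below; the error categories and weights here are hypothetical placeholders chosen only to show the mechanism (partial credit for errors closer to the target), not the published WSSA weights:

```python
# Hypothetical weighting scheme (NOT the published WSSA weights): each
# target sound earns 1.0 when produced correctly, with partial credit
# for error types judged phonetically closer to the target
WEIGHTS = {
    "correct": 1.0,
    "distortion": 0.75,     # close to target, slightly off
    "substitution": 0.25,   # different sound produced
    "omission": 0.0,        # target sound absent
}

def weighted_accuracy(transcribed_sounds):
    """Percent accuracy over a list of per-sound transcription categories."""
    total = sum(WEIGHTS[s] for s in transcribed_sounds)
    return 100.0 * total / len(transcribed_sounds)

# A four-sound target word with one distortion and one substitution
score = weighted_accuracy(["correct", "correct", "distortion", "substitution"])
```

This sample scores 75.0, whereas an unweighted percent-consonants-correct style measure would score it 50, which is the point of the weighting: near-misses count for more than complete misses.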
Echoes of the spoken past: how auditory cortex hears context during speech perception
Skipper, Jeremy I.
2014-01-01
What do we hear when someone speaks and what does auditory cortex (AC) do with that sound? Given how meaningful speech is, it might be hypothesized that AC is most active when other people talk so that their productions get decoded. Here, neuroimaging meta-analyses show the opposite: AC is least active and sometimes deactivated when participants listened to meaningful speech compared to less meaningful sounds. Results are explained by an active hypothesis-and-test mechanism where speech production (SP) regions are neurally re-used to predict auditory objects associated with available context. By this model, more AC activity for less meaningful sounds occurs because predictions are less successful from context, requiring further hypotheses be tested. This also explains the large overlap of AC co-activity for less meaningful sounds with meta-analyses of SP. An experiment showed a similar pattern of results for non-verbal context. Specifically, words produced less activity in AC and SP regions when preceded by co-speech gestures that visually described those words compared to those words without gestures. Results collectively suggest that what we ‘hear’ during real-world speech perception may come more from the brain than our ears and that the function of AC is to confirm or deny internal predictions about the identity of sounds. PMID:25092665
Maps and documentation of seismic CPT soundings in the central, eastern, and western United States
Holzer, Thomas L.; Noce, Thomas E.; Bennett, Michael J.
2010-01-01
Nine hundred twenty-seven seismic cone penetration tests (CPT) in a variety of geologic deposits and geographic locations were conducted by the U.S. Geological Survey (USGS), primarily between 1998 and 2008, for the purpose of collecting penetration test data to evaluate the liquefaction potential of different types of surficial geologic deposits (table 1). The evaluation is described in Holzer and others (in press). This open-file report summarizes the seismic CPT and geotechnical data that were collected for the evaluation, outlines the general conditions under which the data were acquired, and briefly describes the geographic location of each study area and local geologic conditions. This report also describes the field methods used to obtain the seismic CPT data and summarizes the results of shear-wave velocity measurements at 2-m intervals in each sounding. Although the average depth of the 927 soundings was 18.5 m, we estimated time-averaged shear-wave velocities to depths of 20 m and 30 m, VS20 and VS30, respectively, for soundings deeper than 10 m and 20 m. Soil sampling also was selectively conducted in many of the study areas at representative seismic CPT soundings. These data are described, and laboratory analyses of geotechnical properties of these samples are summarized in table 2.
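The time-averaged velocity reported above is the standard travel-time (harmonic) average: the depth z divided by the total shear-wave travel time through the layers above z. A sketch, using a hypothetical layered velocity profile rather than any of the report's soundings:

```python
def time_averaged_vs(depths, velocities, z=30.0):
    """Time-averaged shear-wave velocity to depth z (e.g. VS30 for z=30):
    z divided by the summed travel time of the layers above z.
    depths: layer-bottom depths in m, increasing; velocities: Vs per layer, m/s."""
    travel_time, top = 0.0, 0.0
    for bottom, v in zip(depths, velocities):
        thickness = min(bottom, z) - top
        if thickness <= 0:
            break
        travel_time += thickness / v
        top = min(bottom, z)
    return z / travel_time

# Hypothetical profile: 10 m at 200 m/s, 10 m at 300 m/s, then 500 m/s below
vs30 = time_averaged_vs([10, 20, 40], [200, 300, 500])
```

Because it is a travel-time average, slow shallow layers dominate: the profile above averages to about 290 m/s, well below the arithmetic mean of the layer velocities, which is why VS30 is sensitive to soft surficial deposits.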
Sidhu, David M; Pexman, Penny M; Saint-Aubin, Jean
2016-09-01
Although it is often assumed that language involves an arbitrary relationship between form and meaning, many studies have demonstrated that nonwords like maluma are associated with round shapes, while nonwords like takete are associated with sharp shapes (i.e., the Maluma/Takete effect, Köhler, 1929/1947). The majority of the research on sound symbolism has used nonwords, but Sidhu and Pexman (2015) recently extended this effect to existing labels: real English first names (i.e., the Bob/Kirk effect). In the present research we tested whether the effects of name sound symbolism generalize to French speakers (Experiment 1) and French names (Experiment 2). In addition, we assessed the underlying mechanism of name sound symbolism, investigating the roles of phonology and orthography in the effect. Results showed that name sound symbolism does generalize to French speakers and French names. Further, this robust effect remained the same when names were presented in a curved vs. angular font (Experiment 3), or when the salience of orthographic information was reduced through auditory presentation (Experiment 4). Together these results suggest that the Bob/Kirk effect is pervasive, and that it is based on fundamental features of name phonemes. Copyright © 2016 Elsevier B.V. All rights reserved.
Koda, Hiroki; Basile, Muriel; Olivier, Marion; Remeuf, Kevin; Nagumo, Sumiharu; Blois-Heulin, Catherine; Lemasson, Alban
2013-08-01
The central position and universality of music in human societies raises the question of its phylogenetic origin. One of the most important properties of music involves harmonic musical intervals, in response to which humans show a spontaneous preference for consonant over dissonant sounds starting from early human infancy. Comparative studies conducted with organisms at different levels of the primate lineage are needed to understand the evolutionary scenario under which this phenomenon emerged. Although previous research found no preference for consonance in a New World monkey species, the question remained open for Old World monkeys. We used an experimental paradigm based on a sensory reinforcement procedure to test auditory preferences for consonant sounds in Campbell's monkeys (Cercopithecus campbelli campbelli), an Old World monkey species. Although a systematic preference for soft (70 dB) over loud (90 dB) control white noise was found, Campbell's monkeys showed no preference for either consonant or dissonant sounds. The preference for soft white noise validates our noninvasive experimental paradigm, which can be easily reused in any captive facility to test for auditory preferences. These findings suggest that the human preference for consonant sounds is not systematically shared with New and Old World monkeys. Sensitivity to harmonic musical intervals therefore probably emerged very late in the primate lineage.
Cleveland, Laverne; Little, Edward E.; Petty, Jimmie D.; Johnson, B. Thomas; Lebo, Jon A.; Orazio, Carl E.; Dionne, Jane
1997-01-01
Eight whole sediment samples from Antarctica (four from Winter Quarters Bay and four from McMurdo Sound) were toxicologically and chemically evaluated. Also, the influence of ultraviolet radiation on the toxicity and bioavailability of contaminants associated with the sediment samples was assessed. The evaluations were accomplished by use of a 10-day whole sediment test with Leptocheirus plumulosus, Microtox®, Mutatox® and semipermeable membrane devices (SPMDs). Winter Quarters Bay sediments contained about 250 ng g−1 (dry weight) total PCBs and 20 μg g−1 total PAHs. These sediments elicited toxicity in the Microtox test and avoidance and inhibited burrowing in the L. plumulosus test. The McMurdo Sound sediment samples contained only trace amounts of PCBs and no PAHs, and were less toxic in both the L. plumulosus and Microtox tests compared to the Winter Quarters Bay sediments. The sediments from McMurdo Sound apparently contained some unidentified substance which was photolytically modified to a more toxic form. The photolytic modification of sediment-associated contaminants, coupled with the polar ozone hole and increased incidence of ultraviolet radiation could significantly increase hazards to Antarctic marine life.
Use of tracheal auscultation for the assessment of bronchial responsiveness in asthmatic children.
Sprikkelman, A. B.; Grol, M. H.; Lourens, M. S.; Gerritsen, J.; Heymans, H. S.; van Aalderen, W. M.
1996-01-01
BACKGROUND: It can be difficult to assess bronchial responsiveness in children because of their inability to perform spirometric tests reliably. In bronchial challenges lung sounds could be used to detect the required 20% fall in the forced expiratory volume in one second (FEV1). A study was undertaken to determine whether a change in lung sounds corresponded with a 20% fall in FEV1 after methacholine challenge, and whether the occurrence of wheeze was the most important change. METHODS: Fifteen children with asthma (eight boys) of mean age 10.8 years (range 8-15) were studied. All had normal chest auscultation before the methacholine challenge test. Lung sounds were recorded over the trachea for one minute and stored on tape. They were analysed directly and also scored blindly from the tape recording by a second investigator. Wheeze, cough, increase in respiratory rate, and prolonged expiration were assessed. RESULTS: The total cumulative methacholine dose causing a fall in FEV1 of 20% or more (PD20) was detected in 12 children by a change in lung sounds - in four by wheeze and in eight by cough, increased respiratory rate, and/or prolonged expiration. In two subjects altered lung sounds were detectable one dose step before PD20 was reached. In the three children in whom no fall in FEV1 occurred, no change in lung sounds could be detected at the highest methacholine dose. CONCLUSION: Changes in lung sounds correspond well with a 20% fall in FEV1 after methacholine challenge. Wheeze is an insensitive indicator for assessing bronchial responsiveness. Cough, increase in respiratory rate, and prolonged expiration occur more frequently. PMID:8779140
Rossi, Tullio; Connell, Sean D; Nagelkerken, Ivan
2016-03-16
Soundscapes are multidimensional spaces that carry meaningful information for many species about the location and quality of nearby and distant resources. Because soundscapes are the sum of the acoustic signals produced by individual organisms and their interactions, they can be used as a proxy for the condition of whole ecosystems and their occupants. Ocean acidification resulting from anthropogenic CO2 emissions is known to have profound effects on marine life. However, despite the increasingly recognized ecological importance of soundscapes, there is no empirical test of whether ocean acidification can affect biological sound production. Using field recordings obtained from three geographically separated natural CO2 vents, we show that forecasted end-of-century ocean acidification conditions can profoundly reduce the biological sound level and frequency of snapping shrimp snaps. Snapping shrimp were among the noisiest marine organisms and the suppression of their sound production at vents was responsible for the vast majority of the soundscape alteration observed. To assess mechanisms that could account for these observations, we tested whether long-term exposure (two to three months) to elevated CO2 induced a similar reduction in the snapping behaviour (loudness and frequency) of snapping shrimp. The results indicated that the soniferous behaviour of these animals was substantially reduced in both frequency (snaps per minute) and sound level of snaps produced. As coastal marine soundscapes are dominated by biological sounds produced by snapping shrimp, the observed suppression of this component of soundscapes could have important and possibly pervasive ecological consequences for organisms that use soundscapes as a source of information. This trend towards silence could be of particular importance for those species whose larval stages use sound for orientation towards settlement habitats. © 2016 The Author(s).
On the Acoustics of Emotion in Audio: What Speech, Music, and Sound have in Common.
Weninger, Felix; Eyben, Florian; Schuller, Björn W; Mortillaro, Marcello; Scherer, Klaus R
2013-01-01
Without doubt, there is emotional information in almost any kind of sound received by humans every day: be it the affective state of a person transmitted by means of speech; the emotion intended by a composer while writing a musical piece, or conveyed by a musician while performing it; or the affective state connected to an acoustic event occurring in the environment, in the soundtrack of a movie, or in a radio play. In the field of affective computing, there is currently some loosely connected research concerning each of these phenomena, but a holistic computational model of affect in sound is still lacking. In turn, for tomorrow's pervasive technical systems, including affective companions and robots, it is expected to be highly beneficial to understand the affective dimensions of "the sound that something makes," in order to evaluate the system's auditory environment and its own audio output. This article aims at a first step toward a holistic computational model: starting from standard acoustic feature extraction schemes in the domains of speech, music, and sound analysis, we interpret the worth of individual features across these three domains, considering four audio databases with observer annotations in the arousal and valence dimensions. In the results, we find that by selection of appropriate descriptors, cross-domain arousal and valence regression is feasible, achieving significant correlations with the observer annotations of up to 0.78 for arousal (training on sound and testing on enacted speech) and 0.60 for valence (training on enacted speech and testing on music). The high degree of cross-domain consistency in encoding the two main dimensions of affect may be attributable to the co-evolution of speech and music from multimodal affect bursts, including the integration of nature sounds for expressive effects.
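The cross-domain protocol described above (fit a regressor to one domain's acoustic features, evaluate on another domain, score by correlation with the observer annotations) can be sketched with synthetic data. Everything below is illustrative: the two "features" stand in for extracted descriptors such as loudness or spectral centroid, and the weights and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([0.8, -0.3])  # hypothetical feature-to-arousal weights

# Synthetic "training" domain (e.g., sound) and "test" domain (e.g., speech).
X_train = rng.normal(size=(100, 2))
y_train = X_train @ true_w + rng.normal(scale=0.2, size=100)
X_test = rng.normal(size=(50, 2))
y_test = X_test @ true_w + rng.normal(scale=0.2, size=50)

# Ordinary least squares on the training domain...
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
# ...scored by Pearson correlation on the held-out domain, mirroring the
# cross-domain evaluation reported in the abstract.
r = np.corrcoef(X_test @ w, y_test)[0, 1]
```

Because both synthetic domains share the same underlying feature-to-annotation mapping, the cross-domain correlation is high; the interesting empirical question in the paper is how far real speech, music, and environmental sound share such a mapping.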
A noise assessment and prediction system
NASA Technical Reports Server (NTRS)
Olsen, Robert O.; Noble, John M.
1990-01-01
A system has been designed to provide an assessment of noise levels that result from testing activities at Aberdeen Proving Ground, Md. The system receives meteorological data from surface stations and an upper air sounding system. The data from these systems are sent to a meteorological model, which provides forecasting conditions for up to three hours from the test time. The meteorological data are then used as input into an acoustic ray trace model which projects sound level contours onto a two-dimensional display of the surrounding area. This information is sent to the meteorological office for verification, as well as the range control office, and the environmental office. To evaluate the noise level predictions, a series of microphones are located off the reservation to receive the sound and transmit this information back to the central display unit. The computer models are modular allowing for a variety of models to be utilized and tested to achieve the best agreement with data. This technique of prediction and model validation will be used to improve the noise assessment system.
Mehmood, Mansoor; Abu Grara, Hazem L; Stewart, Joshua S; Khasawneh, Faisal A
2014-01-01
Background It is considered standard practice to use disposable or patient-dedicated stethoscopes to prevent cross-contamination between patients in contact precautions and others in their vicinity. The literature offers very little information regarding the quality of currently used stethoscopes. This study assessed the fidelity with which acoustics were perceived by a broad range of health care professionals using three brands of stethoscopes. Methods This prospective study used a simulation center and volunteer health care professionals to test the sound quality offered by three brands of commonly used stethoscopes. Each volunteer's proficiency in identifying five basic auscultatory sounds (wheezing, stridor, crackles, holosystolic murmur, and hyperdynamic bowel sounds) was also tested. Results A total of 84 health care professionals (ten attending physicians, 35 resident physicians, and 39 intensive care unit [ICU] nurses) participated in the study. The higher-end stethoscope was more reliable than lower-end stethoscopes in facilitating the diagnosis of the auscultatory sounds, especially stridor and crackles. Our volunteers detected all tested sounds correctly in about 69% of cases. As expected, attending physicians performed the best, followed by resident physicians and subsequently ICU nurses. Neither years of experience nor background noise seemed to affect performance. Postgraduate training continues to offer very little to improve our trainees' auscultation skills. Conclusion The results of this study indicate that using low-end stethoscopes to care for patients in contact precautions could compromise identifying important auscultatory findings. Furthermore, there continues to be an opportunity to improve our physicians' and ICU nurses' auscultation skills. PMID:25152636
Daschewski, M; Kreutzbruck, M; Prager, J
2015-12-01
In this work we experimentally verify the theoretical prediction of the recently published Energy Density Fluctuation Model (EDF-model) of thermo-acoustic sound generation. In particular, we investigate experimentally the influence of the thermal inertia of an electrically conductive film on the efficiency of thermal airborne ultrasound generation predicted by the EDF-model. Unlike widely used theories, the EDF-model predicts that the thermal inertia of the electrically conductive film is a frequency-dependent parameter: its influence grows non-linearly with increasing excitation frequency and reduces the efficiency of ultrasound generation, making it the major limiting factor for efficient thermal airborne ultrasound generation in the MHz range. To verify this prediction experimentally, five thermo-acoustic emitter samples consisting of indium tin oxide (ITO) coatings of different thicknesses (from 65 nm to 1.44 μm) on quartz glass substrates were tested for airborne ultrasound generation in a frequency range from 10 kHz to 800 kHz. For the measurement of thermally generated sound pressures, a laser Doppler vibrometer combined with a 12-μm-thick polyethylene foil was used as the sound pressure detector. All tested thermo-acoustic emitter samples showed a resonance-free frequency response over the entire tested frequency range. The thermal inertia of the heat-producing film acts as a low-pass filter, reducing the generated sound pressure as the excitation frequency and the ITO film thickness increase. The difference in generated sound pressure levels between the 65 nm and 1.44 μm samples is on the order of 6 dB at 50 kHz and 12 dB at 500 kHz. A comparison of sound pressure levels measured experimentally with those predicted by the EDF-model shows a relative error of less than ±6% for all tested emitter samples.
Thus, experimental results confirm the prediction of the EDF-model and show that the model can be applied for design and optimization of thermo-acoustic airborne ultrasound emitters. Copyright © 2015 Elsevier B.V. All rights reserved.
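The level differences reported above can be related to pressure ratios through the definition of sound pressure level (the 20 µPa reference is the standard value for airborne sound). A quick check that 6 dB and 12 dB correspond to roughly halving and quartering the sound pressure:

```python
import math

def spl_db(pressure_pa, p_ref=20e-6):
    """Sound pressure level in dB re 20 uPa."""
    return 20.0 * math.log10(pressure_pa / p_ref)

# Pressure ratios implied by the reported level differences:
ratio_6db = 10 ** (-6 / 20)    # about 0.50: -6 dB halves the pressure
ratio_12db = 10 ** (-12 / 20)  # about 0.25: -12 dB quarters it
```

So the thicker (1.44 μm) ITO film radiates roughly half the sound pressure of the 65 nm film at 50 kHz and roughly a quarter at 500 kHz, consistent with the low-pass behaviour the abstract describes.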
Maruska, Karen P; Ung, Uyhun S; Fernald, Russell D
2012-01-01
Sexual reproduction in all animals depends on effective communication between signalers and receivers. Many fish species, especially the African cichlids, are well known for their bright coloration and the importance of visual signaling during courtship and mate choice, but little is known about what role acoustic communication plays during mating and how it contributes to sexual selection in this phenotypically diverse group of vertebrates. Here we examined acoustic communication during reproduction in the social cichlid fish, Astatotilapia burtoni. We characterized the sounds and associated behaviors produced by dominant males during courtship, tested for differences in hearing ability associated with female reproductive state and male social status, and then tested the hypothesis that female mate preference is influenced by male sound production. We show that dominant males produce intentional courtship sounds in close proximity to females, and that sounds are spectrally similar to their hearing abilities. Females were 2-5-fold more sensitive to low frequency sounds in the spectral range of male courtship sounds when they were sexually-receptive compared to during the mouthbrooding parental phase. Hearing thresholds were also negatively correlated with circulating sex-steroid levels in females but positively correlated in males, suggesting a potential role for steroids in reproductive-state auditory plasticity. Behavioral experiments showed that receptive females preferred to affiliate with males that were associated with playback of courtship sounds compared to noise controls, indicating that acoustic information is likely important for female mate choice. These data show for the first time in a Tanganyikan cichlid that acoustic communication is important during reproduction as part of a multimodal signaling repertoire, and that perception of auditory information changes depending on the animal's internal physiological state. 
Our results highlight the importance of examining non-visual sensory modalities as potential substrates for sexual selection contributing to the incredible phenotypic diversity of African cichlid fishes.
Sounding the Alert: Designing an Effective Voice for Earthquake Early Warning
NASA Astrophysics Data System (ADS)
Burkett, E. R.; Given, D. D.
2015-12-01
The USGS is working with partners to develop the ShakeAlert Earthquake Early Warning (EEW) system (http://pubs.usgs.gov/fs/2014/3083/) to protect life and property along the U.S. West Coast, where the highest national seismic hazard is concentrated. EEW sends an alert that shaking from an earthquake is on its way (in seconds to tens of seconds) to allow recipients or automated systems to take appropriate actions at their location to protect themselves and/or sensitive equipment. ShakeAlert is transitioning toward a production prototype phase in which test users might begin testing applications of the technology. While a subset of uses will be automated (e.g., opening fire house doors), other applications will alert individuals by radio or cellphone notifications and require behavioral decisions to protect themselves (e.g., "Drop, Cover, Hold On"). The project needs to select and move forward with a consistent alert sound to be widely and quickly recognized as an earthquake alert. In this study we combine EEW science and capabilities with an understanding of human behavior from the social and psychological sciences to provide insight toward the design of effective sounds to help best motivate proper action by alert recipients. We present a review of existing research and literature, compiled as considerations and recommendations for alert sound characteristics optimized for EEW. We do not yet address wording of an audible message about the earthquake (e.g., intensity and timing until arrival of shaking or possible actions), although it will be a future component to accompany the sound. We consider pitch(es), loudness, rhythm, tempo, duration, and harmony. Important behavioral responses to sound to take into account include that people respond to discordant sounds with anxiety, can be calmed by harmony and softness, and are innately alerted by loud and abrupt sounds, although levels high enough to be auditory stressors can negatively impact human judgment.
Development of an Experimental Rig for Investigation of Higher Order Modes in Ducts
NASA Technical Reports Server (NTRS)
Gerhold, Carl H.; Cabell, Randolph H.; Brown, Martha C.
2006-01-01
Continued progress to reduce fan noise emission from high bypass ratio engine ducts in aircraft increasingly relies on accurate description of the sound propagation in the duct. A project has been undertaken at NASA Langley Research Center to investigate the propagation of higher order modes in ducts with flow. This is a two-pronged approach, including development of analytic models (the subject of a separate paper) and installation of a laboratory-quality test rig. The purposes of the rig are to validate the analytical models and to evaluate novel duct acoustic liner concepts, both passive and active. The dimensions of the experimental rig test section scale to between 25% and 50% of the aft bypass ducts of most modern engines. The duct is of rectangular cross section so as to provide flexibility to design and fabricate test duct liner samples. The test section can accommodate flow paths that are straight through or offset from inlet to discharge, the latter design allowing investigation of the effect of curvature on sound propagation and duct liner performance. The maximum air flow rate through the duct is Mach 0.3. Sound in the duct is generated by an array of 16 high-intensity acoustic drivers. The signals to the loudspeaker array are generated by a multi-input/multi-output feedforward control system that has been developed for this project. The sound is sampled by arrays of flush-mounted microphones and a modal decomposition is performed at the frequency of sound generation. The data acquisition system consists of two arrays of flush-mounted microphones, one upstream of the test section and one downstream. The data are used to determine parameters such as the overall insertion loss of the test section treatment as well as the effect of the treatment on a modal basis, such as mode scattering. The methodology used for modal decomposition is described, as is the mode-generation control system.
Data are presented that demonstrate the performance of the controller in generating the desired mode while suppressing all other cut-on modes in the duct.
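The modal decomposition step described above is commonly posed as a least-squares problem: the complex pressures measured at the wall microphones are fitted to the known rigid-wall mode shapes of the rectangular duct. A sketch under stated assumptions — the duct dimensions, the set of cut-on modes, the microphone layout, and the "true" amplitudes below are all hypothetical, and this ignores flow and axial wavenumber effects:

```python
import numpy as np

a, b = 0.2, 0.1                   # assumed duct cross-section (m)
modes = [(0, 0), (1, 0), (0, 1)]  # (m, n) mode orders assumed cut on

# Assumed flush-mounted microphone positions (y, z) on the duct walls.
mics = [(0.00, 0.00), (0.05, 0.00), (0.10, 0.00),
        (0.15, 0.10), (0.20, 0.05), (0.10, 0.10)]

def mode_shape(m, n, y, z):
    """Rigid-wall mode shape of a rectangular duct cross-section."""
    return np.cos(m * np.pi * y / a) * np.cos(n * np.pi * z / b)

# Mode-shape matrix: one row per microphone, one column per mode.
Psi = np.array([[mode_shape(m, n, y, z) for (m, n) in modes]
                for (y, z) in mics])

# Simulated complex modal amplitudes and resulting wall pressures.
true_amps = np.array([1.0 + 0.5j, 0.3 - 0.2j, 0.1 + 0.0j])
p = Psi @ true_amps

# Least-squares recovery of the amplitudes from the microphone pressures.
amps, *_ = np.linalg.lstsq(Psi, p, rcond=None)
```

With more microphones than cut-on modes the system is overdetermined, which is what makes quantities like mode scattering by the liner measurable from the upstream and downstream arrays.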
Taiwanese middle school students' materialistic concepts of sound
NASA Astrophysics Data System (ADS)
Eshach, Haim; Lin, Tzu-Chiang; Tsai, Chin-Chung
2016-06-01
This study investigated if and to what extent grade 8 and 9 students in Taiwan attributed materialistic properties to sound concepts, and whether they hold scientific views in parallel with materialistic views. Taiwanese middle school students are a special population since their scores in international academic comparison tests such as TIMSS and PISA are among the highest in the world. The "Sound Concept Inventory Instrument," with both materialistic and scientific statements about sound concepts, was applied to explore Taiwanese students' ideas and corresponding confidence. The results showed that although the subject of sound is taught extensively in grade 8 in Taiwan, students still hold materialistic views of sound. The participants agreed, on average, with 41% of the statements that associate sound with materialistic properties. Moreover, they were quite confident in their materialistic answers (mean=3.27 on a 5-point Likert scale). In parallel, they also agreed with 71% of the scientific statements in the instrument, and were also confident of their scientific answers (mean=3.21). As for the difference between grade 8 and 9 students, it seems that in grade 9, when students do not learn about sound, there is a kind of regression to a more materialistic view of sound. The girls performed better than the boys (t=3.59, p<0.001). The paper uses Vosniadou and Brewer's [Cogn. Sci. 18, 123 (1994), doi:10.1207/s15516709cog1801_4] framework theory to explain the results, and suggests some ideas for improving the teaching of sound.
Störmer, Viola; Feng, Wenfeng; Martinez, Antigona; McDonald, John; Hillyard, Steven
2016-03-01
Recent findings suggest that a salient, irrelevant sound attracts attention to its location involuntarily and facilitates processing of a colocalized visual event [McDonald, J. J., Störmer, V. S., Martinez, A., Feng, W. F., & Hillyard, S. A. Salient sounds activate human visual cortex automatically. Journal of Neuroscience, 33, 9194-9201, 2013]. Associated with this cross-modal facilitation is a sound-evoked slow potential over the contralateral visual cortex termed the auditory-evoked contralateral occipital positivity (ACOP). Here, we further tested the hypothesis that a salient sound captures visual attention involuntarily by examining sound-evoked modulations of the occipital alpha rhythm, which has been strongly associated with visual attention. In two purely auditory experiments, lateralized irrelevant sounds triggered a bilateral desynchronization of occipital alpha-band activity (10-14 Hz) that was more pronounced in the hemisphere contralateral to the sound's location. The timing of the contralateral alpha-band desynchronization overlapped with that of the ACOP (∼240-400 msec), and both measures of neural activity were estimated to arise from neural generators in the ventral-occipital cortex. The magnitude of the lateralized alpha desynchronization was correlated with ACOP amplitude on a trial-by-trial basis and between participants, suggesting that they arise from or are dependent on a common neural mechanism. These results support the hypothesis that the sound-induced alpha desynchronization and ACOP both reflect the involuntary cross-modal orienting of spatial attention to the sound's location.
Musical Sound, Instruments, and Equipment
NASA Astrophysics Data System (ADS)
Photinos, Panos
2017-12-01
'Musical Sound, Instruments, and Equipment' offers a basic understanding of sound, musical instruments and music equipment, geared towards a general audience and non-science majors. The book begins with an introduction of the fundamental properties of sound waves, and the perception of the characteristics of sound. The relation between intensity and loudness, and the relation between frequency and pitch are discussed. The basics of propagation of sound waves, and the interaction of sound waves with objects and structures of various sizes are introduced. Standing waves, harmonics and resonance are explained in simple terms, using graphics that provide a visual understanding. The development is focused on musical instruments and acoustics. The construction of musical scales and the frequency relations are reviewed and applied in the description of musical instruments. The frequency spectrum of selected instruments is explored using freely available sound analysis software. Sound amplification and sound recording, including analog and digital approaches, are discussed in two separate chapters. The book concludes with a chapter on acoustics, the physical factors that affect the quality of the music experience, and practical ways to improve the acoustics at home or small recording studios. A brief technical section is provided at the end of each chapter, where the interested reader can find the relevant physics and sample calculations. These quantitative sections can be skipped without affecting the comprehension of the basic material. Questions are provided to test the reader's understanding of the material. Answers are given in the appendix.
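As an illustration of the scale-frequency relations the book reviews, equal-tempered tuning (the standard on modern fixed-pitch instruments) spaces adjacent semitones by a constant factor of 2^(1/12), so an octave exactly doubles the frequency. A minimal sketch using the common MIDI note-number convention (A4 = note 69 = 440 Hz):

```python
def note_freq(midi_note, a4=440.0):
    """Equal-tempered frequency of a MIDI note number; A4 (note 69) = 440 Hz."""
    return a4 * 2.0 ** ((midi_note - 69) / 12.0)

c4 = note_freq(60)                        # middle C, about 261.63 Hz
octave = note_freq(81) / note_freq(69)    # one octave up doubles the frequency
fifth = note_freq(76) / note_freq(69)     # seven semitones, about 1.498
```

The equal-tempered fifth (about 1.498) deviates slightly from the just ratio 3:2, which is exactly the kind of compromise in scale construction that such a survey text typically discusses.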
Neilans, Erikson G; Dent, Micheal L
2015-02-01
Auditory scene analysis has been suggested as a universal process that exists across all animals. Relative to humans, however, little work has been devoted to how animals perceptually isolate different sound sources. Frequency separation of sounds is arguably the most common parameter studied in auditory streaming, but it is not the only factor contributing to how the auditory scene is perceived. Researchers have found that in humans, even at large frequency separations, synchronous tones are heard as a single auditory stream, whereas asynchronous tones with the same frequency separations are perceived as 2 distinct sounds. These findings demonstrate how both the timing and frequency separation of sounds are important for auditory scene analysis. It is unclear how animals, such as budgerigars (Melopsittacus undulatus), perceive synchronous and asynchronous sounds. In this study, budgerigars and humans (Homo sapiens) were tested on their perception of synchronous, asynchronous, and partially overlapping pure tones using the same psychophysical procedures. Species differences were found between budgerigars and humans in how partially overlapping sounds were perceived, with budgerigars more likely to segregate overlapping sounds and humans more apt to fuse the 2 sounds together. The results also illustrated that temporal cues are particularly important for stream segregation of overlapping sounds. Lastly, budgerigars were found to segregate partially overlapping sounds in a manner predicted by computational models of streaming, whereas humans were not. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Schramm, Michael P.; Bevelhimer, Mark; Scherelis, Constantin
2017-02-04
The development of hydrokinetic energy technologies (e.g., tidal turbines) has raised concern over the potential impacts of underwater sound produced by hydrokinetic turbines on fish species likely to encounter these turbines. To assess the potential for behavioral impacts, we exposed four species of fish to varying intensities of recorded hydrokinetic turbine sound in a semi-natural environment. Although we tested freshwater species (redhorse suckers [Moxostoma spp.], freshwater drum [Aplodinotus grunniens], largemouth bass [Micropterus salmoides], and rainbow trout [Oncorhynchus mykiss]), these species are also representative of the hearing physiology and sensitivity of estuarine species that would be affected at tidal energy sites. Here, we evaluated changes in fish position relative to different intensities of turbine sound, as well as trends in location over time, with linear mixed-effects and generalized additive mixed models. We also evaluated changes in the proportion of near-source detections relative to sound intensity and exposure time with generalized linear mixed models and generalized additive models. Models indicated that redhorse suckers may respond to sustained turbine sound by increasing distance from the sound source. Freshwater drum models suggested a mixed response to turbine sound, and largemouth bass and rainbow trout models did not indicate any likely responses to turbine sound. Lastly, the findings highlight the importance for future research to use accurate localization systems, different species, and validated sound transmission distances, and to consider different types of behavioral responses to different turbine designs and to the cumulative sound of arrays of multiple turbines.
NASA Technical Reports Server (NTRS)
Gray, R. B.; Pierce, G. A.
1972-01-01
Wind tunnel tests were performed on two oscillating two-dimensional lifting surfaces. The first of these models had an NACA 0012 airfoil section, while the second simulated the classical flat plate. Both models had a mean angle of attack of 12 degrees while being oscillated in pitch about their midchord with a double amplitude of 6 degrees. Wake surveys of sound pressure level were made over a frequency range from 16 to 32 Hz and at various free-stream velocities up to 100 ft/sec. The sound pressure level spectrum indicated significant peaks in sound intensity at the oscillation frequency and its first harmonic near the wake of both models. From a comparison of these data with measurements from a sound level meter, it is concluded that most of the sound intensity is contained within these peaks and that no appreciable peaks occur at higher harmonics. Within the wake the sound intensity is largely pseudosound, while at one chord length outside the wake it is largely true vortex sound. For both the airfoil and the flat plate, the peaks appear to be more strongly dependent on the airspeed than on the oscillation frequency. Therefore, reduced frequency does not appear to be a significant parameter in the generation of wake sound intensity.
Prospective cohort study on noise levels in a pediatric cardiac intensive care unit.
Garcia Guerra, Gonzalo; Joffe, Ari R; Sheppard, Cathy; Pugh, Jodie; Moez, Elham Khodayari; Dinu, Irina A; Jou, Hsing; Hartling, Lisa; Vohra, Sunita
2018-04-01
To describe noise levels in a pediatric cardiac intensive care unit, and to determine the relationship between sound levels and patient sedation requirements. Prospective observational study at a pediatric cardiac intensive care unit (PCICU). Sound levels were measured continuously in slow-response A-weighted decibels, dB(A), with a SoundEarPro® sound level meter during a 4-week period. Sedation requirement was assessed as the number of intermittent (PRN) doses given per hour. Analysis was conducted with autoregressive moving-average models and the Granger test for causality. 39 children were included in the study. The average (SD) sound level in the open area was 59.4 (2.5) dB(A), with a statistically significant but clinically unimportant difference between day and night hours (60.1 vs. 58.6; p-value < 0.001). There was no significant difference between sound levels in the open area and the single room (59.4 vs. 60.8, p-value = 0.108). Peak noise levels were > 90 dB. There was a significant association between average (p-value = 0.030) and peak sound levels (p-value = 0.006) and the number of sedation PRNs. Sound levels were above the recommended values, with no differences between day/night or open area/single room. High sound levels were significantly associated with sedation requirements. Copyright © 2017 Elsevier Inc. All rights reserved.
L-type calcium channels refine the neural population code of sound level.
Grimsley, Calum Alex; Green, David Brian; Sivaramakrishnan, Shobhana
2016-12-01
The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1-1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. Copyright © 2016 the American Physiological Society.
The Impact of Sound-Field Systems on Learning and Attention in Elementary School Classrooms
ERIC Educational Resources Information Center
Dockrell, Julie E.; Shield, Bridget
2012-01-01
Purpose: The authors evaluated the installation and use of sound-field systems to investigate the impact of these systems on teaching and learning in elementary school classrooms. Methods: The evaluation included acoustic surveys of classrooms, questionnaire surveys of students and teachers, and experimental testing of students with and without…
Grapheme-Phoneme Regularity and Its Effects on Early Reading--A Pilot Study.
ERIC Educational Resources Information Center
Frankenstein, Roselyn; Kjeldergaard, Paul M.
A pilot experiment conducted to test the effect of a specially devised phonic approach to early reading is described. The phonic method used achieved sound-symbol regularity and had the following characteristics: (1) consonant graphemes each represented only one sound and were printed using nearly standard alphabetic symbols; (2) each vowel…
Bistatic Soundings with the HF GPR TAPIR in the Egyptian White Desert
NASA Astrophysics Data System (ADS)
Ciarletti, V.; Le Gall, A.; Berthelier, J. J.; Corbel, C.; Dolon, F.; Ney, R.
2006-03-01
The TAPIR HF GPR was initially developed to perform deep soundings on Mars in the framework of the NETLANDER mission. In November 2006, an updated version of the instrument, working either in monostatic or in bistatic mode, was tested in the Egyptian White Desert. Preliminary results are presented.
Taiwanese Middle School Students' Materialistic Concepts of Sound
ERIC Educational Resources Information Center
Eshach, Haim; Lin, Tzu-Chiang; Tsai, Chin-Chung
2016-01-01
This study investigated if and to what extent grade 8 and 9 students in Taiwan attributed materialistic properties to sound concepts, and whether they hold scientific views in parallel with materialistic views. Taiwanese middle school students are a special population since their scores in international academic comparison tests such as TIMSS and…
ERIC Educational Resources Information Center
Dietrich, Susanne; Hertrich, Ingo; Riedel, Andreas; Ackermann, Hermann
2012-01-01
The Asperger syndrome (AS) includes impaired recognition of other people's mental states. Since language-based diagnostic procedures may be confounded by cognitive-linguistic compensation strategies, nonverbal test materials were created, including human affective and vegetative sounds. Depending on video context, each sound could be interpreted…
NASA Technical Reports Server (NTRS)
Harvey, W. D.
1975-01-01
Results are presented of a coordinated experimental and theoretical study of a sound shield concept which aims to provide a means of noise reduction in the test section of supersonic wind tunnels at high Reynolds numbers. The model used consists of a planar array of circular rods aligned with the flow, with adjustable gaps between them for boundary layer removal by suction, i.e., laminar flow control. One of the basic requirements of the present sound shield concept is to achieve sonic cross flow through the gaps in order to prevent lee-side flow disturbances from penetrating back into the shielded region. Tests were conducted at Mach 6 over a local unit Reynolds number range from about 1.2×10^6 to 13.5×10^6 per foot. Measurements of heat transfer, static pressure, and sound levels were made to establish the transition characteristics of the boundary layer on the rod array and the sound shielding effectiveness.
Prevention of railway trespassing by automatic sound warning-A pilot study.
Kallberg, Veli-Pekka; Silla, Anne
2017-04-03
The objective of this study was to investigate the effect of an automatic prerecorded sound warning system on the frequency of railway trespassing, evaluated from observations at 2 pilot test sites in Finland. At both sites an illegal footpath crossed the railway, and the average daily number of trespassers before implementation of the measure was about 18. The results showed that trespassing was reduced at these sites by 18 and 44%, respectively. Because of the lack of proper control sites, it is possible that the real effects of the measure are somewhat smaller. The current study concludes that automatic sound warning may be an efficient and cost-effective measure at locations where fencing is not a viable option. However, it is not likely to be a cost-effective panacea for all kinds of sites where trespassing occurs, especially in countries like Finland where trespassing is scattered along the railway network rather than concentrated at a limited number of sites.
Hutter, E; Grapp, M; Argstatter, H
2016-12-01
People with severe hearing impairments and deafness can achieve good speech comprehension using a cochlear implant (CI), although music perception often remains impaired. A novel concept of music therapy for adults with CI was developed and evaluated in this study. The study included 30 adults with a unilateral CI following postlingual deafness. The subjective sound quality of the CI was rated using the hearing implant sound quality index (HISQUI), and musical tests for pitch discrimination, melody recognition, and timbre identification were applied. As a control group, 55 normally hearing persons also completed the musical tests. In comparison to normally hearing subjects, CI users showed deficits in the perception of pitch, melody, and timbre. Specific effects of therapy were observed in the subjective sound quality of the CI, in pitch discrimination in the high and low pitch ranges, and in timbre identification, while general learning effects were found in melody recognition. Music perception shows deficits in CI users compared to normally hearing persons. After individual music therapy in the rehabilitation process, improvements in this delicate area could be achieved.
Dynamic measurement of speed of sound in n-Heptane by ultrasonics during fuel injections.
Minnetti, Elisa; Pandarese, Giuseppe; Evangelisti, Piersavio; Verdugo, Francisco Rodriguez; Ungaro, Carmine; Bastari, Alessandro; Paone, Nicola
2017-11-01
The paper presents a technique to measure the speed of sound in fuels based on pulse-echo ultrasound. The method is applied inside the test chamber of a Zeuch-type instrument used for indirect measurement of the injection rate (Mexus). The paper outlines the pulse-echo method, considering probe installation, ultrasound beam propagation inside the test chamber, and typical signals obtained, as well as different processing algorithms. The method is validated in static conditions by comparing the experimental results to the NIST database for both water and n-Heptane. The ultrasonic system is synchronized to the injector so that time-resolved samples of the speed of sound can be acquired during a series of injections. Results at different operating conditions in n-Heptane are shown. An uncertainty analysis supports the interpretation of the results and allows the method to be validated. Experimental results show that the speed of sound varies by less than 1% during an injection event, so the Mexus model assumption that it is constant during the injection is valid. Copyright © 2017 Elsevier B.V. All rights reserved.
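The pulse-echo principle above reduces to a time-of-flight calculation: the pulse crosses the chamber, reflects, and returns, so the speed of sound is twice the path length divided by the round-trip delay. A minimal sketch with hypothetical numbers (the 50 mm path and 67.5 µs delay are illustrative, not values from the paper):

```python
def speed_of_sound(path_length_m, round_trip_time_s):
    """Pulse-echo estimate: the pulse traverses the path twice (out and back)."""
    return 2.0 * path_length_m / round_trip_time_s

# Hypothetical example: a 50 mm path and a 67.5 microsecond round-trip echo delay
c = speed_of_sound(0.050, 67.5e-6)
print(round(c, 1))  # about 1481.5 m/s, close to water at room temperature
```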
NASA Astrophysics Data System (ADS)
Toma, Eiji
2018-06-01
In recent years, as IT equipment has become lighter, demand for motor fans to cool the interior of electronic equipment has been rising. Sensory testing by inspectors is currently the mainstream method for quality inspection of motor fans in the field. Such sensory testing requires extensive experience to accurately diagnose subtle differences in fan sounds (sound pressures), and the judgment varies with the condition of the inspector and the environment. To solve these quality problems, an analysis method capable of quantitatively and automatically diagnosing the sound/vibration level of a fan is required. In this study, it was clarified that an analysis method applying the MT (Mahalanobis-Taguchi) system to the waveform information of noise and vibration is more effective for discriminating normal from abnormal items than the conventional frequency analysis method. Furthermore, it was found that when the vibration waveform analysis system was automated, the fan installation posture influenced the vibration waveform and therefore the discrimination accuracy.
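The MT approach mentioned above scores a test item by its Mahalanobis distance from a reference group of known-good items; items far from the "normal" cluster are flagged as abnormal. A minimal two-feature sketch (the feature choices and every number below are hypothetical, for illustration only, not from the study):

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def mahalanobis_2d(sample, normals):
    """Mahalanobis distance of a 2-feature sample from a 'normal' reference group."""
    xs = [p[0] for p in normals]
    ys = [p[1] for p in normals]
    mx, my = mean(xs), mean(ys)
    n = len(normals)
    # Sample covariance of the reference group
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in normals) / (n - 1)
    det = sxx * syy - sxy * sxy  # invert the 2x2 covariance matrix
    inv = ((syy / det, -sxy / det), (-sxy / det, sxx / det))
    dx, dy = sample[0] - mx, sample[1] - my
    d2 = dx * (inv[0][0] * dx + inv[0][1] * dy) + dy * (inv[1][0] * dx + inv[1][1] * dy)
    return math.sqrt(max(d2, 0.0))  # clamp tiny negative rounding errors

# Hypothetical features: (RMS vibration level, dominant-frequency kHz) of known-good fans
good_fans = [(1.0, 2.0), (1.2, 2.1), (0.9, 2.2), (1.1, 1.9), (0.95, 2.05)]
print(mahalanobis_2d((1.0, 2.0), good_fans))  # near the centroid: small distance
print(mahalanobis_2d((3.0, 5.0), good_fans))  # far from the cluster: large distance
```

A fixed threshold on the distance then separates normal from abnormal fans automatically.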
In situ Probe Microphone Measurement for Testing the Direct Acoustical Cochlear Stimulator.
Stieger, Christof; Alnufaily, Yasser H; Candreia, Claudia; Caversaccio, Marco D; Arnold, Andreas M
2017-01-01
Hypothesis: Acoustical measurements can be used for functional control of a direct acoustic cochlear stimulator (DACS). Background: The DACS is a recently released active hearing implant that works on the principle of a conventional piston prosthesis driven by the rod of an electromagnetic actuator. An inherent part of the DACS actuator is a thin titanium diaphragm that allows movement of the stimulation rod while hermetically sealing the housing. In addition to mechanical stimulation, the actuator emits sound into the mastoid cavity because of the motion of the diaphragm. Methods: We investigated the use of the sound emission of a DACS for intra-operative testing. We measured sound emission in the external auditory canal (P_EAC) and the velocity of the actuator's stimulation rod (V_act) in five implanted ears of whole-head specimens. We tested the influence of various loudspeaker and probe-microphone positions on P_EAC and simulated implant malfunction in one example. Results: Sound emission of the DACS with a signal-to-noise ratio >10 dB was observed between 0.5 and 5 kHz. Simulated implant misplacement or malfunction could be detected by the absence or shift of the characteristic resonance frequency of the actuator. P_EAC changed by <6 dB for variations of the microphone and loudspeaker position. Conclusion: Our data support the feasibility of acoustical measurements for in situ testing of the DACS implant in the mastoid cavity as well as for post-operative monitoring of actuator function.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malme, C.I.; Miles, P.R.; Tyack, P.
1985-06-01
An investigation was made of the potential effects of underwater noise from petroleum-industry activities on the behavior of feeding humpback whales in Frederick Sound and Stephens Passage, Alaska in August, 1984. Test sounds were a 100 cu. in. air gun and playbacks of recorded drillship, drilling platform, production platform, semi-submersible drill rig, and helicopter fly-over noise. Sound source levels and acoustic propagation losses were measured. The movement patterns of whales were determined by observations of whale-surfacing positions.
Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis
NASA Astrophysics Data System (ADS)
Büchler, Michael; Allegro, Silvia; Launer, Stefan; Dillier, Norbert
2005-12-01
A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features that are inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as Bayes classifier, neural network, and hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.
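Among the pattern classifiers compared above, the minimum-distance classifier is the simplest: each sound class is represented by a centroid in feature space, and an incoming sound is assigned to the nearest centroid. A toy sketch (the two features and all centroid values are invented for illustration; the actual system uses a richer feature set):

```python
import math

def nearest_class(features, centroids):
    """Minimum-distance classifier: pick the class whose centroid is closest."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda name: dist(features, centroids[name]))

# Hypothetical 2-D feature space: (amplitude-modulation depth, harmonicity)
centroids = {
    "clean speech":    (0.8, 0.7),
    "speech in noise": (0.5, 0.4),
    "noise":           (0.2, 0.1),
    "music":           (0.4, 0.8),
}
print(nearest_class((0.75, 0.65), centroids))  # "clean speech"
print(nearest_class((0.25, 0.15), centroids))  # "noise"
```

In practice the centroids would be estimated from labeled training sounds, and intermediate feature vectors (like "speech in noise") are exactly where such simple classifiers struggle, consistent with the lower recognition rates reported above.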
Brief report: sound output of infant humidifiers.
Royer, Allison K; Wilson, Paul F; Royer, Mark C; Miyamoto, Richard T
2015-06-01
The sound pressure levels (SPLs) of common infant humidifiers were determined to identify the likely sound exposure to infants and young children. This primary investigative research study was completed at a tertiary-level academic medical center otolaryngology and audiology laboratory. Five commercially available humidifiers were obtained from brick-and-mortar infant supply stores. Sound levels were measured at 20-, 100-, and 150-cm distances at all available humidifier settings. Two of 5 (40%) humidifiers tested had SPL readings greater than the recommended hospital infant nursery levels (50 dB) at distances up to 100 cm. In this preliminary study, it was demonstrated that humidifiers marketed for infant nurseries may produce appreciably high decibel levels. Further characterization of the effect of humidifier design on SPLs and further elucidation of ambient sound levels associated with hearing risk are necessary before definitive conclusions and recommendations can be made. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
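The drop in level between the 20-, 100-, and 150-cm measurement distances can be roughly anticipated with the free-field point-source rule (about 6 dB per doubling of distance); real nursery rooms are reverberant, so measured levels typically fall off more slowly. A sketch with hypothetical readings (not the study's data):

```python
import math

def spl_at_distance(spl_ref_db, d_ref_cm, d_cm):
    """Free-field estimate: SPL falls by 20*log10 of the distance ratio."""
    return spl_ref_db - 20.0 * math.log10(d_cm / d_ref_cm)

# Hypothetical humidifier reading of 56 dB at 20 cm, extrapolated to 100 cm:
print(round(spl_at_distance(56.0, 20.0, 100.0), 1))  # 42.0
```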
Light-induced vibration in the hearing organ
Ren, Tianying; He, Wenxuan; Li, Yizeng; Grosh, Karl; Fridberger, Anders
2014-01-01
The exceptional sensitivity of mammalian hearing organs is attributed to an active process, where force produced by sensory cells boosts sound-induced vibrations, making soft sounds audible. This process is thought to be local, with each section of the hearing organ capable of amplifying sound-evoked movement, and nearly instantaneous, since amplification can work for sounds at frequencies up to 100 kHz in some species. To test these fundamental precepts, we developed a method for focally stimulating the living hearing organ with light. Light pulses caused intense and highly damped mechanical responses followed by traveling waves that developed with considerable delay. The delayed response was identical to movements evoked by click-like sounds. This shows that the active process is neither local nor instantaneous, but requires mechanical waves traveling from the cochlear base toward its apex. A physiologically based mathematical model shows that such waves engage the active process, enhancing hearing sensitivity. PMID:25087606
Karmakar, Kajari; Narita, Yuichi; Fadok, Jonathan; Ducret, Sebastien; Loche, Alberto; Kitazawa, Taro; Genoud, Christel; Di Meglio, Thomas; Thierry, Raphael; Bacelo, Joao; Lüthi, Andreas; Rijli, Filippo M
2017-01-03
Tonotopy is a hallmark of auditory pathways and provides the basis for sound discrimination. Little is known about the involvement of transcription factors in brainstem cochlear neurons orchestrating the tonotopic precision of pre-synaptic input. We found that in the absence of Hoxa2 and Hoxb2 function in Atoh1-derived glutamatergic bushy cells of the anterior ventral cochlear nucleus, broad input topography and sound transmission were largely preserved. However, fine-scale synaptic refinement and sharpening of isofrequency bands of cochlear neuron activation upon pure tone stimulation were impaired in Hox2 mutants, resulting in defective sound-frequency discrimination in behavioral tests. These results establish a role for Hox factors in tonotopic refinement of connectivity and in ensuring the precision of sound transmission in the mammalian auditory circuit. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
Centanni, Tracy M.; Chen, Fuyi; Booker, Anne M.; Engineer, Crystal T.; Sloan, Andrew M.; Rennaker, Robert L.; LoTurco, Joseph J.; Kilgard, Michael P.
2014-01-01
In utero RNAi of the dyslexia-associated gene Kiaa0319 in rats (KIA-) degrades cortical responses to speech sounds and increases trial-by-trial variability in onset latency. We tested the hypothesis that KIA- rats would be impaired at speech sound discrimination. KIA- rats needed twice as much training in quiet conditions to perform at control levels and remained impaired at several speech tasks. Focused training using truncated speech sounds was able to normalize speech discrimination in quiet and background noise conditions. Training also normalized trial-by-trial neural variability and temporal phase locking. Cortical activity from speech trained KIA- rats was sufficient to accurately discriminate between similar consonant sounds. These results provide the first direct evidence that assumed reduced expression of the dyslexia-associated gene KIAA0319 can cause phoneme processing impairments similar to those seen in dyslexia and that intensive behavioral therapy can eliminate these impairments. PMID:24871331
NASA Technical Reports Server (NTRS)
Holmer, C. I.
1972-01-01
An analytic model of sound transmission into an aircraft cabin was developed, as well as test procedures that appropriately rank-order the properties affecting sound transmission. The proposed model agrees well with available data and reveals that the pertinent properties of an aircraft cabin for sound transmission include: stiffness of the cabin walls at low frequencies (as this affects the impedance of the walls), and cabin wall transmission loss and interior absorption at mid and high frequencies. Below 315 Hz the foam contributes substantially to the wall stiffness and sound transmission loss of typical light aircraft cabin construction, and could potentially reduce cabin noise levels by 3-5 dB in this frequency range at a cost of about 0.2 lb/sq ft of treated cabin area. The foam was found not to have significant sound absorbing properties.
Probing the critical exponent of the superfluid fraction in a strongly interacting Fermi gas
NASA Astrophysics Data System (ADS)
Hu, Hui; Liu, Xia-Ji
2013-11-01
We theoretically investigate the critical behavior of a second-sound mode in a harmonically trapped ultracold atomic Fermi gas with resonant interactions. Near the superfluid phase transition with critical temperature Tc, the frequency or the sound velocity of the second-sound mode crucially depends on the critical exponent β of the superfluid fraction. In an isotropic harmonic trap, we predict that the mode frequency diverges like (1 - T/Tc)^(β-1/2) when β < 1/2. In a highly elongated trap, the speed of the second sound is reduced by a factor of 1/√(2β+1) from that in a homogeneous three-dimensional superfluid. Our prediction could readily be tested by measurements of second-sound wave propagation in a setup such as that exploited by Sidorenkov et al. [Nature (London) 498, 78 (2013)] for resonantly interacting lithium-6 atoms, once the experimental precision is improved.
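The divergence condition stated in the abstract follows directly from the sign of the exponent; restated in LaTeX:

```latex
% Second-sound mode frequency near the superfluid transition:
\omega(T) \propto \left(1 - \frac{T}{T_c}\right)^{\beta - 1/2},
\qquad T \to T_c^{-} .
% For \beta < 1/2 the exponent \beta - 1/2 is negative, so \omega grows
% without bound as T approaches T_c from below; for \beta > 1/2 it
% instead vanishes at the transition.
```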
Use of acoustics to deter bark beetles from entering tree material.
Aflitto, Nicholas C; Hofstetter, Richard W
2014-12-01
Acoustic technology is a potential tool to protect wood materials and eventually live trees from colonization by bark beetles. Bark beetles such as the southern pine beetle Dendroctonus frontalis, western pine beetle D. brevicomis and pine engraver Ips pini (Coleoptera: Curculionidae) use chemical and acoustic cues to communicate and to locate potential mates and host trees. In this study, the efficacy of sound treatments on D. frontalis, D. brevicomis and I. pini entry into tree materials was tested. Acoustic treatments significantly influenced whether beetles entered pine logs in the laboratory. Playback of artificial sounds reduced D. brevicomis entry into logs, and playback of stress call sounds reduced D. frontalis entry into logs. Sound treatments had no effect on I. pini entry into logs. The reduction in bark beetle entry into logs using particular acoustic treatments indicates that sound could be used as a viable management tool. © 2013 Society of Chemical Industry.
Frequency shifting approach towards textual transcription of heartbeat sounds.
Arvin, Farshad; Doraisamy, Shyamala; Safar Khorasani, Ehsan
2011-10-04
Auscultation is an approach for diagnosing many cardiovascular problems. Automatic analysis of heartbeat sounds and extraction of their audio features can assist physicians in diagnosing diseases. Textual transcription allows a continuous heart sound stream to be recorded in a text format that occupies very little memory in comparison with other audio formats. In addition, text-based data allow indexing and searching techniques to be applied to access critical events. Hence, transcribed heartbeat sounds provide useful information for monitoring the behavior of a patient over long durations. This paper proposes a frequency shifting method to improve the performance of the transcription. The main objective of this study is to transfer the heartbeat sounds to the music domain. The proposed technique is tested with 100 samples recorded from different heart disease categories. The observed results show that the proposed shifting method significantly improves the performance of the transcription.
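One simple way to realize the "music domain" mapping described above is to quantize each (shifted) dominant frequency to the nearest equal-tempered note, which then has a natural textual name suitable for a compact transcription. This is only an illustrative sketch under standard MIDI conventions (A4 = 440 Hz), not the authors' algorithm:

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_midi(freq_hz):
    """Nearest MIDI note number for a frequency (A4 = 440 Hz = MIDI 69)."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

def midi_to_name(midi):
    """Textual note name, e.g. 60 -> 'C4' (middle C)."""
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

# Low-frequency heart-sound components would first be shifted up into this range
print(midi_to_name(freq_to_midi(440.0)))   # A4
print(midi_to_name(freq_to_midi(261.63)))  # C4 (middle C)
```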
Quantitative measurement of pass-by noise radiated by vehicles running at high speeds
NASA Astrophysics Data System (ADS)
Yang, Diange; Wang, Ziteng; Li, Bing; Luo, Yugong; Lian, Xiaomin
2011-03-01
Accurately locating and quantifying the pass-by noise radiated by running vehicles has long been a challenge. A system composed of a microphone array is developed in the current work for this purpose. An acoustic-holography method for moving sound sources is designed to handle the Doppler effect effectively in the time domain. The effective sound pressure distribution is reconstructed on the surface of a running vehicle. The method achieves high calculation efficiency and is able to quantitatively measure the sound pressure at the sound source and identify the location of the main sound source. The method is also validated by simulation experiments and by measurement tests with known moving speakers. Finally, the engine noise, tire noise, exhaust noise, and wind noise of a vehicle running at different speeds are successfully identified by this method.
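The Doppler effect that the time-domain holography must undo follows the standard moving-source relation: the observed frequency is the emitted frequency divided by (1 - v_radial/c). A minimal sketch with hypothetical values (the 1 kHz tone and 100 km/h speed are illustrative, not measurements from the paper):

```python
import math

C_AIR = 343.0  # approximate speed of sound in air at 20 C, m/s

def doppler_observed(f_source_hz, v_source_ms, angle_deg):
    """Observed frequency at a stationary microphone for a moving source.
    angle_deg is between the source velocity and the source-to-microphone line."""
    radial = v_source_ms * math.cos(math.radians(angle_deg))
    return f_source_hz / (1.0 - radial / C_AIR)

# A 1 kHz tone on a vehicle at 100 km/h (about 27.8 m/s):
print(doppler_observed(1000.0, 27.8, 0.0))    # approaching head-on: shifted up
print(doppler_observed(1000.0, 27.8, 180.0))  # receding: shifted down
```

De-Dopplerization inverts this relation sample by sample, resampling the microphone signals according to the known vehicle trajectory before the holographic reconstruction.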
Expertise with artificial non-speech sounds recruits speech-sensitive cortical regions
Leech, Robert; Holt, Lori L.; Devlin, Joseph T.; Dick, Frederic
2009-01-01
Regions of the human temporal lobe show greater activation for speech than for other sounds. These differences may reflect intrinsically specialized domain-specific adaptations for processing speech, or they may be driven by the significant expertise we have in listening to the speech signal. To test the expertise hypothesis, we used a video-game-based paradigm that tacitly trained listeners to categorize acoustically complex, artificial non-linguistic sounds. Before and after training, we used functional MRI to measure how expertise with these sounds modulated temporal lobe activation. Participants’ ability to explicitly categorize the non-speech sounds predicted the change in pre- to post-training activation in speech-sensitive regions of the left posterior superior temporal sulcus, suggesting that emergent auditory expertise may help drive this functional regionalization. Thus, seemingly domain-specific patterns of neural activation in higher cortical regions may be driven in part by experience-based restructuring of high-dimensional perceptual space. PMID:19386919
Inter-laboratory Comparison of Three Earplug Fit-test Systems
Byrne, David C.; Murphy, William J.; Krieg, Edward F.; Ghent, Robert M.; Michael, Kevin L.; Stefanson, Earl W.; Ahroon, William A.
2017-01-01
The National Institute for Occupational Safety and Health (NIOSH) sponsored tests of three earplug fit-test systems (NIOSH HPD Well-Fit™, Michael & Associates FitCheck, and Honeywell Safety Products VeriPRO®). Each system was compared to laboratory-based real-ear attenuation at threshold (REAT) measurements in a sound field according to ANSI/ASA S12.6-2008 at the NIOSH, Honeywell Safety Products, and Michael & Associates testing laboratories. An identical study was conducted independently at the U.S. Army Aeromedical Research Laboratory (USAARL), which provided its data for inclusion in this report. The Howard Leight Airsoft premolded earplug was tested with twenty subjects at each of the four participating laboratories. The occluded fit of the earplug was maintained during testing with a sound-field-based laboratory REAT system as well as all three headphone-based fit-test systems. The Michael & Associates lab had the highest average A-weighted attenuations and the smallest standard deviations; the NIOSH lab had the lowest average attenuations and the largest standard deviations. Differences in octave-band attenuations between each fit-test system and the American National Standards Institute (ANSI) sound-field method were calculated (Attenfit-test − AttenANSI). A-weighted attenuations measured with the FitCheck and HPD Well-Fit systems agreed within approximately ±2 dB of the ANSI sound-field method, but A-weighted attenuations measured with the VeriPRO system underestimated the ANSI laboratory attenuations. For each of the fit-test systems, the average A-weighted attenuation across the four laboratories was not significantly greater than the average of the ANSI sound-field method. Standard deviations for residual attenuation differences were about ±2 dB for FitCheck and HPD Well-Fit, compared to ±4 dB for VeriPRO. Individual labs exhibited agreement ranging from less than a decibel to as much as a 9.4 dB difference between fit-test and ANSI REAT estimates.
Factors such as the experience of the study participants and test administrators, as well as the psychometric tasks used by the fit-test systems, are suggested as possible contributors to the observed results. PMID:27786602
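The residual attenuation differences (Attenfit-test − AttenANSI) reported above reduce to a per-band subtraction plus summary statistics; a minimal sketch with hypothetical band values, not data from the study:

```python
# Hypothetical octave-band attenuations (dB) for one subject; the band
# values below are illustrative, not measurements from the study.
bands_hz = [125, 250, 500, 1000, 2000, 4000, 8000]
atten_fit = [28.0, 30.5, 31.0, 29.5, 33.0, 38.0, 40.5]    # fit-test system
atten_ansi = [27.0, 31.0, 30.0, 30.5, 31.5, 36.5, 41.0]   # ANSI sound-field REAT

# Residual per band: Attenfit-test - AttenANSI
residuals = [f - a for f, a in zip(atten_fit, atten_ansi)]
mean_resid = sum(residuals) / len(residuals)

# Sample standard deviation of the residuals
var = sum((r - mean_resid) ** 2 for r in residuals) / (len(residuals) - 1)
sd_resid = var ** 0.5
```

A mean residual near zero with a small standard deviation is what "±2 dB agreement" amounts to in this framing.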
An Experimental Study on the Impact of Different-frequency Elastic Waves on Water Retention Curve
NASA Astrophysics Data System (ADS)
Deng, J. H.; Dai, J. Y.; Lee, J. W.; Lo, W. C.
2017-12-01
ABSTEACTOver the past few decades, theoretical and experimental studies on the connection between elastic wave attributes and the physical properties of a fluid-bearing porous medium have attracted the attention of many scholars in fields of porous medium flow and hydrogeology. It has been previously determined that the transmission of elastic waves in a porous medium containing two immiscible fluids will have an effect on the water retention curve, but it has not been found that the water retention curve will be affected by the frequency of elastic vibration waves or whether the effect on the soil is temporary or permanent. This research is based on a sand box test in which the soil is divided into three layers (a lower, middle, and upper layer). In this case, we discuss different impacts on the water retention curve during the drying process under sound waves (elastic waves) subject to three frequencies (150Hz, 300Hz, and 450Hz), respectively. The change in the water retention curve before and after the effect is then discussed. In addition, how sound waves affect the water retention curve at different depths is also observed. According to the experimental results, we discover that sound waves can cause soil either to expand or to contract. When the soil is induced to expand due to sound waves, it can contract naturally and return to the condition it was in before the influence of the sound waves. On the contrary, when the soil is induced to contract, it is unable to return to its initial condition. Due to the results discussed above, it is suggested that sound waves causing soil to expand have a temporary impact while those causing soil to contract have a permanent impact. In addition, our experimental results show how sound waves affect the water retention curve at different depths. The degree of soil expansion and contraction caused by the sound waves will differ at various soil depths. 
Nevertheless, the expanding or contracting of soil is only subject to the frequency of sound waves. Key words: Elastic waves, Water retention curve, Sand box test.
Grieco-Calub, Tina M.; Litovsky, Ruth Y.
2010-01-01
Objectives: To measure sound source localization in children who have sequential bilateral cochlear implants (BICIs); to determine if localization accuracy correlates with performance on a right-left discrimination task (i.e., spatial acuity); to determine if there is a measurable bilateral benefit on a sound source identification task (i.e., localization accuracy) by comparing performance under bilateral and unilateral listening conditions; and to determine if sound source localization continues to improve with longer durations of bilateral experience. Design: Two groups of children participated in this study: 21 children who received BICIs in sequential procedures (5–14 years old) and 7 typically developing children with normal hearing (NH; 5 years old). Testing was conducted in a large sound-treated booth with loudspeakers positioned on a horizontal arc with a radius of 1.2 m. Children participated in two experiments that assessed spatial hearing skills. Spatial hearing acuity was assessed with a discrimination task in which listeners determined whether a sound source was presented on the right or left side of center; the smallest angle at which performance on this task was reliably above chance is the minimum audible angle (MAA). Sound localization accuracy was assessed with a sound source identification task in which children identified the perceived position of the sound source in a multi-loudspeaker array (7 or 15 loudspeakers); errors are quantified using the root-mean-square (RMS) error. Results: Sound localization accuracy was highly variable among the children with BICIs, with RMS errors ranging from 19° to 56°. Performance of the NH group, with RMS errors ranging from 9° to 29°, was significantly better. Within the BICI group, 11 of 21 children had smaller RMS errors in the bilateral than in the unilateral listening condition, indicating bilateral benefit.
There was a significant correlation between spatial acuity and sound localization accuracy (R² = 0.68, p < 0.01), suggesting that children who achieve small RMS errors tend to have the smallest MAAs. Although there was large intersubject variability, testing of 11 children in the BICI group at two sequential visits revealed a subset who showed improvement in spatial hearing skills over time. Conclusions: A subset of children who use sequential BICIs can acquire sound localization abilities, even after long intervals between activation of hearing in the first- and second-implanted ears. This suggests that children who receive activation of the second implant later in life may be capable of developing spatial hearing abilities. The large variability in performance among the children with BICIs suggests that maturation of sound localization abilities may depend on individual factors such as age at implantation and chronological age. PMID:20592615
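The RMS error used above is computed per listener from perceived versus actual source angles; a minimal sketch with made-up trial data:

```python
import math

def rms_error(responses_deg, targets_deg):
    """Root-mean-square localization error (degrees) across trials:
    sqrt of the mean squared difference between the loudspeaker a
    listener picked and the one that actually played."""
    sq = [(r - t) ** 2 for r, t in zip(responses_deg, targets_deg)]
    return math.sqrt(sum(sq) / len(sq))

# Loudspeaker positions and one listener's responses (illustrative):
targets = [-45, -15, 0, 15, 45]
responses = [-30, -15, 10, 30, 60]
err = rms_error(responses, targets)
```

A perfect listener scores 0°; guessing across a wide array drives the RMS error toward the values seen in the poorer-performing BICI users.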
Auditory sequence analysis and phonological skill
Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E.; Turton, Stuart; Griffiths, Timothy D.
2012-01-01
This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence. PMID:22951739
Bubbles that Change the Speed of Sound
NASA Astrophysics Data System (ADS)
Planinšič, Gorazd; Etkina, Eugenia
2012-11-01
The influence of bubbles on sound has long attracted the attention of physicists. In his 1920 book, Sir William Bragg described sound absorption caused by foam in a glass of beer tapped by a spoon. Frank S. Crawford described and analyzed the change in the pitch of sound in a similar experiment and named the phenomenon the "hot chocolate effect." In this paper we describe a simple and robust experiment that allows an easy audio and visual demonstration of the same effect (unfortunately without the chocolate) and offers several possibilities for student investigations. In addition to demonstrating the above effect, the experiments described below provide an excellent opportunity for students to devise and test explanations with simple equipment.
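The pitch drop behind the hot chocolate effect is usually explained with Wood's relation for the low-frequency sound speed in a bubbly liquid; a minimal sketch using textbook property values for air and water (the specific numbers are standard approximations, not measurements from this experiment):

```python
import math

def wood_speed(phi, rho_l=998.0, K_l=2.2e9, rho_g=1.2, K_g=1.4e5):
    """Wood's low-frequency sound speed in a bubbly liquid.
    phi: gas volume fraction; densities in kg/m^3, bulk moduli in Pa.
    K_g ~ gamma * p_atm for adiabatic air; the defaults are textbook
    values for air bubbles in water at room conditions."""
    # Density is a volume-weighted average of the two phases
    rho_mix = (1 - phi) * rho_l + phi * rho_g
    # Compressibilities (1/K) also add in proportion to volume fraction
    compressibility = (1 - phi) / K_l + phi / K_g
    return math.sqrt(1.0 / (rho_mix * compressibility))

c_pure = wood_speed(0.0)      # bubble-free water, roughly 1.5 km/s
c_bubbly = wood_speed(0.01)   # just 1% air by volume collapses the speed
```

Because the gas supplies nearly all the compressibility while the liquid supplies nearly all the inertia, even a tiny bubble fraction slashes the sound speed, which lowers the resonant pitch of the vessel just as Crawford described.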
NASA Astrophysics Data System (ADS)
Orfali, Wasim A.
This article examines the acoustic properties of a polyurethane (PU) host material to which small amounts of carbon nanotubes (CNTs) and silicon-oxide nanopowder (S-type, P-type) have been added. Sound absorption was investigated for CNT and/or nano-silica powder added at concentrations up to 2 wt.% within the PU composition, over the frequency range up to 1600 Hz. Sound transmission loss of the samples was measured using a large impedance tube. The tests showed that adding 0.2 wt.% silicon-oxide nanopowder and 0.35 wt.% carbon nanotubes to the polyurethane composition improved sound transmission loss (sound absorption) by up to 80 dB relative to the pure polyurethane foam sample.
Digital Sound Encryption with Logistic Map and Number Theoretic Transform
NASA Astrophysics Data System (ADS)
Satria, Yudi; Gabe Rizky, P. H.; Suryadi, MT
2018-03-01
Encrypting digital sound in the frequency domain has known limitations. A Number Theoretic Transform based on the field GF(2^521 − 1) improves on and addresses this problem. The sound-encryption algorithm is based on a combination of a chaos function and the Number Theoretic Transform; the chaos function used in this paper is the logistic map. Trials and simulations were conducted using five different digital sound test files in WAV format, each simulated at least 100 times. The resulting key stream is random, as verified by 15 NIST randomness tests. The key space is very large, exceeding 10^469. The encryption speed of the algorithm is only slightly affected by the Number Theoretic Transform.
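The logistic-map keystream at the heart of such chaos-based schemes is easy to sketch; the byte-extraction rule and parameters below are common illustrative choices, not necessarily the paper's exact algorithm, and the NTT stage is omitted:

```python
def logistic_keystream(x0, r=3.99, n=16, burn_in=100):
    """Generate n key bytes from the logistic map x <- r*x*(1-x).
    x0 in (0, 1) acts as the secret key; r near 4 keeps the map in its
    chaotic regime. Scaling the state to 0-255 is one common byte-
    extraction choice, assumed here for illustration."""
    x = x0
    for _ in range(burn_in):        # discard the transient iterations
        x = r * x * (1 - x)
    stream = []
    for _ in range(n):
        x = r * x * (1 - x)
        stream.append(int(x * 256) % 256)
    return bytes(stream)

# XOR-style use of the keystream on a short sample buffer:
key = logistic_keystream(0.3456789)
cipher = bytes(b ^ k for b, k in zip(b"sound sample", key))
```

Sensitivity to x0 is what makes the key space large: two seeds differing in the seventh decimal place diverge to unrelated keystreams after the burn-in.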
SPAIDE: A Real-time Research Platform for the Clarion CII/90K Cochlear Implant
NASA Astrophysics Data System (ADS)
Van Immerseel, L.; Peeters, S.; Dykmans, P.; Vanpoucke, F.; Bracke, P.
2005-12-01
SPAIDE (sound-processing algorithm integrated development environment) is a real-time platform from Advanced Bionics Corporation (Sylmar, Calif, USA) that facilitates advanced research on sound-processing and electrical-stimulation strategies with the Clarion CII and 90K implants. The platform is intended for laboratory testing. SPAIDE is conceptually based on a clear separation of sound-processing and stimulation strategies and, in particular, on the distinction between sound-processing channels, stimulation channels, and electrode contacts. The development environment has a user-friendly interface for specifying sound-processing and stimulation strategies, and includes the possibility of simulating the electrical stimulation. SPAIDE allows real-time sound capture from file or from the audio input on a PC, sound processing and application of the stimulation strategy, and streaming of the results to the implant. The platform covers a broad range of research applications, from noise reduction and the mimicking of normal hearing, through complex (simultaneous) stimulation strategies, to psychophysics. The hardware setup consists of a personal computer, an interface board, and a speech processor. The software is both expandable and to a great extent reusable in other applications.
Snyder, Joel S; Weintraub, David M
2013-07-01
An important question is the extent to which declines in memory over time are due to passive loss or active interference from other stimuli. The purpose of the present study was to determine the extent to which implicit memory effects in the perceptual organization of sound sequences are subject to loss and interference. Toward this aim, we took advantage of two recently discovered context effects in the perceptual judgments of sound patterns, one that depends on stimulus features of previous sounds and one that depends on the previous perceptual organization of these sounds. The experiments measured how listeners' perceptual organization of a tone sequence (test) was influenced by the frequency separation, or the perceptual organization, of the two preceding sequences (context1 and context2). The results demonstrated clear evidence for loss of context effects over time but little evidence for interference. However, they also revealed that context effects can be surprisingly persistent. The robust effects of loss, followed by persistence, were similar for the two types of context effects. We discuss whether the same auditory memories might contain information about basic stimulus features of sounds (i.e., frequency separation), as well as the perceptual organization of these sounds.
Memory for pictures and sounds: independence of auditory and visual codes.
Thompson, V A; Paivio, A
1994-09-01
Three experiments examined the mnemonic independence of auditory and visual nonverbal stimuli in free recall. Stimulus lists consisted of (1) pictures, (2) the corresponding environmental sounds, or (3) picture-sound pairs. In Experiment 1, free recall was tested under three learning conditions: standard intentional, intentional with a rehearsal-inhibiting distracter task, or incidental with the distracter task. In all three groups, recall was best for the picture-sound items. In addition, recall for the picture-sound stimuli appeared to be additive relative to pictures or sounds alone when the distracter task was used. Experiment 2 included two additional groups: In one, two copies of the same picture were shown simultaneously; in the other, two different pictures of the same concept were shown. There was no difference in recall among any of the picture groups; in contrast, recall in the picture-sound condition was greater than recall in either single-modality condition. However, doubling the exposure time in a third experiment resulted in additively higher recall for repeated pictures with different exemplars than ones with identical exemplars. The results are discussed in terms of dual coding theory and alternative conceptions of the memory trace.
Air-borne and tissue-borne sensitivities of bioacoustic sensors used on the skin surface.
Zañartu, Matías; Ho, Julio C; Kraman, Steve S; Pasterkamp, Hans; Huber, Jessica E; Wodicka, George R
2009-02-01
Measurements of body sounds on the skin surface have been widely used in the medical field and continue to be a topic of current research, ranging from the diagnosis of respiratory and cardiovascular diseases to the monitoring of voice dosimetry. These measurements are typically made using light-weight accelerometers and/or air-coupled microphones attached to the skin. Although normally neglected, air-borne sounds generated by the subject or other sources of background noise can easily corrupt such recordings, which is particularly critical in the recording of voiced sounds on the skin surface. In this study, the sensitivity of commonly used bioacoustic sensors to air-borne sounds was evaluated and compared with their sensitivity to tissue-borne body sounds. To delineate the sensitivity to each pathway, the sensors were first tested in vitro and then on human subjects. The results indicated that, in general, the air-borne sensitivity is sufficiently high to significantly corrupt body sound signals. In addition, the air-borne and tissue-borne sensitivities can be used to discriminate between these components. Although the study is focused on the evaluation of voiced sounds on the skin surface, an extension of the proposed methods to other bioacoustic applications is discussed.
Development of Prototype of Whistling Sound Counter based on Piezoelectric Bone Conduction
NASA Astrophysics Data System (ADS)
Mori, Mikio; Ogihara, Mitsuhiro; Kyuu, Ten; Taniguchi, Shuji; Kato, Shozo; Araki, Chikahiro
Recently, some professional whistlers have set up music schools that teach musical whistling. As in singing, in musical whistling the sound should not break, even when the whistling continues for more than 3 min. To this end, it is advisable to practice the "Pii" sound, whistling it continuously 100 times at the same pitch. However, when practicing alone, a whistler finds it difficult to count his or her own whistling sounds. In this paper, we propose a whistling sound counter based on piezoelectric bone conduction. The system consists of five parts; the gain of the amplifier section is variable, and the center frequency (f0) of the band-pass filter (BPF) section is also variable. We developed a prototype of the system and tested it by counting the whistling sounds of nine people with the proposed system. The proposed system showed good performance in a noisy environment. We also propose an examination system for awarding grades in musical whistling, which administers the musical-whistling licensing examination on a personal computer. The proposed system can be used to administer the 5th-grade exam in musical whistling.
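The counter's band-pass-then-count idea can be sketched with the Goertzel algorithm standing in for the variable-f0 BPF stage; the frame length, frequency, and threshold below are illustrative choices, not the prototype's actual parameters:

```python
import math

def goertzel_power(samples, f0, fs):
    """Signal power near frequency f0 via the Goertzel algorithm, a
    cheap single-bin stand-in for the counter's band-pass filter."""
    n = len(samples)
    k = round(n * f0 / fs)              # nearest DFT bin to f0
    w = 2 * math.pi * k / n
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def count_whistles(frames, f0, fs, threshold):
    """Count upward threshold crossings of band energy over successive
    frames, i.e. one count per continuous whistle burst."""
    count, active = 0, False
    for frame in frames:
        loud = goertzel_power(frame, f0, fs) > threshold
        if loud and not active:
            count += 1
        active = loud
    return count
```

Counting rising edges rather than loud frames is what keeps one sustained "Pii" from registering as many counts.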
Chung, Rick
2012-06-01
Patient empowerment has increased the demand for direct-to-consumer (DTC) laboratory testing. Multiple professional societies and advocacy groups have raised concerns over how DTC laboratory testing is offered to consumers without proper physician oversight. Using telehealth protocols and standards established by professional health organizations and state regulators, physician telehealth oversight of DTC laboratory test ordering can increase patient access to healthcare services. Working within the distribution channel of most DTC laboratory testing, physician telehealth services can oversee DTC laboratory testing in a safe and medically sound manner, ensuring proper test interpretation, post-test counseling, and information collaboration, so that DTC laboratory testing remains a reliable and convenient option for consumers.
Infrared Imagery of Solid Rocket Exhaust Plumes
NASA Technical Reports Server (NTRS)
Moran, Robert P.; Houston, Janice D.
2011-01-01
The Ares I Scale Model Acoustic Test program consisted of a series of 18 solid rocket motor static firings, simulating the liftoff conditions of the Ares I five-segment Reusable Solid Rocket Motor vehicle. Primary test objectives included acquiring acoustic and pressure data that will be used to validate analytical models for the prediction of Ares I liftoff acoustics and ignition overpressure environments. The test article consisted of a 5% scale Ares I vehicle and launch tower mounted on the Mobile Launch Pad. The testing also incorporated several water sound suppression systems. Infrared imagery was employed during the solid rocket testing to support the validation or improvement of analytical models, and to identify correlations between rocket plume size or shape and the accompanying measured level of noise suppression obtained by the water sound suppression systems.
Recent Enhancements to the NASA Langley Structural Acoustics Loads and Transmission (SALT) Facility
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Cabell, Randolph H.; Allen, Albert R.
2013-01-01
The Structural Acoustics Loads and Transmission (SALT) facility at the NASA Langley Research Center comprises an anechoic room and a reverberant room, and may act as a transmission-loss suite when test articles are mounted in a window connecting the two rooms. In the latter configuration, the reverberant room acts as the noise source side and the anechoic room as the receiver side. The noise generation system used for qualification testing in the reverberant room was previously shown to achieve a maximum overall sound pressure level of 141 dB. This is considered to be marginally adequate for generating the sound pressure levels typically required for launch vehicle payload qualification testing. Recent enhancements to the noise generation system increased the maximum overall sound pressure level to 154 dB through the use of two airstream modulators coupled to 35 Hz and 160 Hz horns. This paper documents the acoustic performance of the enhanced noise generation system for a variety of relevant test spectra. Additionally, it demonstrates the capability of the SALT facility to conduct transmission loss and absorption testing in accordance with ASTM and ISO standards, respectively. A few examples of test capabilities are shown, including transmission loss testing of simple unstiffened and built-up structures and measurement of the diffuse-field absorption coefficient of a fibrous acoustic blanket.
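The two-room transmission-loss configuration described above follows the standard source-room/receiver-room relation; a generic sketch with illustrative numbers, not SALT measurements:

```python
import math

def transmission_loss(L_source, L_receiver, S, A):
    """Transmission loss (dB) from the classic two-room relation
    TL = L1 - L2 + 10*log10(S/A), where L1 and L2 are the average
    sound pressure levels in the source and receiving rooms, S is the
    test-panel area (m^2), and A is the receiving-room absorption
    (m^2 sabins). Shown as a generic ASTM E90-style reduction, not the
    SALT facility's specific procedure."""
    return L_source - L_receiver + 10 * math.log10(S / A)

# A 2 m^2 panel, 100 dB source room, 68 dB receiver room, 20 sabins:
tl = transmission_loss(100.0, 68.0, S=2.0, A=20.0)  # -> 22 dB
```

The 10·log10(S/A) term corrects the raw level difference for how live the receiving room is, so the result characterizes the panel rather than the rooms.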
NASA Technical Reports Server (NTRS)
Luu, D.
1999-01-01
This is the Performance Verification Report, AMSU-A1 Antenna Drive Subsystem, P/N 1331720-2, S/N 106, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A). The antenna drive subsystem of the METSAT AMSU-A1, S/N 106, P/N 1331720-2, completed acceptance testing per A-ES Test Procedure AE-26002/lD. The tests included: Scan Motion and Jitter, Pulse Load Bus Peak Current and Rise Time, Resolver Reading and Position Error, Gain/Phase Margin, and Operational Gain Margin. The drive motors and electronic circuitry were also tested at the component level. The drive motor tests included: Starting Torque Test, Motor Commutation Test, Resolver Operation/No-Load Speed Test, and Random Vibration. The electronic circuitry was tested at the Circuit Card Assembly (CCA) level of production; each test exercised all circuit functions. The transistor assembly was tested during the W3 cable assembly (1356941-1) test.
Snoring classified: The Munich-Passau Snore Sound Corpus.
Janott, Christoph; Schmitt, Maximilian; Zhang, Yue; Qian, Kun; Pandit, Vedhas; Zhang, Zixing; Heiser, Clemens; Hohenhorst, Winfried; Herzog, Michael; Hemmert, Werner; Schuller, Björn
2018-03-01
Snoring can be excited in different locations within the upper airways during sleep. It was hypothesised that the excitation locations are correlated with distinct acoustic characteristics of the snoring noise. To verify this hypothesis, a database of snore sounds was developed, labelled with the location of sound excitation. Video and audio recordings taken during drug-induced sleep endoscopy (DISE) examinations from three medical centres were semi-automatically screened for snore events, which subsequently were classified by ENT experts into four classes based on the VOTE classification. The resulting dataset, containing 828 snore events from 219 subjects, was split into Train, Development, and Test sets. An SVM classifier was trained using low-level descriptors (LLDs) related to energy, spectral features, mel-frequency cepstral coefficients (MFCCs), formants, voicing, harmonic-to-noise ratio (HNR), spectral harmonicity, pitch, and microprosodic features. An unweighted average recall (UAR) of 55.8% was achieved using the full set of LLDs including formants; the best-performing subset was the MFCC-related LLDs. A strong difference in performance was observed between permutations of the train, development, and test partitions, which may be caused by the relatively low number of subjects in the smaller classes of the strongly unbalanced data set. A database of snoring sounds is presented in which events are classified according to their sound excitation location based on objective criteria and verifiable video material. With the database, it could be demonstrated that machine classifiers can distinguish different excitation locations of snoring sounds in the upper airway based on acoustic parameters. Copyright © 2018 Elsevier Ltd. All rights reserved.
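The UAR metric quoted above is simply the mean of per-class recalls, which is why it is preferred over plain accuracy for strongly unbalanced class distributions; a small self-contained sketch with toy labels, not the corpus data:

```python
def unweighted_average_recall(y_true, y_pred):
    """UAR: the mean of per-class recalls, so every class counts
    equally regardless of how many examples it has."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        hits = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(hits / len(idx))
    return sum(recalls) / len(recalls)

# Heavily unbalanced toy labels, loosely mirroring the VOTE classes
# (V = velum, O = oropharynx, T = tongue base, E = epiglottis):
y_true = ["V"] * 6 + ["O"] * 2 + ["T"] * 1 + ["E"] * 1
y_pred = ["V"] * 6 + ["V", "O"] + ["T"] + ["V"]
uar = unweighted_average_recall(y_true, y_pred)
```

A classifier that always predicted the majority class "V" would score 60% accuracy here but only 25% UAR, which is exactly the failure mode UAR exposes.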
Development of a tele-stethoscope and its application in pediatric cardiology.
Hedayioglu, F L; Mattos, S S; Moser, L; de Lima, M E
2007-01-01
Over the years, many attempts have been made to develop special stethoscopes for the teaching of auscultation. The objective of this article is to report on the experience with the development and implementation of an electronic stethoscope and a virtual library of cardiac sounds. There were four stages to this project: (1) the building of the prototype to acquire, filter and amplify the cardiac sounds, (2) the development of a software program to record, reproduce and visualize them, (3) the testing of the prototype in a clinical scenario, and (4) the development of an internet site to store and display the sounds collected. The first two stages are now complete. The prototype underwent an initial evaluation in a clinical scenario within the Unit and during virtual out-patient clinical sessions. One hundred auscultations were recorded during these tests. They were reviewed and discussed on-line by a panel of experienced cardiologists during the sessions. Although the sounds were considered "satisfactory" for diagnostic purposes by the cardiology team, they identified some qualitative differences in the electronically recorded auscultations, such as a higher pitch of the recorded sounds. Prospective clinical studies are now being conducted to further evaluate the interference of the electronic device with the physicians' ability to diagnose different cardiac conditions. An internet site (www.caduceusvirtual.com.br/auscultaped) was developed to host these cardiac auscultations. It serves as a library of cardiac sounds, catalogued by pathology, and already contains examples of auscultations of the majority of common congenital heart lesions, such as septal defects and valvar lesions.
NASA Technical Reports Server (NTRS)
Gelder, T. F.; Soltis, R. F.
1975-01-01
Narrowband analysis revealed grossly similar sound pressure level spectra in each facility. Blade passing frequency (BPF) noise and multiple pure tone (MPT) noise were superimposed on a broadband (BB) base noise. From one-third-octave-bandwidth sound power analyses, the BPF noise (harmonics combined) and the MPT noise (harmonics combined, excepting BPFs) agreed between facilities within 1.5 dB or less over the range of speeds and flows tested. Detailed noise and aerodynamic performance data are also presented.
James Webb Space Telescope's ISIM Passes Severe-Sound Test
2017-12-08
The ISIM structure wrapped up and waiting for sound testing in the acoustics chamber at NASA Goddard. Credits: NASA/Desiree Stover
Dorman, Michael F; Natale, Sarah; Loiselle, Louise
2018-03-01
Sentence understanding scores for patients with cochlear implants (CIs) are relatively high when tested in quiet; however, they plummet with the addition of noise. The aims were to assess, for patients with CIs (MED-EL), (1) the value to speech understanding of two new, noise-reducing microphone settings and (2) the effect of the microphone settings on sound source localization. Single-subject, repeated-measures design. For tests of speech understanding, repeated measures on (1) number of CIs (one, two), (2) microphone type (omni, natural, adaptive beamformer), and (3) type of noise (restaurant, cocktail party). For sound source localization, repeated measures on type of signal (low-pass [LP], high-pass [HP], broadband noise). Ten listeners, ranging in age from 48 to 83 yr (mean = 57 yr), participated in this prospective study. Speech understanding was assessed in two noise environments using monaural and bilateral CIs fit with three microphone types. Sound source localization was assessed using three microphone types. In Experiment 1, sentence understanding scores (in terms of percent words correct) were obtained in quiet and in noise. For each patient, noise was first added to the signal to drive performance off of the ceiling in the bilateral CI-omni microphone condition. The other conditions were then administered at that signal-to-noise ratio in quasi-random order. In Experiment 2, sound source localization accuracy was assessed for three signal types using a 13-loudspeaker array over a 180° arc. The dependent measure was root-mean-square error. Both the natural and adaptive microphone settings significantly improved speech understanding in the two noise environments. The magnitude of the improvement varied between 16 and 19 percentage points for tests conducted in the restaurant environment and between 19 and 36 percentage points for tests conducted in the cocktail party environment.
In the restaurant and cocktail party environments, both the natural and adaptive settings, when implemented on a single CI, allowed scores that were as good as, or better than, scores in the bilateral omni test condition. Sound source localization accuracy was unaltered by either the natural or adaptive settings for LP, HP, or broadband noise stimuli. The data support the use of the natural microphone setting as a default setting. The natural setting (1) provides better speech understanding in noise than the omni setting, (2) does not impair sound source localization, and (3) retains low-frequency sensitivity to signals from the rear. Moreover, bilateral CIs equipped with adaptive beamforming technology can engender speech understanding scores in noise that fall only a little short of scores for a single CI in quiet. American Academy of Audiology
Are minidisc recorders adequate for the study of respiratory sounds?
Kraman, Steve S; Wodicka, George R; Kiyokawa, Hiroshi; Pasterkamp, Hans
2002-01-01
Digital audio tape (DAT) recorders have become the de facto gold-standard recording devices for lung sounds. Sound recorded on DAT is compact-disk (CD) quality, with adequate sensitivity from below 20 Hz to above 20 kHz. However, DAT recorders have drawbacks. Although small, they are relatively heavy, the recording mechanism is complex and delicate, and finding one desired track out of many is inconvenient. A more recent development in portable recording devices is the minidisc (MD) recorder. These recorders are widely available, inexpensive, small and light, rugged, and mechanically simple, and they record digital data in tracks that may be named and accessed directly. Minidiscs hold as much recorded sound as a compact disk but in about one-fifth of the recordable area. The data compression is achieved by a technique known as adaptive transform acoustic coding for minidisc (ATRAC). This coding technique makes decisions about which components of the sound would not be heard by a human listener and discards the digital information that represents those sounds. Most of this compression takes place on sounds above 5.5 kHz. As the intended use of these recorders is the storage and reproduction of music, it was unknown whether ATRAC would discard or distort significant portions of typical lung sound signals. We determined the suitability of MD recorders for respiratory sound research by comparing a variety of normal and pathologic lung sounds that were digitized directly into a computer and also after recording by a DAT recorder and two different MD recorders (Sharp and Sony). We found that the frequency spectra and waveforms of respiratory sounds were not distorted in any important way by recording on the two MD recorders tested.
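One way to gauge whether the >5.5 kHz region that ATRAC compresses aggressively matters for a given recording is to measure the fraction of spectral energy above that cutoff; a minimal pure-Python sketch on a synthetic signal, not data from the study:

```python
import math

def band_energy_fraction(samples, fs, cutoff_hz):
    """Fraction of total spectral energy above cutoff_hz, computed
    with a plain DFT over the positive-frequency bins (DC excluded).
    Fine for short illustrative signals; use an FFT for real work."""
    n = len(samples)
    total = above = 0.0
    for k in range(1, n // 2):
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(samples))
        im = sum(-x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(samples))
        p = re * re + im * im
        total += p
        if k * fs / n > cutoff_hz:
            above += p
    return above / total

# Mostly low-frequency "lung-sound-like" tone with a weak 9 kHz component:
fs = 44100
sig = [math.sin(2 * math.pi * 300 * i / fs) + 0.1 * math.sin(2 * math.pi * 9000 * i / fs)
       for i in range(441)]
frac = band_energy_fraction(sig, fs, 5500.0)
```

For a signal like this, only about 1% of the energy lies above 5.5 kHz, which is consistent with the study's finding that ATRAC's high-band compression left the lung-sound spectra essentially intact.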
Auditory Brainstem Response to Complex Sounds Predicts Self-Reported Speech-in-Noise Performance
ERIC Educational Resources Information Center
Anderson, Samira; Parbery-Clark, Alexandra; White-Schwoch, Travis; Kraus, Nina
2013-01-01
Purpose: To compare the ability of the auditory brainstem response to complex sounds (cABR) to predict subjective ratings of speech understanding in noise on the Speech, Spatial, and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004) relative to the predictive ability of the Quick Speech-in-Noise test (QuickSIN; Killion, Niquette,…
NASA Test Flights Examine Effect of Atmospheric Turbulence on Sonic Booms
2016-07-20
One of three microphone arrays positioned strategically along the ground at Edwards Air Force Base, California, sits ready to collect sound signatures from sonic booms created by a NASA F/A-18 during the SonicBAT flight series. The arrays collected the sound signatures of booms that had traveled through atmospheric turbulence before reaching the ground.
ERIC Educational Resources Information Center
Marks, William J.; Jones, W. Paul; Loe, Scott A.
2013-01-01
This study investigated the use of compressed speech as a modality for assessment of the simultaneous processing function for participants with visual impairment. A 24-item compressed speech test was created using a sound editing program to randomly remove sound elements from aural stimuli, holding pitch constant, with the objective of emulating the…
The Perception of Second Language Sounds in Early Bilinguals: New Evidence from an Implicit Measure
ERIC Educational Resources Information Center
Navarra, Jordi; Sebastian-Galles, Nuria; Soto-Faraco, Salvador
2005-01-01
Previous studies have suggested that nonnative (L2) linguistic sounds are accommodated to native language (L1) phonemic categories. However, this conclusion may be compromised by the use of explicit discrimination tests. The present study provides an implicit measure of L2 phoneme discrimination in early bilinguals (Catalan and Spanish).…
Global Ocean Forecast System V3.0 Validation Test Report Addendum: Addition of the Diurnal Cycle
2010-11-05
upper ocean forming a thin mixed layer and have a profound impact on the sound speed profile and surface duct (e.g., Urick, 1983). When the solar...7320--10-9236. Urick, R.J., 1983: Principles of Underwater Sound, 3rd Edition. Peninsula Publishing, Los Altos, California, 423 pp.
ERIC Educational Resources Information Center
Lee, Kwangyhuyn; Weimer, Debbi
2002-01-01
Michigan is designing a new accountability system that combines high standards and statewide testing within a school accreditation framework. Sound assessment techniques are critical if the accountability system is to provide relevant information to schools and policymakers. One important component of a sound assessment system is measurement of…
ERIC Educational Resources Information Center
Peter, Beate
2012-01-01
This study tested the hypothesis that children with speech sound disorder (SSD) have generalized slowed motor speeds. It evaluated associations among oral and hand motor speeds and measures of speech (articulation and phonology) and language (receptive vocabulary, sentence comprehension, sentence imitation), in 11 children with moderate to severe SSD…
Supersonic Retropropulsion Flight Test Concepts
NASA Technical Reports Server (NTRS)
Post, Ethan A.; Dupzyk, Ian C.; Korzun, Ashley M.; Dyakonov, Artem A.; Tanimoto, Rebekah L.; Edquist, Karl T.
2011-01-01
NASA's Exploration Technology Development and Demonstration Program has proposed plans for a series of three sub-scale flight tests at Earth for supersonic retropropulsion, a candidate decelerator technology for future, high-mass Mars missions. The first flight test in this series is intended to be a proof-of-concept test, demonstrating successful initiation and operation of supersonic retropropulsion at conditions that replicate the relevant physics of the aerodynamic-propulsive interactions expected in flight. Five sub-scale flight test article concepts, each designed for launch on sounding rockets, have been developed in consideration of this proof-of-concept flight test. Commercial, off-the-shelf components are utilized as much as possible in each concept. The design merits of the concepts are compared along with their predicted performance for a baseline trajectory. The results of a packaging study and performance-based trade studies indicate that a sounding rocket is a viable launch platform for this proof-of-concept test of supersonic retropropulsion.
Choi, Yura; Park, Jeong-Eun; Jeong, Jong Seob; Park, Jung-Keug; Kim, Jongpil; Jeon, Songhee
2016-10-01
Mesenchymal stem cells (MSCs) have shown considerable promise as an adaptable cell source for use in tissue engineering and other therapeutic applications. The aims of this study were to develop methods to test the hypothesis that human MSCs could be differentiated using sound wave stimulation alone and to identify the underlying mechanism. Human bone marrow (hBM)-MSCs were stimulated with sound waves (1 kHz, 81 dB) for 7 days and the expression of neural markers was analyzed. Sound waves induced neural differentiation of hBM-MSCs at 1 kHz and 81 dB but not at 1 kHz and 100 dB. To determine the signaling pathways involved in the neural differentiation of hBM-MSCs by sound wave stimulation, we examined Pyk2 and CREB phosphorylation. Sound wave stimulation induced an increase in the phosphorylation of Pyk2 and CREB at 45 min and 90 min, respectively, in hBM-MSCs. To identify the upstream activator of Pyk2, we examined the intracellular calcium source released by sound wave stimulation. When ryanodine was used as a ryanodine receptor antagonist, sound wave-induced calcium release was suppressed. Moreover, pre-treatment with a Pyk2 inhibitor, PF431396, prevented the phosphorylation of Pyk2 and suppressed sound wave-induced neural differentiation in hBM-MSCs. These results suggest that specific sound wave stimulation could be used as a neural differentiation inducer of hBM-MSCs.
NASA Astrophysics Data System (ADS)
Alkilani, Amjad; Shirkhodaie, Amir
2013-05-01
Handling, manipulation, and placement of objects in the environment, hereon called Human-Object Interaction (HOI), generate sounds. Such sounds are readily identifiable by human hearing. However, in the presence of background environmental noise, recognition of minute HOI sounds is challenging, though vital for the improvement of multi-modality sensor data fusion in Persistent Surveillance Systems (PSS). Identification of HOI sound signatures can be used as a precursor to the detection of pertinent threats that other sensor modalities may fail to detect. In this paper, we present a robust method for the detection and classification of HOI events via clustering of features extracted from training HOI acoustic sound waves. In this approach, salient sound events are first identified and segmented from the background via a sound energy tracking method. After this segmentation, the frequency spectral pattern of each sound event is modeled and its features are extracted to form a feature vector for training. To reduce the dimensionality of the training feature space, a Principal Component Analysis (PCA) technique is employed. To expedite classification of test feature vectors, kd-tree and Random Forest classifiers are trained for rapid classification of the training sound waves; each classifier employs a different similarity distance matching technique. The performance of the classifiers is compared on a batch of training HOI acoustic signatures. Furthermore, to facilitate semantic annotation of acoustic sound events, a scheme based on Transducer Markup Language (TML) is proposed. The results demonstrate that the proposed approach is both reliable and effective, and can be extended to future PSS applications.
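As a rough illustration of the pipeline's dimensionality-reduction and classification steps, the sketch below projects synthetic feature vectors onto their top principal components via SVD and classifies a probe by nearest neighbour. Everything here is invented for illustration; a brute-force nearest-neighbour search stands in for the paper's kd-tree and Random Forest classifiers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 60 spectral feature vectors (20-D) from three
# HOI event classes, one Gaussian cluster per class
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(20, 20))
               for c in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 20)

# PCA via SVD: keep the top 3 principal components
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
components = Vt[:3]
X_low = (X - mean) @ components.T   # reduced training features, shape (60, 3)

def classify(v):
    """Nearest neighbour in PCA space (stand-in for kd-tree / Random Forest)."""
    v_low = (v - mean) @ components.T
    return y[np.argmin(np.linalg.norm(X_low - v_low, axis=1))]

probe = np.full(20, 2.0)            # probe at the centre of class 1
print(classify(probe))              # prints 1
```

Projecting both training and test vectors into the same low-dimensional space before the distance search is what makes the subsequent classification fast.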
Physics of thermo-acoustic sound generation
NASA Astrophysics Data System (ADS)
Daschewski, M.; Boehm, R.; Prager, J.; Kreutzbruck, M.; Harrer, A.
2013-09-01
We present a generalized analytical model of thermo-acoustic sound generation based on the analysis of thermally induced energy density fluctuations and their propagation into the adjacent matter. The model provides exact analytical prediction of the sound pressure generated in fluids and solids; consequently, it can be applied to arbitrary thermal power sources such as thermophones, plasma firings, laser beams, and chemical reactions. Unlike existing approaches, our description also includes acoustic near-field effects and sound-field attenuation. Analytical results are compared with measurements of sound pressures generated by thermo-acoustic transducers in air for frequencies up to 1 MHz. The tested transducers consist of titanium and indium tin oxide coatings on quartz glass and polycarbonate substrates. The model reveals that thermo-acoustic efficiency increases linearly with the supplied thermal power and quadratically with thermal excitation frequency. Comparison of the efficiency of our thermo-acoustic transducers with those of piezoelectric-based airborne ultrasound transducers using impulse excitation showed comparable sound pressure values. The present results show that thermo-acoustic transducers can be applied as broadband, non-resonant, high-performance ultrasound sources.
Prototype electronic stethoscope vs. conventional stethoscope for auscultation of heart sounds.
Kelmenson, Daniel A; Heath, Janae K; Ball, Stephanie A; Kaafarani, Haytham M A; Baker, Elisabeth M; Yeh, Daniel D; Bittner, Edward A; Eikermann, Matthias; Lee, Jarone
2014-08-01
In an effort to decrease the spread of hospital-acquired infections, many hospitals currently use disposable plastic stethoscopes in patient rooms. As an alternative, this study examines a prototype electronic stethoscope that does not break the isolation barrier between clinician and patient and may also improve the diagnostic accuracy of the stethoscope exam. This study aimed to investigate whether the new prototype electronic stethoscope improved auscultation of heart sounds compared to the standard conventional isolation stethoscope. In a controlled, non-blinded, cross-over study, clinicians were randomized to identify heart sounds with both the prototype electronic stethoscope and a conventional stethoscope. The primary outcome was the score on a 10-question heart sound identification test. In total, 41 clinicians completed the study. Subjects performed significantly better in the identification of heart sounds when using the prototype electronic stethoscope (median = 9 [7-10] vs. 8 [6-9] points, p value <0.0001). Subjects also significantly preferred the prototype electronic stethoscope. Clinicians using a new prototype electronic stethoscope achieved greater accuracy in identification of heart sounds and also universally favoured the new device, compared to the conventional stethoscope.
Pulse-echo sound speed estimation using second order speckle statistics
NASA Astrophysics Data System (ADS)
Rosado-Mendez, Ivan M.; Nam, Kibo; Madsen, Ernest L.; Hall, Timothy J.; Zagzebski, James A.
2012-10-01
This work presents a phantom-based evaluation of a method for estimating soft-tissue speeds of sound using pulse-echo data. The method is based on the improvement of image sharpness as the sound speed value assumed during beamforming is systematically matched to the tissue sound speed. The novelty of this work is the quantitative assessment of image sharpness by measuring the resolution cell size from the autocovariance matrix for echo signals from a random distribution of scatterers, thus eliminating the need for strong reflectors. Envelope data were obtained from a fatty-tissue mimicking (FTM) phantom (sound speed = 1452 m/s) and a nonfatty-tissue mimicking (NFTM) phantom (1544 m/s) scanned with a linear array transducer on a clinical ultrasound system. Dependence on pulse characteristics was tested by varying the pulse frequency and amplitude. On average, sound speed estimation errors were -0.7% for the FTM phantom and -1.1% for the NFTM phantom. In general, no significant difference was found among errors from different pulse frequencies and amplitudes. The method is currently being optimized for the differentiation of diffuse liver diseases.
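The sharpness criterion, a resolution-cell size estimated from the autocovariance of echoes from diffuse scatterers, can be sketched in one dimension. The snippet below uses synthetic speckle and a moving-average blur to mimic the loss of sharpness caused by a mismatched beamforming sound speed; none of it is the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def correlation_length(signal, dx):
    """Lag at which the normalized autocovariance first drops below 1/e,
    a simple proxy for the resolution-cell size used to score sharpness."""
    s = signal - signal.mean()
    acov = np.correlate(s, s, mode="full")[len(s) - 1:]
    acov = acov / acov[0]
    first_below = np.nonzero(acov < 1.0 / np.e)[0][0]
    return first_below * dx

dx = 0.05  # hypothetical sample spacing in mm
sharp = rng.normal(size=4000)                            # well-focused speckle
blurred = np.convolve(sharp, np.ones(25) / 25, "same")   # mismatched sound speed

# The blurred envelope has a larger resolution cell, i.e. the image is less sharp
print(correlation_length(sharp, dx) < correlation_length(blurred, dx))  # prints True
```

In the method evaluated above, the assumed beamforming sound speed that minimizes this cell size is taken as the tissue sound speed estimate.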
Evaluating a linearized Euler equations model for strong turbulence effects on sound propagation.
Ehrhardt, Loïc; Cheinet, Sylvain; Juvé, Daniel; Blanc-Benon, Philippe
2013-04-01
Sound propagation outdoors is strongly affected by atmospheric turbulence. Under strongly perturbed conditions or long propagation paths, the sound fluctuations reach their asymptotic behavior, e.g., the intensity variance progressively saturates. The present study evaluates the ability of a numerical propagation model based on the finite-difference time-domain solving of the linearized Euler equations in quantitatively reproducing the wave statistics under strong and saturated intensity fluctuations. It is the continuation of a previous study where weak intensity fluctuations were considered. The numerical propagation model is presented and tested with two-dimensional harmonic sound propagation over long paths and strong atmospheric perturbations. The results are compared to quantitative theoretical or numerical predictions available on the wave statistics, including the log-amplitude variance and the probability density functions of the complex acoustic pressure. The match is excellent for the evaluated source frequencies and all sound fluctuation strengths. Hence, this model captures these many aspects of strong atmospheric turbulence effects on sound propagation. Finally, the model results for the intensity probability density function are compared with a standard fit by a generalized gamma function.
Basic experimental study of the coupling between flow instabilities and incident sound
NASA Astrophysics Data System (ADS)
Ahuja, K. K.
1984-03-01
Whether a solid trailing edge is required to produce efficient coupling between sound and instability waves in a shear layer was investigated. The differences found in the literature on the theoretical notions about receptivity, and a need to resolve them by way of well-planned experiments are discussed. Instability waves in the shear layer of a subsonic jet, excited by a point sound source located external to the jet, were first visualized using an ensemble averaging technique. Various means were adopted to shield the sound reaching the nozzle lip. It was found that the low frequency sound couples more efficiently at distances downstream of the nozzle. To substantiate the findings further, a supersonic screeching jet was tested such that it passed through a small opening in a baffle placed parallel to the exit plane. The measured feedback or screech frequencies and also the excited flow disturbances changed drastically on traversing the baffle axially thus providing a strong indication that a trailing edge is not necessary for efficient coupling between sound and flow.
Optical measurement of sound using time-varying laser speckle patterns
NASA Astrophysics Data System (ADS)
Leung, Terence S.; Jiang, Shihong; Hebden, Jeremy
2011-02-01
In this work, we introduce an optical technique to measure sound. The technique involves pointing a coherent pulsed laser beam on the surface of the measurement site and capturing the time-varying speckle patterns using a CCD camera. Sound manifests itself as vibrations on the surface which induce a periodic translation of the speckle pattern over time. Using a parallel speckle detection scheme, the dynamics of the time-varying speckle patterns can be captured and processed to produce spectral information of the sound. One potential clinical application is to measure pathological sounds from the brain as a screening test. We performed experiments to demonstrate the principle of the detection scheme using head phantoms. The results show that the detection scheme can measure the spectra of single frequency sounds between 100 and 2000 Hz. The detection scheme worked equally well in both a flat geometry and an anatomical head geometry. However, the current detection scheme is too slow for use in living biological tissues, which have a decorrelation time of a few milliseconds. Further improvements have been suggested.
An approach for automatic classification of grouper vocalizations with passive acoustic monitoring.
Ibrahim, Ali K; Chérubin, Laurent M; Zhuang, Hanqi; Schärer Umpierre, Michelle T; Dalgleish, Fraser; Erdol, Nurgun; Ouyang, B; Dalgleish, A
2018-02-01
Grouper, a family of marine fishes, produce distinct vocalizations associated with their reproductive behavior during spawning aggregation. These low-frequency sounds (50-350 Hz) consist of a series of pulses repeated at a variable rate. In this paper, an approach is presented for automatic classification of grouper vocalizations from ambient sounds recorded in situ with fixed hydrophones, based on weighted features and a sparse classifier. Grouper sounds were initially labeled by humans for training and testing various feature extraction and classification methods. In the feature extraction phase, four types of features were used to characterize the sounds produced by groupers. Once the sound features were extracted, three types of representative classifiers were applied to categorize the species that produced these sounds. Experimental results showed that the best combination, weighted mel-frequency cepstral coefficients as the feature extractor with the sparse classifier, achieved 82.7% identification accuracy. The proposed algorithm has been implemented on an autonomous platform (wave glider) for real-time detection and classification of grouper vocalizations.
Basic experimental study of the coupling between flow instabilities and incident sound
NASA Technical Reports Server (NTRS)
Ahuja, K. K.
1984-01-01
Whether a solid trailing edge is required to produce efficient coupling between sound and instability waves in a shear layer was investigated. The differences found in the literature on the theoretical notions about receptivity, and a need to resolve them by way of well-planned experiments are discussed. Instability waves in the shear layer of a subsonic jet, excited by a point sound source located external to the jet, were first visualized using an ensemble averaging technique. Various means were adopted to shield the sound reaching the nozzle lip. It was found that the low frequency sound couples more efficiently at distances downstream of the nozzle. To substantiate the findings further, a supersonic screeching jet was tested such that it passed through a small opening in a baffle placed parallel to the exit plane. The measured feedback or screech frequencies and also the excited flow disturbances changed drastically on traversing the baffle axially thus providing a strong indication that a trailing edge is not necessary for efficient coupling between sound and flow.
How Sound Symbolism Is Processed in the Brain: A Study on Japanese Mimetic Words
Okuda, Jiro; Okada, Hiroyuki; Matsuda, Tetsuya
2014-01-01
Sound symbolism is the systematic and non-arbitrary link between word and meaning. Although a number of behavioral studies demonstrate that both children and adults are universally sensitive to sound symbolism in mimetic words, the neural mechanisms underlying this phenomenon have not yet been extensively investigated. The present study used functional magnetic resonance imaging to investigate how Japanese mimetic words are processed in the brain. In Experiment 1, we compared processing for motion mimetic words with that for non-sound symbolic motion verbs and adverbs. Mimetic words uniquely activated the right posterior superior temporal sulcus (STS). In Experiment 2, we further examined the generalizability of the findings from Experiment 1 by testing another domain: shape mimetics. Our results show that the right posterior STS was active when subjects processed both motion and shape mimetic words, thus suggesting that this area may be the primary structure for processing sound symbolism. Increased activity in the right posterior STS may also reflect how sound symbolic words function as both linguistic and non-linguistic iconic symbols. PMID:24840874
Restoration of spatial hearing in adult cochlear implant users with single-sided deafness.
Litovsky, Ruth Y; Moua, Keng; Godar, Shelly; Kan, Alan; Misurelli, Sara M; Lee, Daniel J
2018-04-14
In recent years, cochlear implants (CIs) have been provided in growing numbers not only to people with bilateral deafness but also to people with unilateral hearing loss, at times in order to alleviate tinnitus. This study presents audiological data from 15 adult participants (ages 48 ± 12 years) with single-sided deafness. Results are presented from 9/15 adults who received a CI (SSD-CI) in the deaf ear and were tested in Acoustic or Acoustic + CI hearing modes, and 6/15 adults who are planning to receive a CI and were tested in the unilateral condition only. Testing included (1) audiometric measures of threshold, (2) speech understanding for CNC words and AzBio sentences, (3) the Tinnitus Handicap Inventory, (4) sound localization with stationary sound sources, and (5) perceived auditory motion. Results showed that when listening to sentences in quiet, performance was excellent in the Acoustic and Acoustic + CI conditions. In noise, performance was similar between the Acoustic and Acoustic + CI conditions in 4/6 participants tested, and slightly worse in the Acoustic + CI condition in 2/6 participants. In some cases, the CI reduced tinnitus handicap scores. When testing sound localization ability, the Acoustic + CI condition improved sound localization, with an RMS error of 29.2° (SD: ±6.7°) compared to 56.6° (SD: ±16.5°) in the Acoustic-only condition. Preliminary results suggest that the perception of motion direction, whereby subjects are required to process and compare directional cues across multiple locations, is impaired when compared with that of normal-hearing subjects. Copyright © 2018 Elsevier B.V. All rights reserved.
Neonatal incubators: a toxic sound environment for the preterm infant?*.
Marik, Paul E; Fuller, Christopher; Levitov, Alexander; Moll, Elizabeth
2012-11-01
High sound pressure levels may be harmful to the maturing newborn. Current guidelines suggest that the sound pressure levels within a neonatal intensive care unit should not exceed 45 dB(A). It is likely that environmental noise as well as the noise generated by the incubator fan and respiratory equipment may contribute to the total sound pressure levels. Knowledge of the contribution of each component and source is important to develop effective strategies to reduce noise within the incubator. The objectives of this study were to determine the sound levels, sound spectra, and major sources of sound within a modern neonatal incubator (Giraffe Omnibed; GE Healthcare, Helsinki, Finland) using a sound simulation study to replicate the conditions of a preterm infant undergoing high-frequency jet ventilation (Life Pulse, Bunnell, UT). Using advanced sound data acquisition and signal processing equipment, we measured and analyzed the sound level at a dummy infant's ear and at head level outside the enclosure. The sound data time histories were digitally acquired and processed using a digital Fast Fourier Transform algorithm to provide spectra of the sound and cumulative sound pressure levels (dBA). The simulation was done with the incubator cooling fan and ventilator switched on or off. In addition, tests were carried out with the enclosure sides closed and hood down, and then with the enclosure sides open and the hood up, to determine the importance of interior incubator reverberance on the interior sound levels. With all the equipment off and the hood down, the sound pressure levels were 53 dB(A) inside the incubator. The sound pressure levels increased to 68 dB(A) with all equipment switched on (approximately 10 times louder than recommended). The sound intensity was 6.0 × 10^-8 W/m^2; this sound level is roughly comparable with that generated by a kitchen exhaust fan on high.
Turning the ventilator off reduced the overall sound pressure levels to 64 dB(A), and the sound pressure levels in the low-frequency band of 0 to 100 Hz were reduced by 10 dB(A). The incubator fan generated tones at 200, 400, and 600 Hz that raised the sound level by approximately 2-3 dB(A). Opening the enclosure (with all equipment turned on) reduced the sound levels above 50 Hz by reducing the reverberance within the enclosure. The sound levels, especially at low frequencies, within a modern incubator may reach levels that are likely to be harmful to the developing newborn. Much of the noise is at low frequencies and thus difficult to reduce by conventional means. Therefore, advanced forms of noise control are needed to address this issue.
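The cumulative dB(A) figures quoted above combine per-band sound pressure levels through the standard A-weighting curve and an energetic sum. The sketch below uses the IEC 61672 analytical A-weighting formula with octave-band levels invented for illustration (they are not the study's measurements):

```python
import math

def a_weight(f):
    """A-weighting gain in dB at frequency f (IEC 61672 analytical formula)."""
    ra = (12194.0**2 * f**4) / ((f**2 + 20.6**2)
         * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
         * (f**2 + 12194.0**2))
    return 20.0 * math.log10(ra) + 2.00

def overall_level(band_levels_db):
    """Energetic sum of per-band levels into a single overall dB figure."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in band_levels_db))

# Hypothetical octave-band levels (dB SPL) dominated by low frequencies,
# loosely in the spirit of the incubator spectra described above
bands = {63: 70, 125: 68, 250: 64, 500: 60, 1000: 55, 2000: 50}
weighted = [level + a_weight(f) for f, level in bands.items()]
print(round(overall_level(weighted), 1))
```

Because A-weighting attenuates low frequencies heavily (roughly -26 dB at 63 Hz), a low-frequency-dominated source reads much lower in dB(A) than its unweighted band levels suggest, which is one reason low-frequency incubator noise is hard to characterize and control.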
Evaluation of noise pollution level in the operating rooms of hospitals: A study in Iran.
Giv, Masoumeh Dorri; Sani, Karim Ghazikhanlou; Alizadeh, Majid; Valinejadi, Ali; Majdabadi, Hesamedin Askari
2017-06-01
Noise pollution in operating rooms is one of the remaining challenges. Both patients and physicians are exposed to different sound levels during operative cases, many of which can last for hours. This study aims to evaluate the noise pollution in operating rooms during different surgical procedures. In this cross-sectional study, the sound level in the operating rooms of Hamadan University-affiliated hospitals (10 in total) in Iran during different surgical procedures was measured using a B&K sound meter. The gathered data were compared with national and international standards. Statistical analysis was performed using descriptive statistics, one-way ANOVA, the t-test, and Pearson's correlation test. The noise pollution level in the majority of surgical procedures is higher than national and international documented standards. The highest level of noise pollution is related to orthopedic procedures, and the lowest to laparoscopic and heart surgery procedures. The highest and lowest registered sound levels during operations were 93 and 55 dB, respectively. Sound levels generated by equipment (69 ± 4.1 dB), trolley movement (66 ± 2.3 dB), and personnel conversations (64 ± 3.9 dB) are the main sources of noise. The noise pollution of operating rooms is higher than available standards, and needs to be corrected to achieve proper conditions.
Cortical activity patterns predict robust speech discrimination ability in noise
Shetake, Jai A.; Wolf, Jordan T.; Cheung, Ryan J.; Engineer, Crystal T.; Ram, Satyananda K.; Kilgard, Michael P.
2012-01-01
The neural mechanisms that support speech discrimination in noisy conditions are poorly understood. In quiet conditions, spike timing information appears to be used in the discrimination of speech sounds. In this study, we evaluated the hypothesis that spike timing is also used to distinguish between speech sounds in noisy conditions that significantly degrade neural responses to speech sounds. We tested speech sound discrimination in rats and recorded primary auditory cortex (A1) responses to speech sounds in background noise of different intensities and spectral compositions. Our behavioral results indicate that rats, like humans, are able to accurately discriminate consonant sounds even in the presence of background noise that is as loud as the speech signal. Our neural recordings confirm that speech sounds evoke degraded but detectable responses in noise. Finally, we developed a novel neural classifier that mimics behavioral discrimination. The classifier discriminates between speech sounds by comparing the A1 spatiotemporal activity patterns evoked on single trials with the average spatiotemporal patterns evoked by known sounds. Unlike classifiers in most previous studies, this classifier is not provided with the stimulus onset time. Neural activity analyzed with the use of relative spike timing was well correlated with behavioral speech discrimination in quiet and in noise. Spike timing information integrated over longer intervals was required to accurately predict rat behavioral speech discrimination in noisy conditions. The similarity of neural and behavioral discrimination of speech in noise suggests that humans and rats may employ similar brain mechanisms to solve this problem. PMID:22098331
What is the link between synaesthesia and sound symbolism?
Bankieris, Kaitlyn; Simner, Julia
2015-01-01
Sound symbolism is a property of certain words which have a direct link between their phonological form and their semantic meaning. In certain instances, sound symbolism can allow non-native speakers to understand the meanings of etymologically unfamiliar foreign words, although the mechanisms driving this are not well understood. We examined whether sound symbolism might be mediated by the same types of cross-modal processes that typify synaesthetic experiences. Synaesthesia is an inherited condition in which sensory or cognitive stimuli (e.g., sounds, words) cause additional, unusual cross-modal percepts (e.g., sounds trigger colours, words trigger tastes). Synaesthesia may be an exaggeration of normal cross-modal processing, and if so, there may be a link between synaesthesia and the type of cross-modality inherent in sound symbolism. To test this we predicted that synaesthetes would have superior understanding of unfamiliar (sound symbolic) foreign words. In our study, 19 grapheme-colour synaesthetes and 57 non-synaesthete controls were presented with 400 adjectives from 10 unfamiliar languages and were asked to guess the meaning of each word in a two-alternative forced-choice task. Both groups showed superior understanding compared to chance levels, but synaesthetes significantly outperformed controls. This heightened ability suggests that sound symbolism may rely on the types of cross-modal integration that drive synaesthetes’ unusual experiences. It also suggests that synaesthesia endows or co-occurs with heightened multi-modal skills, and that this can arise in domains unrelated to the specific form of synaesthesia. PMID:25498744
Performance of active feedforward control systems in non-ideal, synthesized diffuse sound fields.
Misol, Malte; Bloch, Christian; Monner, Hans Peter; Sinapius, Michael
2014-04-01
The acoustic performance of passive or active panel structures is usually tested in sound transmission loss facilities. A reverberant sending room, equipped with one or a number of independent sound sources, is used to generate a diffuse sound field excitation which acts as a disturbance source on the structure under investigation. The spatial correlation and coherence of such a synthesized non-ideal diffuse-sound-field excitation, however, might deviate significantly from the ideal case. This has consequences for the operation of an active feedforward control system which heavily relies on the acquisition of coherent disturbance source information. This work, therefore, evaluates the spatial correlation and coherence of ideal and non-ideal diffuse sound fields and considers the implications on the performance of a feedforward control system. The system under consideration is an aircraft-typical double panel system, equipped with an active sidewall panel (lining), which is realized in a transmission loss facility. Experimental results for different numbers of sound sources in the reverberation room are compared to simulation results of a comparable generic double panel system excited by an ideal diffuse sound field. It is shown that the number of statistically independent noise sources acting on the primary structure of the double panel system depends not only on the type of diffuse sound field but also on the sample lengths of the processed signals. The experimental results show that the number of reference sensors required for a defined control performance exhibits an inverse relationship to control filter length.
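The dependence of feedforward control on coherent reference information can be illustrated with a minimal single-channel LMS sketch. The disturbance path and signals below are invented, and the secondary path is omitted for brevity; this is a toy of the principle, not the experimental controller:

```python
import numpy as np

rng = np.random.default_rng(2)

# Reference (disturbance) signal and the primary path to the error sensor
x = rng.normal(size=5000)
primary = np.convolve(x, [0.6, -0.3, 0.1], mode="full")[:len(x)]

# LMS feedforward filter: adapt weights so the filter output cancels `primary`
n_taps, mu = 8, 0.01
w = np.zeros(n_taps)
err = np.zeros(len(x))
for n in range(n_taps, len(x)):
    xv = x[n - n_taps + 1:n + 1][::-1]  # most recent reference samples first
    err[n] = primary[n] - w @ xv        # residual at the error sensor
    w += mu * err[n] * xv               # LMS weight update

# After adaptation, the residual power falls far below the disturbance power
print(np.mean(err[-500:] ** 2) < 0.1 * np.mean(primary ** 2))  # prints True
```

With an incoherent reference (for instance, too few reference sensors for a non-ideal diffuse field), the filter cannot predict the disturbance and the residual stays high, which is the effect the study quantifies.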
Testing Accuracy of Long-Range Ultrasonic Sensors for Olive Tree Canopy Measurements
Gamarra-Diezma, Juan Luis; Miranda-Fuentes, Antonio; Llorens, Jordi; Cuenca, Andrés; Blanco-Roldán, Gregorio L.; Rodríguez-Lizana, Antonio
2015-01-01
Ultrasonic sensors are often used to adjust spray volume by allowing the calculation of the crown volume of tree crops. The special conditions of the olive tree require the use of long-range sensors, which are less accurate and faster than the most commonly used sensors. The main objectives of the study were to determine the suitability of the sensor in terms of sound cone determination, angle errors, crosstalk errors and field measurements. Several laboratory tests were performed to check the suitability of a commercial long-range ultrasonic sensor: experimental determination of the sound cone diameter at several distances for several target materials; determination of the influence of the angle of incidence of the sound wave on the target and of distance on measurement accuracy for several materials; and determination of the magnitude of errors due to interference between sensors for different sensor spacings and distances for two different materials. Furthermore, sensor accuracy was tested under real field conditions. The results show that the studied sensor is appropriate for olive trees: the sound cone is narrower for an olive tree than for the other studied materials, the olive tree canopy does not strongly influence sensor accuracy with respect to distance and angle, the interference errors are insignificant at large sensor spacings, and the sensor's field distance measurements were deemed sufficiently accurate. PMID:25635414
[Acoustic conditions in open plan offices - Pilot test results].
Mikulski, Witold
The main source of noise in open plan offices is conversation. Office work standards in such premises are attained by applying specific acoustic adaptation. This article presents the results of pilot tests and an acoustic evaluation of open plan rooms. Acoustic properties of 6 open plan office rooms were the subject of the tests. Evaluation parameters, measurement methods and criterion values were adopted according to the following standards: PN-EN ISO 3382-3:2012, PN-EN ISO 3382-2:2010, PN-B-02151-4:2015-06 and PN-B-02151-3:2015-10. The reverberation time was 0.33-0.55 s (maximum permissible value in offices - 0.6 s; the criterion was met), sound absorption coefficient in relation to 1 m2 of the room's plan was 0.77-1.58 m2 (minimum permissible value - 1.1 m2; 2 out of 6 rooms met the criterion), distraction distance was 8.5-14 m (maximum permissible value - 5 m; none of the rooms met the criterion), A-weighted sound pressure level of speech at a distance of 4 m was 43.8-54.7 dB (maximum permissible value - 48 dB; 2 out of 6 rooms met the criterion), spatial decay rate of the speech was 1.8-6.3 dB (minimum permissible value - 7 dB; none of the rooms met the criterion). Standard acoustic treatment, comprising a sound-absorbing suspended ceiling, sound-absorbing materials on the walls, carpet flooring and sound-absorbing workplace barriers, is not sufficient. These rooms require specific advanced acoustic solutions. Med Pr 2016;67(5):653-662. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.
Yoshida, Yuya; Suganuma, Takeshi; Takaba, Masayuki; Ono, Yasuhiro; Abe, Yuka; Yoshizawa, Shuichiro; Sakai, Takuro; Yoshizawa, Ayako; Nakamura, Hirotaka; Kawana, Fusae; Baba, Kazuyoshi
2017-08-01
The aim of this study was to investigate the association between patterns of jaw motor activity during sleep and clinical signs and symptoms of sleep bruxism. A total of 35 university students and staff members participated in this study after providing informed consent. All participants were divided into either a sleep bruxism group (n = 21) or a control group (n = 14), based on the following clinical diagnostic criteria: (1) reports of tooth-grinding sounds for at least two nights a week during the preceding 6 months by their sleep partner; (2) presence of tooth attrition with exposed dentin; (3) reports of morning masticatory muscle fatigue or tenderness; and (4) presence of masseter muscle hypertrophy. Video-polysomnography was performed in the sleep laboratory for two nights. Sleep bruxism episodes were measured using masseter electromyography, visually inspected and then categorized into phasic or tonic episodes. Phasic episodes were categorized further into episodes with or without grinding sounds as evaluated by audio signals. Sleep bruxism subjects with reported grinding sounds had a significantly higher total number of phasic episodes with grinding sounds than subjects without reported grinding sounds or controls (Kruskal-Wallis/Steel-Dwass tests; P < 0.05). Similarly, sleep bruxism subjects with tooth attrition exhibited significantly longer phasic burst durations than those without or controls (Kruskal-Wallis/Steel-Dwass tests; P < 0.05). Furthermore, sleep bruxism subjects with morning masticatory muscle fatigue or tenderness exhibited significantly longer tonic burst durations than those without or controls (Kruskal-Wallis/Steel-Dwass tests; P < 0.05). These results suggest that each clinical sign and symptom of sleep bruxism represents different aspects of jaw motor activity during sleep. © 2016 European Sleep Research Society.
Are consonant intervals music to their ears? Spontaneous acoustic preferences in a nonhuman primate.
McDermott, Josh; Hauser, Marc
2004-12-01
Humans find some sounds more pleasing than others; such preferences may underlie our enjoyment of music. To gain insight into the evolutionary origins of these preferences, we explored whether they are present in other animals. We designed a novel method to measure the spontaneous sound preferences of cotton-top tamarins, a species that has been extensively tested for other perceptual abilities. Animals were placed in a V-shaped maze, and their position within the maze controlled their auditory environment. One sound was played when they were in one branch of the maze, and a different sound for the opposite branch; no food was delivered during testing. We used the proportion of time spent in each branch as a measure of preference. The first two experiments were designed as tests of our method. In Experiment 1, we used loud and soft white noise as stimuli; all animals spent most of their time on the side with soft noise. In Experiment 2, tamarins spent more time on the side playing species-specific feeding chirps than on the side playing species-specific distress calls. Together, these two experiments suggest that the method is effective, providing a spontaneous measure of preference. In Experiment 3, however, subjects showed no preference for consonant over dissonant intervals. Finally, tamarins showed no preference in Experiment 4 for a screeching sound (comparable to fingernails on a blackboard) over amplitude-matched white noise. In contrast, humans showed clear preferences for the consonant intervals of Experiment 3 and the white noise of Experiment 4 using the same stimuli and a similar method. We conclude that tamarins' preferences differ qualitatively from those of humans. The preferences that support our capacity for music may, therefore, be unique among the primates, and could be music-specific adaptations.
Durai, Mithila; Searchfield, Grant D
2017-01-01
Objectives: A randomized cross-over trial in 18 participants tested the hypothesis that nature sounds, with unpredictable temporal characteristics and high valence, would yield greater improvement in tinnitus than constant, emotionally neutral broadband noise. Study Design: The primary outcome measure was the Tinnitus Functional Index (TFI). Secondary measures were: loudness and annoyance ratings, loudness level matches, minimum masking levels, positive and negative emotionality, attention reaction and discrimination time, anxiety, depression and stress. Each sound was administered using MP3 players with earbuds for 8 continuous weeks, with a 3-week wash-out period before crossing over to the other treatment sound. Measurements were undertaken for each arm at sound fitting, and 4 and 8 weeks after administration. Qualitative interviews were conducted at each of these appointments. Results: From a baseline TFI score of 41.3, sound therapy resulted in TFI scores at 8 weeks of 35.6; broadband noise resulted in significantly greater reduction (8.2 points) after 8 weeks of sound therapy use than nature sounds (3.2 points). The positive effect of sound on tinnitus was supported by secondary outcome measures of tinnitus, emotion, attention, and psychological state, but not by interviews. Tinnitus loudness level match was higher for BBN at 8 weeks, while there was little change in loudness level matches for nature sounds. There was no change in minimum masking levels following sound therapy administration. Self-reported preference for one sound over another did not correlate with changes in tinnitus. Conclusions: Modeled under an adaptation level theory framework of tinnitus perception, the results indicate that the introduction of broadband noise shifts internal adaptation level weighting away from the tinnitus signal, reducing tinnitus magnitude.
Nature sounds may modify the affective components of tinnitus via a secondary, residual pathway, but this appears to be less important for sound effectiveness. The different rates of adaptation to broadband noise and nature sound by the auditory system may explain the different tinnitus loudness level matches. In addition to group effects there also appears to be a great deal of individual variation. A sound therapy framework based on adaptation level theory is proposed that accounts for individual variation in preference and response to sound. Clinical Trial Registration: www.anzctr.org.au, identifier #12616000742471. PMID:28337139
Acoustic deterrence of bighead carp (Hypophthalmichthys nobilis) to a broadband sound stimulus
Vetter, Brooke J.; Murchy, Kelsie; Cupp, Aaron R.; Amberg, Jon J.; Gaikowski, Mark P.; Mensinger, Allen F.
2017-01-01
Recent studies have shown the potential of acoustic deterrents against invasive silver carp (Hypophthalmichthys molitrix). This study examined the phonotaxic response of the bighead carp (H. nobilis) to pure tones (500–2000 Hz) and playbacks of broadband sound from an underwater recording of a 100 hp outboard motor (0.06–10 kHz) in an outdoor concrete pond (10 × 5 × 1.2 m) at the U.S. Geological Survey Upper Midwest Environmental Science Center in La Crosse, WI. The number of consecutive times the fish reacted to sound from alternating locations at each end of the pond was assessed. Bighead carp were relatively indifferent to the pure tones with median consecutive responses ranging from 0 to 2 reactions away from the sound source. However, fish consistently exhibited significantly (P < 0.001) greater negative phonotaxis to the broadband sound (outboard motor recording) with an overall median response of 20 consecutive reactions during the 10 min trials. In over 50% of broadband sound tests, carp were still reacting to the stimulus at the end of the trial, implying that fish were not habituating to the sound. This study suggests that broadband sound may be an effective deterrent to bighead carp and provides a basis for conducting studies with wild fish.
Similarities between the irrelevant sound effect and the suffix effect.
Hanley, J Richard; Bourgaize, Jake
2018-03-29
Although articulatory suppression abolishes the effect of irrelevant sound (ISE) on serial recall when sequences are presented visually, the effect persists with auditory presentation of list items. Two experiments were designed to test the claim that, when articulation is suppressed, the effect of irrelevant sound on the retention of auditory lists resembles a suffix effect. A suffix is a spoken word that immediately follows the final item in a list. Even though participants are told to ignore it, the suffix impairs serial recall of auditory lists. In Experiment 1, the irrelevant sound consisted of instrumental music. The music generated a significant ISE that was abolished by articulatory suppression. It therefore appears that, when articulation is suppressed, irrelevant sound must contain speech for it to have any effect on recall. This is consistent with what is known about the suffix effect. In Experiment 2, the effect of irrelevant sound under articulatory suppression was greater when the irrelevant sound was spoken by the same voice that presented the list items. This outcome is again consistent with the known characteristics of the suffix effect. It therefore appears that, when rehearsal is suppressed, irrelevant sound disrupts the acoustic-perceptual encoding of auditorily presented list items. There is no evidence that the persistence of the ISE under suppression is a result of interference to the representation of list items in a postcategorical phonological store.
[Acoustical parameters of toys].
Harazin, Barbara
2010-01-01
Toys play an important role in the development of visual and auditory attention in children. They also support the development of manipulation skills, gently influence a child and stimulate its emotional activity. Many toys emit various sounds. The aim of the study was to assess sound levels produced by sound-emitting toys used by young children. Acoustic parameters of noise were evaluated for 16 sound-emitting plastic toys under laboratory conditions. The noise level was recorded at four distances from the toy: 10, 20, 25 and 30 cm. Measurements of A-weighted sound pressure levels and of noise levels in octave bands in the frequency range from 31.5 Hz to 16 kHz were performed at each distance. Based on the highest equivalent A-weighted sound levels produced, the tested toys can be divided into four groups: below 70 dB (6 toys), from 70 to 74 dB (4 toys), from 75 to 84 dB (3 toys) and from 85 to 94 dB (3 toys). The majority of toys (81%) emitted dominant sound levels in octave bands in the frequency range from 2 kHz to 4 kHz. Sound-emitting toys thus produce the highest acoustic energy in the frequency range where the auditory system is most susceptible. Noise levels produced by some toys can be dangerous to children's hearing.
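A-weighted levels of the kind quoted above can be reproduced from octave-band data by applying the standard A-weighting corrections and summing the bands energetically. The sketch below uses the tabulated corrections at octave-band centre frequencies; the toy spectrum is hypothetical, not measured data from the study:

```python
import math

# Standard A-weighting corrections (dB) at octave-band centre frequencies,
# rounded to 0.1 dB as commonly tabulated (IEC 61672).
A_WEIGHT = {31.5: -39.4, 63: -26.2, 125: -16.1, 250: -8.6, 500: -3.2,
            1000: 0.0, 2000: 1.2, 4000: 1.0, 8000: -1.1, 16000: -6.6}

def a_weighted_level(band_levels):
    """Energetic sum of unweighted octave-band SPLs into one level in dB(A).
    band_levels maps centre frequency (Hz) -> band level (dB)."""
    total = sum(10 ** ((lp + A_WEIGHT[f]) / 10) for f, lp in band_levels.items())
    return 10 * math.log10(total)

# Hypothetical toy spectrum peaking in the 2-4 kHz bands, as most toys did:
toy = {500: 60.0, 1000: 65.0, 2000: 72.0, 4000: 70.0, 8000: 55.0}
print(round(a_weighted_level(toy), 1))  # -> 75.7
```

Because the corrections are near zero (or positive) between 1 and 4 kHz, energy concentrated there passes almost unattenuated into the A-weighted total, which is why spectra peaking at 2-4 kHz are of particular concern for hearing.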
Webster, Paula J.; Skipper-Kallal, Laura M.; Frum, Chris A.; Still, Hayley N.; Ward, B. Douglas; Lewis, James W.
2017-01-01
A major gap in our understanding of natural sound processing is knowledge of where or how in a cortical hierarchy differential processing leads to categorical perception at a semantic level. Here, using functional magnetic resonance imaging (fMRI) we sought to determine if and where cortical pathways in humans might diverge for processing action sounds vs. vocalizations as distinct acoustic-semantic categories of real-world sound when matched for duration and intensity. This was tested by using relatively less semantically complex natural sounds produced by non-conspecific animals rather than humans. Our results revealed a striking double-dissociation of activated networks bilaterally. This included a previously well described pathway preferential for processing vocalization signals directed laterally from functionally defined primary auditory cortices to the anterior superior temporal gyri, and a less well-described pathway preferential for processing animal action sounds directed medially to the posterior insulae. We additionally found that some of these regions and associated cortical networks showed parametric sensitivity to high-order quantifiable acoustic signal attributes and/or to perceptual features of the natural stimuli, such as the degree of perceived recognition or intentional understanding. Overall, these results supported a neurobiological theoretical framework for how the mammalian brain may be fundamentally organized to process acoustically and acoustic-semantically distinct categories of ethologically valid, real-world sounds. PMID:28111538
Stress and fatigue in sound engineers: the effects of broadcasting a live show and shift work.
Vangelova, Katia K
2008-06-01
The aim was to study the time-of-day variations of cortisol, fatigue and sleep disturbances in sound engineers in relation to job task and shift work. The concentration of saliva cortisol and feelings of stress, sleepiness and fatigue were followed at three-hour intervals in 21 sound engineers: 13 sound engineers, aged 45.1 +/- 7.3 years, broadcasting a live show during fast forward-rotating shifts, and 8 sound engineers, aged 47.1 +/- 9.8 years, making recordings in a studio during fast-rotating day shifts. Cortisol concentration was assessed in saliva with radioimmunological kits. The participants reported stress symptoms during the shifts and filled in a sleep diary. The data were analyzed by tests of between-subjects effects (SPSS). A trend for higher cortisol was found in the group broadcasting the live show. The sound engineers broadcasting the live show reported higher scores of stress, sleepiness and fatigue, but no significant differences concerning sleep disturbances were found between the groups. In conclusion, our data show moderate levels of stress and fatigue in the studied sound engineers, higher in the subjects broadcasting the live show. The quality of sleep showed no significant differences between the studied groups, an indication that the sound engineers were able to tolerate the fast forward-rotating shifts.
Pervasive Sound Sensing: A Weakly Supervised Training Approach.
Kelly, Daniel; Caulfield, Brian
2016-01-01
Modern smartphones present an ideal device for pervasive sensing of human behavior. Microphones have the potential to reveal key information about a person's behavior. However, they have been utilized to a significantly lesser extent than other smartphone sensors in the context of human behavior sensing. We postulate that, in order for microphones to be useful in behavior sensing applications, the analysis techniques must be flexible and allow easy modification of the types of sounds to be sensed. A simplification of the training data collection process could allow a more flexible sound classification framework. We hypothesize that detailed training, a prerequisite for the majority of sound sensing techniques, is not necessary and that a significantly less detailed and time-consuming data collection process can be carried out, allowing even a nonexpert to conduct the collection, labeling, and training process. To test this hypothesis, we implement a diverse density-based multiple instance learning framework, to identify a target sound, and a bag trimming algorithm, which, using the target sound, automatically segments weakly labeled sound clips to construct an accurate training set. Experiments reveal that our hypothesis is a valid one and results show that classifiers, trained using the automatically segmented training sets, were able to accurately classify unseen sound samples with accuracies comparable to supervised classifiers, achieving average F-measures of 0.969 and 0.87 for two weakly supervised datasets.
Using electronic storybooks to support word learning in children with severe language impairments.
Smeets, Daisy J H; van Dijken, Marianne J; Bus, Adriana G
2014-01-01
Novel word learning is reported to be problematic for children with severe language impairments (SLI). In this study, we tested electronic storybooks as a tool to support vocabulary acquisition in SLI children. In Experiment 1, 29 kindergarten SLI children heard four e-books each four times: (a) two stories were presented as video books with motion pictures, music, and sounds, and (b) two stories included only static illustrations without music or sounds. Two other stories served as the control condition. Both static and video books were effective in increasing knowledge of unknown words, but static books were most effective. Experiment 2 was designed to examine which elements in video books interfere with word learning: video images or music or sounds. A total of 23 kindergarten SLI children heard 8 storybooks each four times: (a) two static stories without music or sounds, (b) two static stories with music or sounds, (c) two video stories without music or sounds, and (d) two video books with music or sounds. Video images and static illustrations were equally effective, but the presence of music or sounds moderated word learning. In children with severe SLI, background music interfered with learning. Problems with speech perception in noisy conditions may be an underlying factor of SLI and should be considered in selecting teaching aids and learning environments. © Hammill Institute on Disabilities 2012.
Largo-Wight, Erin; O'Hara, Brian K; Chen, W William
2016-10-01
There is a growing recognition that environmental design impacts health and well-being. Nature contact is a design feature or exposure that is especially important in public health and healthcare. To date, there are limited findings on the impact of nature sounds. This experimental study was designed to examine the effect of nature sounds on physiological and psychological stress. Participants were randomized into one of three groups-silence (n = 9), nature sound (n = 17), and classical music (n = 14)-and listened to the assigned sound for 15 min in an office or waiting room-like environment. Pre- and postdata were collected including muscle tension (electromyogram), pulse rate, and self-reported stress. With the exception of pulse rate, there were no statistical differences in baseline or demographics among groups. A paired t-test by group showed a decrease in muscle tension, pulse rate, and self-reported stress in the nature group and no significant differences in the control or the classical music groups. The significant reduction in muscle tension occurred at least by 7 min of listening to the nature sound. This study highlights the potential benefit of even very brief (less than 7 min) exposure to nature sounds. Brief nature sound "booster breaks" are a promising area for future research with important practical implications. © The Author(s) 2016.
Using nonlocal means to separate cardiac and respiration sounds
NASA Astrophysics Data System (ADS)
Rudnitskii, A. G.
2014-11-01
The paper presents the results of applying a nonlocal means (NLM) approach to the problem of separating respiration and cardiac sounds in a signal recorded on a human chest wall. The performance of the algorithm was tested on both simulated and real signals. As a quantitative measure of the efficiency of NLM filtering, the angle of divergence between the isolated and reference signals was used. It is shown that, for a wide range of signal-to-noise ratios, the algorithm makes it possible to efficiently separate the cardiac and respiration sounds in the summed signal recorded on a human chest wall.
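Nonlocal means replaces each sample with a weighted average of samples whose local neighbourhoods ("patches") look similar, wherever in the signal they occur. A minimal one-dimensional sketch follows; it is illustrative only and does not reproduce the paper's exact formulation, patch sizes or weighting:

```python
import math

def nlm_1d(x, patch=5, search=20, h=0.5):
    """Minimal 1-D nonlocal-means filter.  Each output sample is a weighted
    average of nearby samples, weighted by the similarity of the length-`patch`
    neighbourhoods around them; h controls how quickly weights fall off with
    patch dissimilarity."""
    n, half = len(x), patch // 2

    def at(idx):  # clamp patch indices at the signal edges
        return x[min(max(idx, 0), n - 1)]

    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            # squared distance between the patches centred at i and j
            d2 = sum((at(i + k) - at(j + k)) ** 2 for k in range(-half, half + 1))
            w = math.exp(-d2 / (h * h * patch))
            num += w * x[j]
            den += w
        out.append(num / den)  # den > 0: the j == i term always has weight 1
    return out
```

On a quasi-periodic component such as heart sounds, similar patches recur once per cycle and reinforce one another, while less structured broadband content is averaged down; the patch, search and h parameters trade smoothing against preserved detail.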
Experimental and theoretical sound transmission. [reduction of interior noise in aircraft
NASA Technical Reports Server (NTRS)
Roskam, J.; Muirhead, V. U.; Smith, H. W.; Durenberger, D. W.
1978-01-01
The capabilities of the Kansas University Flight Research Center for investigating panel sound transmission as a step toward the reduction of interior noise in general aviation aircraft were discussed. Data obtained on panels with holes, on honeycomb panels, and on various panel treatments at normal incidence were documented. The design of equipment for panel transmission loss tests at non-normal (slanted) sound incidence was described. A comprehensive theory-based prediction method was developed and shows good agreement with experimental observations in the stiffness-controlled region, the resonance-controlled region, and the mass-law region of panel vibration.
Harmonic Frequency Lowering: Effects on the Perception of Music Detail and Sound Quality.
Kirchberger, Martin; Russo, Frank A
2016-02-01
A novel algorithm for frequency lowering in music was developed and experimentally tested in hearing-impaired listeners. Harmonic frequency lowering (HFL) combines frequency transposition and frequency compression to preserve the harmonic content of music stimuli. Listeners were asked to make judgments regarding detail and sound quality in music stimuli. Stimuli were presented under different signal processing conditions: original, low-pass filtered, HFL, and nonlinear frequency compressed. Results showed that participants reported perceiving the most detail in the HFL condition. In addition, there was no difference in sound quality across conditions. © The Author(s) 2016.
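For context, nonlinear frequency compression of the kind used as the comparison condition is commonly implemented as a power-law mapping above a cutoff frequency. The cutoff and compression ratio below are illustrative defaults, not values from the study:

```python
def nlfc(f_in, cutoff=2000.0, ratio=2.0):
    """Generic nonlinear frequency compression: frequencies at or below the
    cutoff pass unchanged; frequencies above it are compressed on a log scale
    by `ratio`, pulling high-frequency content into a lower, more audible
    range.  Parameter values are illustrative, not taken from the paper."""
    if f_in <= cutoff:
        return f_in
    return cutoff * (f_in / cutoff) ** (1.0 / ratio)

print(nlfc(1000.0))  # below the cutoff: unchanged
print(nlfc(4000.0))  # above the cutoff: compressed toward the cutoff
```

Note that this mapping does not preserve harmonic ratios: inputs at 2 kHz and 4 kHz (ratio 2:1) map to 2 kHz and about 2.83 kHz (ratio 1.41:1). That distortion of harmonic structure is what an approach combining transposition with compression, like the HFL algorithm described above, aims to avoid.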
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanson, C.E.; Abbot, P.; Dyer, I.
1993-01-01
Noise levels from magnetically-levitated trains (maglev) at very high speed may be high enough to cause environmental noise impact in residential areas. Aeroacoustic sources dominate the sound at high speeds and guideway vibrations generate noticeable sound at low speed. In addition to high noise levels, the startle effect as a result of sudden onset of sound from a rapidly moving nearby maglev vehicle may lead to increased annoyance to neighbors of a maglev system. The report provides a base for determining the noise consequences and potential mitigation for a high speed maglev system in populated areas of the United States. Four areas are included in the study: (1) definition of noise sources; (2) development of noise criteria; (3) development of design guidelines; and (4) recommendations for a noise testing facility.
Auditory steady state response in sound field.
Hernández-Pérez, H; Torres-Fortuny, A
2013-02-01
Physiological and behavioral responses were compared in normal-hearing subjects via analyses of the auditory steady-state response (ASSR) and conventional audiometry under sound field conditions. The auditory stimuli, presented through a loudspeaker, consisted of four carrier tones (500, 1000, 2000, and 4000 Hz), presented singly for behavioral testing but combined (multiple frequency technique), to estimate thresholds using the ASSR. Twenty normal-hearing adults were examined. The average differences between the physiological and behavioral thresholds were between 17 and 22 dB HL. The Spearman rank correlation between ASSR and behavioral thresholds was significant for all frequencies (p < 0.05). Significant differences were found in the ASSR amplitude among frequencies, and strong correlations between the ASSR amplitude and the stimulus level (p < 0.05). The ASSR in sound field testing was found to yield hearing threshold estimates deemed to be reasonably well correlated with behaviorally assessed thresholds.
Milovanov, Riia; Huotilainen, Minna; Välimäki, Vesa; Esquef, Paulo A A; Tervaniemi, Mari
2008-02-15
The main focus of this study was to examine the relationship between musical aptitude and second language pronunciation skills. We investigated whether children with superior performance in foreign language production represent musical sound features more readily in the preattentive level of neural processing compared with children with less-advanced production skills. Sound processing accuracy was examined in elementary school children by means of event-related potential (ERP) recordings and behavioral measures. Children with good linguistic skills had better musical skills as measured by the Seashore musicality test than children with less accurate linguistic skills. The ERP data accompany the results of the behavioral tests: children with good linguistic skills showed more pronounced sound-change evoked activation with the music stimuli than children with less accurate linguistic skills. Taken together, the results imply that musical and linguistic skills could partly be based on shared neural mechanisms.
Sound velocities in shocked liquid D2 to 28 GPa
NASA Astrophysics Data System (ADS)
Holmes, N. C.; Ross, M.; Nellis, W. J.
1999-06-01
Recent measurements of shock temperatures [N. C. Holmes, W. J. Nellis, and M. Ross, Phys. Rev. B 52, 15835 (1995)] and laser-driven Hugoniot measurements [L. B. Da Silva et al., Phys. Rev. Lett. 78, 483 (1997)] of shocked liquid deuterium strongly indicate that molecular dissociation is important above 20 GPa. Since the amount of expected dissociation is small on the Hugoniot at the 30 GPa limit of conventional impact experiments, other methods must be used to test our understanding of the physics of highly compressed deuterium in this regime. We have recently performed experiments to measure the sound velocity of deuterium, which tests the isentropic compressibility, c^2 = (∂P/∂ρ)_S. We used the shock-overtake method to measure sound velocities at several shock pressures between 10 and 28 GPa. These data provide support for recently developed molecular dissociation models.
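The isentropic sound speed c^2 = (∂P/∂ρ)_S quoted above can be checked numerically against a simple analytic case. The sketch below differentiates an ideal-gas isentrope by finite differences and compares with the closed form c = sqrt(γP/ρ); this is purely illustrative, since shocked liquid D2 requires a far more elaborate equation of state:

```python
import math

def sound_speed(P_of_rho, rho, drho=1e-6):
    """c = sqrt((dP/drho)_S) via central finite difference along an isentrope."""
    dP = P_of_rho(rho + drho) - P_of_rho(rho - drho)
    return math.sqrt(dP / (2 * drho))

# Ideal-gas isentrope P = P0 * (rho/rho0)**gamma as a stand-in EOS
# (illustrative only: air-like values at ambient conditions).
gamma, P0, rho0 = 1.4, 101325.0, 1.2

def isentrope(rho):
    return P0 * (rho / rho0) ** gamma

c_num = sound_speed(isentrope, rho0)
c_exact = math.sqrt(gamma * P0 / rho0)  # analytic result sqrt(gamma*P/rho)
print(c_num, c_exact)                   # the two should agree closely
```

The same finite-difference construction applies to any tabulated or model isentrope, which is how a dissociation model's prediction can be confronted with measured sound velocities.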
Poulsen, Torben; Oakley, Sebastian
2009-05-01
Hearing threshold sound pressure levels were measured for the Sennheiser HDA 280 audiometric earphone. Hearing thresholds were measured for 25 normal-hearing test subjects at the 11 audiometric test frequencies from 125 Hz to 8000 Hz. Sennheiser HDA 280 is a supra-aural earphone that may be seen as a substitute for the classical Telephonics TDH 39. The results are given as the equivalent threshold sound pressure level (ETSPL) measured in an acoustic coupler specified in IEC 60318-3. The results are in good agreement with an independent investigation from PTB, Braunschweig, Germany. From acoustic laboratory measurements ETSPL values are calculated for the ear simulator specified in IEC 60318-1. Fitting of earphone and coupler is discussed. The data may be used for a future update of the RETSPL standard for supra-aural audiometric earphones, ISO 389-1.
[Industrial sound spectrum entailing noise-induced occupational hearing loss in Iasi industry].
Carp, Cristina Maria; Costinescu, V N
2011-01-01
In the European Union, millions of employees are exposed every day to noise at work and to the risks this can entail. This study presents the sound spectrum in Iasi heavy industry: metal foundries, punching and embossing of metal sheets, and cold and hot metal processing. A type 2 Sound Level Meter (SLM) was used, and the considered value was the average over 10 test values taken on 10 consecutive days for each octave band in the common audible frequency range. It is evident that the large sound intensity values in most of the octave bands exceed the maximum admissible and legal values. The study reveals the necessity of hardware, medical, and managerial measures in order to reduce occupational noise and to prevent damage to the workers' hearing acuity.
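The averaging procedure described above (ten daily readings per octave band, compared against an admissible limit) can be sketched as follows; the band levels and limits below are illustrative numbers, not the study's measurements:

```python
def band_averages(readings):
    """readings: {band_hz: [daily SPL values in dB]} -> {band_hz: mean level}."""
    return {band: sum(vals) / len(vals) for band, vals in readings.items()}

def exceeding_bands(averages, limits):
    """Return the octave bands whose averaged level exceeds its limit."""
    return [band for band, level in averages.items() if level > limits[band]]

# Illustrative data only (assumed values):
readings = {500: [92.0] * 10, 1000: [84.0] * 10}
limits = {500: 87.0, 1000: 87.0}
avg = band_averages(readings)
```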
Path length entropy analysis of diastolic heart sounds.
Griffel, Benjamin; Zia, Mohammad K; Fridman, Vladamir; Saponieri, Cesare; Semmlow, John L
2013-09-01
Early detection of coronary artery disease (CAD) using the acoustic approach, a noninvasive and cost-effective method, would greatly improve the outcome of CAD patients. To detect CAD, we analyze diastolic sounds for possible CAD murmurs. We observed diastolic sounds to exhibit 1/f structure and developed a new method, path length entropy (PLE) and a scaled version (SPLE), to characterize this structure to improve CAD detection. We compare SPLE results to Hurst exponent, Sample entropy and Multiscale entropy for distinguishing between normal and CAD patients. SPLE achieved a sensitivity-specificity of 80%-81%, the best of the tested methods. However, PLE and SPLE are not sufficient to prove nonlinearity, and evaluation using surrogate data suggests that our cardiovascular sound recordings do not contain significant nonlinear properties. Copyright © 2013 Elsevier Ltd. All rights reserved.
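The path length entropy (PLE/SPLE) algorithm itself is not specified in the abstract, but one of the baselines it is compared against, sample entropy, is standard: SampEn = −ln(A/B), where B counts template pairs of length m matching within tolerance r (Chebyshev distance) and A counts those that still match when extended to length m+1. A minimal pure-Python sketch:

```python
import math

def sample_entropy(x, m=2, r=0.5):
    """Sample entropy of sequence x with template length m and tolerance r."""
    n = len(x)
    a = b = 0
    for i in range(n - m):
        for j in range(i + 1, n - m):
            # length-m templates match if every element is within r
            if max(abs(x[i + k] - x[j + k]) for k in range(m)) <= r:
                b += 1
                if abs(x[i + m] - x[j + m]) <= r:  # extend to length m+1
                    a += 1
    if a == 0 or b == 0:
        return float("inf")  # no matches: entropy undefined
    return -math.log(a / b)
```

Perfectly regular sequences score near zero, while less predictable ones score higher, which is the property such measures exploit to characterize diastolic sound structure.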
Liquefaction potential index: Field assessment
Toprak, S.; Holzer, T.L.
2003-01-01
Cone penetration test (CPT) soundings at historic liquefaction sites in California were used to evaluate the predictive capability of the liquefaction potential index (LPI), which was defined by Iwasaki et al. in 1978. LPI combines the depth, thickness, and factor of safety of liquefiable material inferred from a CPT sounding into a single parameter. LPI data from the Monterey Bay region indicate that the probability of surface manifestations of liquefaction is 58 and 93%, respectively, when LPI equals or exceeds 5 and 15. LPI values also generally correlate with surface effects of liquefaction, decreasing from a median of 12 for soundings in lateral spreads to 0 for soundings where no surface effects were reported. The index is particularly promising for probabilistic liquefaction hazard mapping, where it may be a useful parameter for characterizing the liquefaction potential of geologic units.
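Iwasaki's LPI weights the factor of safety (FS) of each layer by depth over the top 20 m: LPI = ∫₀²⁰ F(z)·w(z) dz with w(z) = 10 − 0.5z and F = 1 − FS where FS < 1, else 0 (so LPI ranges from 0 to 100). A sketch under those definitions, with an illustrative layer discretization rather than the study's CPT data:

```python
def lpi(layers):
    """layers: list of (z_top_m, z_bot_m, fs) tuples; midpoint-rule sum of
    F(z) * w(z) over the top 20 m, per Iwasaki et al. (1978)."""
    total = 0.0
    for z_top, z_bot, fs in layers:
        z_mid = 0.5 * (z_top + z_bot)
        if z_mid >= 20.0:
            continue                      # only the top 20 m contribute
        f = max(0.0, 1.0 - fs)            # severity: nonzero only if FS < 1
        w = 10.0 - 0.5 * z_mid            # linear depth weighting
        total += f * w * (z_bot - z_top)
    return total
```

A column fully liquefied (FS = 0) over the whole 20 m gives the maximum LPI of 100; the thresholds in the abstract (5 and 15) sit near the low end of that scale.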
Path Length Entropy Analysis of Diastolic Heart Sounds
Griffel, B.; Zia, M. K.; Fridman, V.; Saponieri, C.; Semmlow, J. L.
2013-01-01
Early detection of coronary artery disease (CAD) using the acoustic approach, a noninvasive and cost-effective method, would greatly improve the outcome of CAD patients. To detect CAD, we analyze diastolic sounds for possible CAD murmurs. We observed diastolic sounds to exhibit 1/f structure and developed a new method, path length entropy (PLE) and a scaled version (SPLE), to characterize this structure to improve CAD detection. We compare SPLE results to Hurst exponent, Sample entropy and Multi-scale entropy for distinguishing between normal and CAD patients. SPLE achieved a sensitivity-specificity of 80%–81%, the best of the tested methods. However, PLE and SPLE are not sufficient to prove nonlinearity, and evaluation using surrogate data suggests that our cardiovascular sound recordings do not contain significant nonlinear properties. PMID:23930808
Milovanov, Riia; Huotilainen, Minna; Esquef, Paulo A A; Alku, Paavo; Välimäki, Vesa; Tervaniemi, Mari
2009-08-28
We examined 10-12-year old elementary school children's ability to preattentively process sound durations in music and speech stimuli. In total, 40 children had either advanced foreign language production skills and higher musical aptitude or less advanced results in both musicality and linguistic tests. Event-related potential (ERP) recordings of the mismatch negativity (MMN) show that the duration changes in musical sounds are more prominently and accurately processed than changes in speech sounds. Moreover, children with advanced pronunciation and musicality skills displayed enhanced MMNs to duration changes in both speech and musical sounds. Thus, our study provides further evidence for the claim that musical aptitude and linguistic skills are interconnected and the musical features of the stimuli could have a preponderant role in preattentive duration processing.
NASA Astrophysics Data System (ADS)
Anagnostopoulos, Christos Nikolaos; Vovoli, Eftichia
An emotion recognition framework based on sound processing could improve services in human-computer interaction. Various quantitative speech features obtained from sound processing of acted speech were tested to determine whether they are sufficient to discriminate between seven emotions. Multilayer perceptrons were trained to classify gender and emotions on the basis of a 24-input vector, which provides information about the prosody of the speaker over the entire sentence using statistics of sound features. Several experiments were performed and the results are presented analytically. Emotion recognition was successful when speakers and utterances were “known” to the classifier. However, severe misclassifications occurred in the utterance-independent framework. Nevertheless, the proposed feature vector achieved promising results for utterance-independent recognition of high- and low-arousal emotions.
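A 24-value prosodic vector of the kind described, sentence-level statistics of per-frame sound features, can be sketched as below. The specific features (frame energy, zero-crossing rate as a crude pitch proxy, their frame-to-frame deltas, six summary statistics each) are assumptions for illustration; the paper's exact 24 features are not given in the abstract:

```python
import math

def frame_features(signal, frame_len=160):
    """Split signal into frames; return per-frame energy and zero-crossing rate."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    energy = [sum(s * s for s in f) / len(f) for f in frames]
    zcr = [sum(1 for a, b in zip(f, f[1:]) if a * b < 0) / len(f)
           for f in frames]
    return energy, zcr

def stats(xs):
    """mean, std, min, max, range, median -> 6 statistics."""
    n = len(xs)
    mean = sum(xs) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    s = sorted(xs)
    median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    return [mean, std, s[0], s[-1], s[-1] - s[0], median]

def feature_vector(signal):
    """4 feature streams x 6 statistics = 24-input vector for the classifier."""
    energy, zcr = frame_features(signal)
    d_energy = [b - a for a, b in zip(energy, energy[1:])]
    d_zcr = [b - a for a, b in zip(zcr, zcr[1:])]
    vec = []
    for stream in (energy, zcr, d_energy, d_zcr):
        vec.extend(stats(stream))
    return vec
```

Such a fixed-length vector is what allows a multilayer perceptron to operate on whole utterances of varying duration.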
Microphone Handling Noise: Measurements of Perceptual Threshold and Effects on Audio Quality
Kendrick, Paul; Jackson, Iain R.; Fazenda, Bruno M.; Cox, Trevor J.; Li, Francis F.
2015-01-01
A psychoacoustic experiment was carried out to test the effects of microphone handling noise on perceived audio quality. Handling noise is a problem affecting both amateurs using their smartphones and cameras, as well as professionals using separate microphones and digital recorders. The noises used for the tests were measured from a variety of devices, including smartphones, laptops and handheld microphones. The signal features that characterise these noises are analysed and presented. The sounds include various types of transient, impact noises created by tapping or knocking devices, as well as more sustained sounds caused by rubbing. During the perceptual tests, listeners auditioned speech podcasts and were asked to rate the degradation of any unwanted sounds they heard. A representative design test methodology was developed that tried to encourage everyday rather than analytical listening. Signal-to-noise ratio (SNR) of the handling noise events was shown to be the best predictor of quality degradation. Other factors such as noise type or background noise in the listening environment did not significantly affect quality ratings. Podcast, microphone type and reproduction equipment were found to be significant but only to a small extent. A model allowing the prediction of degradation from the SNR is presented. The SNR threshold at which 50% of subjects noticed handling noise was found to be 4.2 ± 0.6 dBA. The results from this work are important for the understanding of our perception of impact sound and resonant noises in recordings, and will inform the future development of an automated predictor of quality for handling noise. PMID:26473498
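The paper's key predictor, the SNR of a handling-noise event against the programme material, with a reported 50%-detection threshold near 4.2 dBA, can be sketched as follows. The plain (unweighted) dB computation from RMS amplitudes is a simplification: the study used A-weighting:

```python
import math

DETECTION_THRESHOLD_DB = 4.2   # ~50% detection point reported in the study

def snr_db(rms_signal, rms_noise):
    """Signal-to-noise ratio in dB from RMS amplitudes."""
    return 20.0 * math.log10(rms_signal / rms_noise)

def likely_noticed(rms_signal, rms_noise):
    """True if the handling-noise event's SNR falls below the 50%-detection point."""
    return snr_db(rms_signal, rms_noise) < DETECTION_THRESHOLD_DB
```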
Item-nonspecific proactive interference in monkeys' auditory short-term memory.
Bigelow, James; Poremba, Amy
2015-09-01
Recent studies using the delayed matching-to-sample (DMS) paradigm indicate that monkeys' auditory short-term memory (STM) is susceptible to proactive interference (PI). During the task, subjects must indicate whether sample and test sounds separated by a retention interval are identical (match) or not (nonmatch). If a nonmatching test stimulus also occurred on a previous trial, monkeys are more likely to incorrectly make a "match" response (item-specific PI). However, it is not known whether PI may be caused by sounds presented on prior trials that are similar, but nonidentical to the current test stimulus (item-nonspecific PI). This possibility was investigated in two experiments. In Experiment 1, memoranda for each trial comprised tones with a wide range of frequencies, thus minimizing item-specific PI and producing a range of frequency differences among nonidentical tones. In Experiment 2, memoranda were drawn from a set of eight artificial sounds that differed from each other by one, two, or three acoustic dimensions (frequency, spectral bandwidth, and temporal dynamics). Results from both experiments indicate that subjects committed more errors when previously-presented sounds were acoustically similar (though not identical) to the test stimulus of the current trial. Significant effects were produced only by stimuli from the immediately previous trial, suggesting that item-nonspecific PI is less perseverant than item-specific PI, which can extend across noncontiguous trials. Our results contribute to existing human and animal STM literature reporting item-nonspecific PI caused by perceptual similarity among memoranda. Together, these observations underscore the significance of both temporal and discriminability factors in monkeys' STM. Copyright © 2015 Elsevier B.V. All rights reserved.
Auscultation in flight: comparison of conventional and electronic stethoscopes.
Tourtier, J P; Libert, N; Clapson, P; Tazarourte, K; Borne, M; Grasser, L; Debien, B; Auroy, Y
2011-01-01
The ability to auscultate during air medical transport is compromised by high ambient-noise levels. The aim of this study was to assess the capabilities of a traditional and an electronic stethoscope (which is expected to amplify sounds and reduce ambient noise) to assess heart and breath sounds during medical transport in a Boeing C135. We tested one model of a traditional stethoscope (3M™ Littmann Cardiology III™) and one model of an electronic stethoscope (3M™ Littmann Stethoscope Model 3000). We studied heart and lung auscultation during real medical evacuations aboard a medically configured C135. For each device, the quality of auscultation was described using a visual rating scale (ranging from 0 to 100 mm, 0 corresponding to "I hear nothing," 100 to "I hear perfectly"). Comparisons were accomplished using a t-test for paired values. A total of 36 comparative evaluations were performed. For cardiac auscultation, the value of the visual rating scale was 53 ± 24 and 85 ± 11 mm, respectively, for the traditional and electronic stethoscope (paired t-test: P = .0024). For lung sounds, quality of auscultation was estimated at 27 ± 17 mm for traditional stethoscope and 68 ± 13 for electronic stethoscope (paired t-test: P = .0003). The electronic stethoscope was considered to be better than the standard model for hearing heart and lung sounds. Flight practitioners involved in air medical evacuation in the C135 aircraft are better able to practice auscultation with this electronic stethoscope than with a traditional one. Copyright © 2011 Air Medical Journal Associates. Published by Elsevier Inc. All rights reserved.
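The comparison above rests on a paired t-test, since each evaluation rates both stethoscopes under the same conditions. A minimal pure-Python version of the statistic (the data here are illustrative, not the study's 36 ratings):

```python
import math

def paired_t(xs, ys):
    """t statistic for paired samples: t = mean(d) / (sd(d) / sqrt(n)),
    where d are the pairwise differences and sd uses n - 1 (sample sd)."""
    d = [x - y for x, y in zip(xs, ys)]
    n = len(d)
    mean = sum(d) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in d) / (n - 1))
    return mean / (sd / math.sqrt(n))
```

The p-values quoted in the abstract then follow from comparing t against a t-distribution with n − 1 degrees of freedom.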
Jawad, Z; Odumala, A; Jones, M
2012-06-01
Hip injuries are becoming a more common problem as the elderly population increases, and their management represents a significant proportion of health care costs. Diagnosis of a fracture based on clinical assessment and plain films is not always conclusive, and further investigations for such occult fractures, such as magnetic resonance imaging (MRI), are sometimes required; these are expensive and may be difficult to access. Disruption to the conduction of a sound wave travelling through a fractured bone is a concept that has been used to diagnose fractures. In our study we used a tuning fork with a frequency of 128 Hz to objectively measure the reduction in sound amplitude in fractured and non-fractured hips. We looked at the feasibility of using this test as a diagnostic tool for neck of femur fractures. A total of 20 patients were included in the study, using MRI scans as the standard for comparison of diagnostic findings. Informed consent was obtained from the patients. There was a significant difference in the amplitude reduction of the sound waves when comparing normal to fractured hips. This was 0.9 in normal hips, compared to 0.31 and 0.18 in intra-capsular and extra-capsular fractures, respectively. Our test was 80% accurate at diagnosing neck of femur fractures. In conclusion, this test may be used as a diagnostic or screening tool in the assessment of occult hip fractures. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Connick, Robert J.
Accurate measurement of normal-incidence transmission loss is essential for the acoustic characterization of building materials. In this research, a method of measuring normal-incidence sound transmission loss proposed by Salissou et al. as a complement to standard E2611-09 of the American Society for Testing and Materials [Standard Test Method for Measurement of Normal Incidence Sound Transmission of Acoustical Materials Based on the Transfer Matrix Method (American Society for Testing and Materials, New York, 2009)] is verified. Two samples from the original literature are used to verify the method, as well as a Filtros® sample. Following the verification, several nanomaterial aerogel samples are measured.
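In transfer-matrix methods of this kind, the normal-incidence transmission loss follows from the sample's 2×2 acoustic transfer matrix T as TL = 20·log₁₀(|T₁₁ + T₁₂/ρc + ρc·T₂₁ + T₂₂| / 2). The sketch below evaluates this for the simplest analytic case, a limp impervious mass layer with T = [[1, jωm], [0, 1]], which recovers the normal-incidence mass law; the material values are illustrative assumptions:

```python
import math

RHO_C = 415.0   # characteristic impedance of air, approx., in rayl

def tl_from_matrix(T11, T12, T21, T22, rho_c=RHO_C):
    """Normal-incidence TL from a transfer matrix, anechoic termination."""
    return 20.0 * math.log10(abs(T11 + T12 / rho_c + rho_c * T21 + T22) / 2.0)

def tl_mass_law(f_hz, mass_per_area):
    """TL of a limp mass layer: T = [[1, j*omega*m], [0, 1]]."""
    omega = 2.0 * math.pi * f_hz
    return tl_from_matrix(1.0, 1j * omega * mass_per_area, 0.0, 1.0)
```

For a 10 kg/m² layer at 1 kHz this gives about 37.6 dB, matching the familiar normal-incidence mass-law approximation TL ≈ 20·log₁₀(f·m) − 42.4 dB.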
An Overview of Research Activity at the Launch Systems Testbed
NASA Technical Reports Server (NTRS)
Vu, Bruce; Kandula, Max
2003-01-01
This paper summarizes the acoustic testing and analysis activities at the Launch System Testbed (LST) of Kennedy Space Center (KSC). A major goal is to develop passive methods of mitigation of sound from rocket exhaust jets with ducted systems devoid of traditional water injection. Current testing efforts are concerned with the launch-induced vibroacoustic behavior of scaled exhaust jets. Numerical simulations are also developed to study the sound propagation from supersonic jets in free air and through enclosed ducts. Scaling laws accounting for the effects of important parameters such as jet Mach number, jet velocity, and jet temperature on the far-field noise are investigated in order to deduce full-scale environment from small-scale tests.
Haug, Tobias
2011-01-01
There is a current need for reliable and valid test instruments in different countries in order to monitor deaf children's sign language acquisition. However, very few tests are commercially available that offer strong evidence for their psychometric properties. A German Sign Language (DGS) test focusing on linguistic structures that are acquired in preschool- and school-aged children (4-8 years old) is urgently needed. Using the British Sign Language Receptive Skills Test, that has been standardized and has sound psychometric properties, as a template for adaptation thus provides a starting point for tests of a sign language that is less documented, such as DGS. This article makes a novel contribution to the field by examining linguistic, cultural, and methodological issues in the process of adapting a test from the source language to the target language. The adapted DGS test has sound psychometric properties and provides the basis for revision prior to standardization. © The Author 2011. Published by Oxford University Press. All rights reserved.
Pressure treatment of robusta and ohia posts...final report
Roger G. Skolmen
1973-01-01
Round posts of ohia and robusta pressure-treated with one of two preservatives are being tested for durability at the Makiki Exposure Site, Honolulu, Hawaii. After more than 10½ years, all posts treated with pentachlorophenol are still sound, and all but one robusta post treated with chromated copper arsenate are still sound. Life of untreated posts of both species…
2012-03-12
column than sounds with lower frequencies (Urick, 1983). Additionally, these systems are generally operated in the vicinity of the sea floor, thus... Water," TR-76-116, Naval Surface Weapons Center, White Oak, Silver Springs, MD. Urick, R. J. (1983), Principles of Underwater Sound, McGraw-Hill
ERIC Educational Resources Information Center
Shore, Robert Eugene
The effects of two primary reading programs using a programed format (with and without audio-supplement) and a conventional format (the program format deprogramed) in a highly consistent sound-symbol system of reading at three primary grade levels were compared, using a pretest, post-test control group design. The degree of suitability of…
ERIC Educational Resources Information Center
Kudoh, Masaharu; Shibuki, Katsuei
2006-01-01
We have previously reported that sound sequence discrimination learning requires cholinergic inputs to the auditory cortex (AC) in rats. In that study, reward was used for motivating discrimination behavior in rats. Therefore, dopaminergic inputs mediating reward signals may have an important role in the learning. We tested the possibility in the…
ERIC Educational Resources Information Center
Preston, Jonathan L.; Hull, Margaret; Edwards, Mary Louise
2013-01-01
Purpose: To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Method: Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up…