Discovery of Sound in the Sea (DOSITS) Website Development
2013-03-04
Site sections under Science of Sound > Sounds in the Sea address questions such as: How will ocean acidification affect ocean sound levels? How does shipping affect ocean sound levels? How does marine life affect ocean sound levels?
Yan, W Y; Li, L; Yang, Y G; Lin, X L; Wu, J Z
2016-08-01
We designed a computer-based respiratory sound analysis system to identify normal pediatric lung sounds, and set out to verify its validity. First, we downloaded standard lung sounds from a network database (website: http://www.easyauscultation.com/lung-sounds-reference-guide) and recorded 3 samples of abnormal lung sounds (rhonchi, wheeze and crackles) from three patients of the Department of Pediatrics, the First Affiliated Hospital of Xiamen University. We regarded these as "reference lung sounds". The "test lung sounds" were recorded from 29 children from the Kindergarten of Xiamen University. Lung sounds were recorded with a portable electronic stethoscope, and valid lung sounds were selected by manual identification. We used Mel-frequency cepstral coefficients (MFCC) to extract lung sound features and dynamic time warping (DTW) for signal classification. We had 39 standard lung sounds and recorded 58 test lung sounds. The system performed 58 lung sound recognitions, with 52 correct identifications and 6 errors, for an accuracy of 89.7%. Based on MFCC and DTW, our computer-based respiratory sound analysis system can effectively identify healthy lung sounds in children (accuracy of 89.7%), demonstrating the reliability of the lung sound analysis system.
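The classification step described above pairs MFCC feature sequences with dynamic time warping. A minimal sketch of a DTW-based nearest-reference classifier, with the MFCC front end abstracted away (the function names and toy features here are illustrative, not the authors' code):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences
    (rows = time frames, columns = feature dimensions)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # local frame-to-frame cost, accumulated along the best warp path
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(test_seq, references):
    """Return the label of the reference sequence nearest in DTW distance."""
    return min(references, key=lambda label: dtw_distance(test_seq, references[label]))
```

DTW tolerates the tempo differences between recordings that a frame-by-frame Euclidean comparison would penalize, which is why it is a common match for per-frame features such as MFCCs.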
Visual Presentation Effects on Identification of Multiple Environmental Sounds
Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio
2016-01-01
This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. 
Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478
Code of Federal Regulations, 2013 CFR
2013-10-01
46 CFR § 7.20 (Shipping, Atlantic Coast): Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY.
Miner, Nadine E.; Caudell, Thomas P.
2004-06-08
A sound synthesis method for modeling and synthesizing dynamic, parameterized sounds. The sound synthesis method yields perceptually convincing sounds and provides flexibility through model parameterization. By manipulating model parameters, a variety of related, but perceptually different sounds can be generated. The result is subtle changes in sounds, in addition to synthesis of a variety of sounds, all from a small set of models. The sound models can change dynamically according to changes in the simulation environment. The method is applicable to both stochastic (impulse-based) and non-stochastic (pitched) sounds.
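As a toy illustration of the idea of parameterized sound models (a sketch of the concept only, not the patented method), a damped sinusoid mixed with damped noise gives a pitched component and a stochastic component, and varying a handful of parameters yields a family of related but perceptually different sounds:

```python
import numpy as np

def impact_sound(fs=44100, dur=0.5, freq=440.0, decay=8.0, noise_mix=0.2, seed=0):
    """Toy parameterized impact sound: a damped sinusoid (pitched part)
    mixed with damped noise (stochastic part). All parameter names and
    values are illustrative."""
    t = np.arange(int(fs * dur)) / fs
    env = np.exp(-decay * t)                      # shared amplitude envelope
    pitched = np.sin(2 * np.pi * freq * t)        # non-stochastic component
    noise = np.random.default_rng(seed).standard_normal(t.size)  # stochastic component
    return env * ((1 - noise_mix) * pitched + noise_mix * noise)
```

Sweeping `freq`, `decay` or `noise_mix` produces subtle variations of the "same" sound, which is the flexibility the abstract attributes to model parameterization.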
Knight, Lisa; Ladich, Friedrich
2014-11-15
Thorny catfishes produce stridulation (SR) sounds using their pectoral fins and drumming (DR) sounds via a swimbladder mechanism in distress situations when hand held in water and in air. It has been argued that SR and DR sounds are aimed at different receivers (predators) in different media. The aim of this study was to analyse and compare sounds emitted in both air and water in order to test different hypotheses on the functional significance of distress sounds. Five representatives of the family Doradidae were investigated. Fish were hand held and sounds emitted in air and underwater were recorded (number of sounds, sound duration, dominant and fundamental frequency, sound pressure level and peak-to-peak amplitudes). All species produced SR sounds in both media, but DR sounds could not be recorded in air for two species. Differences in sound characteristics between media were small and mainly limited to spectral differences in SR. The number of sounds emitted decreased over time, whereas the duration of SR sounds increased. The dominant frequency of SR and the fundamental frequency of DR decreased and sound pressure level of SR increased with body size across species. The hypothesis that catfish produce more SR sounds in air and more DR sounds in water as a result of different predation pressure (birds versus fish) could not be confirmed. It is assumed that SR sounds serve as distress sounds in both media, whereas DR sounds might primarily be used as intraspecific communication signals in water in species possessing both mechanisms. © 2014. Published by The Company of Biologists Ltd.
Development of an alarm sound database and simulator.
Takeuchi, Akihiro; Hirose, Minoru; Shinbo, Toshiro; Imai, Megumi; Mamorita, Noritaka; Ikeda, Noriaki
2006-10-01
The purpose of this study was to develop an interactive software package of alarm sounds to present, recognize and share problems about alarm sounds among medical staff and medical manufacturers. The alarm sounds were recorded under variable alarm conditions in WAV files. The alarm conditions were arbitrarily induced by modifying attachments of various medical devices. The software package, which integrated an alarm sound database and simulator, was used to assess medical staff's ability to identify the monitor that sounded the alarm. Eighty alarm sound files (40 MB in total) were recorded from 41 medical devices made by 28 companies. There were three pairs of similar alarm sounds that could not easily be distinguished, and two alarm sounds that differed only in priority (low or high). The alarm sound database was created in an Excel file (ASDB.xls, 170 kB; 40 MB with photos) and included a list of file names hyperlinked to the alarm sound files. An alarm sound simulator (AlmSS) was constructed with two modules, for simultaneously playing alarm sound files and for designing new alarm sounds. The AlmSS was used to assess whether 19 clinical engineers could identify 13 alarm sounds by their distinctive sounds alone. They were asked to choose from a list of devices and to rate the priority of each alarm. The overall correct identification rate of the alarm sounds was 48%, and six characteristic alarm sounds were correctly recognized by between 63% and 100% of the subjects. The overall recognition rate of alarm sound priority was only 27%. We have developed an interactive software package of alarm sounds by integrating the database and the alarm sound simulator (URL: http://info.ahs.kitasato-u.ac.jp/tkweb/alarm/asdb.html ). The AlmSS was useful for replaying multiple alarm sounds simultaneously and for designing new alarm sounds interactively.
Geometric Constraints on Human Speech Sound Inventories
Dunbar, Ewan; Dupoux, Emmanuel
2016-01-01
We investigate the idea that the languages of the world have developed coherent sound systems in which having one sound increases or decreases the chances of having certain other sounds, depending on shared properties of those sounds. We investigate the geometries of sound systems that are defined by the inherent properties of sounds. We document three typological tendencies in sound system geometries: economy, a tendency for the differences between sounds in a system to be definable on a relatively small number of independent dimensions; local symmetry, a tendency for sound systems to have relatively large numbers of pairs of sounds that differ only on one dimension; and global symmetry, a tendency for sound systems to be relatively balanced. The finding of economy corroborates previous results; the two symmetry properties have not been previously documented. We also investigate the relation between the typology of inventory geometries and the typology of individual sounds, showing that the frequency distribution with which individual sounds occur across languages works in favor of both local and global symmetry. PMID:27462296
Using therapeutic sound with progressive audiologic tinnitus management.
Henry, James A; Zaugg, Tara L; Myers, Paula J; Schechter, Martin A
2008-09-01
Management of tinnitus generally involves educational counseling, stress reduction, and/or the use of therapeutic sound. This article focuses on therapeutic sound, which can involve three objectives: (a) producing a sense of relief from tinnitus-associated stress (using soothing sound); (b) passively diverting attention away from tinnitus by reducing contrast between tinnitus and the acoustic environment (using background sound); and (c) actively diverting attention away from tinnitus (using interesting sound). Each of these goals can be accomplished using three different types of sound (broadly categorized as environmental sound, music, and speech), resulting in nine combinations of uses and types of sound to manage tinnitus. The authors explain the uses and types of sound, how they can be combined, and how the different combinations are used with Progressive Audiologic Tinnitus Management. They also describe how sound is used with other sound-based methods of tinnitus management (Tinnitus Masking, Tinnitus Retraining Therapy, and Neuromonics).
NASA Astrophysics Data System (ADS)
Hamilton, Mark F.
1989-08-01
Four projects are discussed in this annual summary report, all of which involve basic research in nonlinear acoustics: Scattering of Sound by Sound, a theoretical study of two noncollinear Gaussian beams which interact to produce sum and difference frequency sound; Parametric Receiving Arrays, a theoretical study of parametric reception in a reverberant environment; Nonlinear Effects in Asymmetric Sound Beams, a numerical study of two-dimensional finite amplitude sound fields; and Pulsed Finite Amplitude Sound Beams, a numerical time-domain solution of the KZK equation.
Sound localization and auditory response capabilities in round goby (Neogobius melanostomus)
NASA Astrophysics Data System (ADS)
Rollo, Audrey K.; Higgs, Dennis M.
2005-04-01
A fundamental role of vertebrate auditory systems is determining the direction of a sound source. While fish show directional responses to sound, sound localization in fishes remains in dispute. The species used in the current study, Neogobius melanostomus (round goby), uses sound in reproductive contexts, with both male and female gobies showing directed movement towards a calling male. A two-choice laboratory experiment (active versus quiet speaker) was used to analyze the behavior of gobies in response to sound stimuli. When conspecific male spawning sounds were played, gobies moved in a direct path to the active speaker, suggesting true localization of sound. Of the animals that responded to conspecific sounds, 85% of the females and 66% of the males moved directly to the sound source. Auditory playback of natural and synthetic sounds showed differential behavioral specificity. Of gobies that responded, 89% were attracted to the speaker playing Padogobius martensii sounds, 87% to a 100 Hz tone, 62% to white noise, and 56% to Gobius niger sounds. Swimming speed, as well as mean path angle to the speaker, will also be presented. Results suggest strong localization by the round goby to a sound source, with some differential sound specificity.
NASA Astrophysics Data System (ADS)
Gauthier, P.-A.; Camier, C.; Lebel, F.-A.; Pasco, Y.; Berry, A.; Langlois, J.; Verron, C.; Guastavino, C.
2016-08-01
Sound environment reproduction of various flight conditions in aircraft mock-ups is a valuable tool for the study, prediction, demonstration and jury testing of interior aircraft sound quality and annoyance. To provide a faithful reproduced sound environment, time, frequency and spatial characteristics should be preserved. Physical sound field reproduction methods for spatial sound reproduction are mandatory to immerse the listener's body in the proper sound fields so that localization cues are recreated at the listener's ears. Vehicle mock-ups pose specific problems for sound field reproduction: confined spaces, the need for invisible sound sources and a very specific acoustical environment make open-loop sound field reproduction technologies such as wave field synthesis (based on free-field models of monopole sources) less than ideal. In this paper, experiments in an aircraft mock-up with multichannel least-square methods and equalization are reported. The novelty is the actual implementation of sound field reproduction with 3180 transfer paths and trim panel reproduction sources in laboratory conditions with a synthetic target sound field. The paper presents objective evaluations of reproduced sound fields using various metrics, as well as sound field extrapolation and sound field characterization.
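The multichannel least-square approach mentioned above can be sketched, per frequency bin, as a regularized least-squares solve for the reproduction source strengths given measured transfer paths (the dimensions and regularization weight below are illustrative; the actual system used 3180 transfer paths):

```python
import numpy as np

def ls_reproduction_filters(G, p_target, beta=1e-3):
    """Regularized least-squares source strengths q minimizing
    ||G q - p_target||^2 + beta ||q||^2 at one frequency.
    G: (mics x sources) complex transfer matrix; p_target: (mics,) target
    pressures at the microphone array."""
    n = G.shape[1]
    # normal equations with Tikhonov regularization for robustness
    A = G.conj().T @ G + beta * np.eye(n)
    return np.linalg.solve(A, G.conj().T @ p_target)
```

The regularization term `beta` trades reproduction accuracy against source effort, which matters in practice when the transfer matrix is ill-conditioned, as is common in confined mock-up cabins.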
Yost, William A; Zhong, Xuan; Najam, Anbar
2015-11-01
In four experiments listeners were rotated or were stationary. Sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuth array. When either sounds or listeners rotate the auditory cues used for sound source localization change, but in the everyday world listeners perceive sound rotation only when sounds rotate not when listeners rotate. In the everyday world sound source locations are referenced to positions in the environment (a world-centric reference system). The auditory cues for sound source location indicate locations relative to the head (a head-centric reference system), not locations relative to the world. This paper deals with a general hypothesis that the world-centric location of sound sources requires the auditory system to have information about auditory cues used for sound source location and cues about head position. The use of visual and vestibular information in determining rotating head position in sound rotation perception was investigated. The experiments show that sound rotation perception when sources and listeners rotate was based on acoustic, visual, and, perhaps, vestibular information. The findings are consistent with the general hypotheses and suggest that sound source localization is not based just on acoustics. It is a multisystem process.
Effect of different sound atmospheres on SnO2:Sb thin films prepared by dip coating technique
NASA Astrophysics Data System (ADS)
Kocyigit, Adem; Ozturk, Erhan; Ejderha, Kadir; Turgut, Guven
2017-11-01
The effects of different sound atmospheres on SnO2:Sb thin films deposited by the dip coating technique were investigated. Two sound atmospheres were used in this study: one was a nay sound atmosphere as a soft sound, and the other was a metallic sound as a hard sound. X-ray diffraction (XRD) patterns indicated that the films have different orientations and structural parameters under quiet-room, metallic and soft sound atmospheres. UV-Vis spectrometer measurements showed that the films have different band gaps and optical transmittances with changing sound atmosphere. Scanning electron microscope (SEM) and AFM images showed that the film surfaces were affected by the changing sound atmospheres. Electrical measurements showed that the films have different I-V plots and different sheet resistances with changing sound atmosphere. These sound effects may be used to manage atoms at the nano scale.
Statistical Analysis for Subjective and Objective Evaluations of Dental Drill Sounds.
Yamada, Tomomi; Kuwano, Sonoko; Ebisu, Shigeyuki; Hayashi, Mikako
2016-01-01
The sound produced by a dental air turbine handpiece (dental drill) can markedly influence the sound environment in a dental clinic. Indeed, many patients report that the sound of a dental drill elicits an unpleasant feeling. Although several manufacturers have attempted to reduce the sound pressure levels produced by dental drills during idling based on ISO 14457, the sound emitted by such drills under active drilling conditions may negatively influence the dental clinic sound environment. The physical metrics related to the unpleasant impressions associated with dental drill sounds have not been determined. In the present study, psychological measurements of dental drill sounds were conducted with the aim of facilitating improvement of the sound environment at dental clinics. Specifically, we examined the impressions elicited by the sounds of 12 types of dental drills in idling and drilling conditions using a semantic differential. The analysis revealed that the impressions of dental drill sounds varied considerably between idling and drilling conditions and among the examined drills. This finding suggests that measuring the sound of a dental drill in idling conditions alone may be insufficient for evaluating the effects of the sound. We related the results of the psychological evaluations to those of measurements of the physical metrics of equivalent continuous A-weighted sound pressure levels (LAeq) and sharpness. Factor analysis indicated that impressions of the dental drill sounds consisted of two factors: "metallic and unpleasant" and "powerful". LAeq had a strong relationship with "powerful impression", calculated sharpness was positively related to "metallic impression", and "unpleasant impression" was predicted by the combination of both LAeq and calculated sharpness. 
The present analyses indicate that, in addition to a reduction in sound pressure level, refining the frequency components of dental drill sounds is important for creating a comfortable sound environment in dental clinics.
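LAeq, the equivalent continuous A-weighted sound pressure level used in the analysis above, can be computed from a calibrated pressure signal. A sketch applying the standard IEC 61672 A-weighting curve in the frequency domain (a simplification of a true time-domain A-weighting filter; sample values are illustrative):

```python
import numpy as np

def a_weight_db(f):
    """A-weighting attenuation in dB at frequency f [Hz] (IEC 61672 formula)."""
    f = np.asarray(f, dtype=float)
    ra = (12194.0**2 * f**4) / (
        (f**2 + 20.6**2)
        * np.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194.0**2)
    )
    return 20.0 * np.log10(ra) + 2.00  # normalized to ~0 dB at 1 kHz

def laeq(pressure, fs, p_ref=20e-6):
    """Equivalent continuous A-weighted level of a pressure signal [Pa],
    with the weighting applied to the signal's spectrum."""
    spec = np.fft.rfft(pressure)
    freqs = np.fft.rfftfreq(len(pressure), 1.0 / fs)
    w = 10.0 ** (a_weight_db(np.maximum(freqs, 1e-6)) / 20.0)
    weighted = np.fft.irfft(spec * w, n=len(pressure))
    return 10.0 * np.log10(np.mean(weighted**2) / p_ref**2)
```

A 1 kHz tone at 1 Pa RMS should come out near 94 dB(A), since the A-weighting is defined to be approximately flat at 1 kHz.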
NASA Astrophysics Data System (ADS)
Itoh, Kosuke; Nakada, Tsutomu
2013-04-01
Deterministic nonlinear dynamical processes are ubiquitous in nature. Chaotic sounds generated by such processes may appear irregular and random in waveform, but these sounds are mathematically distinguished from random stochastic sounds in that they contain deterministic short-time predictability in their temporal fine structures. We show that the human brain distinguishes deterministic chaotic sounds from spectrally matched stochastic sounds in neural processing and perception. Deterministic chaotic sounds, even without being attended to, elicited greater cerebral cortical responses than the surrogate control sounds after about 150 ms in latency after sound onset. Listeners also clearly discriminated these sounds in perception. The results support the hypothesis that the human auditory system is sensitive to the subtle short-time predictability embedded in the temporal fine structure of sounds.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-21
... Eighth Coast Guard District Annual Safety Zones; Sound of Independence; Santa Rosa Sound; Fort ... The Coast Guard will enforce a Safety Zone for the Sound of Independence event in the Santa Rosa Sound ... during the Sound of Independence. During the enforcement period, entry into, transiting or anchoring in ...
Code of Federal Regulations, 2010 CFR
2010-07-01
33 CFR § 334.412 (Navigation and Navigable Waters): Albemarle Sound, Pamlico Sound, Harvey Point and adjacent waters, NC; restricted area. (a) The area. Beginning on the north shore of Albemarle Sound and the easternmost tip of Harvey Point ...
Behaviours Associated with Acoustic Communication in Nile Tilapia (Oreochromis niloticus)
Longrie, Nicolas; Poncin, Pascal; Denoël, Mathieu; Gennotte, Vincent; Delcourt, Johann; Parmentier, Eric
2013-01-01
Background: Sound production is widespread among fishes and accompanies many social interactions. The literature reports twenty-nine cichlid species known to produce sounds during aggressive and courtship displays, but the precise range of behavioural contexts is unclear. This study aims to describe the various Oreochromis niloticus behaviours that are associated with sound production in order to delimit the role of sound during different activities, including agonistic behaviours, pit activities, and reproduction and parental care by males and females of the species. Methodology/Principal Findings: Sounds mostly occur during the day. The sounds recorded during this study accompany previously known behaviours, and no particular behaviour is systematically associated with sound production. Males and females make sounds during territorial defence but not during courtship and mating. Sounds support visual behaviours but are not used alone. During agonistic interactions, a calling Oreochromis niloticus does not bite after producing sounds, and more sounds are produced in defence of territory than for dominating individuals. Females produce sounds to defend eggs but not larvae. Conclusion/Significance: Sounds are produced to reinforce visual behaviours. Moreover, comparisons with O. mossambicus indicate that two sister species can differ in their use of sound, their acoustic characteristics, and the function of sound production. These findings support the role of sounds in differentiating species and promoting speciation. They also make clear that the association of sounds with specific life-cycle roles cannot be generalized to the entire taxon. PMID:23620756
Meteorological effects on long-range outdoor sound propagation
NASA Technical Reports Server (NTRS)
Klug, Helmut
1990-01-01
Measurements of sound propagation over distances up to 1000 m were carried out with an impulse sound source offering reproducible, short time signals. Temperature and wind speed at several heights were monitored simultaneously; the meteorological data are used to determine the sound speed gradients according to the Monin-Obukhov similarity theory. The sound speed profile is compared to a corresponding prediction, gained through the measured travel time difference between direct and ground reflected pulse (which depends on the sound speed gradient). Positive sound speed gradients cause bending of the sound rays towards the ground yielding enhanced sound pressure levels. The measured meteorological effects on sound propagation are discussed and illustrated by ray tracing methods.
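The dependence of ray bending on the sound speed gradient can be illustrated with a simple effective sound speed profile: the adiabatic speed from a linear temperature profile plus a logarithmic wind profile. All values below are illustrative placeholders, not the Monin-Obukhov profiles used in the study:

```python
import numpy as np

def effective_sound_speed(z, T0=288.15, dTdz=-0.0065, u_ref=5.0, z_ref=10.0, z0=0.1):
    """Effective downwind sound speed [m/s] at height z [m]: adiabatic
    speed sqrt(gamma R T) for a linear temperature profile, plus a
    logarithmic wind profile (illustrative parameter values)."""
    gamma, R = 1.4, 287.05           # ratio of specific heats, gas constant for air
    T = T0 + dTdz * z                # linear temperature profile [K]
    c = np.sqrt(gamma * R * T)       # adiabatic sound speed
    u = u_ref * np.log(z / z0) / np.log(z_ref / z0)  # log wind profile
    return c + u
```

Downwind, the wind term typically dominates the weak temperature lapse, giving the positive effective gradient that bends rays toward the ground and raises received levels, as reported in the measurements.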
The sound symbolism bootstrapping hypothesis for language acquisition and language evolution
Imai, Mutsumi; Kita, Sotaro
2014-01-01
Sound symbolism is a non-arbitrary relationship between speech sounds and meaning. We review evidence that, contrary to the traditional view in linguistics, sound symbolism is an important design feature of language, which affects online processing of language, and most importantly, language acquisition. We propose the sound symbolism bootstrapping hypothesis, claiming that (i) pre-verbal infants are sensitive to sound symbolism, due to a biologically endowed ability to map and integrate multi-modal input, (ii) sound symbolism helps infants gain referential insight for speech sounds, (iii) sound symbolism helps infants and toddlers associate speech sounds with their referents to establish a lexical representation and (iv) sound symbolism helps toddlers learn words by allowing them to focus on referents embedded in a complex scene, alleviating Quine's problem. We further explore the possibility that sound symbolism is deeply related to language evolution, drawing the parallel between historical development of language across generations and ontogenetic development within individuals. Finally, we suggest that sound symbolism bootstrapping is a part of a more general phenomenon of bootstrapping by means of iconic representations, drawing on similarities and close behavioural links between sound symbolism and speech-accompanying iconic gesture. PMID:25092666
Psychophysiological acoustics of indoor sound due to traffic noise during sleep
NASA Astrophysics Data System (ADS)
Tulen, J. H. M.; Kumar, A.; Jurriëns, A. A.
1986-10-01
The relation between the physical characteristics of sound and an individual's perception of it as annoyance is complex and unclear. Sleep disturbance by sound is manifested in the physiological responses to the sound stimuli and in the quality of sleep perceived in the morning. Both may result in deterioration of functioning during wakefulness. Therefore, psychophysiological responses to noise during sleep should be studied to evaluate the efficacy of sound insulation. Nocturnal sleep and indoor sound level were recorded in the homes of 12 subjects living along a highway with high traffic density. Double-glazing sound insulation was used to create two experimental conditions: low insulation and high insulation. Twenty recordings were made per subject, ten in each condition. During the nights with low insulation the quality of sleep was so low that both performance and mood were negatively affected. The enhancement of sound insulation was not effective enough to increase the restorative effects of sleep. The transient and peaky characteristics of traffic sound were also found to result in non-adaptive physiological responses during sleep. Sound insulation did have an effect on noise peak characteristics such as peak level, peak duration and slope. However, the number of sound peaks was found to be the same in both conditions. The relation of the sound peaks detected in the indoor sound level signal to characteristics of passing vehicles was established, indicating that the sound peaks causing the psychophysiological disturbances during sleep were generated by the passing vehicles. Evidence is presented to show that the reduction in sound level is not a good measure of the efficacy of sound insulation. The parameters of the sound peaks, as described in this paper, are a better representation of the psychophysiological efficacy of sound insulation.
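The peak parameters discussed above (peak level, peak duration, slope) can be extracted from a sampled sound level signal with a simple threshold-run detector. The threshold and the exact parameter definitions below are illustrative; the paper's criteria are not reproduced here:

```python
import numpy as np

def find_sound_peaks(level_db, fs, threshold_db):
    """Detect contiguous runs of a sampled level signal [dB] above a
    threshold; report peak level [dB], duration above threshold [s]
    and rise slope to the peak [dB/s] for each run."""
    level_db = np.asarray(level_db, dtype=float)
    above = level_db > threshold_db
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]          # signal begins above threshold
    if above[-1]:
        ends = np.r_[ends, len(level_db)]  # signal ends above threshold
    peaks = []
    for s, e in zip(starts, ends):
        seg = level_db[s:e]
        k = int(np.argmax(seg))
        peaks.append({
            "level": float(seg[k]),                           # peak level
            "duration": (e - s) / fs,                         # time above threshold
            "slope": (seg[k] - threshold_db) * fs / (k + 1),  # rise rate to the peak
        })
    return peaks
```

Counting and parameterizing peaks this way, rather than averaging the level, matches the paper's argument that peak characteristics predict sleep disturbance better than overall level reduction.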
Characterizing, synthesizing, and/or canceling out acoustic signals from sound sources
Holzrichter, John F [Berkeley, CA; Ng, Lawrence C [Danville, CA
2007-03-13
A system for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate and animate sound sources. Electromagnetic sensors monitor excitation sources in sound producing systems, such as animate sound sources such as the human voice, or from machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The systems disclosed enable accurate calculation of transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
Tervaniemi, M; Kruck, S; De Baene, W; Schröger, E; Alter, K; Friederici, A D
2009-10-01
By recording auditory electrical brain potentials, we investigated whether the basic sound parameters (frequency, duration and intensity) are differentially encoded among speech vs. music sounds by musicians and non-musicians during different attentional demands. To this end, a pseudoword and an instrumental sound of comparable frequency and duration were presented. The accuracy of neural discrimination was tested by manipulations of frequency, duration and intensity. Additionally, the subjects' attentional focus was manipulated by instructions to ignore the sounds while watching a silent movie or to attentively discriminate the different sounds. In both musicians and non-musicians, the pre-attentively evoked mismatch negativity (MMN) component was larger to slight changes in music than in speech sounds. The MMN was also larger to intensity changes in music sounds and to duration changes in speech sounds. During attentional listening, all subjects more readily discriminated changes among speech sounds than among music sounds as indexed by the N2b response strength. Furthermore, during attentional listening, musicians displayed larger MMN and N2b than non-musicians for both music and speech sounds. Taken together, the data indicate that the discriminative abilities in human audition differ between music and speech sounds as a function of the sound-change context and the subjective familiarity of the sound parameters. These findings provide clear evidence for top-down modulatory effects in audition. In other words, the processing of sounds is realized by a dynamically adapting network considering type of sound, expertise and attentional demands, rather than by a strictly modularly organized stimulus-driven system.
Alards-Tomalin, Doug; Walker, Alexander C; Shaw, Joshua D M; Leboe-McGowan, Launa C
2015-09-01
The cross-modal impact of number magnitude (i.e., Arabic digits) on perceived sound loudness was examined. Participants compared a target sound's intensity level against a previously heard reference sound (which they judged as quieter or louder). Paired with each target sound was a task-irrelevant Arabic digit that varied in magnitude, being either small (1, 2, 3) or large (7, 8, 9). The degree to which the sound and the digit were synchronized was manipulated, with the digit and sound occurring simultaneously in Experiment 1, and the digit preceding the sound in Experiment 2. First, when target sounds and digits occurred simultaneously, sounds paired with large digits were categorized as loud more frequently than sounds paired with small digits. Second, when the events were separated, number magnitude ceased to bias sound intensity judgments. In Experiment 3, the events were again separated, but participants held the number in short-term memory; in this instance the bias returned. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Miner, Nadine Elizabeth
1998-09-01
This dissertation presents a new wavelet-based method for synthesizing perceptually convincing, dynamic sounds using parameterized sound models. The sound synthesis method is applicable to a variety of applications including Virtual Reality (VR), multimedia, entertainment, and the World Wide Web (WWW). A unique contribution of this research is the modeling of the stochastic, or non-pitched, sound components. This stochastic-based modeling approach leads to perceptually compelling sound synthesis. Two preliminary studies provided data on multi-sensory interaction and audio-visual synchronization timing. These results contributed to the design of the new sound synthesis method. The method uses a four-phase development process, comprising analysis, parameterization, synthesis, and validation, to create the wavelet-based sound models. A patent is pending for this dynamic sound synthesis method, which provides perceptually realistic, real-time sound generation. This dissertation also presents a battery of perceptual experiments developed to verify the sound synthesis results. These experiments are applicable for validation of any sound synthesis technique.
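The analysis-and-resynthesis idea at the core of wavelet-based sound modeling can be illustrated with a single-level Haar transform: decompose a signal into approximation and detail coefficients, then reconstruct it. This is a minimal sketch only; the dissertation's actual wavelet family and parameterization are not specified here.

```python
import math

def haar_dwt(signal):
    """Single-level Haar wavelet transform: returns (approximation, detail)."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse single-level Haar transform (perfect reconstruction)."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / s)  # even-indexed sample
        out.append((a - d) / s)  # odd-indexed sample
    return out

# Analyze a short signal, then resynthesize it from its coefficients.
x = [1.0, 3.0, 2.0, 2.0, 5.0, 1.0, 0.0, 4.0]
a, d = haar_dwt(x)
y = haar_idwt(a, d)
```

In a parameterized sound model, one would modify the coefficients `a` and `d` (e.g., rescale the detail band) before inversion rather than reconstruct verbatim.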
Sound sensitivity of neurons in rat hippocampus during performance of a sound-guided task
Vinnik, Ekaterina; Honey, Christian; Schnupp, Jan; Diamond, Mathew E.
2012-01-01
To investigate how hippocampal neurons encode sound stimuli, and the conjunction of sound stimuli with the animal's position in space, we recorded from neurons in the CA1 region of hippocampus in rats while they performed a sound discrimination task. Four different sounds were used, two associated with water reward on the right side of the animal and the other two with water reward on the left side. This allowed us to separate neuronal activity related to sound identity from activity related to response direction. To test the effect of spatial context on sound coding, we trained rats to carry out the task on two identical testing platforms at different locations in the same room. Twenty-one percent of the recorded neurons exhibited sensitivity to sound identity, as quantified by the difference in firing rate for the two sounds associated with the same response direction. Sensitivity to sound identity was often observed on only one of the two testing platforms, indicating an effect of spatial context on sensory responses. Forty-three percent of the neurons were sensitive to response direction, and the probability that any one neuron was sensitive to response direction was statistically independent from its sensitivity to sound identity. There was no significant coding for sound identity when the rats heard the same sounds outside the behavioral task. These results suggest that CA1 neurons encode sound stimuli, but only when those sounds are associated with actions. PMID:22219030
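The sound-identity sensitivity described above, quantified by the difference in firing rate for the two sounds associated with the same response direction, can be sketched as a normalized contrast. The normalization and the example rates below are illustrative assumptions, not the paper's exact metric.

```python
def identity_selectivity(rate_a, rate_b):
    """Normalized firing-rate contrast for two sounds sharing a response
    direction: 0 = no preference, 1 = fully selective."""
    if rate_a + rate_b == 0:
        return 0.0
    return abs(rate_a - rate_b) / (rate_a + rate_b)

# Hypothetical mean firing rates (spikes/s) for two left-reward sounds.
print(identity_selectivity(12.0, 4.0))  # 0.5
```

A threshold on such a contrast (assessed against trial-shuffled data) is one common way to label a neuron as "sensitive to sound identity."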
75 FR 69429 - Combined Notice of Filings #1
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-12
Docket: ER11-2008-000. Applicants: Puget Sound Energy, Inc. Description: Puget Sound Energy, Inc. submits tariff filing per 35.12.
Statistical Analysis for Subjective and Objective Evaluations of Dental Drill Sounds
Yamada, Tomomi; Kuwano, Sonoko; Ebisu, Shigeyuki; Hayashi, Mikako
2016-01-01
The sound produced by a dental air turbine handpiece (dental drill) can markedly influence the sound environment in a dental clinic. Indeed, many patients report that the sound of a dental drill elicits an unpleasant feeling. Although several manufacturers have attempted to reduce the sound pressure levels produced by dental drills during idling based on ISO 14457, the sound emitted by such drills under active drilling conditions may negatively influence the dental clinic sound environment. The physical metrics related to the unpleasant impressions associated with dental drill sounds have not been determined. In the present study, psychological measurements of dental drill sounds were conducted with the aim of facilitating improvement of the sound environment at dental clinics. Specifically, we examined the impressions elicited by the sounds of 12 types of dental drills in idling and drilling conditions using a semantic differential. The analysis revealed that the impressions of dental drill sounds varied considerably between idling and drilling conditions and among the examined drills. This finding suggests that measuring the sound of a dental drill in idling conditions alone may be insufficient for evaluating the effects of the sound. We related the results of the psychological evaluations to those of measurements of the physical metrics of equivalent continuous A-weighted sound pressure levels (LAeq) and sharpness. Factor analysis indicated that impressions of the dental drill sounds consisted of two factors: “metallic and unpleasant” and “powerful”. LAeq had a strong relationship with “powerful impression”, calculated sharpness was positively related to “metallic impression”, and “unpleasant impression” was predicted by the combination of both LAeq and calculated sharpness. 
The present analyses indicate that, in addition to a reduction in sound pressure level, refining the frequency components of dental drill sounds is important for creating a comfortable sound environment in dental clinics. PMID:27462903
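LAeq, one of the physical metrics related to the psychological evaluations above, is the energy-equivalent continuous sound level; given A-weighted pressure samples it reduces to a mean-square computation. A minimal sketch (the 20 µPa reference is the standard value for airborne sound; the A-weighting filter itself is omitted):

```python
import math

P_REF = 20e-6  # standard reference sound pressure in pascals

def laeq(pressure_samples):
    """Equivalent continuous sound level (dB) from A-weighted pressure
    samples: 10*log10 of the mean squared pressure over p_ref squared."""
    mean_sq = sum(p * p for p in pressure_samples) / len(pressure_samples)
    return 10.0 * math.log10(mean_sq / P_REF ** 2)

# A constant 1 Pa signal corresponds to ~94 dB re 20 uPa.
print(round(laeq([1.0, 1.0, 1.0]), 1))
```

Sharpness, the other metric used in the study, requires a specific-loudness spectrum and is not reproduced here.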
Vocal Imitations of Non-Vocal Sounds
Houix, Olivier; Voisin, Frédéric; Misdariis, Nicolas; Susini, Patrick
2016-01-01
Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes as no surprise that human speakers use many imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g., the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g., a car driving by) through a different system (the voice apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational “auditory sketches” (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to that with the best auditory sketches for most categories of sounds, and even to the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. Analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category.
These results offer perspectives for understanding how human listeners store and access long-term sound representations, and set the stage for the development of human-computer interfaces based on vocalizations. PMID:27992480
33 CFR 167.1702 - In Prince William Sound: Prince William Sound Traffic Separation Scheme.
Code of Federal Regulations, 2012 CFR
2012-07-01
33 CFR 167.1702 (Navigation and Navigable Waters; Coast Guard; Traffic Separation Schemes; Description of Traffic Separation Schemes and Precautionary Areas, Pacific West Coast): In Prince William Sound: Prince William Sound Traffic Separation Scheme.
Effects of capacity limits, memory loss, and sound type in change deafness.
Gregg, Melissa K; Irsik, Vanessa C; Snyder, Joel S
2017-11-01
Change deafness, the inability to notice changes to auditory scenes, has the potential to provide insights about sound perception in busy situations typical of everyday life. We determined the extent to which change deafness to sounds is due to the capacity of processing multiple sounds and the loss of memory for sounds over time. We also determined whether these processing limitations work differently for varying types of sounds within a scene. Auditory scenes composed of naturalistic sounds, spectrally dynamic unrecognizable sounds, tones, and noise rhythms were presented in a change-detection task. On each trial, two scenes were presented that were same or different. We manipulated the number of sounds within each scene to measure memory capacity and the silent interval between scenes to measure memory loss. For all sounds, change detection was worse as scene size increased, demonstrating the importance of capacity limits. Change detection to the natural sounds did not deteriorate much as the interval between scenes increased up to 2,000 ms, but it did deteriorate substantially with longer intervals. For artificial sounds, in contrast, change-detection performance suffered even for very short intervals. The results suggest that change detection is generally limited by capacity, regardless of sound type, but that auditory memory is more enduring for sounds with naturalistic acoustic structures.
The sound symbolism bootstrapping hypothesis for language acquisition and language evolution.
Imai, Mutsumi; Kita, Sotaro
2014-09-19
Sound symbolism is a non-arbitrary relationship between speech sounds and meaning. We review evidence that, contrary to the traditional view in linguistics, sound symbolism is an important design feature of language, which affects online processing of language, and most importantly, language acquisition. We propose the sound symbolism bootstrapping hypothesis, claiming that (i) pre-verbal infants are sensitive to sound symbolism, due to a biologically endowed ability to map and integrate multi-modal input, (ii) sound symbolism helps infants gain referential insight for speech sounds, (iii) sound symbolism helps infants and toddlers associate speech sounds with their referents to establish a lexical representation and (iv) sound symbolism helps toddlers learn words by allowing them to focus on referents embedded in a complex scene, alleviating Quine's problem. We further explore the possibility that sound symbolism is deeply related to language evolution, drawing the parallel between historical development of language across generations and ontogenetic development within individuals. Finally, we suggest that sound symbolism bootstrapping is a part of a more general phenomenon of bootstrapping by means of iconic representations, drawing on similarities and close behavioural links between sound symbolism and speech-accompanying iconic gesture. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Continuous robust sound event classification using time-frequency features and deep learning
McLoughlin, Ian; Zhang, Haomin; Xie, Zhipeng; Song, Yan; Xiao, Wei; Phan, Huy
2017-01-01
The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high-performing isolated sound classifiers to operate with continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition, it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification. PMID:28892478
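An energy-based event detection front end of the kind benchmarked above can be sketched as framewise energy thresholding over a continuous recording. The frame length, threshold, and segmentation logic below are illustrative assumptions, not the paper's exact detector.

```python
def detect_events(samples, frame_len, threshold):
    """Return (start, end) sample indices of segments whose framewise
    mean energy exceeds the threshold."""
    events, start = [], None
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        if energy > threshold and start is None:
            start = i                  # event onset
        elif energy <= threshold and start is not None:
            events.append((start, i))  # event offset
            start = None
    if start is not None:              # event continues to end of recording
        events.append((start, len(samples)))
    return events

quiet, loud = [0.01] * 8, [0.5] * 8
print(detect_events(quiet + loud + quiet, frame_len=8, threshold=0.01))
```

Each detected segment would then be passed to an isolated-sound classifier, which is how the paper adapts isolated classifiers to continuous data.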
Salomons, Erik M.; Lohman, Walter J. A.; Zhou, Han
2016-01-01
Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing. PMID:26789631
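The excess sound level proposed above is simply the difference between the simulated level and the free-field level at the same receiver, so dissipation common to both contributions largely cancels. A minimal sketch (the example pressures are hypothetical):

```python
import math

def spl(p_rms, p_ref=20e-6):
    """Sound pressure level in dB re 20 uPa."""
    return 20.0 * math.log10(p_rms / p_ref)

def excess_level(p_simulated, p_free_field):
    """Excess sound level: simulated SPL minus free-field SPL at the
    same receiver; dissipation common to both largely cancels."""
    return spl(p_simulated) - spl(p_free_field)

# A receiver pressure twice the free-field value gives about +6 dB excess
# (e.g., constructive ground reflection).
print(round(excess_level(2.0e-3, 1.0e-3), 1))
```

In an LBM simulation, both pressures would come from the same lattice and viscosity settings, which is why the excess level is far less sensitive to numerical dissipation than the absolute level.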
Psychoacoustical evaluation of natural and urban sounds in soundscapes.
Yang, Ming; Kang, Jian
2013-07-01
Among various sounds in the environment, natural sounds, such as water sounds and birdsongs, have proven to be highly preferred by humans, but the reasons for these preferences have not been thoroughly researched. This paper explores differences between various natural and urban environmental sounds from the viewpoint of objective measures, especially psychoacoustical parameters. The sound samples used in this study include the recordings of single sound source categories of water, wind, birdsongs, and urban sounds including street music, mechanical sounds, and traffic noise. The samples are analyzed with a number of existing psychoacoustical parameter algorithmic models. Based on hierarchical cluster and principal components analyses of the calculated results, a series of differences has been shown among different sound types in terms of key psychoacoustical parameters. While different sound categories cannot be identified using any single acoustical and psychoacoustical parameter, identification can be made with a group of parameters, as analyzed with artificial neural networks and discriminant functions in this paper. For artificial neural networks, correlations between network predictions and targets using the average and standard deviation data of psychoacoustical parameters as inputs are above 0.95 for the three natural sound categories and above 0.90 for the urban sound category. For sound identification/classification, key parameters are fluctuation strength, loudness, and sharpness.
Assessment and improvement of sound quality in cochlear implant users
Caldwell, Meredith T.; Jiam, Nicole T.
2017-01-01
Objectives: Cochlear implants (CIs) have successfully provided speech perception to individuals with sensorineural hearing loss. Recent research has focused on more challenging acoustic stimuli such as music and voice emotion. The purpose of this review is to evaluate and describe sound quality in CI users, summarizing novel findings and crucial information about how CI users experience complex sounds. Data Sources: Here we review the existing literature on PubMed and Scopus to present what is known about perceptual sound quality in CI users, discuss existing measures of sound quality, explore how sound quality may be effectively studied, and examine potential strategies for improving sound quality in the CI population. Results: Sound quality, defined here as the perceived richness of an auditory stimulus, is an attribute of implant-mediated listening that remains poorly studied. Sound quality is distinct from appraisal, which is generally defined as the subjective likability or pleasantness of a sound. Existing studies suggest that sound quality perception in the CI population is limited by a range of factors, most notably pitch distortion and dynamic range compression. Although there are currently very few objective measures of sound quality, the CI-MUSHRA has been used as a means of evaluating sound quality. There exist a number of promising strategies to improve sound quality perception in the CI population, including apical cochlear stimulation, pitch tuning, and noise reduction processing strategies. Conclusions: In the published literature, sound quality perception is severely limited among CI users. Future research should focus on developing systematic, objective, and quantitative sound quality metrics and designing therapies to mitigate poor sound quality perception in CI users. Level of Evidence: NA. PMID:28894831
On the effectiveness of vocal imitations and verbal descriptions of sounds.
Lemaitre, Guillaume; Rocchesso, Davide
2014-02-01
Describing unidentified sounds with words is a frustrating task and vocally imitating them is often a convenient way to address the issue. This article reports on a study that compared the effectiveness of vocal imitations and verbalizations to communicate different referent sounds. The stimuli included mechanical and synthesized sounds and were selected on the basis of participants' confidence in identifying the cause of the sounds, ranging from easy-to-identify to unidentifiable sounds. The study used a selection of vocal imitations and verbalizations deemed adequate descriptions of the referent sounds. These descriptions were used in a nine-alternative forced-choice experiment: Participants listened to a description and picked one sound from a list of nine possible referent sounds. Results showed that recognition based on verbalizations was maximally effective when the referent sounds were identifiable. Recognition accuracy with verbalizations dropped when identifiability of the sounds decreased. Conversely, recognition accuracy with vocal imitations did not depend on the identifiability of the referent sounds and was as high as with the best verbalizations. This shows that vocal imitations are an effective means of representing and communicating sounds and suggests that they could be used in a number of applications.
Tuning the cognitive environment: Sound masking with 'natural' sounds in open-plan offices
NASA Astrophysics Data System (ADS)
DeLoach, Alana
With the gain in popularity of open-plan office design and the engineering efforts to achieve acoustical comfort for building occupants, a majority of workers still report dissatisfaction with their workplace environment. Office acoustics influence organizational effectiveness, efficiency, and satisfaction by meeting appropriate requirements for speech privacy and ambient sound levels. Implementing a sound masking system is one tried-and-true method of achieving privacy goals. Although each sound masking system is tuned for its specific environment, the signal (random steady-state electronic noise) has remained the same for decades. This research explores how 'natural' sounds may be used as an alternative to the standard masking signal employed so ubiquitously in sound masking systems in the contemporary office environment. As an unobtrusive background sound possessing the appropriate spectral characteristics, this proposed use of 'natural' sounds for masking challenges the convention that masking sounds should be as meaningless as possible. Through the pilot study presented in this work, we hypothesize that 'natural' sounds used as maskers will be as effective at masking distracting background noise as the conventional masking sound, will enhance cognitive functioning, and will increase participant (worker) satisfaction.
A Flexible 360-Degree Thermal Sound Source Based on Laser Induced Graphene
Tao, Lu-Qi; Liu, Ying; Ju, Zhen-Yi; Tian, He; Xie, Qian-Yi; Yang, Yi; Ren, Tian-Ling
2016-01-01
A flexible sound source is essential to a fully flexible system, yet a conventional sound source based on a piezoelectric part is hard to integrate into one. Moreover, the sound pressure from the back side of a sound source is usually weaker than that from the front side. With the help of direct laser writing (DLW) technology, the fabrication of a flexible 360-degree thermal sound source becomes possible. A 650-nm low-power laser was used to reduce the graphene oxide (GO). The stripped laser-induced graphene thermal sound source was then attached to the surface of a cylindrical bottle so that it could emit sound in a 360-degree direction. The sound pressure level and directivity of the sound source were tested, and the results were in good agreement with the theoretical results. Because of its 360-degree sound field, high flexibility, high efficiency, low cost, and good reliability, the 360-degree thermal acoustic sound source will find wide application in consumer electronics, multimedia systems, and ultrasonic detection and imaging. PMID:28335239
NASA Astrophysics Data System (ADS)
Waheed, R.; Tarar, W.; Saeed, H. A.
2016-08-01
Soundproof canopies for diesel power generators are fabricated with a layer of sound-absorbing material applied to all the inner walls. The physical properties of the majority of commercially available soundproofing materials reveal that a material with a high sound absorption coefficient has very low thermal conductivity; consequently, a good sound absorber is also a good heat insulator. In this research it has been found through various experiments that ordinary soundproofing materials tend to raise the inside temperature of the soundproof enclosure in certain turbo engines by capturing the heat produced by the engine and not allowing it to be transferred to the atmosphere. The same phenomenon is studied by creating a finite element model of the soundproof enclosure and performing steady-state and transient thermal analyses. The prospects of using aluminium foam as a soundproofing material have been studied, and it is found that the inside temperature of the soundproof enclosure can be cut down to the safe working temperature of the power generator engine without compromising sound proofing.
NASA Astrophysics Data System (ADS)
Huang, Xianfeng; Meng, Yao; Huang, Riming
2017-10-01
This paper describes a theoretical method for predicting the improvement in impact sound insulation provided by a floating floor with a resilient interlayer. A statistical energy analysis (SEA) model, well suited to calculating floor impact sound, is set up to calculate the reduction in impact sound pressure level in the downstairs room. The sound transmission paths, which include the direct path and flanking paths, are analyzed to find the dominant one, and the factors that affect impact sound reduction for a floating floor are explored. The impact sound level in the downstairs room is then determined, and comparisons between predicted and measured data are made. The results indicate that for impact sound transmission across a floating floor, the flanking paths contribute little to the overall sound level in the downstairs room, and a floating floor with a low-stiffness interlayer exhibits favorable sound insulation on the direct path. The SEA approach, experimentally verified for floating floors with resilient interlayers, provides guidance for sound insulation design.
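Combining direct and flanking transmission paths amounts to an energetic (incoherent) sum of the per-path levels, which makes the claim that flanking paths contribute little easy to check numerically. A sketch (the example levels are hypothetical):

```python
import math

def combine_levels(levels_db):
    """Energetic sum of incoherent sound pressure levels in dB."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db))

# Direct path at 60 dB plus a flanking path 20 dB below it:
total = combine_levels([60.0, 40.0])
print(round(total, 2))  # barely above the direct path alone
```

A path 20 dB below the dominant one raises the total by well under 0.1 dB, consistent with neglecting flanking transmission when the direct path dominates.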
Effect of real-world sounds on protein crystallization.
Zhang, Chen-Yan; Liu, Yue; Tian, Xu-Hua; Liu, Wen-Jing; Li, Xiao-Yu; Yang, Li-Xue; Jiang, Han-Jun; Han, Chong; Chen, Ke-An; Yin, Da-Chuan
2018-06-01
Protein crystallization is sensitive to the environment, while audible sound, as a physical and environmental factor during the entire process, is always ignored. We have previously reported that protein crystallization can be affected by a computer-generated monotonous sound with fixed frequency and amplitude. However, real-world sounds are not so simple but are complicated by parameters (frequency, amplitude, timbre, etc.) that vary over time. In this work, from three sound categories (music, speech, and environmental sound), we selected 26 different sounds and evaluated their effects on protein crystallization. The correlation between the sound parameters and the crystallization success rate was studied mathematically. The results showed that the real-world sounds, similar to the artificial monotonous sounds, could not only affect protein crystallization, but also improve crystal quality. Crystallization was dependent not only on the frequency, amplitude, volume, irradiation time, and overall energy of the sounds but also on their spectral characteristics. Based on these results, we suggest that intentionally applying environmental sound may be a simple and useful tool to promote protein crystallization. Copyright © 2018. Published by Elsevier B.V.
Sun, Xiuwen; Li, Xiaoling; Ji, Lingyu; Han, Feng; Wang, Huifen; Liu, Yang; Chen, Yao; Lou, Zhiyuan; Li, Zhuoyun
2018-01-01
Building on existing research on sound symbolism and crossmodal correspondence, this study extends the investigation of cross-modal correspondence between various sound attributes and color properties in a group of non-synesthetes. In Experiment 1, we assessed the associations between each property of sounds and colors. Twenty sounds with five auditory properties (pitch, roughness, sharpness, tempo, and discontinuity), each varied at four levels, were used as the sound stimuli. Forty-nine colors with different hues, saturation, and brightness were available to match to those sounds. Results revealed that, besides pitch and tempo, roughness and sharpness also played roles in sound-color correspondence. Reaction times for sound-hue matches were slightly longer than those for sound-lightness matches. In Experiment 2, a speeded target discrimination task was used to assess whether the associations between sound attributes and color properties could invoke natural cross-modal correspondence and improve participants' cognitive efficiency in cognitive tasks. Several typical sound-color pairings were selected according to the results of Experiment 1. Participants were divided into two groups (congruent and incongruent). In each trial participants had to judge whether the presented color could appropriately be associated with the sound stimulus. Results revealed that participants responded more quickly and accurately in the congruent group than in the incongruent group. There was no significant difference in reaction times or error rates between sound-hue and sound-lightness. The results of Experiments 1 and 2 indicate the existence of a robust crossmodal correspondence between multiple attributes of sound and color, which also strongly influences cognitive tasks.
The inconsistency of the reaction times between sound-hue and sound-lightness across Experiments 1 and 2 is probably owing to the difference in experimental protocol, which suggests that the complexity of the experimental design may be an important factor in crossmodal correspondence phenomena.
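The congruency advantage reported in Experiment 2 amounts to comparing mean reaction times across the two groups. A toy computation of that effect, with invented reaction times (the abstract gives no raw data):

```python
from statistics import mean

def congruency_effect(rt_congruent, rt_incongruent):
    """Difference of mean reaction times (ms) between groups; a positive
    value means faster responses for congruent sound-color pairings."""
    return mean(rt_incongruent) - mean(rt_congruent)

# Hypothetical per-participant mean RTs in milliseconds (illustration only).
congruent = [512, 498, 530, 505, 521]
incongruent = [560, 572, 549, 581, 565]
print(congruency_effect(congruent, incongruent))
```

A real analysis would of course add an inferential test (e.g. a between-groups t-test) rather than just the raw difference.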
Kellam, Barbara; Bhatia, Jatinder
2009-08-01
Few noise measurement studies in the neonatal intensive care unit have reported sound frequencies within incubators. Sound frequencies within incubators are markedly different from sound frequencies within the gravid uterus. This article reports the results of sound spectral analysis (SSA) within unoccupied incubators under control and treatment conditions. SSA indicated that acoustical foam panels (treatment condition) markedly reduced sound frequencies ≥500 Hz when compared with the control condition. The main findings of this study (a) illustrate the need to monitor high-frequency sound within incubators and (b) indicate one method to reduce atypical sound exposure within incubators.
NASA Astrophysics Data System (ADS)
Eshach, Haim
2014-06-01
This article describes the development and field test of the Sound Concept Inventory Instrument (SCII), designed to measure middle school students' concepts of sound. The instrument was designed based on known students' difficulties in understanding sound and the history of science related to sound and focuses on two main aspects of sound: sound has material properties, and sound has process properties. The final SCII consists of 71 statements that respondents rate as either true or false and also indicate their confidence on a five-point scale. Administration to 355 middle school students resulted in a Cronbach alpha of 0.906, suggesting a high reliability. In addition, the average percentage of students' answers to statements that associate sound with material properties is significantly higher than the average percentage of statements associating sound with process properties (p <0.001). The SCII is a valid and reliable tool that can be used to determine students' conceptions of sound.
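The reliability figure quoted for the SCII (Cronbach's alpha of 0.906) comes from the standard internal-consistency formula, which can be reproduced from raw item responses. A minimal sketch with invented toy data (three true/false items, four respondents):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item score lists (items x respondents):
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(pvariance(item) for item in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Toy data: 3 items answered by 4 respondents (1 = true, 0 = false).
items = [[1, 1, 0, 1], [1, 1, 0, 0], [1, 0, 0, 1]]
print(round(cronbach_alpha(items), 3))
```

Values above about 0.9, as reported for the SCII, are conventionally read as high internal consistency.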
Neighing, barking, and drumming horses-object related sounds help and hinder picture naming.
Mädebach, Andreas; Wöhner, Stefan; Kieseler, Marie-Luise; Jescheniak, Jörg D
2017-09-01
The study presented here investigated how environmental sounds influence picture naming. In a series of four experiments participants named pictures (e.g., the picture of a horse) while hearing task-irrelevant sounds (e.g., neighing, barking, or drumming). Experiments 1 and 2 established two findings, facilitation from congruent sounds (e.g., picture: horse, sound: neighing) and interference from semantically related sounds (e.g., sound: barking), both relative to unrelated sounds (e.g., sound: drumming). Experiment 3 replicated the effects in a situation in which participants were not familiarized with the sounds prior to the experiment. Experiment 4 replicated the congruency facilitation effect, but showed that semantic interference was not obtained with distractor sounds which were not associated with target pictures (i.e., were not part of the response set). The general pattern of facilitation from congruent sound distractors and interference from semantically related sound distractors resembles the pattern commonly observed with distractor words. This parallelism suggests that the underlying processes are not specific to either distractor words or distractor sounds but instead reflect general aspects of semantic-lexical selection in language production. The results indicate that language production theories need to include a competitive selection mechanism at either the lexical processing stage, or the prelexical processing stage, or both. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Huber, Annika; Barber, Anjuli L A; Faragó, Tamás; Müller, Corsin A; Huber, Ludwig
2017-07-01
Emotional contagion, a basic component of empathy defined as emotional state-matching between individuals, has previously been shown in dogs even upon solely hearing negative emotional sounds of humans or conspecifics. The current investigation further sheds light on this phenomenon by directly contrasting emotional sounds of both species (humans and dogs) as well as opposed valences (positive and negative) to gain insights into intra- and interspecies empathy as well as differences between positively and negatively valenced sounds. Different types of sounds were played back to measure the influence of three dimensions on the dogs' behavioural response. We found that dogs behaved differently after hearing non-emotional sounds of their environment compared to emotional sounds of humans and conspecifics ("Emotionality" dimension), but the subjects responded similarly to human and conspecific sounds ("Species" dimension). However, dogs expressed more freezing behaviour after conspecific sounds, independent of the valence. Comparing positively with negatively valenced sounds of both species ("Valence" dimension), we found that, independent of the species from which the sound originated, dogs expressed more behavioural indicators for arousal and negatively valenced states after hearing negative emotional sounds. This response pattern indicates emotional state-matching or emotional contagion for negative sounds of humans and conspecifics. It furthermore indicates that dogs recognized the different valences of the emotional sounds, which is a promising finding for future studies on empathy for positive emotional states in dogs.
Aubert, A E; Denys, B G; Meno, F; Reddy, P S
1985-05-01
Several investigators have noted external gallop sounds to be of higher amplitude than their corresponding internal sounds (S3 and S4). In this study we hoped to determine if S3 and S4 are transmitted in the same manner as S1. In 11 closed-chest dogs, external (apical) and left ventricular pressures and sounds were recorded simultaneously with transducers with identical sensitivity and frequency responses. Volume and pressure overload and positive and negative inotropic drugs were used to generate gallop sounds. Recordings were made in the control state and after the various interventions. S3 and S4 were recorded in 17 experiments each. The amplitude of the external S1 was uniformly higher than that of internal S1 and internal gallop sounds were inconspicuous. With use of Fourier transforms, the gain function was determined by comparing internal to external S1. By inverse transform, the amplitude of the internal gallop sounds was predicted from external sounds. The internal sounds of significant amplitude were predicted in many instances, but the actual recordings showed no conspicuous sounds. The absence of internal gallop sounds of expected amplitude as calculated from the external gallop sounds and the gain function derived from the comparison of internal and external S1 make it very unlikely that external gallop sounds are derived from internal sounds.
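The gain-function approach described above (comparing internal and external S1 with Fourier transforms, then predicting internal gallop sounds by inverse transform) can be sketched as follows. The naive DFT and the toy signals are illustrative only, not the study's data:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(n^2), fine for short toy signals)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def predict_internal(external, gain):
    """Predict an internal sound from an external recording using the gain
    function H[k] = DFT(internal S1)[k] / DFT(external S1)[k]."""
    X = dft(external)
    return idft([Xk * Hk for Xk, Hk in zip(X, gain)])

# Toy check: if internal S1 is half the external S1, the gain is 0.5 at every
# occupied frequency, so any external sound maps to half its amplitude.
ext_s1 = [0.0, 1.0, 0.0, -1.0]
int_s1 = [0.5 * v for v in ext_s1]
gain = [Ik / Ek if abs(Ek) > 1e-12 else 0.0
        for Ik, Ek in zip(dft(int_s1), dft(ext_s1))]
ext_gallop = [0.0, 2.0, 0.0, -2.0]
print([round(v, 6) for v in predict_internal(ext_gallop, gain)])
```

The study's negative finding is exactly the failure of this prediction: the internal gallop amplitudes implied by the gain function were not observed in the actual recordings.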
Activation of auditory cortex by anticipating and hearing emotional sounds: an MEG study.
Yokosawa, Koichi; Pamilo, Siina; Hirvenkari, Lotta; Hari, Riitta; Pihko, Elina
2013-01-01
To study how auditory cortical processing is affected by anticipating and hearing of long emotional sounds, we recorded auditory evoked magnetic fields with a whole-scalp MEG device from 15 healthy adults who were listening to emotional or neutral sounds. Pleasant, unpleasant, or neutral sounds, each lasting for 6 s, were played in a random order, preceded by 100-ms cue tones (0.5, 1, or 2 kHz) 2 s before the onset of the sound. The cue tones, indicating the valence of the upcoming emotional sounds, evoked typical transient N100m responses in the auditory cortex. During the rest of the anticipation period (until the beginning of the emotional sound), auditory cortices of both hemispheres generated slow shifts of the same polarity as N100m. During anticipation, the relative strengths of the auditory-cortex signals depended on the upcoming sound: towards the end of the anticipation period the activity became stronger when the subject was anticipating emotional rather than neutral sounds. During the actual emotional and neutral sounds, sustained fields were predominant in the left hemisphere for all sounds. The measured DC MEG signals during both anticipation and hearing of emotional sounds implied that following the cue that indicates the valence of the upcoming sound, the auditory-cortex activity is modulated by the upcoming sound category during the anticipation period.
Radiation characteristics of multiple and single sound hole vihuelas and a classical guitar.
Bader, Rolf
2012-01-01
Two recently built vihuelas, quasi-replicas of the Spanish Renaissance guitar, one with a small body and a single sound hole and one with a large body and five sound holes, are investigated together with a classical guitar. Frequency-dependent radiation strengths are measured using a 128-microphone array, back-propagating the frequency-dependent sound field onto the body surface. All three instruments show strong sound hole radiation in the low-frequency range. The five-sound-hole vihuela has a much wider frequency region of strong sound hole radiation, up to about 500 Hz, owing to the enlarged radiation area of its sound holes, whereas the single-hole instruments radiate strongly from the sound hole only up to about 300 Hz. The strong broadband radiation of the five-sound-hole vihuela up to about 500 Hz is also caused by the sound hole phases, which show very consistent in-phase relations up to this frequency range. The radiation of the sound holes placed nearer the center of the sound box is also much stronger than that of those near the ribs, pointing to a strong dependence of radiation strength on sound hole position. The Helmholtz resonance frequency of the five-sound-hole vihuela is influenced by this difference in radiation strength but not by the rosettes, which have only a slight effect on the Helmholtz frequency. © 2012 Acoustical Society of America.
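The Helmholtz resonance mentioned above follows the familiar resonator formula. A sketch with hypothetical guitar-like dimensions (not measurements of these instruments), including a rough end correction for the sound hole:

```python
import math

def helmholtz_frequency(area, volume, neck_length, c=343.0):
    """Helmholtz resonance frequency f = (c / (2*pi)) * sqrt(A / (V * L_eff)).
    L_eff should include an end correction (roughly 1.7 * hole radius for a
    flanged circular opening); all lengths in metres."""
    return c / (2 * math.pi) * math.sqrt(area / (volume * neck_length))

# Hypothetical guitar-like body: 85 mm diameter sound hole, 12 litre air volume,
# effective neck length taken as the end correction alone for a thin top plate.
r = 0.085 / 2
hole_area = math.pi * r ** 2
f = helmholtz_frequency(hole_area, 0.012, 1.7 * r)
print(round(f, 1))
```

The formula makes the abstract's point plausible: the resonance depends on hole area and cavity volume, while ornamental rosettes change the effective opening only slightly.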
ERIC Educational Resources Information Center
Allen, Robert L.; And Others
This handbook introduces the important correspondences existing between English sounds and English spelling patterns. The lessons present the vowel sounds, one by one, along with systematically selected consonant sounds, and show how each sound or combination of sounds is usually spelled in English words. Irregularly spelled words are introduced…
Nearshore Birds in Puget Sound
2006-05-01
Published by Seattle District, U.S. Army Corps of Engineers, Seattle, Washington. Kriete, B. 2007. Orcas in Puget Sound. Puget Sound Nearshore...Technical Report 2006-05. Puget Sound Nearshore Partnership. Nearshore Birds in Puget Sound. Prepared in...support of the Puget Sound Nearshore Partnership. Joseph B. Buchanan, Washington Department of Fish and Wildlife. Technical Report 2006-05
Holzrichter, John F.; Burnett, Greg C.; Ng, Lawrence C.
2003-01-01
A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
A simple computer-based measurement and analysis system of pulmonary auscultation sounds.
Polat, Hüseyin; Güler, Inan
2004-12-01
Listening to various lung sounds has proven to be an important diagnostic tool for detecting and monitoring certain types of lung disease. In this study a computer-based system was designed for easy measurement and analysis of lung sounds using the software package DasyLAB. The designed system can digitally record lung sounds captured with an electronic stethoscope plugged into the sound card of a portable computer; display the lung sound waveform for each auscultation site; save the lung sound in ASCII format; acoustically reproduce the lung sound; edit and print the sound waveforms; display the time-expanded waveform; compute the Fast Fourier Transform (FFT); and display the power spectrum and spectrogram.
Holzrichter, John F; Burnett, Greg C; Ng, Lawrence C
2013-05-21
A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
Holzrichter, John F.; Burnett, Greg C.; Ng, Lawrence C.
2007-10-16
A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
Development of an ICT-Based Air Column Resonance Learning Media
NASA Astrophysics Data System (ADS)
Purjiyanta, Eka; Handayani, Langlang; Marwoto, Putut
2016-08-01
Commonly, the sound source used in air column resonance experiments is a tuning fork, which has the disadvantage that its sound grows progressively weaker, giving suboptimal resonance results. In this study we generated tones of varying frequency with the Audacity software and stored them on a mobile phone, which served as the sound source. One advantage of this source is its stability: it produces an equally strong sound throughout. The movement of water in a glass tube mounted on the resonance apparatus and the tone emitted by the mobile phone were recorded with a video camera. The first, second, and third resonances were recorded for each tone frequency. Because the sound persists, it can be used for the first, second, third, and subsequent resonance measurements. This study aimed to (1) explain how to create tones that can substitute for the tuning fork sound used in air column resonance experiments, (2) illustrate the sound wave that occurs at the first, second, and third resonances in the experiment, and (3) determine the speed of sound in air. The study used an experimental method. It was concluded that (1) substitute tones for a tuning fork sound can be made with the Audacity software; (2) the form of the sound waves at the first, second, and third resonances in the air column can be drawn from the video recordings of the air column resonance; and (3) the experiment gives a speed of sound in air of 346.5 m/s, while chart analysis with the Logger Pro software gives 343.9 ± 0.3171 m/s.
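The speed of sound follows from the resonance lengths in the usual way: successive resonances of a tube closed at one end are half a wavelength apart, so the tube's end correction cancels out. A sketch with hypothetical readings (not the study's data):

```python
def speed_of_sound(freq, first_len, second_len):
    """Speed of sound (m/s) from a closed-tube resonance experiment:
    successive resonance lengths differ by lambda/2, so
    v = f * lambda = f * 2 * (L2 - L1)."""
    return freq * 2.0 * (second_len - first_len)

# Hypothetical readings for a 512 Hz tone: first resonance at 16.5 cm,
# second at 50.3 cm of air column.
print(speed_of_sound(512, 0.165, 0.503))
```

Using the length difference rather than the first resonance alone is what makes the method insensitive to the tube's end correction.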
Material sound source localization through headphones
NASA Astrophysics Data System (ADS)
Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada
2012-09-01
In the present paper a study of sound localization is carried out, considering two sounds produced by hitting different materials (wood and a bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with a view to a future implementation of the sounds with the best localization features in navigation aid systems or training audio games for blind people. The wood and bongo sounds are recorded after hitting objects made of these materials and are then analysed and processed. The Delta sound (click) is generated using the Adobe Audition software at 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions, both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment: subjects are asked to localize the source position of the sound heard through the headphones, using a graphical user interface. The analyses of the recorded data reveal no significant differences, either for the nature of the sounds (wood, bongo, Delta) or for their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96%, and Delta sound 89.59%; for the sounds with reverberation the results are: wood 90.59%, bongo 92.63%, and Delta sound 90.91%. According to these data, we conclude that even with the reverberation effect included, the localization accuracy does not significantly increase.
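The HRTF convolution step used to spatialize the sounds can be sketched in plain Python. A real implementation would use FFT-based convolution and a measured pair of head-related impulse responses (HRIRs) per ear; the HRIR values below are made up:

```python
def convolve(signal, hrir):
    """Discrete convolution of a mono sound with a head-related impulse
    response (HRIR). Doing this once per ear, with the left and right HRIRs
    for a given direction, yields the binaural headphone signal."""
    out = [0.0] * (len(signal) + len(hrir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(hrir):
            out[i + j] += s * h
    return out

# A Delta sound (unit click) convolved with an HRIR returns the HRIR itself,
# which is one reason click-like stimuli are convenient localization probes.
hrir = [0.3, 1.0, -0.4, 0.1]
print(convolve([1.0], hrir))
```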
Neonatal incubators: a toxic sound environment for the preterm infant?*.
Marik, Paul E; Fuller, Christopher; Levitov, Alexander; Moll, Elizabeth
2012-11-01
High sound pressure levels may be harmful to the maturing newborn. Current guidelines suggest that sound pressure levels within a neonatal intensive care unit should not exceed 45 dB(A). Environmental noise, as well as the noise generated by the incubator fan and respiratory equipment, likely contributes to the total sound pressure levels, and knowledge of the contribution of each component and source is important for developing effective strategies to reduce noise within the incubator. The objectives of this study were to determine the sound levels, sound spectra, and major sources of sound within a modern neonatal incubator (Giraffe Omnibed; GE Healthcare, Helsinki, Finland) using a sound simulation study replicating the conditions of a preterm infant undergoing high-frequency jet ventilation (Life Pulse, Bunnell, UT). Using advanced sound data acquisition and signal processing equipment, we measured and analyzed the sound level at a dummy infant's ear and at head level outside the enclosure. The sound data time histories were digitally acquired and processed using a Fast Fourier Transform algorithm to provide sound spectra and cumulative sound pressure levels in dB(A). The simulation was run with the incubator cooling fan and ventilator switched on or off. In addition, tests were carried out first with the enclosure sides closed and the hood down, and then with the enclosure sides open and the hood up, to determine the importance of interior incubator reverberance on the interior sound levels. With all the equipment off and the hood down, the sound pressure level inside the incubator was 53 dB(A), increasing to 68 dB(A) with all equipment switched on (approximately 10 times louder than recommended). The sound intensity was 6.0 × 10⁻⁸ W/m², roughly comparable with that generated by a kitchen exhaust fan on high.
Turning the ventilator off reduced the overall sound pressure level to 64 dB(A) and reduced the sound pressure levels in the low-frequency band of 0 to 100 Hz by 10 dB(A). The incubator fan generated tones at 200, 400, and 600 Hz that raised the sound level by approximately 2 to 3 dB(A). Opening the enclosure (with all equipment turned on) reduced the sound levels above 50 Hz by reducing the reverberance within the enclosure. The sound levels, especially at low frequencies, within a modern incubator may reach levels likely to be harmful to the developing newborn. Much of the noise is at low frequencies and thus difficult to reduce by conventional means; advanced forms of noise control are therefore needed to address this issue.
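The reported intensity of 6.0 × 10⁻⁸ W/m² can be converted to an unweighted sound intensity level with the standard 10⁻¹² W/m² reference. Note this plain level is not directly comparable with the study's A-weighted dB(A) figures, which depend on the sound's spectrum:

```python
import math

I0 = 1e-12  # reference sound intensity, W/m^2

def intensity_level(intensity):
    """Unweighted sound intensity level in dB re 1e-12 W/m^2:
    L_I = 10 * log10(I / I0)."""
    return 10 * math.log10(intensity / I0)

# The incubator intensity reported in the study:
print(round(intensity_level(6.0e-8), 1))
```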
33 CFR 67.10-40 - Sound signals authorized for use prior to January 1, 1973.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., and 67.10-10, if the sound signal has a minimum sound pressure level as specified in Table A of... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Sound signals authorized for use... STRUCTURES General Requirements for Sound signals § 67.10-40 Sound signals authorized for use prior to...
33 CFR 67.10-15 - Approval of sound signals.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Approval of sound signals. 67.10... Sound signals § 67.10-15 Approval of sound signals. (a) The Coast Guard approves a sound signal if: (1) It meets the requirements for sound signals in § 67.10-1 (a), (b), (c), (d), and (e) when tested...
ERIC Educational Resources Information Center
Eshach, Haim
2014-01-01
This article describes the development and field test of the Sound Concept Inventory Instrument (SCII), designed to measure middle school students' concepts of sound. The instrument was designed based on known students' difficulties in understanding sound and the history of science related to sound and focuses on two main aspects of sound: sound…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-02
...-AA08 Special Local Regulation, Swim Across the Sound, Long Island Sound, Port Jefferson, NY to Captain... permanent Special Local Regulation on the navigable waters of Long Island Sound between Port Jefferson, NY and Captain's Cove Seaport, Bridgeport, CT due to the annual Swim Across the Sound event. The proposed...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-18
...-AA08 Special Local Regulation; Swim Across the Sound, Long Island Sound, Port Jefferson, NY to Captain... Guard is establishing a permanent Special Local Regulation on the navigable waters of Long Island Sound... Sound event. This special local regulation is necessary to provide for the safety of life by protecting...
33 CFR 67.10-15 - Approval of sound signals.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Approval of sound signals. 67.10... Sound signals § 67.10-15 Approval of sound signals. (a) The Coast Guard approves a sound signal if: (1) It meets the requirements for sound signals in § 67.10-1 (a), (b), (c), (d), and (e) when tested...
Code of Federal Regulations, 2012 CFR
2012-07-01
... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false In Puget Sound and its approaches: Approaches to Puget Sound other than Rosario Strait. 167.1322 Section 167.1322 Navigation and Navigable... Coast § 167.1322 In Puget Sound and its approaches: Approaches to Puget Sound other than Rosario Strait...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false In Puget Sound and its approaches: Approaches to Puget Sound other than Rosario Strait. 167.1322 Section 167.1322 Navigation and Navigable... Coast § 167.1322 In Puget Sound and its approaches: Approaches to Puget Sound other than Rosario Strait...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false In Puget Sound and its approaches: Approaches to Puget Sound other than Rosario Strait. 167.1322 Section 167.1322 Navigation and Navigable... Coast § 167.1322 In Puget Sound and its approaches: Approaches to Puget Sound other than Rosario Strait...
Dr. Seuss's Sound Words: Playing with Phonics and Spelling.
ERIC Educational Resources Information Center
Gardner, Traci
Boom! Br-r-ring! Cluck! Moo!--exciting sounds are everywhere. Whether visiting online sites that play sounds or taking a "sound hike," ask your students to notice the sounds they hear, then write their own book, using sound words, based on Dr. Seuss's "Mr. Brown Can MOO! Can You?" During the three 45-minute sessions, grade K-2…
33 CFR 67.10-15 - Approval of sound signals.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 33 Navigation and Navigable Waters 1 2012-07-01 2012-07-01 false Approval of sound signals. 67.10... Sound signals § 67.10-15 Approval of sound signals. (a) The Coast Guard approves a sound signal if: (1) It meets the requirements for sound signals in § 67.10-1 (a), (b), (c), (d), and (e) when tested...
33 CFR 67.10-15 - Approval of sound signals.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 1 2014-07-01 2014-07-01 false Approval of sound signals. 67.10... Sound signals § 67.10-15 Approval of sound signals. (a) The Coast Guard approves a sound signal if: (1) It meets the requirements for sound signals in § 67.10-1 (a), (b), (c), (d), and (e) when tested...
33 CFR 67.10-15 - Approval of sound signals.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 33 Navigation and Navigable Waters 1 2013-07-01 2013-07-01 false Approval of sound signals. 67.10... Sound signals § 67.10-15 Approval of sound signals. (a) The Coast Guard approves a sound signal if: (1) It meets the requirements for sound signals in § 67.10-1 (a), (b), (c), (d), and (e) when tested...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false In Puget Sound and its approaches: Approaches to Puget Sound other than Rosario Strait. 167.1322 Section 167.1322 Navigation and Navigable... Coast § 167.1322 In Puget Sound and its approaches: Approaches to Puget Sound other than Rosario Strait...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Swim Across the Sound, Long Island Sound, Port Jefferson, NY to Captain's Cove Seaport, Bridgeport, CT. 100.121 Section 100.121... SAFETY OF LIFE ON NAVIGABLE WATERS § 100.121 Swim Across the Sound, Long Island Sound, Port Jefferson, NY...
Sound absorption of metallic sound absorbers fabricated via the selective laser melting process
NASA Astrophysics Data System (ADS)
Cheng, Li-Wei; Cheng, Chung-Wei; Chung, Kuo-Chun; Kam, Tai-Yan
2017-01-01
The sound absorption capability of metallic sound absorbers fabricated using the additive manufacturing (selective laser melting) method is investigated using both experimental and theoretical approaches. The metallic sound absorption structures composed of periodic cubic cells were made of laser-melted Ti6Al4V powder. The acoustic impedance equations with different frequency-independent and frequency-dependent end-correction factors are employed to calculate the theoretical sound absorption coefficients of the metallic sound absorption structures. The calculated sound absorption coefficients are in close agreement with the experimental results for frequencies ranging from 2 to 13 kHz.
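The link between a surface's acoustic impedance and its absorption coefficient, which underlies such impedance-equation calculations, can be sketched with the generic normal-incidence textbook relation α = 1 − |R|², R = (Z − ρc)/(Z + ρc). This is not the authors' specific end-correction model, and the impedance values below are purely illustrative:

```python
import numpy as np

RHO_C = 415.0  # approximate characteristic impedance of air at ~20 °C, Pa·s/m

def absorption_coefficient(z_surface):
    """Normal-incidence absorption coefficient from surface impedance:
    alpha = 1 - |R|^2, with reflection factor R = (Z - rho*c)/(Z + rho*c)."""
    z = np.asarray(z_surface, dtype=complex)
    r = (z - RHO_C) / (z + RHO_C)
    return 1.0 - np.abs(r) ** 2

# A perfectly matched surface (Z = rho*c) absorbs all incident energy:
print(float(absorption_coefficient(415.0)))   # → 1.0
# A nearly rigid surface reflects almost everything (alpha close to 0):
print(float(absorption_coefficient(415000.0)))
```

Frequency dependence enters through a complex, frequency-dependent Z; the formula itself is unchanged.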
WODA Technical Guidance on Underwater Sound from Dredging.
Thomsen, Frank; Borsani, Fabrizio; Clarke, Douglas; de Jong, Christ; de Wit, Pim; Goethals, Fredrik; Holtkamp, Martine; Martin, Elena San; Spadaro, Philip; van Raalte, Gerard; Victor, George Yesu Vedha; Jensen, Anders
2016-01-01
The World Organization of Dredging Associations (WODA) has identified underwater sound as an environmental issue that needs further consideration. A WODA Expert Group on Underwater Sound (WEGUS) prepared a guidance paper in 2013 on dredging sound, including a summary of potential impacts on aquatic biota and advice on underwater sound monitoring procedures. The paper follows a risk-based approach and provides guidance for standardization of acoustic terminology and methods for data collection and analysis. Furthermore, the literature on dredging-related sounds and the effects of dredging sounds on marine life is surveyed and guidance on the management of dredging-related sound risks is provided.
Some aspects of coupling-induced sound absorption in enclosures.
Sum, K S; Pan, J
2003-08-01
It is known that the coupling between a modally reactive boundary structure of an enclosure and the enclosed sound field induces absorption in the sound field. However, the effect of this absorption on the sound-field response can vary significantly, even when material properties of the structure and dimensions of the coupled system are not changed. Although there have been numerous investigations of coupling between a structure and an enclosed sound field, little work has been done in the area of sound absorption induced by the coupling. Therefore, characteristics of the absorption are not well understood and the extent of its influence on the behavior of the sound-field response is not clearly known. In this paper, the coupling of a boundary structure and an enclosed sound field in frequency bands above the low-frequency range is considered. Three aspects of the coupling-induced sound absorption are studied, namely the effects of exciting either the structure or the sound field directly, damping in the uncoupled sound field, and damping in the uncoupled structure. The results provide an understanding of some features of the coupling-induced absorption and its significance to the sound-field response.
The influence of company identity on the perception of vehicle sounds.
Humphreys, Louise; Giudice, Sebastiano; Jennings, Paul; Cain, Rebecca; Song, Wookeun; Dunne, Garry
2011-04-01
In order to determine how the interior of a car should sound, automotive manufacturers often rely on obtaining data from individual evaluations of vehicle sounds. Company identity could play a role in these appraisals, particularly when individuals are comparing cars from opposite ends of the performance spectrum. This research addressed the question: does company identity influence the evaluation of automotive sounds belonging to cars of a similar performance level and from the same market segment? Participants listened to car sounds from two competing manufacturers, together with control sounds. Before listening to each sound, participants were presented with the correct company identity for that sound, the incorrect identity or were given no information about the identity of the sound. The results showed that company identity did not influence appraisals of high performance cars belonging to different manufacturers. These results have positive implications for methodologies employed to capture the perceptions of individuals. STATEMENT OF RELEVANCE: A challenge in automotive design is to set appropriate targets for vehicle sounds, relying on understanding subjective reactions of individuals to such sounds. This paper assesses the role of company identity in influencing these subjective reactions and will guide sound evaluation studies, in which the manufacturer is often apparent.
Sound and vibration sensitivity of VIIIth nerve fibers in the grassfrog, Rana temporaria.
Christensen-Dalsgaard, J; Jørgensen, M B
1996-10-01
We have studied the sound and vibration sensitivity of 164 amphibian papilla fibers in the VIIIth nerve of the grassfrog, Rana temporaria. The VIIIth nerve was exposed using a dorsal approach. The frogs were placed in a natural sitting posture and stimulated by free-field sound. Furthermore, the animals were stimulated with dorso-ventral vibrations, and the sound-induced vertical vibrations in the setup could be canceled by emitting vibrations in antiphase from the vibration exciter. All low-frequency fibers responded to both sound and vibration, with sound thresholds from 23 dB SPL and vibration thresholds from 0.02 cm/s². The sound and vibration sensitivity was compared for each fiber using the offset between the rate-level curves for sound and vibration stimulation as a measure of relative vibration sensitivity. When measured in this way, relative vibration sensitivity decreases with frequency, from 42 dB at 100 Hz to 25 dB at 400 Hz. Since sound thresholds decrease from 72 dB SPL at 100 Hz to 50 dB SPL at 400 Hz, the decrease in relative vibration sensitivity reflects an increase in sound sensitivity with frequency, probably due to enhanced tympanic sensitivity at higher frequencies. In contrast, absolute vibration sensitivity is constant in most of the frequency range studied. Only small effects result from the cancellation of sound-induced vibrations. The reason is probably that the maximal induced vibrations in the present setup are 6-10 dB below the fibers' vibration threshold at the threshold for sound. However, these results are only valid for the present physical configuration of the setup, and the high vibration sensitivities of the fibers warrant caution whenever the auditory fibers are stimulated with free-field sound. Thus, the experiments suggest that the low-frequency sound sensitivity is not caused by sound-induced vertical vibrations.
Instead, the low-frequency sound sensitivity is either tympanic or mediated through bone conduction or sound-induced pulsations of the lungs.
NASA Astrophysics Data System (ADS)
West, Eva; Wallin, Anita
2013-04-01
Learning abstract concepts such as sound often involves an ontological shift, because conceptualizing sound transmission as a process of motion demands abandoning the idea of sound transmission as a transfer of matter. Thus, being able to grasp and use a generalized model of sound transmission poses great challenges for students. This study involved 199 students aged 10-14. Their views about sound transmission were investigated before and after teaching by comparing their written answers about sound transfer in different media. The teaching was built on a research-based teaching-learning sequence (TLS), which was developed within a framework of design research. The analysis involved interpreting students' underlying theories of sound transmission, including the different conceptual categories that were found in their answers. The results indicated a shift in students' understandings, from the use of a theory of matter before the intervention to embracing a theory of process afterwards. The described pattern was found in all groups of students irrespective of age. Thus, teaching about sound and sound transmission is fruitful as early as ages 10-11. However, the older the students, the more advanced their understanding of the process of motion. In conclusion, the use of a TLS about sound, hearing and auditory health promotes students' conceptualization of sound transmission as a process in all grades. The results also imply some crucial points in teaching and learning about the scientific content of sound.
Sheft, Stanley; Gygi, Brian; Ho, Kim Thien N.
2012-01-01
Perceptual training with spectrally degraded environmental sounds results in improved environmental sound identification, with benefits shown to extend to untrained speech perception as well. The present study extended those findings to examine longer-term training effects as well as effects of mere repeated exposure to sounds over time. Participants received two pretests (1 week apart) prior to a week-long environmental sound training regimen, which was followed by two posttest sessions, separated by another week without training. Spectrally degraded stimuli, processed with a four-channel vocoder, consisted of a 160-item environmental sound test, word and sentence tests, and a battery of basic auditory abilities and cognitive tests. Results indicated significant improvements in all speech and environmental sound scores between the initial pretest and the last posttest with performance increments following both exposure and training. For environmental sounds (the stimulus class that was trained), the magnitude of positive change that accompanied training was much greater than that due to exposure alone, with improvement for untrained sounds roughly comparable to the speech benefit from exposure. Additional tests of auditory and cognitive abilities showed that speech and environmental sound performance were differentially correlated with tests of spectral and temporal-fine-structure processing, whereas working memory and executive function were correlated with speech, but not environmental sound perception. These findings indicate generalizability of environmental sound training and provide a basis for implementing environmental sound training programs for cochlear implant (CI) patients. PMID:22891070
The impact of artificial vehicle sounds for pedestrians on driver stress.
Cottrell, Nicholas D; Barton, Benjamin K
2012-01-01
Electrically based vehicles have produced some concern over their lack of sound, but the impact of the artificial sounds now being implemented has not been examined with respect to its effects on the driver. The impact of two different implementations of vehicle sound on driver stress in electric vehicles was examined. A Nissan HEV running in electric vehicle mode was driven by participants in an area of congestion using three sound implementations: (1) no artificial sounds, (2) manually engaged sounds and (3) automatically engaged sounds. Physiological and self-report questionnaire measures were collected to determine stress and acceptance of the automated sound protocol. Driver stress was significantly higher in the manually activated warning condition, compared to both no artificial sounds and automatically engaged sounds. Implications for automation usage and measurement methods are discussed and future research directions suggested. The advent of hybrid and all-electric vehicles has created a need for artificial warning signals for pedestrian safety that place task demands on drivers. We investigated drivers' stress differences in response to varying conditions of warning signals for pedestrians. Driver stress was lower when the warning sounds were automated.
Sound therapy for tinnitus management: practicable options.
Hoare, Derek J; Searchfield, Grant D; El Refaie, Amr; Henry, James A
2014-01-01
The authors reviewed practicable options of sound therapy for tinnitus, the evidence base for each option, and the implications of each option for the patient and for clinical practice. To provide a general guide to selecting sound therapy options in clinical practice. Practicable sound therapy options. Where available, peer-reviewed empirical studies, conference proceedings, and review studies were examined. Material relevant to the purpose was summarized in a narrative. The number of peer-reviewed publications pertaining to each sound therapy option reviewed varied significantly (from none to over 10). Overall there is currently insufficient evidence to support or refute the routine use of individual sound therapy options. It is likely, however, that sound therapy combined with education and counseling is generally helpful to patients. Clinicians need to be guided by the patient's point of care, patient motivation and expectations of sound therapy, and the acceptability of the intervention both in terms of the sound stimuli they are to use and whether they are willing to use sound extensively or intermittently. Clinicians should also clarify to patients the role sound therapy is expected to play in the management plan. American Academy of Audiology.
Recurring patterns in the songs of humpback whales (Megaptera novaeangliae).
Green, Sean R; Mercado, Eduardo; Pack, Adam A; Herman, Louis M
2011-02-01
Humpback whales, unlike most mammalian species, learn new songs as adults. Populations of singers progressively and collectively change the sounds and patterns within their songs throughout their lives and across generations. In this study, humpback whale songs recorded in Hawaii from 1985 to 1995 were analyzed using self-organizing maps (SOMs) to classify the sounds within songs, and to identify sound patterns that were present across multiple years. These analyses supported the hypothesis that recurring, persistent patterns exist within whale songs, and that these patterns are defined at least in part by acoustic relationships between adjacent sounds within songs. Sound classification based on acoustic differences between adjacent sounds yielded patterns within songs that were more consistent from year to year than classifications based on the properties of single sounds. Maintenance of fixed ratios of acoustic modulation across sounds, despite large variations in individual sounds, suggests intrinsic constraints on how sounds change within songs. Such acoustically invariant cues may enable whales to recognize and assess variations in songs despite propagation-related distortion of individual sounds and yearly changes in songs. Copyright © 2011 Elsevier B.V. All rights reserved.
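The classification approach described, a self-organizing map over sound features, can be illustrated with a minimal 1-D SOM in NumPy. This is a generic sketch of the SOM update rule on synthetic feature vectors, not the authors' network, map topology, or acoustic features:

```python
import numpy as np

def train_som(data, n_units=4, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Train a tiny 1-D self-organizing map on feature vectors.

    For each sample, the best-matching unit (BMU) and its neighbours on
    the map are pulled toward the sample; learning rate and neighbourhood
    width decay over the epochs."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(n_units, data.shape[1]))
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)               # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5   # decaying neighbourhood width
        for x in data[rng.permutation(len(data))]:
            bmu = int(np.argmin(np.linalg.norm(w - x, axis=1)))
            dist = np.abs(np.arange(n_units) - bmu)   # distance on the map
            h = np.exp(-dist**2 / (2 * sigma**2))     # neighbourhood function
            w += lr * h[:, None] * (x - w)
    return w

def classify(w, x):
    """Assign a sample to its best-matching unit."""
    return int(np.argmin(np.linalg.norm(w - x, axis=1)))

# Two well-separated synthetic 'sound feature' clusters:
rng = np.random.default_rng(1)
a = rng.normal(0.0, 0.1, size=(20, 3))
b = rng.normal(5.0, 0.1, size=(20, 3))
w = train_som(np.vstack([a, b]))
print(classify(w, a[0]), classify(w, b[0]))  # the clusters map to different units
```

The paper's classifiers additionally encode relationships between adjacent sounds; here each sample is classified independently.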
Bao, Shaowen; Chang, Edward F.; Teng, Ching-Ling; Heiser, Marc A.; Merzenich, Michael M.
2013-01-01
Cortical sensory representations can be reorganized by sensory exposure in an epoch of early development. The adaptive role of this type of plasticity for natural sounds in sensory development is, however, unclear. We have reared rats in a naturalistic, complex acoustic environment and examined their auditory representations. We found that cortical neurons became more selective to spectrotemporal features in the experienced sounds. At the neuronal population level, more neurons were involved in representing the whole set of complex sounds, but fewer neurons actually responded to each individual sound, but with greater magnitudes. A comparison of population-temporal responses to the experienced complex sounds revealed that cortical responses to different renderings of the same song motif were more similar, indicating that the cortical neurons became less sensitive to natural acoustic variations associated with stimulus context and sound renderings. By contrast, cortical responses to sounds of different motifs became more distinctive, suggesting that cortical neurons were tuned to the defining features of the experienced sounds. These effects lead to emergent “categorical” representations of the experienced sounds, which presumably facilitate their recognition. PMID:23747304
Research on fiber Bragg grating heart sound sensing and wavelength demodulation method
NASA Astrophysics Data System (ADS)
Zhang, Cheng; Miao, Chang-Yun; Gao, Hua; Gan, Jing-Meng; Li, Hong-Qiang
2010-11-01
Heart sound includes a lot of physiological and pathological information of heart and blood vessel. Heart sound detecting is an important method to gain the heart status, and has important significance to early diagnoses of cardiopathy. In order to improve sensitivity and reduce noise, a heart sound measurement method based on fiber Bragg grating was researched. By the vibration principle of plane round diaphragm, a heart sound sensor structure of fiber Bragg grating was designed and a heart sound sensing mathematical model was established. A formula of heart sound sensitivity was deduced and the theoretical sensitivity of the designed sensor is 957.11pm/KPa. Based on matched grating method, the experiment system was built, by which the excursion of reflected wavelength of the sensing grating was detected and the information of heart sound was obtained. Experiments show that the designed sensor can detect the heart sound and the reflected wavelength variety range is about 70pm. When the sampling frequency is 1 KHz, the extracted heart sound waveform by using the db4 wavelet has the same characteristics with a standard heart sound sensor.
Marine Forage Fishes in Puget Sound
2007-03-01
Penttila, Dan. Marine Forage Fishes in Puget Sound. Technical Report 2007-03, Valued Ecosystem Components Report Series. Prepared in support of the Puget Sound Nearshore Partnership. Published by Seattle District, U.S. Army Corps of Engineers, Seattle.
Disher, Timothy C; Benoit, Britney; Inglis, Darlene; Burgess, Stacy A; Ellsmere, Barbara; Hewitt, Brenda E; Bishop, Tanya M; Sheppard, Christopher L; Jangaard, Krista A; Morrison, Gavin C; Campbell-Yeo, Marsha L
To identify baseline sound levels, patterns of sound levels, and potential barriers and facilitators to sound level reduction. The study setting was neonatal and pediatric intensive care units in a tertiary care hospital. Participants were staff in both units and parents of currently hospitalized children or infants. One 24-hour sound measurement and one 4-hour sound measurement linked to observed sound events were conducted in each area of the center's neonatal intensive care unit. Two of each measurement type were conducted in the pediatric intensive care unit. Focus groups were conducted with parents and staff. Transcripts were analyzed with descriptive content analysis and themes were compared against results from quantitative measurements. Sound levels exceeded recommended standards at nearly every time point. The most common code was related to talking. Themes from focus groups included the critical care context and sound levels, effects of sound levels, and reducing sound levels: the way forward. Results are consistent with work conducted in other critical care environments. Staff and families realize that high sound levels can be a problem, but feel that the culture and context are not supportive of a quiet care space. High levels of ambient sound suggest that the largest changes in sound levels are likely to come from design and equipment purchase decisions. L10 and Lmax appear to be the best outcomes for measurement of behavioral interventions.
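The outcome metrics mentioned (Lmax, the percentile level L10, and energy-averaged Leq) can be computed from a series of sound level samples as follows; this is a minimal sketch with illustrative numbers, not the study's data:

```python
import numpy as np

def level_metrics(spl_db):
    """Summary metrics for a series of sound pressure levels in dB:
    Leq (energy average), L10 (level exceeded 10% of the time), Lmax."""
    spl = np.asarray(spl_db, dtype=float)
    leq = 10 * np.log10(np.mean(10 ** (spl / 10)))  # energy-based average
    l10 = float(np.percentile(spl, 90))             # exceeded 10% of the time
    return leq, l10, float(spl.max())

# One hour of minute-by-minute levels: quiet background plus loud talking events
levels = np.full(60, 45.0)
levels[::8] = 70.0                                   # eight 70 dB events
leq, l10, lmax = level_metrics(levels)
print(round(leq, 1), l10, lmax)  # → 61.3 70.0 70.0
```

Note how a handful of loud events dominates Leq: this is why percentile levels such as L10 are often preferred for behavioral interventions.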
47 CFR 73.597 - FM stereophonic sound broadcasting.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 4 2010-10-01 2010-10-01 false FM stereophonic sound broadcasting. 73.597... RADIO BROADCAST SERVICES Noncommercial Educational FM Broadcast Stations § 73.597 FM stereophonic sound..., transmit stereophonic sound programs upon installation of stereophonic sound transmitting equipment under...
47 CFR 73.597 - FM stereophonic sound broadcasting.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 4 2012-10-01 2012-10-01 false FM stereophonic sound broadcasting. 73.597... RADIO BROADCAST SERVICES Noncommercial Educational FM Broadcast Stations § 73.597 FM stereophonic sound..., transmit stereophonic sound programs upon installation of stereophonic sound transmitting equipment under...
47 CFR 73.597 - FM stereophonic sound broadcasting.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 4 2013-10-01 2013-10-01 false FM stereophonic sound broadcasting. 73.597... RADIO BROADCAST SERVICES Noncommercial Educational FM Broadcast Stations § 73.597 FM stereophonic sound..., transmit stereophonic sound programs upon installation of stereophonic sound transmitting equipment under...
47 CFR 73.597 - FM stereophonic sound broadcasting.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 4 2014-10-01 2014-10-01 false FM stereophonic sound broadcasting. 73.597... RADIO BROADCAST SERVICES Noncommercial Educational FM Broadcast Stations § 73.597 FM stereophonic sound..., transmit stereophonic sound programs upon installation of stereophonic sound transmitting equipment under...
Newborn infants detect cues of concurrent sound segregation.
Bendixen, Alexandra; Háden, Gábor P; Németh, Renáta; Farkas, Dávid; Török, Miklós; Winkler, István
2015-01-01
Separating concurrent sounds is fundamental for a veridical perception of one's auditory surroundings. Sound components that are harmonically related and start at the same time are usually grouped into a common perceptual object, whereas components that are not in harmonic relation or have different onset times are more likely to be perceived in terms of separate objects. Here we tested whether neonates are able to pick up the cues supporting this sound organization principle. We presented newborn infants with a series of complex tones with their harmonics in tune (creating the percept of a unitary sound object) and with manipulated variants, which gave the impression of two concurrently active sound sources. The manipulated variant had either one mistuned partial (single-cue condition) or the onset of this mistuned partial was also delayed (double-cue condition). Tuned and manipulated sounds were presented in random order with equal probabilities. Recording the neonates' electroencephalographic responses allowed us to evaluate their processing of the sounds. Results show that, in both conditions, mistuned sounds elicited a negative displacement of the event-related potential (ERP) relative to tuned sounds from 360 to 400 ms after sound onset. The mistuning-related ERP component resembles the object-related negativity (ORN) component in adults, which is associated with concurrent sound segregation. Delayed onset additionally led to a negative displacement from 160 to 200 ms, which was probably more related to the physical parameters of the sounds than to their perceptual segregation. The elicitation of an ORN-like response in newborn infants suggests that neonates possess the basic capabilities of segregating concurrent sounds by detecting inharmonic relations between the co-occurring sounds. © 2015 S. Karger AG, Basel.
Methods of sound simulation and applications in flight simulators
NASA Technical Reports Server (NTRS)
Gaertner, K. P.
1980-01-01
An overview of methods for electronically synthesizing sounds is presented. A given amount of hardware and computer capacity places an upper limit on the degree and fidelity of realism of sound simulation which is attainable. Good sound realism for aircraft simulators can be especially expensive because of the complexity of flight sounds and their changing patterns through time. Nevertheless, the flight simulator developed at the Research Institute for Human Engineering, West Germany, shows that it is possible to design an inexpensive sound simulator with the required acoustic properties using analog computer elements. The characteristics of the sub-sound elements produced by this sound simulator for take-off, cruise and approach are discussed.
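A crude digital analogue of mixing sub-sound elements (a harmonic engine component plus spectrally shaped broadband noise) can be sketched as follows; all parameter values are illustrative and are not taken from the simulator described:

```python
import numpy as np

def engine_sound(fs=8000, dur=1.0, rpm_hz=120.0, seed=0):
    """Mix a harmonic 'engine' component with low-pass-filtered 'airflow'
    noise, a toy digital counterpart of combining sub-sound elements."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * dur)) / fs
    # harmonic series with decaying amplitudes
    harmonics = sum((0.5 ** k) * np.sin(2 * np.pi * k * rpm_hz * t)
                    for k in range(1, 5))
    noise = rng.normal(0, 0.1, t.shape)
    # simple one-pole low-pass to shape the noise spectrum
    shaped = np.empty_like(noise)
    acc = 0.0
    for i, v in enumerate(noise):
        acc = 0.9 * acc + 0.1 * v
        shaped[i] = acc
    sig = harmonics + shaped
    return sig / np.max(np.abs(sig))   # normalize to [-1, 1]

s = engine_sound()
print(len(s), float(np.max(np.abs(s))))  # → 8000 1.0
```

Sweeping `rpm_hz` over time would give the changing patterns (take-off, cruise, approach) that make flight sound simulation expensive.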
Sound absorption coefficient of coal bottom ash concrete for railway application
NASA Astrophysics Data System (ADS)
Ramzi Hannan, N. I. R.; Shahidan, S.; Maarof, Z.; Ali, N.; Abdullah, S. R.; Ibrahim, M. H. Wan
2017-11-01
A porous concrete is able to reduce the sound waves that pass through it. When a sound wave strikes a material, a portion of the sound energy is reflected, another portion is absorbed by the material, and the rest is transmitted. The larger the portion of the sound wave that is absorbed, the further the noise level can be lowered. This study investigates the sound absorption coefficient of coal bottom ash (CBA) concrete compared with that of normal concrete by carrying out the impedance tube test. Hence, this paper presents the results of the impedance tube tests of the CBA concrete and normal concrete.
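A common way to obtain the absorption coefficient from impedance tube measurements is the two-microphone transfer-function method (cf. ISO 10534-2). The sketch below shows the generic formula applied to synthetic data with a known reflection factor; it is not a description of the authors' specific apparatus:

```python
import numpy as np

def absorption_two_mic(h, freq, s, x_far, c=343.0):
    """Normal-incidence absorption coefficient via the two-microphone
    transfer-function method (cf. ISO 10534-2).

    h     : complex transfer function p_near / p_far between the mics
    freq  : frequencies in Hz
    s     : microphone spacing in m
    x_far : distance from the sample face to the farther microphone in m"""
    k = 2 * np.pi * np.asarray(freq) / c
    r = (h - np.exp(-1j * k * s)) / (np.exp(1j * k * s) - h) * np.exp(2j * k * x_far)
    return 1.0 - np.abs(r) ** 2

# Sanity check with a synthetic standing-wave field of known reflection R0:
f, s, x_far, c = np.array([500.0]), 0.05, 0.15, 343.0
k = 2 * np.pi * f / c
R0 = 0.6                                          # known reflection -> alpha = 0.64
p = lambda x: np.exp(1j * k * x) + R0 * np.exp(-1j * k * x)
h = p(x_far - s) / p(x_far)                       # p_near / p_far
print(np.round(absorption_two_mic(h, f, s, x_far), 3))  # → [0.64]
```

In a real test, `h` is measured from the two microphone spectra at each frequency, yielding the absorption curve that the paper compares between CBA and normal concrete.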
Controlling sound radiation through an opening with secondary loudspeakers along its boundaries.
Wang, Shuping; Tao, Jiancheng; Qiu, Xiaojun
2017-10-17
We propose a virtual sound barrier system that blocks sound transmission through openings without affecting access, light and air circulation. The proposed system applies an active control technique to cancel sound transmission, using a double-layered loudspeaker array at the edge of the opening. Unlike traditional transparent glass windows, recently invented double-glazed ventilation windows, planar active sound barriers, or other metamaterials designed to reduce sound transmission, the secondary loudspeakers are placed only along the boundaries of the opening, which makes it possible to render the barrier invisible. Simulation and experimental results demonstrate its feasibility for broadband sound control, especially for low-frequency sound, which is usually hard to attenuate with existing methods.
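The basic principle of active cancellation can be illustrated with a single-channel LMS adaptive canceller. An actual virtual sound barrier would use a multichannel filtered-x (FxLMS) variant that models the paths from each secondary loudspeaker, so this is only a minimal sketch:

```python
import numpy as np

def lms_cancel(reference, disturbance, n_taps=16, mu=0.01):
    """Single-channel LMS canceller: an adaptive FIR filter driven by a
    reference signal generates 'anti-noise' that is subtracted from the
    disturbance; the residual error drives the weight update."""
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)
    error = np.empty_like(disturbance)
    for n in range(len(disturbance)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        y = w @ buf                  # anti-noise from the secondary source
        error[n] = disturbance[n] - y
        w += mu * error[n] * buf     # LMS weight update
    return error

# A tonal disturbance correlated with the reference is largely cancelled:
n = np.arange(4000)
ref = np.sin(2 * np.pi * 0.05 * n)
dist = 0.8 * np.sin(2 * np.pi * 0.05 * n + 0.3)   # phase-shifted version
e = lms_cancel(ref, dist)
print(np.mean(e[-500:]**2) < 0.01 * np.mean(dist[:500]**2))  # → True
```

Broadband control across an opening is much harder than this tonal case, which is the challenge the proposed boundary array addresses.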
Amplitude modulation of sound from wind turbines under various meteorological conditions.
Larsson, Conny; Öhlund, Olof
2014-01-01
Wind turbine (WT) sound annoys some people even though the sound levels are relatively low. This could be because of the amplitude modulated "swishing" characteristic of the turbine sound, which is not taken into account by standard procedures for measuring average sound levels. Studies of sound immission from WTs were conducted continually between 19 August 2011 and 19 August 2012 at two sites in Sweden. A method for quantifying the degree and strength of amplitude modulation (AM) is introduced here. The method reveals that AM at the immission points occur under specific meteorological conditions. For WT sound immission, the wind direction and sound speed gradient are crucial for the occurrence of AM. Interference between two or more WTs could probably enhance AM. The mechanisms by which WT sound is amplitude modulated are not fully understood.
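The abstract does not give the details of the proposed AM quantification method, but a generic envelope-based AM depth metric can be sketched as follows (Hilbert envelope, percentile level difference); the signals and thresholds are illustrative:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (equivalent to scipy.signal.hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

def am_depth_db(x):
    """Crude AM depth: level difference between the envelope's 5th and
    95th percentiles, in dB."""
    env = np.abs(analytic_signal(x))
    lo, hi = np.percentile(env, [5, 95])
    return 20 * np.log10(hi / lo)

# An 80%-modulated tone shows a much larger AM depth than a steady tone:
t = np.arange(8000) / 8000.0
carrier = np.sin(2 * np.pi * 400 * t)
am = (1 + 0.8 * np.sin(2 * np.pi * 1 * t)) * carrier
print(am_depth_db(am) > 10, am_depth_db(carrier) < 1)  # → True True
```

A 1 Hz modulation rate roughly mimics the blade-passing "swish"; practical metrics typically band-limit the signal and analyze the envelope spectrum as well.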
Improvement of impact noise in a passenger car utilizing sound metric based on wavelet transform
NASA Astrophysics Data System (ADS)
Lee, Sang-Kwon; Kim, Ho-Wuk; Na, Eun-Woo
2010-08-01
A new sound metric for impact sound is developed based on the continuous wavelet transform (CWT), a useful tool for the analysis of non-stationary signals such as impact noise. Together with the new metric, two other conventional sound metrics related to sound modulation and fluctuation are also considered. In all, three sound metrics are employed to develop impact sound quality indexes for several specific impact courses on the road. Impact sounds were evaluated subjectively by 25 jurors. The indexes are verified by comparing the correlation between the index output and the results of a subjective evaluation based on a jury test. These indexes are successfully applied to an objective evaluation for improvement of the impact sound quality in cases where some parts of the suspension system of the test car are modified.
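A minimal continuous wavelet transform with a Morlet mother wavelet, of the kind used to analyze non-stationary impact noise, can be sketched as follows; this direct-convolution implementation is illustrative, not the authors' code:

```python
import numpy as np

def morlet_cwt(x, scales, w0=6.0):
    """Continuous wavelet transform with a Morlet mother wavelet,
    computed by direct convolution (a minimal, unoptimized sketch)."""
    out = np.empty((len(scales), len(x)), dtype=complex)
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-(t / s) ** 2 / 2)
        wavelet /= np.sqrt(s)
        out[i] = np.convolve(x, np.conj(wavelet)[::-1], mode='same')
    return out

# An impulsive 'impact' stands out at the fine scales around its instant:
x = np.zeros(512)
x[256] = 1.0                        # idealized impact
scales = np.array([2.0, 8.0, 32.0])
C = np.abs(morlet_cwt(x, scales))
print(int(np.argmax(C[0])))         # → 256  (energy localized at the impact)
```

Because the CWT localizes energy in both time and scale, a metric built on it can isolate the brief impact component that average-level metrics smear out.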
Relation of sound intensity and accuracy of localization.
Farrimond, T
1989-08-01
Tests were carried out on 17 subjects to determine the accuracy of monaural sound localization when the head is not free to turn toward the sound source. Maximum accuracy of localization for a constant-volume sound source coincided with the position for maximum perceived intensity of the sound in the front quadrant. There was a tendency for sounds to be perceived more often as coming from a position directly toward the ear. That is, for sounds in the front quadrant, errors of localization tended to be predominantly clockwise (i.e., biased toward a line directly facing the ear). Errors for sounds occurring in the rear quadrant tended to be anticlockwise. The pinna's differential effect on sound intensity between front and rear quadrants would assist in identifying the direction of movement of objects, for example an insect, passing the ear.
Event-related potential study to aversive auditory stimuli.
Czigler, István; Cox, Trevor J; Gyimesi, Kinga; Horváth, János
2007-06-15
In an auditory oddball task, emotionally negative (aversive) sounds (e.g. rubbing together of polystyrene) and everyday sounds (e.g. ringing of a bicycle bell) were presented as task-irrelevant (novel) sounds. Both the aversive and the everyday sounds elicited the orientation-related P3a component of the event-related potentials (ERPs). In the 154-250 ms range the ERPs for the aversive sounds were more negative than the ERPs for the everyday sounds. For the aversive sounds, this negativity was followed by a frontal positive wave (372-456 ms). The aversive sounds also elicited a larger late positive shift than the everyday sounds. The early negativity is considered an initial effect in a broad neural network including limbic structures, while the latter is related to the cognitive assessment of the stimuli and to memory-related processes.
47 CFR 73.297 - FM stereophonic sound broadcasting.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 4 2014-10-01 2014-10-01 false FM stereophonic sound broadcasting. 73.297... RADIO BROADCAST SERVICES FM Broadcast Stations § 73.297 FM stereophonic sound broadcasting. (a) An FM..., quadraphonic, etc.) sound programs upon installation of stereophonic sound transmitting equipment under the...
47 CFR 73.297 - FM stereophonic sound broadcasting.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 4 2013-10-01 2013-10-01 false FM stereophonic sound broadcasting. 73.297... RADIO BROADCAST SERVICES FM Broadcast Stations § 73.297 FM stereophonic sound broadcasting. (a) An FM..., quadraphonic, etc.) sound programs upon installation of stereophonic sound transmitting equipment under the...
47 CFR 73.297 - FM stereophonic sound broadcasting.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 4 2012-10-01 2012-10-01 false FM stereophonic sound broadcasting. 73.297... RADIO BROADCAST SERVICES FM Broadcast Stations § 73.297 FM stereophonic sound broadcasting. (a) An FM..., quadraphonic, etc.) sound programs upon installation of stereophonic sound transmitting equipment under the...
NASA Astrophysics Data System (ADS)
Hamilton, Mark F.
1990-12-01
This report discusses five projects, all of which involve basic theoretical research in nonlinear acoustics: (1) pulsed finite amplitude sound beams are studied with a recently developed time domain computer algorithm that solves the KZK nonlinear parabolic wave equation; (2) nonlinear acoustic wave propagation in a liquid layer is a study of harmonic generation and acoustic soliton formation in a liquid between a rigid and a free surface; (3) nonlinear effects in asymmetric cylindrical sound beams is a study of source asymmetries and scattering of sound by sound at high intensity; (4) effects of absorption on the interaction of sound beams is a completed study of the role of absorption in second harmonic generation and scattering of sound by sound; and (5) parametric receiving arrays is a completed study of parametric reception in a reverberant environment.
Scattering of sound by atmospheric turbulence predictions in a refractive shadow zone
NASA Technical Reports Server (NTRS)
Mcbride, Walton E.; Bass, Henry E.; Raspet, Richard; Gilbert, Kenneth E.
1990-01-01
According to ray theory, regions exist in an upward refracting atmosphere where no sound should be present. Experiments show, however, that appreciable sound levels penetrate these so-called shadow zones. Two mechanisms contribute to sound in the shadow zone: diffraction and turbulent scattering of sound. Diffractive effects can be pronounced at lower frequencies but are small at high frequencies. In the short wavelength limit, then, scattering due to turbulence should be the predominant mechanism involved in producing the sound levels measured in shadow zones. No existing analytical method includes turbulence effects in the prediction of sound pressure levels in upward refractive shadow zones. In order to obtain quantitative average sound pressure level predictions, a numerical simulation of the effect of atmospheric turbulence on sound propagation is performed. The simulation is based on scattering from randomly distributed scattering centers ('turbules'). Sound pressure levels are computed for many realizations of a turbulent atmosphere. Predictions from the numerical simulation are compared with existing theories and experimental data.
Descovich, K A; Reints Bok, T E; Lisle, A T; Phillips, C J C
2013-01-01
Behavioural lateralisation is evident across most animal taxa, although few marsupial and no fossorial species have been studied. Twelve wombats (Lasiorhinus latifrons) were bilaterally presented with eight sounds from different contexts (threat, neutral, food) to test for auditory laterality. Head turns were recorded prior to and immediately following sound presentation. Behaviour was recorded for 150 seconds after presentation. Although sound differentiation was evident by the amount of exploration, vigilance, and grooming performed after different sound types, this did not result in different patterns of head turn direction. Similarly, left-right proportions of head turns, walking events, and food approaches in the post-sound period were comparable across sound types. A comparison of head turns performed before and after sound showed a significant change in turn direction (χ²(1) = 10.65, p = .001) from a left preference during the pre-sound period (mean 58% left head turns, CI 49-66%) to a right preference in the post-sound period (mean 43% left head turns, CI 40-45%). This provides evidence of a right auditory bias in response to the presentation of the sound. This study therefore demonstrates that laterality is evident in southern hairy-nosed wombats in response to a sound stimulus, although side biases were not altered by sounds of varying context.
Park, H K; Bradley, J S
2009-09-01
Subjective ratings of the audibility, annoyance, and loudness of music and speech sounds transmitted through 20 different simulated walls were used to identify better single-number ratings of airborne sound insulation. The first part of this research considered standard measures such as the sound transmission class and the weighted sound reduction index (R(w)), along with variations of these measures [H. K. Park and J. S. Bradley, J. Acoust. Soc. Am. 126, 208-219 (2009)]. This paper considers a number of other measures, including signal-to-noise ratios related to the intelligibility of speech and measures related to the loudness of sounds. An exploration of the importance of the included frequencies showed that the optimum ranges of included frequencies were different for speech and music sounds. Measures related to speech intelligibility were useful indicators of responses to speech sounds but were not as successful for music sounds. A-weighted level differences, signal-to-noise ratios, and an A-weighted sound transmission loss measure were good predictors of responses when the included frequencies were optimized for each type of sound. The addition of new spectrum adaptation terms to R(w) values was found to be the most practical approach for achieving more accurate predictions of subjective ratings of transmitted speech and music sounds.
Lung and Heart Sounds Analysis: State-of-the-Art and Future Trends.
Padilla-Ortiz, Ana L; Ibarra, David
2018-01-01
Lung sounds, which include all sounds produced during the mechanism of respiration, may be classified into normal breath sounds and adventitious sounds. Normal breath sounds occur when no respiratory problems exist, whereas adventitious lung sounds (wheeze, rhonchi, crackle, etc.) are usually associated with certain pulmonary pathologies. Heart and lung sounds heard through a stethoscope are the result of mechanical interactions that indicate the operation of the cardiac and respiratory systems, respectively. In this article, we review the research conducted during the last six years on lung and heart sounds: instrumentation and data sources (sensors and databases), technological advances, and perspectives in processing and data analysis. Our review suggests that chronic obstructive pulmonary disease (COPD) and asthma are the most commonly reported respiratory diseases in the literature; related diseases that are less analyzed include chronic bronchitis, idiopathic pulmonary fibrosis, congestive heart failure, and parenchymal pathology. Some new findings regarding methodologies associated with advances in the electronic stethoscope are presented for the auscultatory heart sound signal processing chain, including analysis and classification of the resulting sounds to support a diagnosis based on a quantifiable medical assessment. The availability of high-precision automatic interpretation of heart and lung sounds opens interesting possibilities for cardiovascular diagnosis, as well as potential for intelligent diagnosis of heart and lung diseases.
Assessing the potential for passive radio sounding of Europa and Ganymede with RIME and REASON
NASA Astrophysics Data System (ADS)
Schroeder, Dustin M.; Romero-Wolf, Andrew; Carrer, Leonardo; Grima, Cyril; Campbell, Bruce A.; Kofman, Wlodek; Bruzzone, Lorenzo; Blankenship, Donald D.
2016-12-01
Recent work has raised the potential for Jupiter's decametric radiation to be used as a source for passive radio sounding of its icy moons. Two radar sounding instruments, the Radar for Icy Moon Exploration (RIME) and the Radar for Europa Assessment and Sounding: Ocean to Near-surface (REASON) have been selected for ESA and NASA missions to Ganymede and Europa. Here, we revisit the projected performance of the passive sounding concept and assess the potential for its implementation as an additional mode for RIME and REASON. We find that the Signal to Noise Ratio (SNR) of passive sounding can approach or exceed that of active sounding in a noisy sub-Jovian environment, but that active sounding achieves a greater SNR in the presence of quiescent noise and outperforms passive sounding in terms of clutter. We also compare the performance of passive sounding at the 9 MHz HF center frequency of RIME and REASON to other frequencies within the Jovian decametric band. We conclude that the addition of a passive sounding mode on RIME or REASON stands to enhance their science return by enabling sub-Jovian HF sounding in the presence of decametric noise, but that there is not a compelling case for implementation at a different frequency.
The effect of spatial distribution on the annoyance caused by simultaneous sounds
NASA Astrophysics Data System (ADS)
Vos, Joos; Bronkhorst, Adelbert W.; Fedtke, Thomas
2004-05-01
A considerable part of the population is exposed to simultaneous and/or successive environmental sounds from different sources. In many cases, these sources also differ with respect to their locations. In a laboratory study, it was investigated whether the annoyance caused by multiple sounds is affected by the spatial distribution of the sources. There were four independent variables: (1) sound category (stationary or moving), (2) sound type (stationary: lawn-mower, leaf-blower, and chain saw; moving: road traffic, railway, and motorbike), (3) spatial location (left, right, and combinations), and (4) A-weighted sound exposure level (ASEL of single sources equal to 50, 60, or 70 dB). In addition to the individual sounds in isolation, various combinations of two or three different sources within each sound category and sound level were presented for rating. The annoyance was mainly determined by sound level and sound source type. In most cases there were neither significant main effects of spatial distribution nor significant interaction effects between spatial distribution and the other variables. It was concluded that for rating the spatially distributed sounds investigated, the noise dose can simply be determined by a summation of the levels for the left and right channels. [Work supported by CEU.]
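The summation of channel levels mentioned in the conclusion is the standard energetic combination of decibel levels; a minimal sketch with hypothetical level values (not taken from the study):

```python
import math

def combine_levels(levels_db):
    """Energetically combine sound levels (dB) by summing their intensities."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db))

# Two equal 60 dB channels combine to ~63 dB (doubling the energy adds ~3 dB).
print(round(combine_levels([60.0, 60.0]), 1))  # 63.0
```

This is why, when spatial distribution has no effect, a simple left-plus-right energy sum suffices as the noise dose.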
Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L
2018-01-01
Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice- and sound-specificity effects. In Experiment 1, we examined two conditions where integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: a change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.
Kocsis, Zsuzsanna; Winkler, István; Bendixen, Alexandra; Alain, Claude
2016-09-01
The auditory environment typically comprises several simultaneously active sound sources. In contrast to the perceptual segregation of two concurrent sounds, the perception of three simultaneous sound objects has not yet been studied systematically. We conducted two experiments in which participants were presented with complex sounds containing sound segregation cues (mistuning, onset asynchrony, differences in frequency or amplitude modulation, or in sound location), which were set up to promote the perceptual organization of the tonal elements into one, two, or three concurrent sounds. In Experiment 1, listeners indicated whether they heard one, two, or three concurrent sounds. In Experiment 2, participants watched a silent subtitled movie while EEG was recorded to extract the object-related negativity (ORN) component of the event-related potential. Listeners predominantly reported hearing two sounds when the segregation-promoting manipulations were applied to the same tonal element. When two different tonal elements received manipulations promoting them to be heard as separate auditory objects, participants reported hearing two and three concurrent sound objects with equal probability. The ORN was elicited in most conditions; sounds that included the amplitude- or the frequency-modulation cue generated the smallest ORN amplitudes. Manipulating two different tonal elements yielded numerically, and often significantly, smaller ORNs than the sum of the ORNs elicited when the same cues were applied to a single tonal element. These results suggest that the ORN reflects the presence of multiple concurrent sounds, but not their number. The ORN results are compatible with the horse-race principle of combining different cues of concurrent sound segregation.
Sounds Exaggerate Visual Shape
ERIC Educational Resources Information Center
Sweeny, Timothy D.; Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Suzuki, Satoru
2012-01-01
While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes…
21 CFR 876.4590 - Interlocking urethral sound.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Interlocking urethral sound. 876.4590 Section 876...) MEDICAL DEVICES GASTROENTEROLOGY-UROLOGY DEVICES Surgical Devices § 876.4590 Interlocking urethral sound. (a) Identification. An interlocking urethral sound is a device that consists of two metal sounds...
NASA Astrophysics Data System (ADS)
Fujii, Ayaka; Wakatsuki, Naoto; Mizutani, Koichi
2016-01-01
A method of suppressing sound radiation to the far field of a near-field acoustic communication system using an evanescent sound field is proposed. The amplitude of the evanescent sound field generated by an infinite vibrating plate attenuates exponentially with increasing distance from the surface of the plate. In practice, however, a discontinuity of the sound field exists at the edge of a finite vibrating plate, which broadens the wavenumber spectrum, and a sound wave radiates beyond the evanescent sound field because of this broadening. We therefore calculated the optimum distribution of the particle velocity on the vibrating plate to reduce the broadening of the wavenumber spectrum, focusing on the window functions used in the field of signal analysis to reduce broadening of the frequency spectrum. An optimization calculation is necessary to design a window function that both suppresses sound radiation and secures a spatial area for data communication. In addition, a wide frequency bandwidth is required to increase the data transmission speed. We therefore investigated a suitable method for calculating the far-field sound pressure level in order to confirm how its distribution varies with window shape and frequency. The distribution of the sound pressure level at a finite distance was in good agreement with that at an infinite distance under conditions that generate the evanescent sound field. Consequently, the window function was optimized by calculating the distribution of the sound pressure level at an infinite distance from the wavenumber spectrum on the vibrating plate.
A comparison of the sound pressure level distributions with and without the window function confirmed that the area in which the sound pressure level was reduced by 50 dB from the maximum was extended. Additionally, we designed a sound insulator to realize a particle velocity distribution similar to that obtained with the optimized window function. In a simulation using the three-dimensional finite element method, sound radiation was suppressed by placing the insulator above the vibrating surface. On the basis of these findings, it is suggested that near-field acoustic communication that suppresses sound radiation can be realized by applying the optimized window function to the particle velocity field.
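To illustrate why tapering the particle-velocity distribution narrows the wavenumber spectrum, here is a minimal one-dimensional sketch (a hypothetical aperture; the Hann taper merely stands in for the paper's optimized window). The wavenumber spectrum of the velocity distribution is its spatial Fourier transform, and the sidelobes of an unwindowed (rectangular) aperture are what radiate beyond the evanescent region:

```python
import numpy as np

N = 1024
x = np.linspace(-0.5, 0.5, N)                  # plate coordinate (arbitrary units)
rect = np.ones(N)                              # uniform velocity distribution
hann = 0.5 * (1.0 + np.cos(2.0 * np.pi * x))   # Hann taper, zero at the plate edges

def sidelobe_level_db(aperture):
    """Peak sidelobe level of the wavenumber spectrum, in dB re mainlobe."""
    spec = np.abs(np.fft.fft(aperture, 16 * N))  # zero-padded spatial FFT
    spec /= spec.max()
    half = spec[: 8 * N]                         # positive wavenumbers only
    i = 1
    while half[i] <= half[i - 1]:                # walk down to the first null
        i += 1
    return 20.0 * np.log10(half[i:].max())       # highest sidelobe beyond it

print(sidelobe_level_db(rect))   # about -13 dB for the rectangular aperture
print(sidelobe_level_db(hann))   # roughly -31 dB for the Hann taper
```

The tapered aperture trades mainlobe width for much lower sidelobes, which is the trade-off the paper's optimization resolves while also preserving a communication area.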
Perception of environmental sounds by experienced cochlear implant patients.
Shafiro, Valeriy; Gygi, Brian; Cheng, Min-Yu; Vachhani, Jay; Mulvey, Megan
2011-01-01
Environmental sound perception serves an important ecological function by providing listeners with information about objects and events in their immediate environment. Environmental sounds such as car horns, baby cries, or chirping birds can alert listeners to imminent dangers as well as contribute to one's sense of awareness and well being. Perception of environmental sounds as acoustically and semantically complex stimuli may also involve some factors common to the processing of speech. However, very limited research has investigated the abilities of cochlear implant (CI) patients to identify common environmental sounds, despite patients' general enthusiasm about them. This project (1) investigated the ability of patients with modern-day CIs to perceive environmental sounds, (2) explored associations among speech, environmental sounds, and basic auditory abilities, and (3) examined acoustic factors that might be involved in environmental sound perception. Seventeen experienced postlingually deafened CI patients participated in the study. Environmental sound perception was assessed with a large-item test composed of 40 sound sources, each represented by four different tokens. The relationship between speech and environmental sound perception and the role of working memory and some basic auditory abilities were examined based on patient performance on a battery of speech tests (HINT, CNC, and individual consonant and vowel tests), tests of basic auditory abilities (audiometric thresholds, gap detection, temporal pattern, and temporal order for tones tests), and a backward digit recall test. The results indicated substantially reduced ability to identify common environmental sounds in CI patients (45.3%). Except for vowels, all speech test scores significantly correlated with the environmental sound test scores: r = 0.73 for HINT in quiet, r = 0.69 for HINT in noise, r = 0.70 for CNC, r = 0.64 for consonants, and r = 0.48 for vowels. 
HINT and CNC scores in quiet correlated moderately with the temporal order for tones. However, the correlation between speech and environmental sounds changed little after partialling out the variance due to other variables. The present findings indicate that environmental sound identification is difficult for CI patients. They further suggest that speech and environmental sounds may overlap considerably in their perceptual processing. Certain spectrotemporal processing abilities are separately associated with speech and environmental sound performance; however, they do not appear to mediate the relationship between speech and environmental sounds in CI patients. Environmental sound rehabilitation may be beneficial to some patients. Environmental sound testing may have potential diagnostic applications, especially with difficult-to-test populations, and might also be predictive of speech performance for prelingually deafened patients with cochlear implants.
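The partialling-out analysis mentioned above is the standard first-order partial correlation; a minimal sketch with hypothetical values (the r = 0.70 figure echoes the CNC correlation reported above, while the r = 0.40 control correlations are invented for illustration):

```python
import math

def partial_correlation(r_xy, r_xz, r_yz):
    """First-order partial correlation r_xy.z: the correlation of x and y
    after removing the variance each shares with a third variable z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

# If speech and environmental-sound scores correlate at r = 0.70 and each
# correlates at r = 0.40 with a control variable, the partial correlation
# remains substantial, i.e. the relationship "changes little":
print(round(partial_correlation(0.70, 0.40, 0.40), 2))  # 0.64
```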
46 CFR 298.14 - Economic soundness.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 8 2010-10-01 2010-10-01 false Economic soundness. 298.14 Section 298.14 Shipping... Eligibility § 298.14 Economic soundness. (a) Economic Evaluation. We shall not issue a Letter Commitment for... you seek Title XI financing or refinancing, will be economically sound. The economic soundness and...
ERIC Educational Resources Information Center
Carrier, Sarah J.; Scott, Catherine Marie; Hall, Debra T.
2012-01-01
The science of sound helps students learn that sound is energy traveling in waves as vibrations transfer the energy through various media: solids, liquids, and gases. In addition to learning about the physical science of sound, students can learn about the sounds of different animal species: how sounds contribute to animals' survival, and how…
77 FR 50016 - Drawbridge Operation Regulation; Grassy Sound Channel, Middle Township, NJ
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-20
... Operation Regulation; Grassy Sound Channel, Middle Township, NJ AGENCY: Coast Guard, DHS. ACTION: Notice of... operating schedule that governs the Grassy Sound Channel (Ocean Drive) Bridge across the Grassy Sound... operating schedule to accommodate "The Wild Half" run. The Grassy Sound Channel (Ocean Drive) Bridge...
A Comparison of Two Phonological Awareness Techniques between Samples of Preschool Children.
ERIC Educational Resources Information Center
Maslanka, Phyllis; Joseph, Laurice M.
2002-01-01
Examines the differential effects of sound boxes and sound sort phonological awareness instructional techniques on preschoolers' phonological awareness performance. Finds that children in the sound box group significantly outperformed children in the sound sort group on isolating medial sounds and segmenting phonemes. Reveals that preschool…
33 CFR 167.1700 - In Prince William Sound: General.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false In Prince William Sound: General... Schemes and Precautionary Areas Pacific West Coast § 167.1700 In Prince William Sound: General. The Prince William Sound Traffic Separation Scheme consists of four parts: Prince William Sound Traffic Separation...
42 CFR 417.120 - Fiscally sound operation and assumption of financial risk.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 3 2013-10-01 2013-10-01 false Fiscally sound operation and assumption of... Organizations: Organization and Operation § 417.120 Fiscally sound operation and assumption of financial risk. (a) Fiscally sound operation—(1) General requirements. Each HMO must have a fiscally sound operation...
Correlation between Identification Accuracy and Response Confidence for Common Environmental Sounds
set of environmental sounds with stimulus control and precision. The present study is one in a series of efforts to provide a baseline evaluation of a...sounds from six broad categories: household items, alarms, animals, human generated, mechanical, and vehicle sounds. Each sound was presented five times
Method for chemically analyzing a solution by acoustic means
Beller, Laurence S.
1997-01-01
A method and apparatus for determining the type of a solution and its concentration by acoustic means. Generally stated, the method consists of: immersing a sound-focusing transducer within a first liquid-filled container; locating a separately contained specimen solution at the sound focal point within the first container; locating a sound probe adjacent to the specimen; generating a variable-intensity sound signal from the transducer; measuring fundamental and multiple-harmonic sound signal amplitudes; and then comparing a plot of the specimen's sound response with known solution sound responses, thereby determining the solution type and concentration.
Application of acoustic radiosity methods to noise propagation within buildings
NASA Astrophysics Data System (ADS)
Muehleisen, Ralph T.; Beamer, C. Walter
2005-09-01
The prediction of sound pressure levels in rooms from transmitted sound is a difficult problem. The sound energy in the source room incident on the common wall must be accurately predicted. In the receiving room, the propagation of sound from the planar wall source must also be accurately predicted. The radiosity method naturally computes the spatial distribution of sound energy incident on a wall and also naturally predicts the propagation of sound from a planar area source. In this paper, the application of the radiosity method to sound transmission problems is introduced and explained.
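As a rough illustration of the radiosity idea (a hypothetical three-patch geometry, not the authors' formulation), each surface patch's outgoing sound energy can be computed by iterating the energy-balance equation B = E + R F B, where E is the directly radiated source energy, F holds the form factors between patches, and R the surface reflection coefficients:

```python
import numpy as np

# Hypothetical 3-patch enclosure: patch 0 is the source wall.
E = np.array([1.0, 0.0, 0.0])                 # direct (source) energy per patch
F = np.array([[0.0, 0.5, 0.5],                # F[i, j]: fraction of patch j's
              [0.5, 0.0, 0.5],                # outgoing energy incident on i
              [0.5, 0.5, 0.0]])
R = np.diag([0.8, 0.8, 0.8])                  # reflection coefficients

B = np.zeros(3)
for _ in range(200):                          # fixed-point (Jacobi-style) iteration
    B = E + R @ F @ B

# The iteration converges to the direct solution of (I - R F) B = E.
B_direct = np.linalg.solve(np.eye(3) - R @ F, E)
assert np.allclose(B, B_direct)
print(np.round(B, 3))
```

The same fixed-point structure extends to many patches, which is how the method naturally yields the spatial distribution of energy incident on a transmitting wall and the propagation from it as a planar source.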
The effect of contextual sound cues on visual fidelity perception.
Rojas, David; Cowan, Brent; Kapralos, Bill; Collins, Karen; Dubrowski, Adam
2014-01-01
Previous work has shown that sound can affect the perception of visual fidelity. Here we build upon this previous work by examining the effect of contextual sound cues (i.e., sounds that are related to the visuals) on visual fidelity perception. Results suggest that contextual sound cues do influence visual fidelity perception and, more specifically, our perception of visual fidelity increases with contextual sound cues. These results have implications for designers of multimodal virtual worlds and serious games that, with the appropriate use of contextual sounds, can reduce visual rendering requirements without a corresponding decrease in the perception of visual fidelity.
[Synchronous playing and acquiring of heart sounds and electrocardiogram based on LabVIEW].
Dan, Chunmei; He, Wei; Zhou, Jing; Que, Xiaosheng
2008-12-01
This paper describes a comprehensive system that acquires heart sounds and the electrocardiogram (ECG) in parallel and synchronizes their display and playback, so that auscultation and phonocardiogram inspection can be tied together. The hardware system, built around a C8051F340 microcontroller, acquires the heart sound and ECG synchronously and then sends each to its respective indicator. Heart sounds are displayed and played simultaneously by controlling the moments at which data are written to the indicator and to the sound output device. In clinical testing, heart sounds were successfully located against the ECG and played in real time.
First and second sound in a strongly interacting Fermi gas
NASA Astrophysics Data System (ADS)
Taylor, E.; Hu, H.; Liu, X.-J.; Pitaevskii, L. P.; Griffin, A.; Stringari, S.
2009-11-01
Using a variational approach, we solve the equations of two-fluid hydrodynamics for a uniform and trapped Fermi gas at unitarity. In the uniform case, we find that the first and second sound modes are remarkably similar to those in superfluid helium, a consequence of strong interactions. In the presence of harmonic trapping, first and second sound become degenerate at certain temperatures. At these points, second sound hybridizes with first sound and is strongly coupled with density fluctuations, giving a promising way of observing second sound. We also discuss the possibility of exciting second sound by generating local heat perturbations.
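For orientation, in the weakly coupled limit the two-fluid hydrodynamic equations yield the textbook Landau expressions for the two mode speeds (standard results, not taken from this paper):

```latex
u_1^{2} \simeq \left(\frac{\partial p}{\partial \rho}\right)_{\!\bar{s}},
\qquad
u_2^{2} \simeq \frac{\rho_s}{\rho_n}\,\frac{T\,\bar{s}^{\,2}}{\bar{c}_v}
```

where \(\rho_s\) and \(\rho_n\) are the superfluid and normal densities, \(\bar{s}\) the entropy per unit mass, and \(\bar{c}_v\) the specific heat per unit mass. First sound is chiefly a pressure/density oscillation and second sound a temperature/entropy oscillation; the hybridization discussed in the abstract occurs where the two uncoupled speeds cross.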
NASA sounding rockets, 1958 - 1968: A historical summary
NASA Technical Reports Server (NTRS)
Corliss, W. R.
1971-01-01
The development and use of sounding rockets is traced from the Wac Corporal through the present generation of rockets. The Goddard Space Flight Center Sounding Rocket Program is discussed, and the use of sounding rockets during the IGY and the 1960's is described. Advantages of sounding rockets are identified as vehicle and payload simplicity, low costs, payload recoverability, geographic flexibility, and temporal flexibility. The disadvantages are restricted time of observation, localized coverage, and payload limitations. Descriptions of major sounding rockets, trends in vehicle usage, and a compendium of NASA sounding rocket firings are also included.
A novel method for pediatric heart sound segmentation without using the ECG.
Sepehri, Amir A; Gharehbaghi, Arash; Dutoit, Thierry; Kocharian, Armen; Kiani, A
2010-07-01
In this paper, we propose a novel method for pediatric heart sound segmentation that pays special attention to the physiological effects of respiration on pediatric heart sounds. The segmentation is accomplished in three steps. First, the envelope of a heart sound signal is obtained, with emphasis on the first heart sound (S1) and the second heart sound (S2), using short-time spectral energy and autoregressive (AR) parameters of the signal. Then, the basic heart sounds are extracted, taking into account the repetitive and spectral characteristics of the S1 and S2 sounds, using a multi-layer perceptron (MLP) neural network classifier. In the final step, by considering the variations of the diastolic and systolic intervals due to the child's respiration, complete and precise heart sound end-pointing and segmentation is achieved. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
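The first step can be sketched as a short-time energy envelope followed by peak picking. This is a minimal stdlib-only illustration on a synthetic two-burst signal; the frame sizes, threshold, and signal are invented, and the paper's AR parameters and MLP classifier are omitted:

```python
import math

def short_time_energy(signal, frame_len=256, hop=128):
    """Short-time energy envelope: mean squared amplitude per frame."""
    env = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        env.append(sum(x * x for x in frame) / frame_len)
    return env

def pick_peaks(envelope, threshold_ratio=0.5):
    """Indices of local maxima exceeding a fraction of the global maximum."""
    thr = threshold_ratio * max(envelope)
    return [i for i in range(1, len(envelope) - 1)
            if envelope[i] > thr
            and envelope[i] >= envelope[i - 1]
            and envelope[i] > envelope[i + 1]]

# Synthetic demo: two decaying 60 Hz bursts (stand-ins for S1 and S2)
# embedded in silence, sampled at 2 kHz.
fs = 2000
sig = [0.0] * fs
for center in (400, 1200):
    for n in range(center - 50, center + 50):
        sig[n] += math.sin(2 * math.pi * 60 * n / fs) * math.exp(-abs(n - center) / 25)

env = short_time_energy(sig)
peaks = pick_peaks(env)  # frame indices of the two candidate heart sounds
```

In the actual method, labeling the detected events as S1 or S2 then relies on their spectral and timing characteristics, as the abstract describes.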
NASA Astrophysics Data System (ADS)
Nishiura, Takanobu; Nakamura, Satoshi
2003-10-01
Humans communicate with each other through speech by focusing on the target speech among environmental sounds in real acoustic environments. We can easily identify the target sound from other environmental sounds. For hands-free speech recognition, the identification of the target speech from environmental sounds is imperative. This mechanism may also be important for a self-moving robot to sense the acoustic environments and communicate with humans. Therefore, this paper first proposes hidden Markov model (HMM)-based environmental sound source identification. Environmental sounds are modeled by three states of HMMs and evaluated using 92 kinds of environmental sounds. The identification accuracy was 95.4%. This paper also proposes a new HMM composition method that composes speech HMMs and an HMM of categorized environmental sounds for robust environmental sound-added speech recognition. As a result of the evaluation experiments, we confirmed that the proposed HMM composition outperforms the conventional HMM composition with speech HMMs and a noise (environmental sound) HMM trained using noise periods prior to the target speech in a captured signal. [Work supported by Ministry of Public Management, Home Affairs, Posts and Telecommunications of Japan.]
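The identification stage can be sketched as maximum-likelihood selection among per-category HMMs using the forward algorithm. This is a toy, stdlib-only sketch: the two categories, the two-symbol alphabet, and all probabilities below are invented, whereas the paper trains three-state HMMs on 92 environmental sound classes:

```python
import math

def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the standard forward recursion."""
    n = len(start_p)
    alpha = [start_p[s] * emit_p[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [emit_p[s][o] * sum(alpha[r] * trans_p[r][s] for r in range(n))
                 for s in range(n)]
    return math.log(sum(alpha))

# Two toy three-state left-to-right HMMs over a 2-symbol alphabet
# (0 = quiet frame, 1 = loud frame), standing in for two sound categories.
start = [1.0, 0.0, 0.0]
left_right = [[0.7, 0.3, 0.0],
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]]
models = {
    "mostly_loud":  (start, left_right, [[0.1, 0.9], [0.2, 0.8], [0.1, 0.9]]),
    "mostly_quiet": (start, left_right, [[0.9, 0.1], [0.8, 0.2], [0.9, 0.1]]),
}

def identify(obs):
    """Label an observation sequence with the highest-likelihood model."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))
```

Real systems score sequences of spectral feature vectors (e.g., cepstral coefficients) with continuous-emission HMMs, but the decision rule is the same likelihood comparison.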
By the sound of it. An ERP investigation of human action sound processing in 7-month-old infants
Geangu, Elena; Quadrelli, Ermanno; Lewis, James W.; Macchi Cassia, Viola; Turati, Chiara
2015-01-01
Recent evidence suggests that human adults perceive human action sounds as a distinct category from human vocalizations, environmental, and mechanical sounds, activating different neural networks (Engel et al., 2009; Lewis et al., 2011). Yet, little is known about the development of such specialization. Using event-related potentials (ERP), this study investigated neural correlates of 7-month-olds’ processing of human action (HA) sounds in comparison to human vocalizations (HV), environmental (ENV), and mechanical (MEC) sounds. Relative to the other categories, HA sounds led to increased positive amplitudes between 470 and 570 ms post-stimulus onset at left anterior temporal locations, while HV led to increased negative amplitudes at the more posterior temporal locations in both hemispheres. Collectively, human produced sounds (HA + HV) led to significantly different response profiles compared to non-living sound sources (ENV + MEC) at parietal and frontal locations in both hemispheres. Overall, by 7 months of age human action sounds are being differentially processed in the brain, consistent with a dichotomy for processing living versus non-living things. This provides novel evidence regarding the typical categorical processing of socially relevant sounds. PMID:25732377
A Series of Case Studies of Tinnitus Suppression With Mixed Background Stimuli in a Cochlear Implant
Keiner, A. J.; Walker, Kurt; Deshpande, Aniruddha K.; Witt, Shelley; Killian, Matthijs; Ji, Helena; Patrick, Jim; Dillier, Norbert; van Dijk, Pim; Lai, Wai Kong; Hansen, Marlan R.; Gantz, Bruce
2015-01-01
Purpose Background sounds provided by a wearable sound playback device were mixed with the acoustical input picked up by a cochlear implant speech processor in an attempt to suppress tinnitus. Method First, patients were allowed to listen to several sounds and to select up to 4 sounds that they thought might be effective. These stimuli were programmed to loop continuously in the wearable playback device. Second, subjects were instructed to use 1 background sound each day on the wearable device, and they sequenced the selected background sounds during a 28-day trial. Patients were instructed to go to a website at the end of each day and rate the loudness and annoyance of the tinnitus as well as the acceptability of the background sound. Patients completed the Tinnitus Primary Function Questionnaire (Tyler, Stocking, Secor, & Slattery, 2014) at the beginning of the trial. Results Results indicated that background sounds were very effective at suppressing tinnitus. There was considerable variability in sounds preferred by the subjects. Conclusion The study shows that a background sound mixed with the microphone input can be effective for suppressing tinnitus during daily use of the sound processor in selected cochlear implant users. PMID:26001407
Transfer of knowledge from sound quality measurement to noise impact evaluation
NASA Astrophysics Data System (ADS)
Genuit, Klaus
2004-05-01
It is well known that the measurement and analysis of sound quality require a complex procedure that considers the physical, psychoacoustical, and psychological aspects of sound. Sound quality cannot be described by a single value based on A-weighted sound pressure level measurements. The A-weighted sound pressure level is sufficient to predict the probability that the human ear could be damaged by sound, but it is not the correct descriptor for the annoyance of a complex sound situation composed of several different sound events at different, and especially moving, positions (a soundscape). On the one hand, the spectral distribution and the temporal pattern (psychoacoustics) must be considered; on the other hand, the subjective attitude toward the sound situation and the expectations and experience of the people (psychology) have to be included in a complete noise impact evaluation. This paper describes applications of the newest sound quality measurement methods, well established among car manufacturers, based on artificial head recordings and signal processing comparable to human hearing, to noisy environments such as community and traffic noise.
The Anna's hummingbird chirps with its tail: a new mechanism of sonation in birds
Clark, Christopher James; Feo, Teresa J
2008-01-01
A diverse array of birds apparently make mechanical sounds (called sonations) with their feathers. Few studies have established that these sounds are non-vocal, and the mechanics of how these sounds are produced remains poorly studied. The loud, high-frequency chirp emitted by a male Anna's hummingbird (Calypte anna) during his display dive is a debated example. Production of the sound was originally attributed to the tail, but a more recent study argued that the sound is vocal. Here, we use high-speed video of diving birds, experimental manipulations on wild birds and laboratory experiments on individual feathers to show that the dive sound is made by tail feathers. High-speed video shows that fluttering of the trailing vane of the outermost tail feathers produces the sound. The mechanism is not a whistle, and we propose a flag model to explain the feather's fluttering and accompanying sound. The flag hypothesis predicts that subtle changes in feather shape will tune the frequency of sound produced by feathers. Many kinds of birds are reported to create aerodynamic sounds with their wings or tail, and this model may explain a wide diversity of non-vocal sounds produced by birds. PMID:18230592
The hearing threshold of a harbor porpoise (Phocoena phocoena) for impulsive sounds (L).
Kastelein, Ronald A; Gransier, Robin; Hoek, Lean; de Jong, Christ A F
2012-08-01
The distance at which harbor porpoises can hear underwater detonation sounds is unknown, but depends, among other factors, on the hearing threshold of the species for impulsive sounds. Therefore, the underwater hearing threshold of a young harbor porpoise for an impulsive sound, designed to mimic a detonation pulse, was quantified by using a psychophysical technique. The synthetic exponential pulse with a 5 ms time constant was produced and transmitted by an underwater projector in a pool. The resulting underwater sound, though modified by the response of the projection system and by the pool, exhibited the characteristic features of detonation sounds: a zero-to-peak sound pressure level at least 30 dB (re 1 s⁻¹) higher than the sound exposure level, and a short duration (34 ms). The animal's 50% detection threshold for this impulsive sound occurred at a received unweighted broadband sound exposure level of 60 dB re 1 μPa²·s. It is shown that the porpoise's audiogram for short-duration tonal signals [Kastelein et al., J. Acoust. Soc. Am. 128, 3211-3222 (2010)] can be used to estimate its hearing threshold for impulsive sounds.
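The gap between the two metrics can be checked for an ideal exponential pulse. This is a sketch only: the reported 30 dB-plus figure is for the actual pulse as modified by the projection system and pool, while the amplitude below is an arbitrary illustrative value:

```python
import math

P_REF = 1e-6  # reference pressure: 1 μPa, expressed in Pa

def pulse_metrics(amplitude_pa, tau_s):
    """Zero-to-peak SPL (dB re 1 μPa) and sound exposure level
    (dB re 1 μPa²·s) of an ideal pulse p(t) = A·exp(-t/τ), t ≥ 0.
    The exposure integral of p² over all t is A²·τ/2."""
    spl_pk = 20 * math.log10(amplitude_pa / P_REF)
    sel = 10 * math.log10(amplitude_pa ** 2 * tau_s / 2 / P_REF ** 2)
    return spl_pk, sel

spl_pk, sel = pulse_metrics(amplitude_pa=1.0, tau_s=0.005)
diff = spl_pk - sel  # depends only on τ: -10·log10(τ/2) ≈ 26 dB for τ = 5 ms
```

For the ideal 5 ms pulse the difference is about 26 dB, the same order as the 30 dB-plus difference reported for the measured pulse.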
Ishii, Youhei; Morita, Kiichiro; Shouji, Yoshihisa; Nakashima, Youko; Uchimura, Naohisa
2010-02-01
Emotion-associated sounds have been suggested to exert important effects on human personal relationships. The present study aimed to characterize the effects of the sounds of crying or laughing on visual cognitive function in schizophrenia patients. We recorded exploratory eye movements in 24 schizophrenia patients (mean age, 27.0 +/- 6.1 years; 14 male, 10 female) and age-matched controls. The total eye scanning length (TESL), the total number of gaze points in the left (left TNGP) and right (right TNGP) visual fields of the screen, and the number of researching areas (NRA) were determined using eye-mark recording in the presence or absence of emotionally charged sounds. The controls' TESL for smiling pictures was longer than that for crying pictures irrespective of sound. The patients' TESL for smiling pictures, however, was shorter than that for crying pictures irrespective of sound. The left TNGP for smiling pictures was lower in patients than in controls independent of sound. Importantly, the right TNGP was significantly larger with laughing sounds than in the absence of sound. In controls, the NRA for smiling pictures was significantly greater than for crying pictures irrespective of sound. The patients' NRA did not differ significantly between smiling and crying pictures irrespective of sound. Eye movements in the patients' left visual field for smiling pictures paired with laughing sounds differed particularly from those in controls, suggesting impaired visual cognitive function associated with positive emotion, including pleasure-related sounds, in schizophrenia.
Kastelein, Ronald A; van der Heul, Sander; Verboom, Willem C; Triesscheijn, Rob J V; Jennings, Nancy V
2006-02-01
To prevent grounding of ships and collisions between ships in shallow coastal waters, an underwater data collection and communication network (ACME) using underwater sounds to encode and transmit data is currently under development. Marine mammals might be affected by ACME sounds since they may use sound of a similar frequency (around 12 kHz) for communication, orientation, and prey location. If marine mammals tend to avoid the vicinity of the acoustic transmitters, they may be kept away from ecologically important areas by ACME sounds. One marine mammal species that may be affected in the North Sea is the harbour seal (Phoca vitulina). No information is available on the effects of ACME-like sounds on harbour seals, so this study was carried out as part of an environmental impact assessment program. Nine captive harbour seals were subjected to four sound types, three of which may be used in the underwater acoustic data communication network. The effect of each sound was judged by comparing the animals' location in a pool during test periods to that during baseline periods, during which no sound was produced. Each of the four sounds could be made into a deterrent by increasing its amplitude. The seals reacted by swimming away from the sound source. The sound pressure level (SPL) at the acoustic discomfort threshold was established for each of the four sounds. The acoustic discomfort threshold is defined as the boundary between the areas that the animals generally occupied during the transmission of the sounds and the areas that they generally did not enter during transmission. The SPLs at the acoustic discomfort thresholds were similar for each of the sounds (107 dB re 1 μPa). Based on this discomfort threshold SPL, discomfort zones at sea for several source levels (130-180 dB re 1 μPa) of the sounds were calculated, using a guideline sound propagation model for shallow water.
The discomfort zone is defined as the area around a sound source that harbour seals are expected to avoid. The definition of the discomfort zone is based on behavioural discomfort, and does not necessarily coincide with the physical discomfort zone. Based on these results, source levels can be selected that have an acceptable effect on harbour seals in particular areas. The discomfort zone of a communication sound depends on the sound, the source level, and the propagation characteristics of the area in which the sound system is operational. The source level of the communication system should be adapted to each area (taking into account the width of a sea arm, the local sound propagation, and the importance of an area to the affected species). The discomfort zone should not coincide with ecologically important areas (for instance resting, breeding, suckling, and feeding areas), or routes between these areas.
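The last step, converting a source level into a discomfort zone radius, can be sketched with a generic log-range propagation law. The 15·log10(r) loss coefficient below is an assumed stand-in, intermediate between cylindrical (10) and spherical (20) spreading, and not the study's guideline shallow-water model:

```python
DISCOMFORT_SPL = 107.0  # dB re 1 μPa, the behavioural threshold reported above

def discomfort_radius_m(source_level_db, tl_coeff=15.0):
    """Range at which the received level drops to the discomfort threshold,
    assuming transmission loss TL = tl_coeff · log10(r), r in metres."""
    return 10 ** ((source_level_db - DISCOMFORT_SPL) / tl_coeff)

# Radii for a few candidate source levels (dB re 1 μPa)
radii = {sl: discomfort_radius_m(sl) for sl in (130, 150, 180)}
```

The strong dependence on source level is the point of the abstract's recommendation: the transmit level can be tuned per area so the discomfort zone avoids ecologically important sites.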
Humpback whale bioacoustics: From form to function
NASA Astrophysics Data System (ADS)
Mercado, Eduardo, III
This thesis investigates how humpback whales produce, perceive, and use sounds from a comparative and computational perspective. Biomimetic models are developed within a systems-theoretic framework and then used to analyze the properties of humpback whale sounds. First, sound transmission is considered in terms of possible production mechanisms and the propagation characteristics of shallow water environments frequented by humpback whales. A standard source-filter model (used to describe human sound production) is shown to be well suited for characterizing sound production by humpback whales. Simulations of sound propagation based on normal mode theory reveal that optimal frequencies for long range propagation are higher than the frequencies used most often by humpbacks, and that sounds may contain spectral information indicating how far they have propagated. Next, sound reception is discussed. A model of human auditory processing is modified to emulate humpback whale auditory processing as suggested by cochlear anatomical dimensions. This auditory model is used to generate visual representations of humpback whale sounds that more clearly reveal what features are likely to be salient to listening whales. Additionally, the possibility that an unusual sensory organ (the tubercle) plays a role in acoustic processing is assessed. Spatial distributions of tubercles are described that suggest tubercles may be useful for localizing sound sources. Finally, these models are integrated with self-organizing feature maps to create a biomimetic sound classification system, and a detailed analysis of individual sounds and sound patterns in humpback whale 'songs' is performed. This analysis provides evidence that song sounds and sound patterns vary substantially in terms of detectability and propagation potential, suggesting that they do not all serve the same function. 
New quantitative techniques are also presented that allow for more objective characterizations of the long term acoustic features of songs. The quantitative framework developed in this thesis provides a basis for theoretical consideration of how humpback whales (and other cetaceans) might use sound. Evidence is presented suggesting that vocalizing humpbacks could use sounds not only to convey information to other whales, but also to collect information about other whales. In particular, it is suggested that some sounds currently believed to be primarily used as communicative signals, might be primarily used as sonar signals. This theoretical framework is shown to be generalizable to other baleen whales and to toothed whales.
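One normal-mode result underlying the propagation claim can be stated compactly: a shallow-water channel acts as a high-pass waveguide. This sketch assumes the textbook pressure-release-surface/rigid-bottom idealization and a nominal 1500 m/s sound speed, not the thesis's actual environment models:

```python
def cutoff_frequency_hz(depth_m, sound_speed_ms=1500.0):
    """First-mode cutoff of an idealized waveguide (pressure-release surface,
    rigid bottom): f_c = c / (4·H). Sound below f_c is not trapped by the
    channel and attenuates rapidly, so shallower water favors higher
    frequencies for long-range propagation."""
    return sound_speed_ms / (4.0 * depth_m)

fc_20m = cutoff_frequency_hz(20.0)  # cutoff in 20 m of water
```

Low song components near or below this cutoff propagate poorly, consistent with the finding that optimal long-range frequencies lie above those humpbacks use most often.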
Ponnath, Abhilash; Farris, Hamilton E.
2014-01-01
Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3–10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and varied forms of modulation: modulation lasted <2 s and, in different cells, excitability either decreased, increased or shifted in latency. Within cells, the modulatory effect of sound-by-sound electrical stimulation varied between different acoustic stimuli, including for different male calls, suggesting modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene. PMID:25120437
The Coast Artillery Journal. Volume 65, Number 4, October 1926
1926-10-01
sound. a. Sound location of airplanes by binaural observation in all antiaircraft regiments. b. Sound ranging on report of enemy guns, together with...Direction finding by binaural observation. [Subparagraphs 30 a and 30 c (1).] This applies to continuous sounds such as propeller noises. b. Point...impacts. 32. The so-called binaural sense is our means of sensing the direction of a sound source. When we hear a sound we judge the approximate
How do "mute" cicadas produce their calling songs?
Luo, Changqing; Wei, Cong; Nansen, Christian
2015-01-01
Insects have evolved a variety of structures and mechanisms to produce sounds, which are used for communication both within and between species. Among acoustic insects, cicada males are particularly known for their loud and diverse sounds, which play an important role in communication. The main method of sound production in cicadas is the tymbal mechanism, and a relatively small number of cicada species possess both tymbal and stridulatory organs. However, cicadas of the genus Karenia have no specialized sound-producing structures, so they are referred to as "mute". This denomination is quite misleading, as they do in fact produce sounds. Here, we investigate the sound-producing mechanism and acoustic communication of the "mute" cicada Karenia caelatata and discover a new sound-production mechanism for cicadas: K. caelatata produces impact sounds by banging the forewing costa against the operculum. The temporal, frequency, and amplitude characteristics of the impact sounds are described. Morphological studies and reflectance-based analyses reveal that the structures involved in sound production in K. caelatata (the forewing, operculum, cruciform elevation, and wing-holding groove on the scutellum) are all morphologically modified. Acoustic playback experiments and behavioral observations suggest that the impact sounds of K. caelatata are used in intraspecific communication and function as calling songs. This new sound-production mechanism expands our knowledge of the diversity of acoustic signaling behavior in cicadas and further underscores the need for more bioacoustic studies of cicadas that lack the tymbal mechanism.
Lemaitre, Guillaume; Heller, Laurie M.; Navolio, Nicole; Zúñiga-Peñaranda, Nicolas
2015-01-01
We report a series of experiments about a little-studied type of compatibility effect between a stimulus and a response: the priming of manual gestures via sounds associated with these gestures. The goal was to investigate the plasticity of the gesture-sound associations mediating this type of priming. Five experiments used a primed choice-reaction task. Participants were cued by a stimulus to perform response gestures that produced response sounds; those sounds were also used as primes before the response cues. We compared arbitrary associations between gestures and sounds (key lifts and pure tones) created during the experiment (i.e. no pre-existing knowledge) with ecological associations corresponding to the structure of the world (tapping gestures and sounds, scraping gestures and sounds) learned through the entire life of the participant (thus existing prior to the experiment). Two results were found. First, the priming effect exists for ecological as well as arbitrary associations between gestures and sounds. Second, the priming effect is greatly reduced for ecologically existing associations and is eliminated for arbitrary associations when the response gesture stops producing the associated sounds. These results provide evidence that auditory-motor priming is mainly created by rapid learning of the association between sounds and the gestures that produce them. Auditory-motor priming is therefore mediated by short-term associations between gestures and sounds that can be readily reconfigured regardless of prior knowledge. PMID:26544884
Object localization using a biosonar beam: how opening your mouth improves localization.
Arditi, G; Weiss, A J; Yovel, Y
2015-08-01
Determining the location of a sound source is crucial for survival. Both predators and prey usually produce sound while moving, revealing valuable information about their presence and location. Animals have thus evolved morphological and neural adaptations allowing precise sound localization. Mammals rely on the temporal and amplitude differences between the sound signals arriving at their two ears, as well as on the spectral cues available in the signal arriving at a single ear to localize a sound source. Most mammals rely on passive hearing and are thus limited by the acoustic characteristics of the emitted sound. Echolocating bats emit sound to perceive their environment. They can, therefore, affect the frequency spectrum of the echoes they must localize. The biosonar sound beam of a bat is directional, spreading different frequencies into different directions. Here, we analyse mathematically the spatial information that is provided by the beam and could be used to improve sound localization. We hypothesize how bats could improve sound localization by altering their echolocation signal design or by increasing their mouth gape (the size of the sound emitter) as they, indeed, do in nature. Finally, we also reveal a trade-off according to which increasing the echolocation signal's frequency improves the accuracy of sound localization but might result in undesired large localization errors under low signal-to-noise ratio conditions.
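The mouth-gape argument can be illustrated with the standard baffled circular-piston beam pattern (an assumption here, used as a stand-in for the paper's actual beam analysis; the 35 kHz frequency and the aperture radii below are invented):

```python
import math

def j1(x, terms=20):
    """Bessel function J1 via its power series (adequate for small |x|)."""
    return sum((-1) ** m / (math.factorial(m) * math.factorial(m + 1))
               * (x / 2) ** (2 * m + 1) for m in range(terms))

def piston_directivity(theta_rad, freq_hz, aperture_radius_m, c=343.0):
    """Far-field amplitude of a baffled circular piston of radius a:
    D(θ) = |2·J1(k·a·sinθ) / (k·a·sinθ)|, with k = 2πf/c (c: speed of
    sound in air)."""
    ka = 2 * math.pi * freq_hz / c * aperture_radius_m
    x = ka * math.sin(theta_rad)
    return 1.0 if abs(x) < 1e-12 else abs(2 * j1(x) / x)

# At 10° off-axis, a wider emitter retains less relative level: its
# amplitude changes more steeply with angle, a stronger localization cue.
d_wide   = piston_directivity(math.radians(10), 35e3, 0.009)  # 9 mm gape
d_narrow = piston_directivity(math.radians(10), 35e3, 0.004)  # 4 mm gape
```

A larger gape thus steepens the off-axis roll-off of each frequency, consistent with the localization benefit the authors hypothesize, while the low-SNR caveat arises because a steep pattern also makes large errors possible when the measured level is noisy.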
How Do Honeybees Attract Nestmates Using Waggle Dances in Dark and Noisy Hives?
Hasegawa, Yuji; Ikeno, Hidetoshi
2011-01-01
It is well known that honeybees share information about food sources with nestmates using a dance language that is representative of symbolic communication among non-primates. Some honeybee species engage in visually apparent behavior, walking in a figure-eight pattern inside their dark hives. It has been suggested that sounds play an important role in this dance language, even though a variety of wing-vibration sounds are produced by honeybee behaviors in hives. Dances have been shown to emit sounds primarily at about 250–300 Hz, which is in the same frequency range as honeybees' flight sounds. Thus, the exact mechanism whereby honeybees attract nestmates using waggle dances in such a dark and noisy hive is as yet unclear. In this study, we used a flight simulator in which honeybees were attached to a torque meter in order to analyze the component of bees' orienting response caused only by sounds, and not by odor or by vibrations sensed by their legs. Using single-sound localization tests, we showed that honeybees preferred sounds around 265 Hz. Furthermore, in sound discrimination tests using sounds of the same frequency, honeybees preferred rhythmic sounds. Our results demonstrate that frequency and rhythmic components play complementary roles in localizing dance sounds. Dance sounds were presumably developed to share information in a dark and noisy environment. PMID:21603608
Kuroda, Tsuyoshi; Tomimatsu, Erika; Grondin, Simon; Miyazaki, Makoto
2016-11-01
We investigated how perceived duration of empty time intervals would be modulated by the length of sounds marking those intervals. Three sounds were successively presented in Experiment 1. Each sound was short (S) or long (L), and the temporal position of the middle sound's onset was varied. The lengthening of each sound resulted in delayed perception of the onset; thus, the middle sound's onset had to be presented earlier in the SLS than in the LSL sequence so that participants perceived the three sounds as presented at equal interonset intervals. In Experiment 2, a short sound and a long sound were alternated repeatedly, and the relative duration of the SL interval to the LS interval was varied. This repeated sequence was perceived as consisting of equal interonset intervals when the onsets of all sounds were aligned at physically equal intervals. If the same onset delay as in the preceding experiment had occurred, participants should have perceived equality between the interonset intervals in the repeated sequence when the SL interval was physically shortened relative to the LS interval. The effects of sound length seemed to be canceled out when the presentation of intervals was repeated. Finally, the perceived duration of the interonset intervals in the repeated sequence was not influenced by whether the participant's native language was French or Japanese, or by how the repeated sequence was perceptually segmented into rhythmic groups.
Sound production in Onuxodon fowleri (Carapidae) and its amplification by the host shell.
Kéver, Loïc; Colleye, Orphal; Lugli, Marco; Lecchini, David; Lerouvreur, Franck; Herrel, Anthony; Parmentier, Eric
2014-12-15
Onuxodon species are well known for living inside pearl oysters. As in other carapids, their anatomy highlights their ability to make sounds but sound production has never been documented in Onuxodon. This paper describes sound production in Onuxodon fowleri as well as the anatomy of the sound production apparatus. Single-pulsed sounds and multiple-pulsed sounds that sometimes last more than 3 s were recorded in the field and in captivity (Makemo Island, French Polynesia). These pulses are characterized by a broadband frequency spectrum from 100 to 1000 Hz. Onuxodon fowleri is mainly characterized by its ability to modulate the pulse period, meaning that this species can produce pulsed sounds and tonal-like sounds using the same mechanism. In addition, the sound can be remarkably amplified by the shell cavity (peak gain can exceed 10 dB for some frequencies). The sonic apparatus of O. fowleri is characterized by a rocker bone in front of the swimbladder, modified vertebrae and epineurals, and two pairs of sonic muscles, one of which (primary sonic muscle) inserts on the rocker bone. The latter structure, which is absent in other carapid genera, appears to be sexually dimorphic suggesting differences in sound production in males and females. Sound production in O. fowleri could be an example of adaptation where an animal exploits features of its environment to enhance communication. © 2014. Published by The Company of Biologists Ltd.
Endo, Hiroshi; Ino, Shuichi; Fujisaki, Waka
2017-09-01
Because chewing sounds influence perceived food textures, unpleasant textures of texture-modified diets might be improved by chewing sound modulation. Additionally, since inhomogeneous food properties increase perceived sensory intensity, the effects of chewing sound modulation might depend on inhomogeneity. This study examined the influences of texture inhomogeneity on the effects of chewing sound modulation. Three kinds of nursing care foods in two food process types (minced-/puréed-like foods for inhomogeneous/homogeneous texture respectively) were used as sample foods. A pseudo-chewing sound presentation system, using electromyogram signals, was used to modulate chewing sounds. Thirty healthy elderly adults participated in the experiment. In two conditions, with and without the pseudo-chewing sound, participants rated the taste, texture, and evoked feelings in response to sample foods. The results showed that inhomogeneity strongly influenced the perception of food texture. Regarding the effects of the pseudo-chewing sound, taste was less influenced, the perceived food texture tended to change in the minced-like foods, and evoked feelings changed in both food process types. Although there were some food-dependent differences in the effects of the pseudo-chewing sound, the presentation of the pseudo-chewing sounds was more effective in foods with an inhomogeneous texture. In addition, it was shown that the pseudo-chewing sound might have positively influenced feelings. Copyright © 2017 Elsevier Ltd. All rights reserved.
Experiments to Investigate the Acoustic Properties of Sound Propagation
ERIC Educational Resources Information Center
Dagdeviren, Omur E.
2018-01-01
Propagation of sound waves is one of the fundamental concepts in physics. Some of the properties of sound propagation such as attenuation of sound intensity with increasing distance are familiar to everybody from the experiences of daily life. However, the frequency dependence of sound propagation and the effect of acoustics in confined…
Code of Federal Regulations, 2011 CFR
2011-10-01
... second or less. Decibel (dB) means a unit of measurement of sound pressure levels. dB(A) means the sound... operate similar equipment under similar conditions. Sound level or Sound pressure level means ten times... an eight-hour time-weighted-average sound level (TWA) of 85 dB(A), or, equivalently, a dose of 50...
ERIC Educational Resources Information Center
Deal, Walter F., III
2007-01-01
Sound provides and offers amazing insights into the world. Sound waves may be defined as mechanical energy that moves through air or other medium as a longitudinal wave and consists of pressure fluctuations. Humans and animals alike use sound as a means of communication and a tool for survival. Mammals, such as bats, use ultrasonic sound waves to…
Cognitive Control of Involuntary Distraction by Deviant Sounds
ERIC Educational Resources Information Center
Parmentier, Fabrice B. R.; Hebrero, Maria
2013-01-01
It is well established that a task-irrelevant sound (deviant sound) departing from an otherwise repetitive sequence of sounds (standard sounds) elicits an involuntary capture of attention and orienting response toward the deviant stimulus, resulting in the lengthening of response times in an ongoing task. Some have argued that this type of…
Technology, Sound and Popular Music.
ERIC Educational Resources Information Center
Jones, Steve
The ability to record sound is power over sound. Musicians, producers, recording engineers, and the popular music audience often refer to the sound of a recording as something distinct from the music it contains. Popular music is primarily mediated via electronics, via sound, and not by means of written notes. The ability to preserve or modify…
Cleaning up Trumpet Sound: Some Paths to Better Tone
ERIC Educational Resources Information Center
Zingara, James J.
2004-01-01
Of all the factors used to assess trumpet players, the one that distinguishes the established professional from the student is sound quality. While a good sound may be called "full," "rich," or "dark," poor sound is often described as "constricted," "tight," "thin," or "fuzzy." Although students' concept of good sound is important, many times…
Confusability of Consonant Phonemes in Sound Discrimination Tasks.
ERIC Educational Resources Information Center
Rudegeair, Robert E.
The findings of Marsh and Sherman's investigation, in 1970, of the speech sound discrimination ability of kindergarten subjects, are discussed in this paper. In the study a comparison was made between performance when speech sounds were presented in isolation and when speech sounds were presented in a word context, using minimal sound contrasts.…
ERIC Educational Resources Information Center
Merwade, Venkatesh; Eichinger, David; Harriger, Bradley; Doherty, Erin; Habben, Ryan
2014-01-01
While the science of sound can be taught by explaining the concept of sound waves and vibrations, the authors of this article focused their efforts on creating a more engaging way to teach the science of sound--through engineering design. In this article they share the experience of teaching sound to third graders through an engineering challenge…
50 CFR 27.71 - Motion or sound pictures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 50 Wildlife and Fisheries 6 2010-10-01 2010-10-01 false Motion or sound pictures. 27.71 Section 27... (CONTINUED) THE NATIONAL WILDLIFE REFUGE SYSTEM PROHIBITED ACTS Disturbing Violations: Light and Sound Equipment § 27.71 Motion or sound pictures. The taking or filming of any motion or sound pictures on a...
42 CFR 417.120 - Fiscally sound operation and assumption of financial risk.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 3 2010-10-01 2010-10-01 false Fiscally sound operation and assumption of...: Organization and Operation § 417.120 Fiscally sound operation and assumption of financial risk. (a) Fiscally sound operation—(1) General requirements. Each HMO must have a fiscally sound operation, as demonstrated...
33 CFR 67.10-20 - Sound signal tests.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Sound signal tests. 67.10-20... NAVIGATION AIDS TO NAVIGATION ON ARTIFICIAL ISLANDS AND FIXED STRUCTURES General Requirements for Sound signals § 67.10-20 Sound signal tests. (a) Sound signal tests must: (1) Be made by the applicant in the...
78 FR 48314 - Drawbridge Operation Regulation; Grassy Sound Channel, Middle Township, NJ
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-08
... Operation Regulation; Grassy Sound Channel, Middle Township, NJ AGENCY: Coast Guard, DHS. ACTION: Notice of... operating schedule that governs the Grassy Sound Channel Bridge (Ocean Drive) across Grassy Sound, mile 1.0..., the Grassy Sound Channel Bridge (Ocean Drive), at mile 1.0, at Middle Township, NJ is open on signal...
1987-09-01
...stethoscope. Auscultation in respiratory medicine, however, has advanced slowly since Laennec established auscultation of lung sounds as a means of... stethoscope and the ear have limitations in their use as instruments for the evaluation of respiratory sounds. Respiratory sounds have a wide spectrum... Amplification, Rectification and Level-shifting Stage ... Respiratory Sound Circuitry ... Sound Sensor
33 CFR 167.1323 - In Puget Sound and its approaches: Puget Sound.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false In Puget Sound and its approaches: Puget Sound. 167.1323 Section 167.1323 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF... Traffic Separation Schemes and Precautionary Areas Pacific West Coast § 167.1323 In Puget Sound and its...
50 CFR 27.71 - Motion or sound pictures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 9 2012-10-01 2012-10-01 false Motion or sound pictures. 27.71 Section 27... (CONTINUED) THE NATIONAL WILDLIFE REFUGE SYSTEM PROHIBITED ACTS Disturbing Violations: Light and Sound Equipment § 27.71 Motion or sound pictures. The taking or filming of any motion or sound pictures on a...
33 CFR 167.1323 - In Puget Sound and its approaches: Puget Sound.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false In Puget Sound and its approaches: Puget Sound. 167.1323 Section 167.1323 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF... Traffic Separation Schemes and Precautionary Areas Pacific West Coast § 167.1323 In Puget Sound and its...
24 CFR 51.103 - Criteria and standards.
Code of Federal Regulations, 2010 CFR
2010-04-01
... decibels to sound levels in the night from 10 p.m. to 7 a.m. Mathematical expressions for average sound..., as indicated in § 51.106(a)(3). Methods for assessing the contribution of loud impulsive sounds to day-night average sound level at a site and mathematical expressions for determining whether a sound...
50 CFR 27.71 - Motion or sound pictures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 50 Wildlife and Fisheries 8 2011-10-01 2011-10-01 false Motion or sound pictures. 27.71 Section 27... (CONTINUED) THE NATIONAL WILDLIFE REFUGE SYSTEM PROHIBITED ACTS Disturbing Violations: Light and Sound Equipment § 27.71 Motion or sound pictures. The taking or filming of any motion or sound pictures on a...
42 CFR 417.120 - Fiscally sound operation and assumption of financial risk.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 3 2011-10-01 2011-10-01 false Fiscally sound operation and assumption of...: Organization and Operation § 417.120 Fiscally sound operation and assumption of financial risk. (a) Fiscally sound operation—(1) General requirements. Each HMO must have a fiscally sound operation, as demonstrated...
The impact of sound in modern multiline video slot machine play.
Dixon, Mike J; Harrigan, Kevin A; Santesso, Diane L; Graydon, Candice; Fugelsang, Jonathan A; Collins, Karen
2014-12-01
Slot machine wins and losses have distinctive, measurable, physiological effects on players. The contributing factors to these effects remain under-explored. We believe that sound is one of these key contributing factors. Sound plays an important role in reinforcement, and thus in the arousal level and stress response of players. It is the use of sound for positive reinforcement in particular that we believe influences the player. In the current study, we investigate the role that sound plays in psychophysical responses to slot machine play. A total of 96 gamblers played a slot machine simulator with and without sound being paired with reinforcement. Skin conductance responses and heart rate, as well as subjective judgments about the gambling experience, were examined. The results showed that the sound influenced the arousal of participants both psychophysically and psychologically. The sound also influenced players' preferences, with the majority of players preferring to play slot machines that were accompanied by winning sounds. The sounds also caused players to significantly overestimate the number of times they won while playing the slot machine.
Letter-Sound Reading: Teaching Preschool Children Print-to-Sound Processing
2015-01-01
This intervention study investigated the growth of letter-sound reading and growth of consonant–vowel–consonant (CVC) word decoding abilities for a representative sample of 41 US children in preschool settings. Specifically, the study evaluated the effectiveness of a 3-step letter-sound teaching intervention in teaching preschool children to decode, or read, single letters. The study compared a control group, which received the preschool’s standard letter-sound instruction, to an intervention group, which received a 3-step letter-sound instruction intervention. The children’s growth in letter-sound reading and CVC word decoding abilities was assessed at baseline and at 2, 4, 6 and 8 weeks. When compared to the control group, the growth of letter-sound reading ability was slightly higher for the intervention group. The rate of increase in letter-sound reading was significantly faster for the intervention group. In both groups, too few children learned to decode any CVC words to allow for analysis. Results of this study support the use of the intervention strategy in preschools for teaching children print-to-sound processing. PMID:26839494
Lee, Hyun-Ho; Lee, Sang-Kwon
2009-09-01
Booming sound is one of the important sounds in a passenger car. The aim of this paper is to develop an objective evaluation method for interior booming sound. The method is based on sound metrics and an ANN (artificial neural network), and the developed method is called the booming index. Previous work maintained that booming sound quality is related to loudness and sharpness--the sound metrics used in psychoacoustics--and that the booming index is developed by using the loudness and sharpness for a signal over the whole frequency range between 20 Hz and 20 kHz. In the present paper, the booming sound quality was found to be effectively related to the loudness at frequencies below 200 Hz; thus the booming index is updated by using the loudness of the signal filtered by a low-pass filter at frequencies under 200 Hz. The relationship between the booming index and the sound metrics is identified by an ANN. The updated booming index has been successfully applied to the objective evaluation of the booming sound quality of mass-produced passenger cars.
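The low-pass step this abstract relies on (evaluating loudness only below 200 Hz) can be illustrated with a minimal sketch. This is a generic first-order filter, not the authors' implementation; the function name, sampling rate, and exponential-smoothing design are assumptions, with only the 200 Hz cutoff taken from the abstract.

```python
import math

def low_pass(samples, fs, cutoff_hz=200.0):
    """First-order low-pass filter (exponential smoothing) at `cutoff_hz`."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)  # analog RC time constant
    dt = 1.0 / fs                           # sample period
    alpha = dt / (rc + dt)                  # smoothing coefficient
    out = []
    prev = 0.0
    for x in samples:
        prev += alpha * (x - prev)  # each output leans toward the new sample
        out.append(prev)
    return out
```

With this cutoff, a 1 kHz tone is attenuated to roughly a fifth of its amplitude while a 50 Hz component passes nearly unchanged, which is the kind of isolation the updated booming index requires.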
Sounds produced by individual white whales, Delphinapterus leucas, from Svalbard during capture (L)
NASA Astrophysics Data System (ADS)
van Parijs, Sofie M.; Lydersen, Christian; Kovacs, Kit M.
2003-01-01
Recordings were made of the sounds produced by white whales during capture events in Storfjorden, Svalbard, in the late autumn. Only four of eight captured individuals produced sounds. Four subadults, one female and three males, between 330 and 375 cm long, did not produce sounds during handling. The four animals that produced sounds were as follows: a female subadult of 280 cm produced repetitive broadband clicks; a solitary calf produced harmonic sounds, which we suggest may serve as mother-calf "contact calls," and a mother-calf pair were the two animals that produced the most sounds in the study. The mother produced "crooning" broadband clicks and frequently moved her head toward her calf while producing underwater sounds. The calf produced three types of frequency-modulated sounds interspersed within broadband click trains. No sounds were heard from any of the animals once they were free-swimming, or during ad lib recording sessions in the study area, even though groups of white whales were sighted on several occasions away from the capture net.
A visual stethoscope to detect the position of the tracheal tube.
Kato, Hiromi; Suzuki, Akira; Nakajima, Yoshiki; Makino, Hiroshi; Sanjo, Yoshimitsu; Nakai, Takayoshi; Shiraishi, Yoshito; Katoh, Takasumi; Sato, Shigehito
2009-12-01
Advancing a tracheal tube into the bronchus produces unilateral breath sounds. We created a Visual Stethoscope that allows real-time fast Fourier transformation of the sound signal and 3-dimensional (frequency-amplitude-time) color rendering of the results on a personal computer with simultaneous processing of 2 individual sound signals. The aim of this study was to evaluate whether the Visual Stethoscope can detect bronchial intubation in comparison with auscultation. After induction of general anesthesia, the trachea was intubated with a tracheal tube. The distance from the incisors to the carina was measured using a fiberoptic bronchoscope. While the anesthesiologist advanced the tracheal tube from the trachea to the bronchus, another anesthesiologist auscultated breath sounds to detect changes of the breath sounds and/or disappearance of bilateral breath sounds for every 1 cm that the tracheal tube was advanced. Two precordial stethoscopes placed at the left and right sides of the chest were used to record breath sounds simultaneously. At a later date, we randomly entered the recorded breath sounds into the Visual Stethoscope. The same anesthesiologist observed the visualized breath sounds on the personal computer screen processed by the Visual Stethoscope to examine changes of breath sounds and/or disappearance of bilateral breath sounds. We compared the decision made based on auscultation with that made based on the results of the visualized breath sounds using the Visual Stethoscope. Thirty patients were enrolled in the study. When irregular breath sounds were auscultated, the tip of the tracheal tube was located at 0.6 +/- 1.2 cm on the bronchial side of the carina. Using the Visual Stethoscope, when there were any changes of the shape of the visualized breath sounds, the tube was located at 0.4 +/- 0.8 cm on the tracheal side of the carina (P < 0.01).
When unilateral breath sounds were auscultated, the tube was located at 2.6 +/- 1.2 cm on the bronchial side of the carina. The tube was also located at 2.3 +/- 1.0 cm on the bronchial side of the carina when a unilateral shape of visualized breath sounds was obtained using the Visual Stethoscope (not significant). During advancement of the tracheal tube, alterations of the shape of the visualized breath sounds using the Visual Stethoscope appeared before the changes of the breath sounds were detected by auscultation. Bilateral breath sounds disappeared when the tip of the tracheal tube was advanced beyond the carina in both groups.
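The frequency-amplitude-time display described above rests on transforming short frames of the breath-sound signal into magnitude spectra. As a hedged sketch of that single step (a naive DFT with a Hann window, not the authors' real-time FFT code; the function name is illustrative):

```python
import cmath
import math

def magnitude_spectrum(frame):
    """Naive DFT magnitude spectrum of one signal frame (for visualization)."""
    n = len(frame)
    # A Hann window tapers the frame's edges to reduce spectral leakage.
    windowed = [x * 0.5 * (1.0 - math.cos(2.0 * math.pi * i / (n - 1)))
                for i, x in enumerate(frame)]
    spectrum = []
    for k in range(n // 2):  # keep only non-negative frequency bins
        s = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(windowed))
        spectrum.append(abs(s))
    return spectrum
```

Stacking such spectra frame by frame yields the frequency-amplitude-time surface the device renders; in practice an FFT replaces the quadratic-time loop above.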
The lung sounds are best heard with a stethoscope. This is called auscultation. Normal lung sounds occur ... the bottom of the rib cage. Using a stethoscope, the doctor may hear normal breathing sounds, decreased ...
Experiments to investigate the acoustic properties of sound propagation
NASA Astrophysics Data System (ADS)
Dagdeviren, Omur E.
2018-07-01
Propagation of sound waves is one of the fundamental concepts in physics. Some of the properties of sound propagation such as attenuation of sound intensity with increasing distance are familiar to everybody from the experiences of daily life. However, the frequency dependence of sound propagation and the effect of acoustics in confined environments are not straightforward to estimate. In this article, we propose experiments, which can be conducted in a classroom environment with commonly available devices such as smartphones and laptops to measure sound intensity level as a function of the distance between the source and the observer and frequency of the sound. Our experiments and deviations from the theoretical calculations can be used to explain basic concepts of sound propagation and acoustics to a diverse population of students.
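The distance dependence measured in these classroom experiments follows, for an idealized free-field point source, the inverse-square law: sound pressure level drops by 20·log10(r/r0) dB, about 6 dB per doubling of distance. A minimal sketch (function name and reference values are illustrative; real rooms deviate from this because of reflections, which is exactly what the proposed experiments use to discuss acoustics):

```python
import math

def spl_at_distance(spl_ref_db, r_ref_m, r_m):
    """Free-field point source: SPL falls 6 dB per doubling of distance."""
    return spl_ref_db - 20.0 * math.log10(r_m / r_ref_m)
```

For example, a source measured at 80 dB SPL at 1 m predicts about 74 dB at 2 m and 68 dB at 4 m; measured deviations from these predictions reveal the room's contribution.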
Infra-sound cancellation and mitigation in wind turbines
NASA Astrophysics Data System (ADS)
Boretti, Albert; Ordys, Andrew; Al Zubaidy, Sarim
2018-03-01
The infra-sound spectra recorded inside homes located even several kilometres far from wind turbine installations is characterized by large pressure fluctuation in the low frequency range. There is a significant body of literature suggesting inaudible sounds at low frequency are sensed by humans and affect the wellbeing through different mechanisms. These mechanisms include amplitude modulation of heard sounds, stimulating subconscious pathways, causing endolymphatic hydrops, and possibly potentiating noise-induced hearing loss. We suggest the study of infra-sound active cancellation and mitigation to address the low frequency noise issues. Loudspeakers generate pressure wave components of same amplitude and frequency but opposite phase of the recorded infra sound. They also produce pressure wave components within the audible range reducing the perception of the infra-sound to minimize the sensing of the residual infra sound.
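The cancellation principle described here is plain superposition: the loudspeaker emits a wave of equal amplitude and frequency but opposite phase, and the residual is the sum of the two. A minimal, idealized sketch (perfect amplitude match and zero latency, which no physical loudspeaker system achieves; names are illustrative):

```python
def antiphase(samples):
    """Inverted copy of the recorded infra-sound (180-degree phase shift)."""
    return [-x for x in samples]

def residual(primary, secondary):
    """Superposition of the noise and the loudspeaker's cancelling wave."""
    return [p + s for p, s in zip(primary, secondary)]
```

In practice the residual is nonzero, which is why the proposal also masks what remains with audible-range components.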
Potential sound production by a deep-sea fish
NASA Astrophysics Data System (ADS)
Mann, David A.; Jarvis, Susan M.
2004-05-01
Swimbladder sonic muscles of deep-sea fishes were described over 35 years ago. Until now, no recordings of probable deep-sea fish sounds have been published. A sound likely produced by a deep-sea fish has been isolated and localized from an analysis of acoustic recordings made at the AUTEC test range in the Tongue of the Ocean, Bahamas, from four deep-sea hydrophones. This sound is typical of a fish sound in that it is pulsed and relatively low frequency (800-1000 Hz). Using time-of-arrival differences, the sound was localized to 548-696-m depth, where the bottom was 1620 m. The ability to localize this sound in real-time on the hydrophone range provides a great advantage for being able to identify the sound-producer using a remotely operated vehicle.
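Time-of-arrival-difference localization of the kind used here starts from a per-hydrophone-pair delay estimate, classically taken from the peak of a cross-correlation. A brute-force sketch of that step (illustrative only, not the AUTEC range's pipeline; the function name and test pulse are assumptions):

```python
def estimate_delay(a, b, max_lag):
    """Lag (in samples) at which `b` best matches `a` (cross-correlation peak).

    A positive result means `b` lags `a` by that many samples.
    """
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        # Correlate a against b shifted by `lag`, clipping out-of-range indices.
        score = sum(a[i] * b[i + lag]
                    for i in range(len(a))
                    if 0 <= i + lag < len(b))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

Each lag converts to a range difference via Δr = c·Δt, with c ≈ 1500 m/s in seawater; differences from several hydrophone pairs then constrain the source position, including the depth reported in the abstract.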
The Advanced Technology Microwave Sounder (ATMS): A New Operational Sensor Series
NASA Technical Reports Server (NTRS)
Kim, Edward; Lyu, Cheng-H Joseph; Leslie, R. Vince; Baker, Neal; Mo, Tsan; Sun, Ninghai; Bi, Li; Anderson, Mike; Landrum, Mike; DeAmici, Giovanni;
2012-01-01
ATMS is a new satellite microwave sounding sensor designed to provide operational weather agencies with atmospheric temperature and moisture profile information for global weather forecasting and climate applications. ATMS will continue the microwave sounding capabilities first provided by its predecessors, the Microwave Sounding Unit (MSU) and Advanced Microwave Sounding Unit (AMSU). The first ATMS was launched October 28, 2011 on board the Suomi National Polar-orbiting Partnership (S-NPP) satellite. Microwave soundings by themselves are the highest-impact input data used by Numerical Weather Prediction (NWP) models; ATMS, when combined with the Cross-track Infrared Sounder (CrIS), forms the Cross-track Infrared and Microwave Sounding Suite (CrIMSS). The microwave soundings help meet NWP sounding requirements under cloudy sky conditions and provide key profile information near the surface.
Recognition of Modified Conditioning Sounds by Competitively Trained Guinea Pigs
Ojima, Hisayuki; Horikawa, Junsei
2016-01-01
The guinea pig (GP) is an often-used species in hearing research. However, behavioral studies are rare, especially in the context of sound recognition, because of difficulties in training these animals. We examined sound recognition in a social competitive setting in order to examine whether this setting could be used as an easy model. Two starved GPs were placed in the same training arena and compelled to compete for food after hearing a conditioning sound (CS), which was a repeat of almost identical sound segments. Through a 2-week intensive training, animals were trained to demonstrate a set of distinct behaviors solely to the CS. Then, each of them was subjected to generalization tests for recognition of sounds that had been modified from the CS in spectral, fine temporal and tempo (i.e., intersegment interval, ISI) dimensions. Results showed that they discriminated between the CS and band-rejected test sounds but had no preference for a particular frequency range for the recognition. In contrast, sounds modified in the fine temporal domain were largely perceived to be in the same category as the CS, except for the test sound generated by fully reversing the CS in time. Animals also discriminated sounds played at different tempos. Test sounds with ISIs shorter than that of the multi-segment CS were discriminated from the CS, while test sounds with ISIs longer than that of the CS segments were not. For the shorter ISIs, most animals initiated apparently positive food-access behavior as they did in response to the CS, but discontinued it during the sound-on period probably because of later recognition of tempo. Interestingly, the population range and mean of the delay time before animals initiated the food-access behavior were very similar among different ISI test sounds. This study, for the first time, demonstrates a wide aspect of sound discrimination abilities of the GP and will provide a way to examine tempo perception mechanisms using this animal species. 
PMID:26858617
Perceptual sensitivity to spectral properties of earlier sounds during speech categorization.
Stilp, Christian E; Assgari, Ashley A
2018-02-28
Speech perception is heavily influenced by surrounding sounds. When spectral properties differ between earlier (context) and later (target) sounds, this can produce spectral contrast effects (SCEs) that bias perception of later sounds. For example, when context sounds have more energy in low-F1 frequency regions, listeners report more high-F1 responses to a target vowel, and vice versa. SCEs have been reported using various approaches for a wide range of stimuli, but most often, large spectral peaks were added to the context to bias speech categorization. This obscures the lower limit of perceptual sensitivity to spectral properties of earlier sounds, i.e., when SCEs begin to bias speech categorization. Listeners categorized vowels (/ɪ/-/ɛ/, Experiment 1) or consonants (/d/-/g/, Experiment 2) following a context sentence with little spectral amplification (+1 to +4 dB) in frequency regions known to produce SCEs. In both experiments, +3 and +4 dB amplification in key frequency regions of the context produced SCEs, but lesser amplification was insufficient to bias performance. This establishes a lower limit of perceptual sensitivity where spectral differences across sounds can bias subsequent speech categorization. These results are consistent with proposed adaptation-based mechanisms that potentially underlie SCEs in auditory perception. Recent sounds can change what speech sounds we hear later. This can occur when the average frequency composition of earlier sounds differs from that of later sounds, biasing how they are perceived. These "spectral contrast effects" are widely observed when sounds' frequency compositions differ substantially. We reveal the lower limit of these effects, as +3 dB amplification of key frequency regions in earlier sounds was enough to bias categorization of the following vowel or consonant sound.
Speech categorization being biased by very small spectral differences across sounds suggests that spectral contrast effects occur frequently in everyday speech perception.
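The gains used in these experiments are small: a boost of g dB scales amplitude by 10^(g/20), so the +3 dB threshold reported corresponds to roughly a 1.41x amplitude factor. A minimal broadband sketch of that conversion (the real stimuli amplified only selected frequency regions, which additionally requires a band-pass filter; the function name is illustrative):

```python
def apply_gain_db(samples, gain_db):
    """Scale a signal by a gain expressed in decibels (+3 dB ~ x1.41 amplitude)."""
    factor = 10.0 ** (gain_db / 20.0)
    return [x * factor for x in samples]
```

The 20 in the exponent reflects that decibels are defined on power, which goes as amplitude squared.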
Method for chemically analyzing a solution by acoustic means
Beller, L.S.
1997-04-22
A method and apparatus are disclosed for determining a type of solution and the concentration of that solution by acoustic means. Generally stated, the method consists of: immersing a sound focusing transducer within a first liquid filled container; locating a separately contained specimen solution at a sound focal point within the first container; locating a sound probe adjacent to the specimen; generating a variable intensity sound signal from the transducer; measuring fundamental and multiple harmonic sound signal amplitudes; and then comparing a plot of a specimen sound response with a known solution sound response, thereby determining the solution type and concentration. 10 figs.
Discovery of Sound in the Sea: Resources for Educators, Students, the Public, and Policymakers.
Vigness-Raposa, Kathleen J; Scowcroft, Gail; Miller, James H; Ketten, Darlene R; Popper, Arthur N
2016-01-01
There is increasing concern about the effects of underwater sound on marine life. However, the science of sound is challenging. The Discovery of Sound in the Sea (DOSITS) Web site ( http://www.dosits.org ) was designed to provide comprehensive scientific information on underwater sound for the public and educational and media professionals. It covers the physical science of underwater sound and its use by people and marine animals for a range of tasks. Celebrating 10 years of online resources, DOSITS continues to develop new material and improvements, providing the best resource for the most up-to-date information on underwater sound and its potential effects.
Nordahl, Rolf; Turchet, Luca; Serafin, Stefania
2011-09-01
We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based on physical models. Two experiments were conducted. In the first experiment, the ability of subjects to recognize the surface they were exposed to was assessed. In the second experiment, the sound synthesis engine was enhanced with environmental sounds. Results show that, in some conditions, adding a soundscape significantly improves the recognition of the simulated environment.
Direct-current vertical electrical-resistivity soundings in the Lower Peninsula of Michigan
Westjohn, D.B.; Carter, P.J.
1989-01-01
Ninety-three direct-current vertical electrical-resistivity soundings were conducted in the Lower Peninsula of Michigan from June through October 1987. These soundings were made to assist in mapping the depth to brine in areas where borehole resistivity logs and water-quality data are sparse or lacking. The Schlumberger array for placement of current and potential electrodes was used for each sounding. Vertical electrical-resistivity sounding field data, shifted and smoothed sounding data, and electric layers calculated using inverse modeling techniques are presented. Also included is a summary of the near-surface conditions and depths to conductors and resistors for each sounding location.
New non-invasive automatic cough counting program based on 6 types of classified cough sounds.
Murata, Akira; Ohota, Nao; Shibuya, Atsuo; Ono, Hiroshi; Kudoh, Shoji
2006-01-01
Cough, consisting of an initial deep inspiration, glottal closure, and an explosive expiration accompanied by a sound, is one of the most common symptoms of respiratory disease. Despite its clinical importance, standard methods for objective cough analysis have yet to be established. We investigated the characteristics of cough sounds acoustically, designed a program to discriminate cough sounds from other sounds, and finally developed a new objective method of non-invasive cough counting. In addition, we evaluated the clinical efficacy of that program. We recorded cough sounds using a memory stick IC recorder in free-field from 2 patients and analyzed the intensity of 534 recorded coughs acoustically in the time domain. First, we squared the sound waveform of recorded cough sounds, which was then smoothed out over a 20 ms window. Five parameters and definitions for discriminating cough sounds from other noise were identified, and the cough sounds were classified into 6 groups. Next, we applied this method to develop a new automatic cough count program. Finally, to evaluate the accuracy and clinical usefulness of this program, we counted cough sounds collected from another 10 patients using both our program and conventional manual counting, and the sensitivity, specificity and discriminative rate of the program were analyzed. This program successfully discriminated recorded cough sounds out of 1902 sound events collected from 10 patients at a rate of 93.1%. The sensitivity was 90.2% and the specificity was 96.5%. Our new cough counting program is accurate enough to be useful for clinical studies.
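The first two processing steps in this abstract (squaring the waveform, then smoothing over a 20 ms window) amount to computing a short-time energy envelope. A minimal sketch, with only the 20 ms window taken from the abstract and everything else (names, sampling rate, moving-average design) assumed:

```python
def energy_envelope(samples, fs, window_ms=20.0):
    """Squared signal smoothed by a moving average over `window_ms`."""
    squared = [x * x for x in samples]
    win = max(1, int(fs * window_ms / 1000.0))  # window length in samples
    out = []
    running = 0.0
    for i, v in enumerate(squared):
        running += v
        if i >= win:
            running -= squared[i - win]  # drop the sample leaving the window
        out.append(running / min(i + 1, win))
    return out
```

Cough events then appear as bursts in this envelope, on which threshold- and duration-based parameters like those the authors describe can operate.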
Jeanson, Lena; Wiegrebe, Lutz; Gürkov, Robert; Krause, Eike; Drexl, Markus
2017-02-01
The presentation of intense, low-frequency (LF) sound to the human ear can cause very slow, sinusoidal oscillations of cochlear sensitivity after LF sound offset, coined the "Bounce" phenomenon. Changes in level and frequency of spontaneous otoacoustic emissions (SOAEs) are a sensitive measure of the Bounce. Here, we investigated the effect of LF sound level and frequency on the Bounce. Specifically, the level of SOAEs was tracked for minutes before and after a 90-s LF sound exposure. Trials were carried out with several LF sound levels (93 to 108 dB SPL corresponding to 47 to 75 phons at a fixed frequency of 30 Hz) and different LF sound frequencies (30, 60, 120, 240 and 480 Hz at a fixed loudness level of 80 phons). At an LF sound frequency of 30 Hz, a minimal sound level of 102 dB SPL (64 phons) was sufficient to elicit a significant Bounce. In some subjects, however, 93 dB SPL (47 phons), the lowest level used, was sufficient to elicit the Bounce phenomenon and actual thresholds could have been even lower. Measurements with different LF sound frequencies showed a mild reduction of the Bounce phenomenon with increasing LF sound frequency. This indicates that the strength of the Bounce not only is a simple function of the spectral separation between SOAE and LF sound frequency but also depends on absolute LF sound frequency, possibly related to the magnitude of the AC component of the outer hair cell receptor potential.
Noise in a Laboratory Animal Facility from the Human and Mouse Perspectives
Reynolds, Randall P; Kinard, Will L; Degraff, Jesse J; Leverage, Ned; Norton, John N
2010-01-01
The current study was performed to understand the level of sound produced by ventilated racks, animal transfer stations, and construction equipment that mice in ventilated cages hear relative to what humans would hear in the same environment. Although the ventilated rack and animal transfer station both produced sound pressure levels above the ambient level within the human hearing range, the sound pressure levels within the mouse hearing range did not increase above ambient noise from either noise source. When various types of construction equipment were used 3 ft from the ventilated rack, the sound pressure level within the mouse hearing range was increased but to a lesser degree for each implement than were the sound pressure levels within the human hearing range. At more distant locations within the animal facility, sound pressure levels from the large jackhammer within the mouse hearing range decreased much more rapidly than did those in the human hearing range, indicating that less of the sound is perceived by mice than by humans. The relatively high proportion of low-frequency sound produced by the shot blaster, used without the metal shot that it normally uses to clean concrete, increased the sound pressure level above the ambient level for humans but did not increase sound pressure levels above ambient noise for mice at locations greater than 3 ft from inside of the cage, where sound was measured. This study demonstrates that sound clearly audible to humans in the animal facility may be perceived to a lesser degree or not at all by mice, because of the frequency content of the sound. PMID:20858361
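The comparison above rests on measuring sound pressure level within a species' hearing range rather than overall. A minimal sketch of a band-limited SPL computed from a recorded pressure signal follows; the function and all values are illustrative (the study's actual calibrated instrumentation is not reproduced here), and the doubling factor ignores the DC/Nyquist bins for simplicity:

```python
import numpy as np

def band_spl(p, fs, f_lo, f_hi, p_ref=20e-6):
    """SPL (dB re 20 uPa) of the portion of a pressure signal (in pascals)
    that falls between f_lo and f_hi (Hz)."""
    spec = np.fft.rfft(p)
    freqs = np.fft.rfftfreq(len(p), 1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    # Parseval's relation: mean-square pressure contributed by the band
    # (factor 2 accounts for the conjugate-symmetric half of the spectrum)
    ms = 2.0 * np.sum(np.abs(spec[band]) ** 2) / len(p) ** 2
    return 10.0 * np.log10(ms / p_ref ** 2)

# A 1 kHz tone at 1 Pa RMS: well inside the human hearing range, but
# below the frequencies where mice hear best.
fs = 48000
t = np.arange(4800) / fs
tone = np.sqrt(2.0) * np.sin(2 * np.pi * 1000.0 * t)
human_band = band_spl(tone, fs, 500.0, 2000.0)  # ≈ 94 dB
```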
A method for evaluating the relation between sound source segregation and masking
Lutfi, Robert A.; Liu, Ching-Ju
2011-01-01
Sound source segregation refers to the ability to hear as separate entities two or more sound sources comprising a mixture. Masking refers to the ability of one sound to make another sound difficult to hear. In studies, masking is often assumed to result from a failure of segregation, but this assumption may not always be correct. Here a method is offered for identifying the relation between masking and sound source segregation, and an example of its application is given. PMID:21302979
Letter names and phonological awareness help children to learn letter-sound relations.
Cardoso-Martins, Cláudia; Mesquita, Tereza Cristina Lara; Ehri, Linnea
2011-05-01
Two experimental training studies with Portuguese-speaking preschoolers in Brazil were conducted to investigate whether children benefit from letter name knowledge and phonological awareness in learning letter-sound relations. In Experiment 1, two groups of children were compared. The experimental group was taught the names of letters whose sounds occur either at the beginning (e.g., the letter /be/) or in the middle (e.g., the letter /'eli/) of the letter name. The control group was taught the shapes of the letters but not their names. Then both groups were taught the sounds of the letters. Results showed an advantage for the experimental group, but only for beginning-sound letters. Experiment 2 investigated whether training in phonological awareness could boost the learning of letter sounds, particularly middle-sound letters. In addition to learning the names of beginning- and middle-sound letters, children in the experimental group were taught to categorize words according to rhyme and alliteration, whereas controls were taught to categorize the same words semantically. All children were then taught the sounds of the letters. Results showed that children who were given phonological awareness training found it easier to learn letter sounds than controls. This was true for both types of letters, but especially for middle-sound letters. Copyright © 2011. Published by Elsevier Inc.
Cell type-specific suppression of mechanosensitive genes by audible sound stimulation.
Kumeta, Masahiro; Takahashi, Daiji; Takeyasu, Kunio; Yoshimura, Shige H
2018-01-01
Audible sound is a ubiquitous environmental factor in nature that transmits oscillatory compressional pressure through substances. To investigate the properties of sound as a mechanical stimulus for cells, an experimental system was set up using 94.0 dB sound, which transmits approximately 10 mPa of pressure to the cultured cells. Based on research on mechanotransduction and ultrasound effects on cells, gene responses to audible sound stimulation were analyzed while varying several sound parameters: frequency, wave form, composition, and exposure time. Real-time quantitative PCR analyses revealed a distinct sound-triggered suppressive effect on several mechanosensitive and ultrasound-sensitive genes. The effect was clearly wave form- and pressure level-specific, rather than frequency-specific, and persisted for several hours. At least two mechanisms are likely to be involved in this sound response: transcriptional control and RNA degradation. ST2 stromal cells and C2C12 myoblasts exhibited a robust response, whereas NIH3T3 cells were only partially sensitive and NB2a neuroblastoma cells were completely insensitive, suggesting a cell type-specific response to sound. These findings reveal a cell-level systematic response to audible sound and uncover novel relationships between life and sound.
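For orientation, sound pressure level in dB SPL maps to RMS pressure via p = p_ref · 10^(L/20) with p_ref = 20 µPa, so 94 dB SPL corresponds to about 1 Pa in air; the roughly 10 mPa figure above is the pressure the authors estimate actually reaches the cultured cells. A one-line sketch of the conversion (function name is illustrative):

```python
def spl_to_pascals(level_db, p_ref=20e-6):
    """RMS pressure (Pa) corresponding to a sound pressure level in dB SPL."""
    return p_ref * 10.0 ** (level_db / 20.0)

p_air = spl_to_pascals(94.0)  # ≈ 1.0 Pa in air
```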
SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization
Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah
2014-01-01
Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 Microelectromechanical systems (MEMS) microphones, an inertial measuring unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass’s hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25-m2 anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000-m2 open field. PMID:24463431
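The SoundCompass itself fuses 52 MEMS microphones on an FPGA; as a much-reduced illustration of the underlying idea, the direction of a source can be estimated from the time difference of arrival (TDOA) between just two microphones via the cross-correlation peak. All names and values below are assumptions for the sketch, not SoundCompass firmware:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, in air at roughly 20 °C

def tdoa_angle(sig_a, sig_b, fs, mic_distance):
    """Direction of arrival from the time delay between two microphones.

    A positive angle means the sound reached microphone B first; the angle
    is measured from the broadside (perpendicular) direction of the pair.
    """
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = int(corr.argmax()) - (len(sig_b) - 1)  # samples by which A lags B
    delay = lag / fs
    sin_theta = np.clip(delay * SPEED_OF_SOUND / mic_distance, -1.0, 1.0)
    return float(np.arcsin(sin_theta))

# Demo: microphone A receives the same noise as B, 14 samples later,
# which for a 0.2 m pair sampled at 48 kHz corresponds to a source
# roughly 30 degrees off broadside.
rng = np.random.default_rng(0)
b = rng.standard_normal(4096)
a = np.concatenate([np.zeros(14), b[:-14]])
angle_deg = np.degrees(tdoa_angle(a, b, 48000, 0.2))  # ≈ 30
```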
Branstetter, Brian K; DeLong, Caroline M; Dziedzic, Brandon; Black, Amy; Bakhtiari, Kimberly
2016-01-01
Bottlenose dolphins (Tursiops truncatus) use the frequency contour of whistles produced by conspecifics for individual recognition. Here we tested a bottlenose dolphin's (Tursiops truncatus) ability to recognize frequency modulated whistle-like sounds using a three-alternative matching-to-sample paradigm. The dolphin was first trained to select a specific object (object A) in response to a specific sound (sound A) for a total of three object-sound associations. The sounds were then transformed by amplitude, duration, or frequency transposition while still preserving the frequency contour of each sound. For comparison purposes, 30 human participants completed an identical task with the same sounds, objects, and training procedure. The dolphin's ability to correctly match objects to sounds was robust to changes in amplitude with only a minor decrement in performance for short durations. The dolphin failed to recognize sounds that were frequency transposed by plus or minus ½ octaves. Human participants demonstrated robust recognition with all acoustic transformations. The results indicate that this dolphin's acoustic recognition of whistle-like sounds was constrained by absolute pitch. Unlike human speech, which varies considerably in average frequency, signature whistles are relatively stable in frequency, which may have selected for a whistle recognition system invariant to frequency transposition.
Cerebellar contribution to the prediction of self-initiated sounds.
Knolle, Franziska; Schröger, Erich; Kotz, Sonja A
2013-10-01
In everyday life we frequently make the fundamental distinction between sensory input resulting from our own actions and sensory input that is externally-produced. It has been speculated that making this distinction involves the use of an internal forward-model, which enables the brain to adjust its response to self-produced sensory input. In the auditory domain, this idea has been supported by event-related potential and evoked-magnetic field studies revealing that self-initiated sounds elicit a suppressed N100/M100 brain response compared to externally-produced sounds. Moreover, a recent study reveals that patients with cerebellar lesions do not show a significant N100-suppression effect. This result supports the theory that the cerebellum is essential for generating internal forward predictions. However, all except one study compared self-initiated and externally-produced auditory stimuli in separate conditions. Such a setup prevents an unambiguous interpretation of the N100-suppression effect when distinguishing self- and externally-produced sensory stimuli: the N100-suppression can also be explained by differences in the allocation of attention in different conditions. In the current electroencephalography (EEG)-study we investigated the N100-suppression effect in an altered design comparing (i) self-initiated sounds to externally-produced sounds that occurred intermixed with these self-initiated sounds (i.e., both sound types occurred in the same condition) or (ii) self-initiated sounds to externally-produced sounds that occurred in separate conditions. Results reveal that the cerebellum generates selective predictions in response to self-initiated sounds independent of condition type: cerebellar patients, in contrast to healthy controls, do not display an N100-suppression effect in response to self-initiated sounds when intermixed with externally-produced sounds. 
Furthermore, the effect is not influenced by the temporal proximity of externally-produced sounds to self-produced sounds. Controls and patients showed a P200-reduction in response to self-initiated sounds. This suggests the existence of an additional and probably more conscious mechanism for identifying self-generated sounds that does not functionally depend on the cerebellum. Copyright © 2012 Elsevier Srl. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-16
...; high-pitched sounds contain high frequencies and low-pitched sounds contain low frequencies. Natural... estimated to occur between approximately 150 Hz and 160 kHz. High-frequency cetaceans (eight species of true... masking by high frequency sound. Human data indicate low-frequency sound can mask high-frequency sounds (i...
33 CFR 67.10-20 - Sound signal tests.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Sound signal tests. 67.10-20... signals § 67.10-20 Sound signal tests. (a) Sound signal tests must: (1) Be made by the applicant in the... meters; and (3) Be made in an anechoic chamber large enough to accommodate the entire sound signal, as if...
33 CFR 67.30-10 - Sound signals.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Sound signals. 67.30-10 Section... Sound signals. (a) The owner of a Class “C” structure shall install a sound signal if: (1) The structure...) Sound signals required by paragraph (a) of this section must have rated range of at least one-half mile...
Using Incremental Rehearsal to Teach Letter Sounds to English Language Learners
ERIC Educational Resources Information Center
Rahn, Naomi L.; Wilson, Jennifer; Egan, Andrea; Brandes, Dana; Kunkel, Amy; Peterson, Meredith; McComas, Jennifer
2015-01-01
This study examined the effects of incremental rehearsal (IR) on letter sound expression for one kindergarten and one first grade English learner who were below district benchmark for letter sound fluency. A single-subject multiple-baseline design across sets of unknown letter sounds was used to evaluate the effect of IR on letter-sound expression…
33 CFR 67.30-10 - Sound signals.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Sound signals. 67.30-10 Section... Sound signals. (a) The owner of a Class “C” structure shall install a sound signal if: (1) The structure...) Sound signals required by paragraph (a) of this section must have rated range of at least one-half mile...
33 CFR 83.33 - Equipment for sound signals (Rule 33).
Code of Federal Regulations, 2010 CFR
2010-07-01
... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Equipment for sound signals (Rule... INLAND NAVIGATION RULES RULES Sound and Light Signals § 83.33 Equipment for sound signals (Rule 33). (a... gong, the tone and sound of which cannot be confused with that of the bell. The whistle, bell and gong...
33 CFR 83.33 - Equipment for sound signals (Rule 33).
Code of Federal Regulations, 2013 CFR
2013-07-01
... 33 Navigation and Navigable Waters 1 2013-07-01 2013-07-01 false Equipment for sound signals (Rule... INLAND NAVIGATION RULES RULES Sound and Light Signals § 83.33 Equipment for sound signals (Rule 33). (a... gong, the tone and sound of which cannot be confused with that of the bell. The whistle, bell and gong...
33 CFR 83.33 - Equipment for sound signals (Rule 33).
Code of Federal Regulations, 2012 CFR
2012-07-01
... 33 Navigation and Navigable Waters 1 2012-07-01 2012-07-01 false Equipment for sound signals (Rule... INLAND NAVIGATION RULES RULES Sound and Light Signals § 83.33 Equipment for sound signals (Rule 33). (a... gong, the tone and sound of which cannot be confused with that of the bell. The whistle, bell and gong...
33 CFR 83.33 - Equipment for sound signals (Rule 33).
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 1 2014-07-01 2014-07-01 false Equipment for sound signals (Rule... INLAND NAVIGATION RULES RULES Sound and Light Signals § 83.33 Equipment for sound signals (Rule 33). (a... gong, the tone and sound of which cannot be confused with that of the bell. The whistle, bell and gong...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Passenger Vessel Protection, Puget Sound and adjacent waters, Washington. 165.1317 Section 165.1317... Vessel Protection, Puget Sound and adjacent waters, Washington. (a) Notice of enforcement or suspension... be enforced only upon notice by the Captain of the Port Puget Sound. Captain of the Port Puget Sound...
Code of Federal Regulations, 2012 CFR
2012-07-01
... ship protection, Puget Sound and adjacent waters, Washington 165.1313 Section 165.1313 Navigation and... Sound and adjacent waters, Washington (a) Notice of enforcement or suspension of enforcement. The tank... Port Puget Sound. Captain of the Port Puget Sound will cause notice of the enforcement of the tank ship...
The Impact of Eliminating Extraneous Sound and Light on Students' Achievement: An Empirical Study
ERIC Educational Resources Information Center
Mangipudy, Rajarajeswari
2010-01-01
The impact of eliminating extraneous sound and light on students' achievement was investigated under four conditions: Light and Sound controlled, Sound Only controlled, Light Only controlled and neither Light nor Sound controlled. Group, age and gender were the control variables. Four randomly selected groups of high school freshmen students with…
33 CFR 67.10-20 - Sound signal tests.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 1 2014-07-01 2014-07-01 false Sound signal tests. 67.10-20... signals § 67.10-20 Sound signal tests. (a) Sound signal tests must: (1) Be made by the applicant in the... meters; and (3) Be made in an anechoic chamber large enough to accommodate the entire sound signal, as if...
33 CFR 67.10-20 - Sound signal tests.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 33 Navigation and Navigable Waters 1 2013-07-01 2013-07-01 false Sound signal tests. 67.10-20... signals § 67.10-20 Sound signal tests. (a) Sound signal tests must: (1) Be made by the applicant in the... meters; and (3) Be made in an anechoic chamber large enough to accommodate the entire sound signal, as if...
33 CFR 67.30-10 - Sound signals.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 1 2014-07-01 2014-07-01 false Sound signals. 67.30-10 Section... Sound signals. (a) The owner of a Class “C” structure shall install a sound signal if: (1) The structure...) Sound signals required by paragraph (a) of this section must have rated range of at least one-half mile...
33 CFR 67.30-10 - Sound signals.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 33 Navigation and Navigable Waters 1 2012-07-01 2012-07-01 false Sound signals. 67.30-10 Section... Sound signals. (a) The owner of a Class “C” structure shall install a sound signal if: (1) The structure...) Sound signals required by paragraph (a) of this section must have rated range of at least one-half mile...
33 CFR 67.30-10 - Sound signals.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 33 Navigation and Navigable Waters 1 2013-07-01 2013-07-01 false Sound signals. 67.30-10 Section... Sound signals. (a) The owner of a Class “C” structure shall install a sound signal if: (1) The structure...) Sound signals required by paragraph (a) of this section must have rated range of at least one-half mile...
33 CFR 67.10-20 - Sound signal tests.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 33 Navigation and Navigable Waters 1 2012-07-01 2012-07-01 false Sound signal tests. 67.10-20... signals § 67.10-20 Sound signal tests. (a) Sound signal tests must: (1) Be made by the applicant in the... meters; and (3) Be made in an anechoic chamber large enough to accommodate the entire sound signal, as if...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false Puget Sound and Adjacent Waters... Areas Thirteenth Coast Guard District § 165.1301 Puget Sound and Adjacent Waters in Northwestern... northwestern Washington waters under the jurisdiction of the Captain of the Port, Puget Sound: Puget Sound...
33 CFR 83.33 - Equipment for sound signals (Rule 33).
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Equipment for sound signals (Rule... INLAND NAVIGATION RULES RULES Sound and Light Signals § 83.33 Equipment for sound signals (Rule 33). (a... gong, the tone and sound of which cannot be confused with that of the bell. The whistle, bell and gong...
Code of Federal Regulations, 2011 CFR
2011-07-01
... ship protection, Puget Sound and adjacent waters, Washington 165.1313 Section 165.1313 Navigation and... Sound and adjacent waters, Washington (a) Notice of enforcement or suspension of enforcement. The tank... Port Puget Sound. Captain of the Port Puget Sound will cause notice of the enforcement of the tank ship...
Code of Federal Regulations, 2011 CFR
2011-07-01
... Passenger Vessel Protection, Puget Sound and adjacent waters, Washington. 165.1317 Section 165.1317... Vessel Protection, Puget Sound and adjacent waters, Washington. (a) Notice of enforcement or suspension... be enforced only upon notice by the Captain of the Port Puget Sound. Captain of the Port Puget Sound...
Teaching letter sounds to kindergarten English language learners using incremental rehearsal.
Peterson, Meredith; Brandes, Dana; Kunkel, Amy; Wilson, Jennifer; Rahn, Naomi L; Egan, Andrea; McComas, Jennifer
2014-02-01
Proficiency in letter-sound correspondence is important for decoding connected text. This study examined the effects of an evidence-based intervention, incremental rehearsal (IR), on the letter-sound expression of three kindergarten English language learners (ELLs) performing below the district benchmark for letter-sound fluency. Participants were native speakers of Hmong, Spanish, and Polish. A multiple-baseline design across sets of unknown letter sounds was used to evaluate the effects of IR on letter-sound expression. Visual analysis of the data showed an increase in level and trend when IR was introduced in each phase. Percentage of all non-overlapping data (PAND) ranged from 95% to 100%. All participants exceeded expected growth and reached the spring district benchmark for letter-sound fluency. Results suggest that IR is a promising intervention for increasing letter-sound expression for ELLs who evidence delays in acquiring letter sounds. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
Laboratory studies of scales for measuring helicopter noise
NASA Technical Reports Server (NTRS)
Ollerhead, J. B.
1982-01-01
The adequacy of the effective perceived noise level (EPNL) procedure for rating helicopter noise annoyance was investigated. Recordings of 89 helicopters and 30 fixed wing aircraft (CTOL) flyover sounds were rated with respect to annoyance by groups of approximately 40 subjects. The average annoyance scores were transformed to annoyance levels defined as the equally annoying sound levels of a fixed reference sound. The sound levels of the test sounds were measured on various scales, with and without corrections for duration, tones, and impulsiveness. On average, the helicopter sounds were judged equally annoying to CTOL sounds when their duration corrected levels are approximately 2 dB higher. Multiple regression analysis indicated that, provided the helicopter/CTOL difference of about 2 dB is taken into account, the particular linear combination of level, duration, and tone corrections inherent in EPNL is close to optimum. The results reveal no general requirement for special EPNL correction terms to penalize helicopter sounds which are particularly impulsive; impulsiveness causes spectral and temporal changes which themselves adequately amplify conventionally measured sound levels.
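The duration correction at the heart of EPNL is an energy integration of a level time history against a 10 s reference duration. A minimal sketch of that integration follows; it is simplified in that real EPNL integrates tone-corrected perceived noise levels, not plain decibel levels:

```python
import math

def duration_corrected_level(levels_db, dt, t_ref=10.0):
    """Energy-integrate a level time history (dB, sampled every dt seconds),
    referenced to t_ref seconds; 10 s is the reference duration used by EPNL."""
    energy = sum(10.0 ** (L / 10.0) for L in levels_db) * dt
    return 10.0 * math.log10(energy / t_ref)

# A constant 100 dB event lasting 10 s integrates to 100 dB;
# doubling its duration adds 10*log10(2) ≈ 3 dB.
ten_s = duration_corrected_level([100.0] * 10, 1.0)     # 100.0
twenty_s = duration_corrected_level([100.0] * 20, 1.0)  # ≈ 103.0
```

This makes the roughly 2 dB helicopter/CTOL offset reported above easy to interpret: it is a shift on an already duration-normalized scale.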
Relationship Between Speed of Sound in and Density of Normal and Diseased Rat Livers
NASA Astrophysics Data System (ADS)
Hachiya, Hiroyuki; Ohtsuki, Shigeo; Tanaka, Motonao
1994-05-01
Speed of sound is an important acoustic parameter for quantitative characterization of living tissues. In this paper, the relationship between the speed of sound in and the density of rat liver tissue is investigated. The speed of sound was measured by a nondeformable technique based on frequency-time analysis of a 3.5 MHz pulse response. The speed of sound in normal livers varied minimally between individuals and was not related to body weight or age. In liver tissues from animals administered CCl4, the speed of sound was lower than in normal tissues. The relationship between speed of sound and density in normal, fatty, and cirrhotic livers is fitted well by the line estimated from an immiscible liquid model assuming a mixture of normal liver and fat tissue. For 3.5 MHz ultrasound, the speed of sound in fresh liver with fatty degeneration is considered to reflect the fat content and not to depend strongly on the degree of fibrosis.
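The immiscible liquid model referred to above can be written in Wood's form: the mixture compressibility and density are volume-fraction averages of the components, and the sound speed follows from c = 1/sqrt(ρκ). The sketch below uses illustrative round-number constants for liver and fat, not the paper's measured values:

```python
import math

def wood_mixture(phi_fat, rho_liver=1060.0, c_liver=1575.0,
                 rho_fat=950.0, c_fat=1450.0):
    """Sound speed (m/s) and density (kg/m^3) of a liver/fat mixture under
    the immiscible-liquid (Wood) model. phi_fat is the fat volume fraction;
    the tissue constants are illustrative, not measured values."""
    # Compressibility kappa = 1/(rho * c^2) averages by volume fraction
    kappa = (phi_fat / (rho_fat * c_fat ** 2)
             + (1.0 - phi_fat) / (rho_liver * c_liver ** 2))
    rho = phi_fat * rho_fat + (1.0 - phi_fat) * rho_liver  # mixture density
    return 1.0 / math.sqrt(rho * kappa), rho
```

At φ = 0 and φ = 1 this recovers the pure-liver and pure-fat values; sweeping φ between them traces the speed-versus-density line the abstract describes.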
Sound level exposure of high-risk infants in different environmental conditions.
Byers, Jacqueline F; Waugh, W Randolph; Lowman, Linda B
2006-01-01
To provide descriptive information about the sound levels to which high-risk infants are exposed in various actual environmental conditions in the NICU, including the impact of physical renovation on sound levels, and to assess the contributions of various types of equipment, alarms, and activities to sound levels in simulated conditions in the NICU. Descriptive and comparative design. Convenience sample of 134 infants at a southeastern quaternary children's hospital. A-weighted decibel (dBA) sound levels under various actual and simulated environmental conditions. The renovated NICU was, on average, 4-6 dBA quieter across all environmental conditions than a comparable nonrenovated room, representing a significant sound level reduction. Sound levels remained above consensus recommendations despite physical redesign and staff training. Respiratory therapy equipment, alarms, staff talking, and infant fussiness contributed to higher sound levels. Evidence-based sound-reducing strategies are proposed. Findings were used to plan environment management as part of a developmental, family-centered care, performance improvement program and in new NICU planning.
NASA Astrophysics Data System (ADS)
Moore, Brian C. J.
Psychoacoustics
Separation and reconstruction of high pressure water-jet reflective sound signal based on ICA
NASA Astrophysics Data System (ADS)
Yang, Hongtao; Sun, Yuling; Li, Meng; Zhang, Dongsu; Wu, Tianfeng
2011-12-01
The impact of a high-pressure water-jet on targets of different materials produces different reflected, mixed sounds. In order to reconstruct the distribution of reflected sound signals along the linear detecting line accurately and to separate the environmental noise effectively, the mixed sound signals acquired by a linear microphone array were processed with ICA. The basic principle of ICA and the FastICA algorithm are described in detail. A simulation experiment was designed: the environmental noise was simulated with band-limited white noise, and the reflected sound signal was simulated with a pulse signal. The attenuation of the reflected sound over different transmission distances was simulated by weighting the sound signal with different coefficients. The mixed signals acquired by the linear microphone array were synthesized from the above simulated signals and then whitened and separated by ICA. The final results verify that environmental noise separation and reconstruction of the sound distribution along the detecting line can be achieved effectively.
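The FastICA procedure described (whitening followed by a fixed-point iteration) can be sketched in a minimal form. The tanh nonlinearity, the deflation scheme, and the sine/square-wave stand-ins for the reflected sound and the noise are all illustrative choices, not the paper's exact setup:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Minimal FastICA (tanh nonlinearity, deflation), for illustration only.

    X: (n_sources, n_samples) array of mixed signals.
    Returns estimated unmixed sources, one per row (up to sign and order).
    """
    n, m = X.shape
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening: rotate/rescale so the mixtures are uncorrelated, unit variance
    d, E = np.linalg.eigh(X @ X.T / m)
    Xw = E @ np.diag(d ** -0.5) @ E.T @ X
    rng = np.random.default_rng(seed)
    W = np.zeros((n, n))
    for i in range(n):
        w = rng.standard_normal(n)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            g = np.tanh(Xw.T @ w)                        # nonlinearity
            w_new = Xw @ g / m - (1 - g ** 2).mean() * w  # fixed-point update
            w_new -= W[:i].T @ (W[:i] @ w_new)            # deflation step
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1.0) < 1e-10
            w = w_new
            if converged:
                break
        W[i] = w
    return W @ Xw

# Demo: a sine (stand-in for the reflected jet sound) and a square wave
# (stand-in for the noise) are mixed on two "microphones" and separated.
t = np.linspace(0.0, 8.0, 4000)
S = np.vstack([np.sin(2 * np.pi * t), np.sign(np.sin(2 * np.pi * 3 * t))])
A = np.array([[1.0, 0.6], [0.4, 1.0]])  # mixing matrix
recovered = fastica(A @ S)
```

Each recovered row should correlate almost perfectly (up to sign) with one of the original sources.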
Evidence for distinct human auditory cortex regions for sound location versus identity processing
Ahveninen, Jyrki; Huang, Samantha; Nummenmaa, Aapo; Belliveau, John W.; Hung, An-Yi; Jääskeläinen, Iiro P.; Rauschecker, Josef P.; Rossi, Stephanie; Tiitinen, Hannu; Raij, Tommi
2014-01-01
Neurophysiological animal models suggest that anterior auditory cortex (AC) areas process sound-identity information, whereas posterior ACs specialize in sound location processing. In humans, inconsistent neuroimaging results and insufficient causal evidence have challenged the existence of such parallel AC organization. Here we transiently inhibit bilateral anterior or posterior AC areas using MRI-guided paired-pulse transcranial magnetic stimulation (TMS) while subjects listen to Reference/Probe sound pairs and perform either sound location or identity discrimination tasks. The targeting of TMS pulses, delivered 55–145 ms after Probes, is confirmed with individual-level cortical electric-field estimates. Our data show that TMS to posterior AC regions delays reaction times (RT) significantly more during sound location than identity discrimination, whereas TMS to anterior AC regions delays RTs significantly more during sound identity than location discrimination. This double dissociation provides direct causal support for parallel processing of sound identity features in anterior AC and sound location in posterior AC. PMID:24121634
Durai, Mithila; Searchfield, Grant D
2017-01-01
Objectives: A randomized cross-over trial in 18 participants tested the hypothesis that nature sounds, with unpredictable temporal characteristics and high valence, would yield greater improvement in tinnitus than constant, emotionally neutral broadband noise. Study Design: The primary outcome measure was the Tinnitus Functional Index (TFI). Secondary measures were: loudness and annoyance ratings, loudness level matches, minimum masking levels, positive and negative emotionality, attention reaction and discrimination time, anxiety, depression and stress. Each sound was administered using MP3 players with earbuds for 8 continuous weeks, with a 3-week wash-out period before crossing over to the other treatment sound. Measurements were undertaken for each arm at sound fitting, 4 and 8 weeks after administration. Qualitative interviews were conducted at each of these appointments. Results: From a baseline TFI score of 41.3, sound therapy resulted in TFI scores at 8 weeks of 35.6; broadband noise resulted in significantly greater reduction (8.2 points) after 8 weeks of sound therapy use than nature sounds (3.2 points). The positive effect of sound on tinnitus was supported by secondary outcome measures of tinnitus, emotion, attention, and psychological state, but not interviews. Tinnitus loudness level match was higher for BBN at 8 weeks, while there was little change in loudness level matches for nature sounds. There was no change in minimum masking levels following sound therapy administration. Self-reported preference for one sound over another did not correlate with changes in tinnitus. Conclusions: Modeled under an adaptation level theory framework of tinnitus perception, the results indicate that the introduction of broadband noise shifts internal adaptation level weighting away from the tinnitus signal, reducing tinnitus magnitude. 
Nature sounds may modify the affective components of tinnitus via a secondary, residual pathway, but this appears to be less important for sound effectiveness. The different rates of adaptation to broadband noise and nature sound by the auditory system may explain the different tinnitus loudness level matches. In addition to group effects there also appears to be a great deal of individual variation. A sound therapy framework based on adaptation level theory is proposed that accounts for individual variation in preference and response to sound. Clinical Trial Registration: www.anzctr.org.au, identifier #12616000742471.
Durai, Mithila; Searchfield, Grant D.
2017-01-01
Objectives: A randomized cross-over trial in 18 participants tested the hypothesis that nature sounds, with unpredictable temporal characteristics and high valence, would yield greater improvement in tinnitus than constant, emotionally neutral broadband noise. Study Design: The primary outcome measure was the Tinnitus Functional Index (TFI). Secondary measures were: loudness and annoyance ratings, loudness level matches, minimum masking levels, positive and negative emotionality, attention reaction and discrimination time, anxiety, depression and stress. Each sound was administered using MP3 players with earbuds for 8 continuous weeks, with a 3-week wash-out period before crossing over to the other treatment sound. Measurements were undertaken for each arm at sound fitting, and 4 and 8 weeks after administration. Qualitative interviews were conducted at each of these appointments. Results: From a baseline TFI score of 41.3, sound therapy resulted in TFI scores at 8 weeks of 35.6; broadband noise resulted in significantly greater reduction (8.2 points) after 8 weeks of sound therapy use than nature sounds (3.2 points). The positive effect of sound on tinnitus was supported by secondary outcome measures of tinnitus, emotion, attention, and psychological state, but not by interviews. Tinnitus loudness level match was higher for BBN at 8 weeks, while there was little change in loudness level matches for nature sounds. There was no change in minimum masking levels following sound therapy administration. Self-reported preference for one sound over another did not correlate with changes in tinnitus. Conclusions: Modeled under an adaptation level theory framework of tinnitus perception, the results indicate that the introduction of broadband noise shifts internal adaptation level weighting away from the tinnitus signal, reducing tinnitus magnitude.
Nature sounds may modify the affective components of tinnitus via a secondary, residual pathway, but this appears to be less important for sound effectiveness. The different rates of adaptation to broadband noise and nature sound by the auditory system may explain the different tinnitus loudness level matches. In addition to group effects there also appears to be a great deal of individual variation. A sound therapy framework based on adaptation level theory is proposed that accounts for individual variation in preference and response to sound. Clinical Trial Registration: www.anzctr.org.au, identifier #12616000742471. PMID:28337139
Characteristic sounds facilitate visual search.
Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru
2008-06-01
In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.
Application of a finite-element model to low-frequency sound insulation in dwellings.
Maluski, S P; Gibbs, B M
2000-10-01
The sound transmission between adjacent rooms has been modeled using a finite-element method. Predicted sound-level differences gave good agreement with experimental data from a full-scale and a quarter-scale model. Results show that the sound insulation characteristics of a party wall at low frequencies strongly depend on the modal characteristics of the sound fields of both rooms and of the partition. The effect of three edge conditions of the separating wall on the sound-level difference at low frequencies was examined: simply supported, clamped, and a combination of clamped and simply supported. It is demonstrated that a clamped partition provides a greater sound-level difference at low frequencies than a simply supported one. It is also confirmed that the sound-pressure level difference is lower in equal-room than in unequal-room configurations.
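The modal behaviour of the room sound fields described above can be illustrated with the standard rectangular-room mode formula. This is a textbook idealization (rigid walls, hypothetical room dimensions), not the finite-element model used in the study:

```python
import math

def room_modes(lx, ly, lz, c=343.0, n_max=2):
    """Mode frequencies (Hz) of a rigid-walled rectangular room:
    f = (c/2) * sqrt((nx/lx)^2 + (ny/ly)^2 + (nz/lz)^2)."""
    modes = []
    for nx in range(n_max + 1):
        for ny in range(n_max + 1):
            for nz in range(n_max + 1):
                if nx == ny == nz == 0:
                    continue  # skip the trivial (0,0,0) case
                f = (c / 2.0) * math.sqrt(
                    (nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2)
                modes.append(((nx, ny, nz), f))
    return sorted(modes, key=lambda m: m[1])

# Lowest mode of a 5 m x 4 m x 2.5 m room is the axial (1,0,0) mode
# at c/(2*5) = 34.3 Hz; equal rooms share these modes, which is one
# reason equal-room configurations show lower level differences.
modes = room_modes(5.0, 4.0, 2.5)
```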
Sound production by singing humpback whales.
Mercado, Eduardo; Schneider, Jennifer N; Pack, Adam A; Herman, Louis M
2010-04-01
Sounds from humpback whale songs were analyzed to evaluate possible mechanisms of sound production. Song sounds fell along a continuum with trains of discrete pulses at one end and continuous tonal signals at the other. This graded vocal repertoire is comparable to that seen in false killer whales [Murray et al. (1998). J. Acoust. Soc. Am. 104, 1679-1688] and human singers, indicating that all three species generate sounds by varying the tension of pneumatically driven, vibrating membranes. Patterns in the spectral content of sounds and in nonlinear sound features show that resonating air chambers may also contribute to humpback whale sound production. Collectively, these findings suggest that categorizing individual units within songs into discrete types may obscure how singers modulate song features and illustrate how production-based characterizations of vocalizations can provide new insights into how humpback whales sing.
Sigmundsson, Hermundur; Eriksen, Adrian D.; Ofteland, Greta Storm; Haga, Monika
2017-01-01
This study explored whether there is a gender difference in letter-sound knowledge when children start school. 485 children aged 5–6 years completed an assessment of letter-sound knowledge, i.e., large letters; sounds of large letters; small letters; sounds of small letters. The findings indicate a significant difference between girls and boys, in favor of the girls, on all four factors tested in this study. There is still no clear explanation for the basis of a presumed gender difference in letter-sound knowledge. That the findings originate in neuro-biological factors cannot be excluded; however, the fact that girls have probably been exposed to more language experience/stimulation than boys lends support to explanations based on environmental aspects. PMID:28951726
Spherical loudspeaker array for local active control of sound.
Rafaely, Boaz
2009-05-01
Active control of sound has been employed to reduce noise levels around listeners' heads using destructive interference from noise-canceling sound sources. Recently, spherical loudspeaker arrays have been studied as multiple-channel sound sources, capable of generating sound fields with high complexity. In this paper, the potential use of a spherical loudspeaker array for local active control of sound is investigated. A theoretical analysis of the primary and secondary sound fields around a spherical sound source reveals that the natural quiet zones for the spherical source have a shell shape. Using numerical optimization, quiet zones with other shapes are designed, showing potential for quiet zones with extents that are significantly larger than the well-known limit of a tenth of a wavelength for monopole sources. The paper presents several simulation examples showing quiet zones in various configurations.
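The "tenth of a wavelength" limit cited above is easy to evaluate numerically. This is a back-of-envelope sketch of the classic monopole bound, not part of the paper's optimization:

```python
def quiet_zone_limit(frequency_hz, c=343.0):
    """Classic extent (~ one tenth of a wavelength) of the quiet zone
    around a monopole secondary source in active noise control."""
    wavelength = c / frequency_hz
    return wavelength / 10.0

# At 100 Hz the monopole quiet zone is only about 0.34 m across,
# which is why low-frequency active control works best near the head.
limit_100hz = quiet_zone_limit(100.0)
```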
Framing sound: Using expectations to reduce environmental noise annoyance.
Crichton, Fiona; Dodd, George; Schmid, Gian; Petrie, Keith J
2015-10-01
Annoyance reactions to environmental noise, such as wind turbine sound, have public health implications given associations between annoyance and symptoms related to psychological distress. In the case of wind farms, factors contributing to noise annoyance have been theorised to include wind turbine sound characteristics, the noise sensitivity of residents, and contextual aspects, such as receiving information creating negative expectations about sound exposure. The experimental aim was to assess whether receiving positive or negative expectations about wind farm sound would differentially influence annoyance reactions during exposure to wind farm sound, and also influence associations between perceived noise sensitivity and noise annoyance. Sixty volunteers were randomly assigned to receive either negative or positive expectations about wind farm sound. Participants in the negative expectation group viewed a presentation which incorporated internet material indicating that exposure to wind turbine sound, particularly infrasound, might present a health risk. Positive expectation participants viewed a DVD which framed wind farm sound positively and included internet information about the health benefits of infrasound exposure. Participants were then simultaneously exposed to sub-audible infrasound and audible wind farm sound during two 7 min exposure sessions, during which they assessed their experience of annoyance. Positive expectation participants were significantly less annoyed than negative expectation participants, while noise sensitivity only predicted annoyance in the negative group. Findings suggest accessing negative information about sound is likely to trigger annoyance, particularly in noise sensitive people and, importantly, portraying sound positively may reduce annoyance reactions, even in noise sensitive individuals. Copyright © 2015 Elsevier Inc. All rights reserved.
How learning to abstract shapes neural sound representations
Ley, Anke; Vroomen, Jean; Formisano, Elia
2014-01-01
The transformation of acoustic signals into abstract perceptual representations is the essence of the efficient and goal-directed neural processing of sounds in complex natural environments. While the human and animal auditory system is perfectly equipped to process the spectrotemporal sound features, adequate sound identification and categorization require neural sound representations that are invariant to irrelevant stimulus parameters. Crucially, what is relevant and irrelevant is not necessarily intrinsic to the physical stimulus structure but needs to be learned over time, often through integration of information from other senses. This review discusses the main principles underlying categorical sound perception with a special focus on the role of learning and neural plasticity. We examine the role of different neural structures along the auditory processing pathway in the formation of abstract sound representations with respect to hierarchical as well as dynamic and distributed processing models. Whereas most fMRI studies on categorical sound processing employed speech sounds, the emphasis of the current review lies on the contribution of empirical studies using natural or artificial sounds that enable separating acoustic and perceptual processing levels and avoid interference with existing category representations. Finally, we discuss the opportunities of modern analysis techniques such as multivariate pattern analysis (MVPA) in studying categorical sound representations. With their increased sensitivity to distributed activation changes (even in the absence of changes in overall signal level), these analysis techniques provide a promising tool to reveal the neural underpinnings of perceptually invariant sound representations. PMID:24917783
How Do “Mute” Cicadas Produce Their Calling Songs?
Luo, Changqing; Wei, Cong; Nansen, Christian
2015-01-01
Insects have evolved a variety of structures and mechanisms to produce sounds, which are used for communication both within and between species. Among acoustic insects, cicada males are particularly known for their loud and diverse sounds, which play an important role in communication. The main method of sound production in cicadas is the tymbal mechanism, and a relatively small number of cicada species possess both tymbal and stridulatory organs. However, cicadas of the genus Karenia do not have any specialized sound-producing structures, so they are referred to as “mute”. This denomination is quite misleading, as they do indeed produce sounds. Here, we investigate the sound-producing mechanism and acoustic communication of the “mute” cicada, Karenia caelatata, and discover a new sound-production mechanism for cicadas: i.e., K. caelatata produces impact sounds by banging the forewing costa against the operculum. The temporal, frequency, and amplitude characteristics of the impact sounds are described. Morphological studies and reflectance-based analyses reveal that the structures involved in sound production of K. caelatata (i.e., forewing, operculum, cruciform elevation, and wing-holding groove on scutellum) are all morphologically modified. Acoustic playback experiments and behavioral observations suggest that the impact sounds of K. caelatata are used in intraspecific communication and function as calling songs. The new sound-production mechanism expands our knowledge of the diversity of acoustic signaling behavior in cicadas and further underscores the need for more bioacoustic studies on cicadas that lack the tymbal mechanism. PMID:25714608
Bronchial intubation could be detected by the visual stethoscope techniques in pediatric patients.
Kimura, Tetsuro; Suzuki, Akira; Mimuro, Soichiro; Makino, Hiroshi; Sato, Shigehito
2012-12-01
We created a system that allows the visualization of breath sounds (visual stethoscope). We compared the visual stethoscope technique with auscultation for the detection of bronchial intubation in pediatric patients. In the auscultation group, an anesthesiologist advanced the tracheal tube while another anesthesiologist auscultated bilateral breath sounds to detect the change and/or disappearance of unilateral breath sounds. In the visualization group, the stethoscope was used to detect changes in breath sounds and/or disappearance of unilateral breath sounds. The distance from the edge of the mouth to the carina was measured using a fiberoptic bronchoscope. Forty pediatric patients were enrolled in the study. At the point at which irregular breath sounds were auscultated, the tracheal tube was located at 0.5 ± 0.8 cm on the bronchial side from the carina. When a detectable change in the shape of the visualized breath sound was observed, the tracheal tube was located 0.1 ± 1.2 cm on the bronchial side (not significant). At the point at which unilateral breath sounds were auscultated or a unilateral shape of the visualized breath sound was observed, the tracheal tube was 1.5 ± 0.8 or 1.2 ± 1.0 cm on the bronchial side, respectively (not significant). The visual stethoscope displayed the left and right lung sounds simultaneously and detected changes in breath sounds and unilateral breath sounds as the tracheal tube was advanced. © 2012 Blackwell Publishing Ltd.
NASA Astrophysics Data System (ADS)
Yun, Dong-Un; Lee, Sang-Kwon
2017-06-01
In this paper, we present a novel method for an objective evaluation of knocking noise emitted by diesel engines based on the temporal and frequency masking theory. The knocking sound of a diesel engine is a vibro-acoustic sound correlated with the high-frequency resonances of the engine structure and a periodic impulsive sound with amplitude modulation. Its period is related to the engine speed and includes specific frequency bands related to the resonances of the engine structure. A knocking sound with the characteristics of a high-frequency impulsive wave can be masked by low-frequency sounds correlated with the harmonics of the firing frequency and broadband noise. The degree of modulation of the knocking sound signal was used for such objective evaluations in previous studies, without considering the masking effect. However, the frequency masking effect must be considered for the objective evaluation of the knocking sound. In addition to the frequency masking effect, the temporal masking effect occurs because the period of the knocking sound changes according to the engine speed. Therefore, an evaluation method considering the temporal and frequency masking effect is required to analyze the knocking sound objectively. In this study, an objective evaluation method considering the masking effect was developed based on the masking theory of sound and signal processing techniques. The method was applied successfully for the objective evaluation of the knocking sound of a diesel engine.
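The "degree of modulation" metric that the earlier studies relied on can be sketched as a simple envelope-based modulation index. This is illustrative only; the paper's own method additionally accounts for temporal and frequency masking, and the smoothing window here is an arbitrary choice:

```python
import numpy as np

def modulation_index(signal, fs, smooth_ms=5.0):
    """Crude amplitude-modulation index: rectify the signal, smooth it
    with a moving average to approximate the envelope, then return
    (max - min) / (max + min) of that envelope."""
    env = np.abs(signal)
    n = max(1, int(fs * smooth_ms / 1000.0))
    env = np.convolve(env, np.ones(n) / n, mode="same")
    return (env.max() - env.min()) / (env.max() + env.min())

# A 1 kHz carrier, 80 % amplitude-modulated at 10 Hz (a stand-in for a
# periodic impulsive knocking component riding on engine noise):
fs = 8000
t = np.arange(fs) / fs
am = (1 + 0.8 * np.sin(2 * np.pi * 10 * t)) * np.sin(2 * np.pi * 1000 * t)
m = modulation_index(am, fs)
```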
[A new medical education using a lung sound auscultation simulator called "Mr. Lung"].
Yoshii, Chiharu; Anzai, Takashi; Yatera, Kazuhiro; Kawajiri, Tatsunori; Nakashima, Yasuhide; Kido, Masamitsu
2002-09-01
We developed a lung sound auscultation simulator, "Mr. Lung", in 2001. To improve auscultation skills for lung sounds, we utilized this new device in our educational training facility. From June 2001 to March 2002, we used "Mr. Lung" for small-group training in which one hundred fifth-year medical students were divided into small groups, one of which was taught every other week. The class consisted of ninety-minute training periods for auscultation of lung sounds. First, we explained the classification of lung sounds, and then auscultation tests were performed. Namely, students listened to three cases of abnormal or adventitious lung sounds on "Mr. Lung" through their stethoscopes. Next, they answered questions concerning the location and quality of the sounds. Then, we explained the correct answers and how to differentiate lung sounds on "Mr. Lung". Additionally, at the beginning and the end of the lecture, five-degree self-assessments of lung sound auscultation were performed. The ratio of correct answers for lung sounds was 36.9% for differences between bilateral lung sounds, 52.5% for coarse crackles, 34.1% for fine crackles, 69.2% for wheezes, 62.1% for rhonchi, and 22.2% for stridor. Self-assessment scores were significantly higher after the class than before. The ratio of correct lung sound answers was surprisingly low among medical students. We believe repetitive auscultation with the simulator to be extremely helpful for medical education.
English Orthographic Learning in Chinese-L1 Young EFL Beginners.
Cheng, Yu-Lin
2017-12-01
English orthographic learning, among Chinese-L1 children who were beginning to learn English as a foreign language, was documented when: (1) only visual memory was at their disposal, (2) visual memory and either some letter-sound knowledge or some semantic information was available, and (3) visual memory, some letter-sound knowledge and some semantic information were all available. When only visual memory was available, orthographic learning (measured via an orthographic choice test) was meagre. Orthographic learning was significant when either semantic information or letter-sound knowledge supplemented visual memory, with letter-sound knowledge generating greater significance. Although the results suggest that letter-sound knowledge plays a more important role than semantic information, letter-sound knowledge alone does not suffice to achieve perfect orthographic learning, as orthographic learning was greatest when letter-sound knowledge and semantic information were both available. The present findings are congruent with a view that the orthography of a foreign language drives its orthographic learning more than L1 orthographic learning experience, thus extending Share's (Cognition 55:151-218, 1995) self-teaching hypothesis to include non-alphabetic L1 children's orthographic learning of an alphabetic foreign language. The little letter-sound knowledge development observed in the experiment-I control group indicates that very little letter-sound knowledge develops in the absence of dedicated letter-sound training. Given the important role of letter-sound knowledge in English orthographic learning, dedicated letter-sound instruction is highly recommended.
Excessive exposure of sick neonates to sound during transport
Buckland, L; Austin, N; Jackson, A; Inder, T
2003-01-01
Objective: To determine the levels of sound to which infants are exposed during routine transport by ambulance, aircraft, and helicopter. Design: Sound levels during 38 consecutive journeys from a regional level III neonatal intensive care unit were recorded using a calibrated data logging sound meter (Quest 2900). The meter was set to record "A" weighted slow response integrated sound levels, which emulates the response of the human ear, and "C" weighted response sound levels as a measure of total sound level exposure for all frequencies. The information was downloaded to a computer using MS HyperTerminal. The resulting data were stored, and a graphical profile was generated for each journey using SigmaPlot software. Setting: Eight journeys involved ambulance transport on country roads, 24 involved fixed wing aircraft, and four were by helicopter. Main outcome measures: Relations between decibel levels and events or changes in transport mode were established by correlating the time logged on the sound meter with the standard transport documentation sheet. Results: The highest sound levels were recorded during air transport. However, mean sound levels for all modes of transport exceeded the recommended levels for neonatal intensive care. The maximum sound levels recorded were extremely high at greater than 80 dB in the "A" weighted hearing range and greater than 120 dB in the total frequency range. Conclusions: This study raises major concerns about the excessive exposure of the sick newborn to sound during transportation. PMID:14602701
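The energy-averaging of logged decibel samples that a sound meter performs when reporting mean levels can be sketched with the standard Leq formula. This is the generic textbook relation, not the Quest 2900's internal processing, and the sample values below are illustrative:

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level: energy-average of dB samples,
    Leq = 10 * log10(mean(10^(L/10))). Brief loud events dominate."""
    mean_energy = sum(10 ** (l / 10.0) for l in levels_db) / len(levels_db)
    return 10.0 * math.log10(mean_energy)

# One brief 80 dB event raises the average far above the 55 dB background:
samples = [55, 55, 55, 80]
level = leq(samples)  # ~74 dB, not the arithmetic mean of ~61 dB
```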
Ladich, Friedrich; Schleinzer, Günter
2015-04-01
Sound communication comprising the production and detection of acoustic signals is affected by ambient temperature in ectothermic animals. In the present study we investigated the effects of temperature on sound production and characteristics in the croaking gourami Trichopsis vittata, a freshwater fish from Southeast Asia possessing a highly specialized sound-generating mechanism found only in a single genus. The croaking gourami produces pulsed sounds by stretching and plucking two enhanced pectoral fin tendons during rapid pectoral fin beating. Croaking sounds typically consist of a series of double-pulsed bursts with main energies between 1 and 1.5 kHz. Sounds were recorded during dyadic contests between two males at three different temperatures (25°, 30° and 35°C). The mean dominant frequency increased with rising temperature from 1.18 to 1.33 kHz, whereas temporal characteristics decreased. The sound interval dropped from 492 to 259 ms, the burst period from 51 to 35 ms and the pulse period from 5.8 to 5.1 ms. In contrast, the number of sounds and number of bursts within a sound were not affected by temperature. The current study shows that spectral and temporal characteristics of sounds are affected in different ways by temperature in the croaking gourami, whereas the numbers of sounds and bursts remain unaffected. We conclude that acoustic communication in gouramis is affected by changes in ambient temperature. Copyright © 2014 Elsevier Inc. All rights reserved.
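Estimating the dominant frequency of a recorded call, as reported above (1.18 to 1.33 kHz), can be sketched with a plain FFT peak pick. This is illustrative; the study's actual analysis settings are not given, and the test tone below is synthetic:

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Frequency (Hz) of the largest-magnitude bin in the real FFT."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[int(np.argmax(spectrum))]

# A 1 s synthetic tone at 1180 Hz, matching the mean dominant
# frequency reported for the gourami at 25 degrees C:
fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1180 * t)
f_dom = dominant_frequency(tone, fs)
```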
Displaying Composite and Archived Soundings in the Advanced Weather Interactive Processing System
NASA Technical Reports Server (NTRS)
Barrett, Joe H., III; Volkmer, Matthew R.; Blottman, Peter F.; Sharp, David W.
2008-01-01
In a previous task, the Applied Meteorology Unit (AMU) developed spatial and temporal climatologies of lightning occurrence based on eight atmospheric flow regimes. The AMU created climatological, or composite, soundings of wind speed and direction, temperature, and dew point temperature at four rawinsonde observation stations at Jacksonville, Tampa, Miami, and Cape Canaveral Air Force Station, for each of the eight flow regimes. The composite soundings were delivered to the National Weather Service (NWS) Melbourne (MLB) office for display using the National version of the Skew-T Hodograph analysis and Research Program (NSHARP) software program. The NWS MLB requested the AMU make the composite soundings available for display in the Advanced Weather Interactive Processing System (AWIPS), so they could be overlaid on current observed soundings. This will allow the forecasters to compare the current state of the atmosphere with climatology. This presentation describes how the AMU converted the composite soundings from NSHARP Archive format to Network Common Data Form (NetCDF) format, so that the soundings could be displayed in AWIPS. NetCDF is a set of data formats, programming interfaces, and software libraries used to read and write scientific data files. In AWIPS, each meteorological data type, such as soundings or surface observations, has a unique NetCDF format. Each format is described by a NetCDF template file. Although NetCDF files are in binary format, they can be converted to a text format called network Common data form Description Language (CDL). A software utility called ncgen is used to create a NetCDF file from a CDL file, while the ncdump utility is used to create a CDL file from a NetCDF file. AWIPS receives soundings in Binary Universal Form for the Representation of Meteorological data (BUFR) format (http://dss.ucar.edu/docs/formats/bufr/), and then decodes them into NetCDF format. Only two sounding files are generated in AWIPS per day.
One file contains all of the soundings received worldwide between 0000 UTC and 1200 UTC, and the other includes all soundings between 1200 UTC and 0000 UTC. In order to add the composite soundings into AWIPS, a procedure was created to configure, or localize, AWIPS. This involved modifying and creating several configuration text files. A unique four-character site identifier was created for each of the 32 soundings so each could be viewed separately. The first three characters were based on the site identifier of the observed sounding, while the last character was based on the flow regime. While researching the localization process for soundings, the AMU discovered a method of archiving soundings so old soundings would not get purged automatically by AWIPS. This method could provide an alternative way of localizing AWIPS for composite soundings. In addition, this would allow forecasters to use archived soundings in AWIPS for case studies. A test sounding file in NetCDF format was written in order to verify the correct format for soundings in AWIPS. After the file was viewed successfully in AWIPS, the AMU wrote a software program in the Tool Command Language/Tool Kit (Tcl/Tk) language to convert the 32 composite soundings from NSHARP Archive to CDL format. The ncgen utility was then used to convert the CDL file to a NetCDF file. The NetCDF file could then be read and displayed in AWIPS.
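The CDL-to-NetCDF step described above can be sketched by generating a minimal CDL text file and compiling it with the standard `ncgen` utility (`ncgen -o composite.nc composite.cdl`). The dimension and variable names below are illustrative placeholders, not the actual AWIPS NetCDF template:

```python
# A minimal, hypothetical CDL description of one composite sounding.
# Real AWIPS templates define many more dimensions and attributes.
cdl = """netcdf composite_sounding {
dimensions:
    level = 5 ;
variables:
    float pressure(level) ;
        pressure:units = "hPa" ;
    float temperature(level) ;
        temperature:units = "degC" ;
data:
    pressure = 1000, 850, 700, 500, 300 ;
    temperature = 25.0, 12.5, 5.0, -10.0, -40.0 ;
}
"""

# Write the text file; `ncgen -o composite.nc composite.cdl` would then
# produce the binary NetCDF file, and `ncdump` reverses the conversion.
with open("composite.cdl", "w") as f:
    f.write(cdl)
```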
Kastelein, R A; Verboom, W C; Muijsers, M; Jennings, N V; van der Heul, S
2005-05-01
To prevent grounding of ships and collisions between ships in shallow coastal waters, an underwater data collection and communication network is currently under development: Acoustic Communication network for Monitoring of underwater Environment in coastal areas (ACME). Marine mammals might be affected by ACME sounds since they use sounds of similar frequencies (around 12 kHz) for communication, orientation, and prey location. If marine mammals tend to avoid the vicinity of the transmitters, they may be kept away from ecologically important areas by ACME sounds. One marine mammal species that may be affected in the North Sea is the harbour porpoise. Therefore, as part of an environmental impact assessment program, two captive harbour porpoises were subjected to four sounds, three of which may be used in the underwater acoustic data communication network. The effect of each sound was judged by comparing the animals' positions and respiration rates during a test period with those during a baseline period. Each of the four sounds could be made a deterrent by increasing the amplitude of the sound. The porpoises reacted by swimming away from the sounds and by slightly, but significantly, increasing their respiration rate. From the sound pressure level distribution in the pen, and the distribution of the animals during test sessions, discomfort sound level thresholds were determined for each sound. In combination with information on sound propagation in the areas where the communication system may be deployed, the extent of the 'discomfort zone' can be estimated for several source levels (SLs). The discomfort zone is defined as the area around a sound source that harbour porpoises are expected to avoid. Based on these results, SLs can be selected that have an acceptable effect on harbour porpoises in particular areas. 
The discomfort zone of a communication sound depends on the selected sound, the selected SL, and the propagation characteristics of the area in which the sound system is operational. In shallow, winding coastal water courses, with sandbanks, etc., the type of habitat in which the ACME sounds will be produced, propagation loss cannot be accurately estimated by using a simple propagation model, but should be measured on site. The SL of the communication system should be adapted to each area (taking into account bounding conditions created by narrow channels, sound propagation variability due to environmental factors, and the importance of an area to the affected species). The discomfort zone should not prevent harbour porpoises from spending sufficient time in ecologically important areas (for instance feeding areas), or routes towards these areas.
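The relationship between source level and the extent of the discomfort zone can be sketched with a simple geometric-spreading model, which, as the abstract itself cautions, is too crude for shallow winding channels and serves only as a first-order illustration (the level values below are hypothetical):

```python
def discomfort_radius(source_level_db, threshold_db, spreading=20.0):
    """Range (m) at which the received level falls to the discomfort
    threshold under simple geometric spreading:
    RL = SL - spreading * log10(r), so r = 10^((SL - RL) / spreading).
    spreading=20 is spherical; 10 would be cylindrical."""
    return 10 ** ((source_level_db - threshold_db) / spreading)

# Hypothetical SL of 160 dB and discomfort threshold of 100 dB with
# spherical spreading gives a 1 km discomfort radius.
r = discomfort_radius(160.0, 100.0)
```

In practice the abstract recommends measuring propagation loss on site rather than relying on such a model.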
Model-based synthesis of aircraft noise to quantify human perception of sound quality and annoyance
NASA Astrophysics Data System (ADS)
Berckmans, D.; Janssens, K.; Van der Auweraer, H.; Sas, P.; Desmet, W.
2008-04-01
This paper presents a method to synthesize aircraft noise as perceived on the ground. The developed method gives designers the opportunity to make a quick and economical evaluation of the sound quality of different design alternatives or improvements to existing aircraft. By presenting several synthesized sounds to a jury, it is possible to evaluate the quality of different aircraft sounds and to construct a sound that can serve as a target for future aircraft designs. Combining a sound synthesis method that can modify a recorded aircraft sound with jury tests makes it possible to quantify the human perception of aircraft noise.
A Review: Characteristics of Noise Absorption Material
NASA Astrophysics Data System (ADS)
Amares, S.; Sujatmika, E.; Hong, T. W.; Durairaj, R.; Hamid, H. S. H. B.
2017-10-01
Noise is generally treated as a nuisance to humans, and noise pollution in the environment causes discomfort. This also concerns engineering designs that tend to promote noise propagation. Solutions such as using materials to absorb sound have been widely adopted. However, the fundamentals of sound-absorption propagation, sound-absorbing characteristics, and the factors that influence them are rarely discussed. Furthermore, the literature on methods relating sound absorption to the sound absorption coefficient is also limited, as many studies contribute only results, with little discussion of the underlying principles. This paper aims to provide better insight into the importance of sound absorption and the material factors that determine the sound absorption coefficient.
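One widely used relation connecting material absorption coefficients to a room-acoustic outcome is Sabine's reverberation formula, a standard textbook result offered here purely as illustration (the room dimensions and coefficient below are invented):

```python
def sabine_rt60(volume_m3, surface_absorptions):
    """Sabine reverberation time RT60 = 0.161 * V / A, where A is the
    total absorption in sabins: sum of (surface area * absorption
    coefficient) over all surfaces."""
    a_total = sum(area * alpha for area, alpha in surface_absorptions)
    return 0.161 * volume_m3 / a_total

# Hypothetical 50 m^3 room with 75 m^2 of surfaces at alpha = 0.1:
# A = 7.5 sabins, so RT60 is roughly 1.07 s. Doubling alpha halves RT60.
rt = sabine_rt60(50.0, [(75.0, 0.1)])
```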
Light aircraft sound transmission studies - Noise reduction model
NASA Technical Reports Server (NTRS)
Atwal, Mahabir S.; Heitman, Karen E.; Crocker, Malcolm J.
1987-01-01
Experimental tests conducted on the fuselage of a single-engine Piper Cherokee light aircraft suggest that the cabin interior noise can be reduced by increasing the transmission loss of the dominant sound transmission paths and/or by increasing the cabin interior sound absorption. The validity of using a simple room equation model to predict the cabin interior sound-pressure level for different fuselage and exterior sound field conditions is also presented. The room equation model is based on the sound power flow balance for the cabin space and utilizes the measured transmitted sound intensity data. The room equation model predictions were considered good enough to be used for preliminary acoustical design studies.
On sound mufflers: sound-absorbing panels for aircraft engines
NASA Astrophysics Data System (ADS)
Dudarev, A. S.; Bulbovich, R. V.; Svirshchev, V. I.
2016-10-01
The article provides a formula for calculating the resonance frequency of a sound-absorbing panel with a perforated wall. Although the sound-absorbing structure is a set of Helmholtz resonators, acoustic calculations should consider the entire perforated wall panel rather than individual resonators. An analysis is presented showing how the dimensions of the sound-absorbing structure affect the absorption rate.
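The abstract above does not reproduce the formula, but a common textbook expression treats a perforated panel backed by an air cavity as a distributed Helmholtz resonator. As a hedged sketch (this is the standard formula, not necessarily the one derived in the article; all parameter values are illustrative):

```python
import math

def perforated_panel_resonance(porosity, panel_thickness, hole_diameter,
                               cavity_depth, c=343.0):
    """Resonance frequency (Hz) of a perforated-panel Helmholtz absorber.

    Standard textbook form: f = (c / 2*pi) * sqrt(sigma / (D * t_eff)),
    where sigma is the perforation porosity, D the cavity depth, and
    t_eff the panel thickness plus an end correction of about
    0.85 * hole_diameter on each side of each hole.
    """
    effective_neck = panel_thickness + 2 * 0.85 * hole_diameter
    return (c / (2 * math.pi)) * math.sqrt(
        porosity / (cavity_depth * effective_neck))

# Example: 1 mm thick panel, 2 mm holes, 1% porosity, 50 mm cavity
f0 = perforated_panel_resonance(0.01, 0.001, 0.002, 0.05)
```

Raising the porosity or shrinking the cavity depth pushes the resonance up in frequency, which is the kind of parameter dependence the article analyzes.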
33 CFR 67.10-40 - Sound signals authorized for use prior to January 1, 1973.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Sound signals authorized for use... STRUCTURES General Requirements for Sound signals § 67.10-40 Sound signals authorized for use prior to January 1, 1973. Any sound signal authorized for use by the Coast Guard and manufactured prior to January...
AVE/VAS 4: 25-mb sounding data
NASA Technical Reports Server (NTRS)
Sienkiewicz, M. E.
1983-01-01
The rawinsonde sounding program is described, and tabulated data at 25 mb intervals are presented for the 24 stations and 14 special stations participating in the experiment. Soundings were taken at 3 hr intervals, with an additional sounding at the normal synoptic observation time. Some soundings were computed from raw ordinate data, while others were interpolated from significant level data.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Reporting Points § 161.55 Vessel Traffic Service Puget Sound and the Cooperative Vessel Traffic Service for the Juan de Fuca Region. The Vessel Traffic Service Puget Sound area consists of the navigable waters... Boundary Range C Rear Light). This area includes: Puget Sound, Hood Canal, Possession Sound, the San Juan...
Interpolated Sounding and Gridded Sounding Value-Added Products
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toto, T.; Jensen, M.
Standard Atmospheric Radiation Measurement (ARM) Climate Research Facility sounding files provide atmospheric state data in one dimension of increasing time and height per sonde launch. Many applications require a quick estimate of the atmospheric state at higher time resolution. The INTERPOLATEDSONDE (i.e., Interpolated Sounding) Value-Added Product (VAP) transforms sounding data into continuous daily files on a fixed time-height grid, at 1-minute time resolution, on 332 levels, from the surface up to a limit of approximately 40 km. The grid extends that high so the full height of soundings can be captured; however, most soundings terminate at an altitude between 25 and 30 km, above which no data are provided. Between soundings, the VAP linearly interpolates atmospheric state variables in time for each height level. In addition, INTERPOLATEDSONDE provides relative humidity scaled to microwave radiometer (MWR) observations. The INTERPOLATEDSONDE VAP, a continuous time-height grid of relative-humidity-corrected sounding data, is intended to provide input to higher-order products, such as the Merged Soundings (MERGESONDE; Troyan 2012) VAP, which extends INTERPOLATEDSONDE by incorporating model data. The INTERPOLATEDSONDE VAP is also used to correct gaseous attenuation of radar reflectivity in products such as the KAZRCOR VAP.
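The between-sonde step described above is ordinary linear interpolation in time at each height level. A minimal sketch (variable and function names are illustrative, not the ARM product's actual field names):

```python
def interpolate_level(t0, v0, t1, v1, t_query):
    """Linearly interpolate one atmospheric state variable between two
    sonde launches (times in minutes) at a fixed height level."""
    frac = (t_query - t0) / (t1 - t0)
    return v0 + frac * (v1 - v0)

# Temperature at this level was 280.0 K at launch time 0 and 274.0 K
# at the next launch 360 minutes later; estimate the value at t = 90.
t_mid = interpolate_level(0.0, 280.0, 360.0, 274.0, 90.0)
```

In the real VAP this is applied independently at each of the 332 grid levels, once per output minute.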
The directivity of the sound radiation from panels and openings.
Davy, John L
2009-06-01
This paper presents a method for calculating the directivity of the radiation of sound from a panel or opening, whose vibration is forced by the incidence of sound from the other side. The directivity of the radiation depends on the angular distribution of the incident sound energy in the room or duct in whose wall or end the panel or opening occurs. The angular distribution of the incident sound energy is predicted using a model which depends on the sound absorption coefficient of the room or duct surfaces. If the sound source is situated in the room or duct, the sound absorption coefficient model is used in conjunction with a model for the directivity of the sound source. For angles of radiation approaching 90 degrees to the normal to the panel or opening, the effect of the diffraction by the panel or opening, or by the finite baffle in which the panel or opening is mounted, is included. A simple empirical model is developed to predict the diffraction of sound into the shadow zone when the angle of radiation is greater than 90 degrees to the normal to the panel or opening. The method is compared with published experimental results.
Atyeo, J; Sanderson, P M
2015-07-01
The melodic alarm sound set for medical electrical equipment that was recommended in the International Electrotechnical Commission's IEC 60601-1-8 standard has proven difficult for clinicians to learn and remember, especially clinicians with little prior formal music training. An alarm sound set proposed by Patterson and Edworthy in 1986 might improve performance for such participants. In this study, 31 critical and acute care nurses with less than one year of formal music training identified alarm sounds while they calculated drug dosages. Sixteen nurses used the IEC and 15 used the Patterson-Edworthy alarm sound set. The mean (SD) percentage of alarms correctly identified by nurses was 51.3 (25.6)% for the IEC alarm set and 72.1 (18.8)% for the Patterson-Edworthy alarms (p = 0.016). Nurses using the Patterson-Edworthy alarm sound set reported that it was easier to distinguish between alarm sounds than did nurses using the IEC alarm sound set (p = 0.015). Principles used to construct the Patterson-Edworthy alarm sounds should be adopted for future alarm sound sets. © 2015 The Association of Anaesthetists of Great Britain and Ireland.
Source and listener directivity for interactive wave-based sound propagation.
Mehra, Ravish; Antani, Lakulish; Kim, Sujeong; Manocha, Dinesh
2014-04-01
We present an approach to model dynamic, data-driven source and listener directivity for interactive wave-based sound propagation in virtual environments and computer games. Our directional source representation is expressed as a linear combination of elementary spherical harmonic (SH) sources. In the preprocessing stage, we precompute and encode the propagated sound fields due to each SH source. At runtime, we perform the SH decomposition of the varying source directivity interactively and compute the total sound field at the listener position as a weighted sum of precomputed SH sound fields. We propose a novel plane-wave decomposition approach based on higher-order derivatives of the sound field that enables dynamic HRTF-based listener directivity at runtime. We provide a generic framework to incorporate our source and listener directivity in any offline or online frequency-domain wave-based sound propagation algorithm. We have integrated our sound propagation system in Valve's Source game engine and use it to demonstrate realistic acoustic effects such as sound amplification, diffraction low-passing, scattering, localization, externalization, and spatial sound, generated by wave-based propagation of directional sources and listener in complex scenarios. We also present results from our preliminary user study.
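The runtime combination described above amounts to a weighted sum of precomputed fields, p = Σ_l w_l p_l, where w_l are the SH coefficients of the current source directivity. A minimal sketch with made-up arrays (the real system stores propagated wave fields per elementary SH source from the offline stage):

```python
import numpy as np

def total_field(sh_weights, precomputed_fields):
    """Combine per-SH-source sound fields at listener sample points.

    sh_weights: (L,) SH coefficients of the current source directivity.
    precomputed_fields: (L, N) pressure of each elementary SH source at
    N listener positions, computed in the offline stage.
    """
    return np.tensordot(sh_weights, precomputed_fields, axes=1)

weights = np.array([1.0, 0.5])            # omni term + one dipole term
fields = np.array([[1.0, 2.0, 3.0],       # field of SH source 0
                   [0.2, 0.0, -0.2]])     # field of SH source 1
p = total_field(weights, fields)
```

Because the decomposition is linear, changing the source directivity at runtime only changes `weights`; the expensive wave propagation is never redone.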
PROTAX-Sound: A probabilistic framework for automated animal sound identification
Somervuo, Panu; Ovaskainen, Otso
2017-01-01
Autonomous audio recording is a stimulating new field in bioacoustics, with great promise for conducting cost-effective species surveys. One major current challenge is the lack of reliable classifiers capable of multi-species identification. We present PROTAX-Sound, a statistical framework to perform probabilistic classification of animal sounds. PROTAX-Sound is based on a multinomial regression model, and it can utilize as predictors any kind of sound features or classifications produced by other existing algorithms. PROTAX-Sound combines audio and image processing techniques to scan environmental audio files. It identifies regions of interest (segments of the audio file that contain a vocalization to be classified), extracts acoustic features from them, and compares them with samples in a reference database. The output of PROTAX-Sound is the probabilistic classification of each vocalization, including the possibility that it represents a species not present in the reference database. We demonstrate the performance of PROTAX-Sound by classifying audio from a species-rich case study of tropical birds. The best performing classifier achieved 68% classification accuracy for 200 bird species. PROTAX-Sound improves the classification power of current techniques by combining information from multiple classifiers in a manner that yields calibrated classification probabilities. PMID:28863178
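At prediction time, a multinomial regression model such as PROTAX-Sound's reduces to a softmax over per-class scores, with an explicit class for "species not in the reference database". A hedged sketch (the scores and the unknown-class handling here are illustrative, not the actual PROTAX-Sound model):

```python
import math

def protax_style_probabilities(scores, unknown_bias=0.0):
    """Softmax over per-species linear-predictor scores plus one
    explicit 'unknown species' class, mimicking the calibrated
    probabilistic output described in the abstract. Uses the usual
    max-subtraction trick for numerical stability."""
    all_scores = scores + [unknown_bias]
    m = max(all_scores)
    exps = [math.exp(s - m) for s in all_scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = protax_style_probabilities([2.0, 0.5, -1.0], unknown_bias=0.0)
# probs[-1] is the probability that the vocalization belongs to no
# species in the reference database
```

The calibration step in the real framework fits the regression weights so these probabilities match observed classification accuracy.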
Chen, Yi-Chuan; Spence, Charles
2013-01-01
The time-course of cross-modal semantic interactions between pictures and either naturalistic sounds or spoken words was compared. Participants performed a speeded picture categorization task while hearing a task-irrelevant auditory stimulus presented at various stimulus onset asynchronies (SOAs) with respect to the visual picture. Both naturalistic sounds and spoken words gave rise to cross-modal semantic congruency effects (i.e., facilitation by semantically congruent sounds and inhibition by semantically incongruent sounds, as compared to a baseline noise condition) when the onset of the sound led that of the picture by 240 ms or more. Both naturalistic sounds and spoken words also gave rise to inhibition irrespective of their semantic congruency when presented within 106 ms of the onset of the picture. The peak of this cross-modal inhibitory effect occurred earlier for spoken words than for naturalistic sounds. These results therefore demonstrate that the semantic priming of visual picture categorization by auditory stimuli only occurs when the onset of the sound precedes that of the visual stimulus. The different time-courses observed for naturalistic sounds and spoken words likely reflect the different processing pathways to access the relevant semantic representations.
Sounds perceived as annoying by hearing-aid users in their daily soundscape.
Skagerstrand, Åsa; Stenfelt, Stefan; Arlinger, Stig; Wikström, Joel
2014-04-01
The noise in modern soundscapes continues to increase and is a major source of annoyance. For a hearing-impaired person, a hearing aid is often beneficial, but noise and annoying sounds can result in non-use of the hearing aid, temporarily or permanently. The purpose of this study was to identify annoying sounds in the daily soundscape of hearing-aid users. A diary was used to collect data: over a two-week period, the participants answered four questions per day about annoying sounds in their daily soundscape. Sixty adult hearing-aid users participated. Of the 60 participants, 91% experienced annoying sounds daily when using hearing aids. The annoying sounds mentioned by most users were verbal human sounds, followed by other daily sound sources categorized into 17 groups such as TV/radio, vehicles, and machine tools. When the hearing-aid users were grouped by age, hearing loss, gender, hearing-aid experience, and type of signal processing used in their hearing aids, only small and few significant differences were found in their experience of annoying sounds. The results indicate that hearing-aid users often experience annoying sounds, and improved clinical fitting routines may reduce the problem.
Mathematically trivial control of sound using a parametric beam focusing source.
Tanaka, Nobuo; Tanaka, Motoki
2011-01-01
By exploiting a case usually regarded as trivial, this paper presents global active noise control using a parametric beam focusing source (PBFS). As in a dipole model, one source is used as the primary sound source and the other as the control sound source; the control effect for minimizing total acoustic power depends on the distance between the two. When the distance becomes zero, the total acoustic power becomes null, hence nothing less than a trivial case. Because of practical constraints, it is difficult to place a control source close enough to a primary source. However, by projecting the sound beam of a parametric array loudspeaker onto the target sound source (the primary source), a virtual sound source may be created on the target sound source, thereby enabling the collocation of the sources. To further ensure feasibility of the trivial case, a PBFS is then introduced in an effort to match the size of the two sources. The reflected sound wave of the PBFS, which is tantamount to the virtual sound source output, aims to suppress the primary sound. Finally, a numerical analysis as well as an experiment is conducted, verifying the validity of the proposed methodology.
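The dependence on source separation described above follows the classic monopole-pair result: with a primary monopole and an optimally driven secondary monopole a distance d apart, the minimum total radiated power relative to the primary alone is 1 - sinc²(kd), which vanishes as d goes to zero (the "trivial" collocated case). A small sketch of that relation (this is the standard textbook result, not a formula taken from the paper):

```python
import math

def min_power_ratio(k, d):
    """Minimum total radiated power, relative to the primary monopole
    alone, when a secondary monopole at distance d is driven optimally:
    1 - sinc^2(k*d). Approaches 0 as d -> 0 (perfect cancellation) and
    1 as k*d grows (no useful global control)."""
    if d == 0:
        return 0.0
    x = k * d
    return 1.0 - (math.sin(x) / x) ** 2
```

The virtual collocated source created by the PBFS is, in these terms, a way of driving kd toward zero without physically stacking the loudspeakers.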
Recording and Analysis of Bowel Sounds.
Zaborski, Daniel; Halczak, Miroslaw; Grzesiak, Wilhelm; Modrzejewski, Andrzej
2015-01-01
The aim of this study was to construct an electronic bowel sound recording system and determine its usefulness for the diagnosis of appendicitis, mechanical ileus and diffuse peritonitis. A group of 67 subjects aged 17 to 88 years including 15 controls was examined. Bowel sounds were recorded using an electret microphone placed on the right side of the hypogastrium and connected to a laptop computer. The method of adjustable grids (converted into binary matrices) was used for bowel sounds analysis. Significantly, fewer (p ≤ 0.05) sounds were found in the mechanical ileus (1004.4) and diffuse peritonitis (466.3) groups than in the controls (2179.3). After superimposing adjustable binary matrices on combined sounds (interval between sounds <0.01 s), significant relationships (p ≤ 0.05) were found between particular positions in the matrices (row-column) and the patient groups. These included the A1_T1 and A1_T2 positions and mechanical ileus as well as the A1_T2 and A1_T4 positions and appendicitis. For diffuse peritonitis, significant positions were A5_T4 and A1_T4. Differences were noted in the number of sounds and binary matrices in the groups of patients with acute abdominal diseases. Certain features of bowel sounds characteristic of individual abdominal diseases were indicated. BS: bowel sound; APP: appendicitis; IL: mechanical ileus; PE: diffuse peritonitis; CG: control group; NSI: number of sound impulses; NCI: number of combined sound impulses; MBS: mean bit-similarity; TMIN: minimum time between impulses; TMAX: maximum time between impulses; TMEAN: mean time between impulses. Zaborski D, Halczak M, Grzesiak W, Modrzejewski A. Recording and Analysis of Bowel Sounds. Euroasian J Hepato-Gastroenterol 2015;5(2):67-73.
Christensen, Christian Bech; Christensen-Dalsgaard, Jakob; Brandt, Christian; Madsen, Peter Teglberg
2012-01-15
Snakes lack both an outer ear and a tympanic middle ear, which in most tetrapods provide impedance matching between the air and inner ear fluids and hence improve pressure hearing in air. Snakes would therefore be expected to have very poor pressure hearing and generally be insensitive to airborne sound, whereas the connection of the middle ear bone to the jaw bones in snakes should confer acute sensitivity to substrate vibrations. Some studies have nevertheless claimed that snakes are quite sensitive to both vibration and sound pressure. Here we test the two hypotheses that: (1) snakes are sensitive to sound pressure and (2) snakes are sensitive to vibrations, but cannot hear the sound pressure per se. Vibration and sound-pressure sensitivities were quantified by measuring brainstem evoked potentials in 11 royal pythons, Python regius. Vibrograms and audiograms showed greatest sensitivity at low frequencies of 80-160 Hz, with sensitivities of -54 dB re. 1 m s(-2) and 78 dB re. 20 μPa, respectively. To investigate whether pythons detect sound pressure or sound-induced head vibrations, we measured the sound-induced head vibrations in three dimensions when snakes were exposed to sound pressure at threshold levels. In general, head vibrations induced by threshold-level sound pressure were equal to or greater than those induced by threshold-level vibrations, and therefore sound-pressure sensitivity can be explained by sound-induced head vibration. From this we conclude that pythons, and possibly all snakes, lost effective pressure hearing with the complete reduction of a functional outer and middle ear, but have an acute vibration sensitivity that may be used for communication and detection of predators and prey.
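The two thresholds above use different dB references (1 m s⁻² for vibration, 20 µPa for sound pressure), but converting between linear amplitude and level is the same 20 log10 rule in both cases. A quick sketch (the worked numbers below are derived from the 78 dB threshold quoted in the abstract):

```python
import math

def level_db(value, reference):
    """Level in dB of a linear field quantity (pressure in Pa,
    acceleration in m/s^2, ...) relative to a reference value,
    using the 20 * log10 convention for field quantities."""
    return 20 * math.log10(value / reference)

# 78 dB re 20 uPa corresponds to this sound pressure in pascals:
pressure = 20e-6 * 10 ** (78 / 20)
check = level_db(pressure, 20e-6)   # round-trips back to 78 dB
```

Note that the vibration threshold of -54 dB re 1 m s⁻² is negative simply because the measured acceleration is below the 1 m s⁻² reference.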
Encoding of sound envelope transients in the auditory cortex of juvenile rats and adult rats.
Lu, Qi; Jiang, Cuiping; Zhang, Jiping
2016-02-01
Accurate neural processing of time-varying sound amplitude and spectral information is vital for species-specific communication. During postnatal development, cortical processing of sound frequency undergoes progressive refinement; however, it is not clear whether cortical processing of sound envelope transients also undergoes age-related changes. We determined the dependence of neural response strength and first-spike latency on sound rise-fall time across sound levels in the primary auditory cortex (A1) of juvenile (P20-P30) and adult (8-10 weeks) rats. A1 neurons were categorized as "all-pass", "short-pass", or "mixed" ("all-pass" at high sound levels to "short-pass" at lower sound levels) based on the normalized response strength vs. rise-fall time functions across sound levels. The proportions of A1 neurons within each of the three categories in juvenile rats were similar to those in adult rats. In general, with increasing rise-fall time, the average response strength decreased and the average first-spike latency increased in A1 neurons of both groups. At a given sound level and rise-fall time, the average normalized neural response strength did not differ significantly between the two age groups. However, A1 neurons in juvenile rats showed greater absolute response strength and longer first-spike latency compared to those in adult rats. In addition, at a constant sound level, the average first-spike latency of juvenile A1 neurons was more sensitive to changes in rise-fall time. Our results demonstrate the dependence of the responses of rat A1 neurons on sound rise-fall time, and suggest that response latency exhibits some age-related changes in the cortical representation of sound envelope rise time. Copyright © 2015 Elsevier Ltd. All rights reserved.
Underwater Sound Propagation from Marine Pile Driving.
Reyff, James A
2016-01-01
Pile driving occurs in a variety of nearshore environments that typically have very shallow-water depths. The propagation of pile-driving sound in water is complex, where sound is directly radiated from the pile as well as through the ground substrate. Piles driven in the ground near water bodies can produce considerable underwater sound energy. This paper presents examples of sound propagation through shallow-water environments. Some of these examples illustrate the substantial variation in sound amplitude over time that can be critical to understand when computing an acoustic-based safety zone for aquatic species.
Common sole larvae survive high levels of pile-driving sound in controlled exposure experiments.
Bolle, Loes J; de Jong, Christ A F; Bierman, Stijn M; van Beek, Pieter J G; van Keeken, Olvin A; Wessels, Peter W; van Damme, Cindy J G; Winter, Hendrik V; de Haan, Dick; Dekeling, René P A
2012-01-01
In view of the rapid extension of offshore wind farms, there is an urgent need to improve our knowledge on possible adverse effects of underwater sound generated by pile-driving. Mortality and injuries have been observed in fish exposed to loud impulse sounds, but knowledge on the sound levels at which (sub-)lethal effects occur is limited for juvenile and adult fish, and virtually non-existent for fish eggs and larvae. A device was developed in which fish larvae can be exposed to underwater sound. It consists of a rigid-walled cylindrical chamber driven by an electro-dynamical sound projector. Samples of up to 100 larvae can be exposed simultaneously to a homogeneously distributed sound pressure and particle velocity field. Recorded pile-driving sounds could be reproduced accurately in the frequency range between 50 and 1000 Hz, at zero to peak pressure levels up to 210 dB re 1µPa(2) (zero to peak pressures up to 32 kPa) and single pulse sound exposure levels up to 186 dB re 1µPa(2)s. The device was used to examine lethal effects of sound exposure in common sole (Solea solea) larvae. Different developmental stages were exposed to various levels and durations of pile-driving sound. The highest cumulative sound exposure level applied was 206 dB re 1µPa(2)s, which corresponds to 100 strikes at a distance of 100 m from a typical North Sea pile-driving site. The results showed no statistically significant differences in mortality between exposure and control groups at sound exposure levels which were well above the US interim criteria for non-auditory tissue damage in fish. Although our findings cannot be extrapolated to fish larvae in general, as interspecific differences in vulnerability to sound exposure may occur, they do indicate that previous assumptions and criteria may need to be revised.
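The cumulative figure quoted in the abstract follows from simple energy summation over strikes, SEL_cum = SEL_single + 10 log10(N): 186 dB re 1 µPa²s per strike plus 10 log10(100) = 20 dB gives 206 dB. As a one-line check (assuming identical strikes, as the summation rule does):

```python
import math

def cumulative_sel(single_strike_sel_db, n_strikes):
    """Cumulative sound exposure level for N identical strikes,
    assuming simple energy summation (add 10 * log10(N) dB)."""
    return single_strike_sel_db + 10 * math.log10(n_strikes)

sel_100 = cumulative_sel(186.0, 100)   # as in the sole-larvae study
```

Doubling the number of strikes therefore adds only about 3 dB, which is why exposure criteria are often framed in cumulative rather than per-strike terms.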
Tanaka, Kazunori; Ogawa, Munehiro; Inagaki, Yusuke; Tanaka, Yasuhito; Nishikawa, Hitoshi; Hattori, Koji
2017-05-01
The Lachman test is clinically considered to be a reliable physical examination for anterior cruciate ligament (ACL) deficiency. However, the test involves subjective judgement of differences in tibial translation and endpoint quality. An auscultation system has been developed to allow assessment of the Lachman test. The knee joint sound during the Lachman test was analyzed using fast Fourier transformation. The purpose of the present study was to quantitatively evaluate knee joint sounds in healthy and ACL-deficient human knees. Sixty healthy volunteers and 24 patients with ACL injury were examined. The Lachman test with joint auscultation was evaluated using a microphone. Knee joint sound during the Lachman test (Lachman sound) was analyzed by fast Fourier transformation. As quantitative indices of the Lachman sound, the peak sound (Lachman peak sound) as the maximum relative amplitude (acoustic pressure) and its frequency were used. In healthy volunteers, the mean Lachman peak sound of intact knees was 100.6 Hz in frequency and -45 dB in acoustic pressure. Moreover, a sex difference was found in the frequency of the Lachman peak sound. In patients with ACL injury, the frequency of the Lachman peak sound of the ACL-deficient knees was widely dispersed. In the ACL-deficient knees, the mean Lachman peak sound was 306.8 Hz in frequency and -63.1 dB in acoustic pressure. If the reference range was set at the frequency of the healthy volunteer Lachman peak sound, the sensitivity, specificity, positive predictive value, and negative predictive value were 83.3%, 95.6%, 95.2%, and 85.2%, respectively. Knee joint auscultation during the Lachman test was capable of judging ACL deficiency on the basis of objective data. In particular, the frequency of the Lachman peak sound was able to assess ACL condition. Copyright © 2016 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.
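The "Lachman peak sound" measure above is the largest spectral component of the recorded joint sound. With NumPy, the extraction step can be sketched as follows (the signal here is a synthetic test tone, and the relative-amplitude definition is an assumption for illustration; the paper does not specify its exact reference):

```python
import numpy as np

def peak_sound(signal, fs):
    """Return (frequency_hz, relative_amplitude_db) of the largest
    FFT component, analogous to the 'Lachman peak sound' measure."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    k = int(np.argmax(spectrum[1:])) + 1   # skip the DC bin
    level = 20 * np.log10(spectrum[k] / spectrum[1:].sum())
    return freqs[k], level

fs = 2000
t = np.arange(fs) / fs                     # 1 s of signal
sig = np.sin(2 * np.pi * 100 * t)          # 100 Hz test tone
f_peak, level = peak_sound(sig, fs)
```

With a 1 s window the FFT bins fall on integer frequencies, so the 100 Hz tone lands exactly on one bin; real joint-sound recordings would show a broader peak.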
Schouten, Ben; Troje, Nikolaus F.; Vroomen, Jean; Verfaillie, Karl
2011-01-01
Background The focus in the research on biological motion perception traditionally has been restricted to the visual modality. Recent neurophysiological and behavioural evidence, however, supports the idea that actions are not represented merely visually but rather audiovisually. The goal of the present study was to test whether the perceived in-depth orientation of depth-ambiguous point-light walkers (plws) is affected by the presentation of looming or receding sounds synchronized with the footsteps. Methodology/Principal Findings In Experiment 1 orthographic frontal/back projections of plws were presented either without sound or with sounds of which the intensity level was rising (looming), falling (receding) or stationary. Despite instructions to ignore the sounds and to only report the visually perceived in-depth orientation, plws accompanied with looming sounds were more often judged to be facing the viewer whereas plws paired with receding sounds were more often judged to be facing away from the viewer. To test whether the effects observed in Experiment 1 act at a perceptual level rather than at the decisional level, in Experiment 2 observers perceptually compared orthographic plws without sound or paired with either looming or receding sounds to plws without sound but with perspective cues making them objectively either facing towards or facing away from the viewer. Judging whether either an orthographic plw or a plw with looming (receding) perspective cues is visually most looming becomes harder (easier) when the orthographic plw is paired with looming sounds. Conclusions/Significance The present results suggest that looming and receding sounds alter the judgements of the in-depth orientation of depth-ambiguous point-light walkers. 
While looming sounds are demonstrated to act at a perceptual level and make plws look more looming, it remains a challenge for future research to clarify at what level in the processing hierarchy receding sounds affect how observers judge the in-depth perception of plws. PMID:21373181
Plastic modes of listening: affordance in constructed sound environments
NASA Astrophysics Data System (ADS)
Sjolin, Anders
This thesis is concerned with how the ecological approach to perception, with the inclusion of listening modes, informs the creation of sound art installations, referred to in this thesis as constructed sound environments. The thesis is based on practice-based research; the aim of the written part of this PhD project has been to critically investigate the area of sound art in order to map various approaches to participating in and listening to a constructed sound environment. The main areas have been the notion of affordance as coined by James J. Gibson (1986), listening modes as coined by Pierre Schaeffer (1966) and further developed by Michel Chion (1994), aural architects as coined by Blesser and Salter (2007), and the holistic approach to understanding sound art developed by Brandon LaBelle (2006). The findings within the written part of the thesis, based on a qualitative analysis, have informed the practice, which has resulted in artefacts in the form of seven constructed sound environments that also function as case studies for further analysis. The aim of the practice has been to exemplify the methodology, strategy and progress behind the organisation and construction of sound environments. The research concerns point towards the acknowledgment of affordance as the crucial factor in understanding a constructed sound environment. The affordance approach governs the idea that perceiving a sound environment is a top-down process in which the autonomic quality of a constructed sound environment is based upon the perception of structures of the sound material and their relationship with speaker placement and surrounding space. This enables a researcher to sidestep the conflicting poles of musical/abstract and non-musical/realistic classification of sound elements and regard these poles as included, not separated, elements in the analysis of a constructed sound environment.
2012-01-01
Background Many Ophidiidae are active in dark environments and display complex sonic apparatus morphologies. However, sound recordings are scarce and little is known about acoustic communication in this family. This paper focuses on Ophidion rochei, which is known to display an important sexual dimorphism in the swimbladder and anterior skeleton. The aims of this study were to compare the sound producing morphology, and the resulting sounds, in juveniles, females and males of O. rochei. Results Males, females, and juveniles possessed different morphotypes. Females and juveniles differed dramatically from males in the morphology of their sonic muscles, swimbladder, supraoccipital crest, and first vertebrae and associated ribs. Further, they lacked the ‘rocker bone’ typically found in males. Sounds from each morphotype were highly divergent. Males generally produced non-harmonic, multiple-pulsed sounds that lasted for several seconds (3.5 ± 1.3 s) with a pulse period of ca. 100 ms. Juvenile and female sounds were recorded for the first time in ophidiids. Female sounds were harmonic, had a shorter pulse period (±3.7 ms), and never exceeded a few dozen milliseconds (18 ± 11 ms). Moreover, unlike male sounds, female sounds did not have alternating long and short pulse periods. Juvenile sounds were weaker but appear to be similar to female sounds. Conclusions Although it is not possible to distinguish males from females externally in O. rochei, their sonic apparatus and sounds are dramatically different. This difference is likely due to their nocturnal habits, which may have favored the evolution of internal secondary sexual characters that help to distinguish males from females and that could facilitate mate choice by females. Moreover, the comparison of different morphotypes in this study shows that these morphological differences result from a peramorphosis that takes place during the development of the gonads. PMID:23217241
NASA Astrophysics Data System (ADS)
Ishikawa, K.; Yatabe, K.; Ikeda, Y.; Oikawa, Y.; Onuma, T.; Niwa, H.; Yoshii, M.
2017-02-01
Imaging of sound aids the understanding of acoustical phenomena such as propagation, reflection, and diffraction, which is strongly required for various acoustical applications. Imaging of sound is commonly done using a microphone array, whereas optical methods have recently attracted interest due to their contactless nature. Optical measurement of sound utilizes the phase modulation of light caused by sound: since light propagating through a sound field changes its phase in proportion to the sound pressure, optical phase measurement techniques can be used for sound measurement. Several methods, including laser Doppler vibrometry and the Schlieren method, have been proposed for this purpose. However, the sensitivities of these methods become lower as the frequency of sound decreases. In contrast, since the sensitivity of the phase-shifting technique does not depend on the frequency of sound, that technique is suitable for imaging sounds in the low-frequency range. The principle of imaging of sound using parallel phase-shifting interferometry was reported by the authors (K. Ishikawa et al., Optics Express, 2016). The measurement system consists of a high-speed polarization camera made by Photron Ltd. and a polarization interferometer. This paper reviews the principle briefly and demonstrates the high-speed imaging of acoustical phenomena. The results suggest that the proposed system can be applied to various industrial problems in acoustical engineering.
Accuracy of assessing the level of impulse sound from distant sources.
Wszołek, Tadeusz; Kłaczyński, Maciej
2007-01-01
Impulse sound events are characterised by ultra high pressures and low frequencies. Lower frequency sounds are generally less attenuated over a given distance in the atmosphere than higher frequencies. Thus, impulse sounds can be heard over greater distances and will be more affected by the environment. To calculate a long-term average immission level it is necessary to apply weighting factors like the probability of the occurrence of each weather condition during the relevant time period. This means that when measuring impulse noise at a long distance it is necessary to follow environmental parameters in many points along the way sound travels and also to have a database of sound transfer functions in the long term. The paper analyses the uncertainty of immission measurement results of impulse sound from cladding and destroying explosive materials. The influence of environmental conditions on the way sound travels is the focus of this paper.
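In noise assessment practice, the weighted long-term average the abstract refers to is usually an energy average of the per-condition immission levels, weighted by the probability of each weather condition. A minimal sketch of that computation (the level and probability values used in the test are illustrative, not taken from the study):

```python
import math

def long_term_level(levels_db, probabilities):
    """Energy-average immission levels over weather classes, each class
    weighted by its probability of occurrence in the relevant period."""
    if abs(sum(probabilities) - 1.0) > 1e-9:
        raise ValueError("probabilities must sum to 1")
    # Convert each level to linear energy, take the weighted mean,
    # and convert back to decibels.
    mean_energy = sum(p * 10.0 ** (l / 10.0)
                      for l, p in zip(levels_db, probabilities))
    return 10.0 * math.log10(mean_energy)
```

Because the average is taken on an energy scale, occasional loud conditions dominate: two equally likely conditions of 60 and 70 dB average to about 67.4 dB, not 65 dB.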
Characteristic sounds facilitate visual search
Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru
2009-01-01
In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing “meow” did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds. PMID:18567253
The Development of Infants’ use of Property-poor Sounds to Individuate Objects
Wilcox, Teresa; Smith, Tracy R.
2010-01-01
There is evidence that infants as young as 4.5 months use property-rich but not property-poor sounds as the basis for individuating objects (Wilcox et al., 2006). The current research sought to identify the age at which infants demonstrate the capacity to use property-poor sounds. Using the task of Wilcox et al., infants aged 7 and 9 months were tested. The results revealed that 9- but not 7-month-olds demonstrated sensitivity to property-poor sounds (electronic tones) in an object individuation task. Additional results confirmed that the younger infants were sensitive to property-rich sounds (rattle sounds). These are the first positive results obtained with property-poor sounds in infants and lay the foundation for future research to identify the underlying basis for the developmental hierarchy favoring property-rich over property-poor sounds and possible mechanisms for change. PMID:20701977
NASA Astrophysics Data System (ADS)
Zeng, Xi; Mizuno, Yosuke; Nakamura, Kentaro
2017-12-01
The sound intensity vector provides useful information on the state of an ultrasonic field in water, since sound intensity is a vector quantity expressing the direction and magnitude of the sound field. In the previous studies on sound intensity measurement in water, conventional piezoelectric sensors and metal cables were used, and the transmission distance was limited. A new configuration of a sound intensity probe suitable for ultrasonic measurement in water is proposed and constructed for trial in this study. The probe consists of light-emitting diodes and piezoelectric elements, and the output signals are transmitted through fiber optic cables as intensity-modulated light. Sound intensity measurements of a 26 kHz ultrasonic field in water are demonstrated. The difference in the intensity vector state between the water tank with and without sound-absorbing material on its walls was successfully observed.
Airborne sound transmission loss characteristics of wood-frame construction
NASA Astrophysics Data System (ADS)
Rudder, F. F., Jr.
1985-03-01
This report summarizes the available data on the airborne sound transmission loss properties of wood-frame construction and evaluates the methods for predicting the airborne sound transmission loss. The first part of the report comprises a summary of sound transmission loss data for wood-frame interior walls and floor-ceiling construction. Data bases describing the sound transmission loss characteristics of other building components, such as windows and doors, are discussed. The second part of the report presents the prediction of the sound transmission loss of wood-frame construction. Appropriate calculation methods are described both for single-panel and for double-panel construction with sound absorption material in the cavity. With available methods, single-panel construction and double-panel construction with the panels connected by studs may be adequately characterized. Technical appendices are included that summarize laboratory measurements, compare measurement with theory, describe details of the prediction methods, and present sound transmission loss data for common building materials.
Noise Reduction in Breath Sound Files Using Wavelet Transform Based Filter
NASA Astrophysics Data System (ADS)
Syahputra, M. F.; Situmeang, S. I. G.; Rahmat, R. F.; Budiarto, R.
2017-04-01
The development of science and technology in the field of healthcare increasingly provides convenience in diagnosing respiratory system problems. Recording breath sounds is one example of these developments. Breath sounds are recorded using a digital stethoscope and then stored in a sound-format file. These breath sounds are analyzed by health practitioners to diagnose the symptoms of disease or illness. However, breath sounds are not free from interference signals. Therefore, a noise filter, or signal interference reduction system, is required so that the breath sound component containing the information signal can be clarified. In this study, we designed a wavelet transform based filter using a Daubechies wavelet with four wavelet transform coefficients. Based on testing with ten types of breath sound data, the largest SNR, 74.3685 dB, was obtained for bronchial sounds.
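As a rough illustration of the filtering approach described (not the authors' implementation), the sketch below applies a one-level periodic wavelet transform with the four-tap Daubechies filter and soft-thresholds the detail band; the single decomposition level and the thresholding rule are assumptions:

```python
import numpy as np

# Four-tap Daubechies (db2) analysis filters, assuming periodic extension.
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))  # low-pass
g = h[::-1] * np.array([1, -1, 1, -1])  # high-pass (quadrature mirror)

def dwt_step(x):
    """One level of the periodic db2 DWT: returns (approximation, detail)."""
    n = len(x)
    a, d = np.zeros(n // 2), np.zeros(n // 2)
    for i in range(n // 2):
        for k in range(4):
            a[i] += h[k] * x[(2 * i + k) % n]
            d[i] += g[k] * x[(2 * i + k) % n]
    return a, d

def idwt_step(a, d):
    """Inverse of dwt_step (transpose of the orthogonal analysis matrix)."""
    n = 2 * len(a)
    x = np.zeros(n)
    for i in range(len(a)):
        for k in range(4):
            x[(2 * i + k) % n] += h[k] * a[i] + g[k] * d[i]
    return x

def denoise(x, thresh):
    """Soft-threshold the detail band, keep the approximation band."""
    a, d = dwt_step(x)
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
    return idwt_step(a, d)

def snr_db(clean, estimate):
    """Signal-to-noise ratio in decibels, as reported in the study."""
    return 10.0 * np.log10(np.sum(clean**2) / np.sum((estimate - clean)**2))
```

With a zero threshold the transform reconstructs the signal exactly, which is a convenient sanity check before tuning the threshold on real recordings.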
NEW THORACIC MURMURS, WITH TWO NEW INSTRUMENTS, THE REFRACTOSCOPE AND THE PARTIAL STETHOSCOPE
Parker, Frederick D.
1918-01-01
1. An understanding of the physics of sound is essential for a better comprehension of refined auscultation, tone analysis, and the use of these instruments. 2. The detection of variations of the third heart sound should prove a valuable aid in predicting mitral disease. 3. The variations of the outflow sound should prove a valuable aid in determining early aortic lesions with the type of accompanying intimal changes. 4. The character of chamber timbre as distinct from loudness heard as the first and second heart sounds denotes more often the condition of heart muscle, and must not be confounded with valvular disease. 5. The full significance of sound shadows is uncertain. Cardiac sound shadows appear normally in the right axilla and below the left clavicle. Their mode of production is quite clear. 6. Both the third heart sound and the outflow sound may be heard with the ordinary stethoscope. PMID:19868281
Dimensions of vehicle sounds perception.
Wagner, Verena; Kallus, K Wolfgang; Foehl, Ulrich
2017-10-01
Vehicle sounds play an important role in customer satisfaction and can serve as a differentiating factor between brands. With an online survey of 1762 German and American customers, the requirement characteristics of high-quality vehicle sounds were determined. On the basis of these characteristics, a requirement profile was generated for every analyzed sound. These profiles were investigated in a second study with 78 customers using real vehicles. The assessment results of the vehicle sounds can be represented using the dimensions "timbre", "loudness", and "roughness/sharpness". The comparison of the requirement profiles and the assessment results shows that sounds perceived as pleasant and high-quality more often correspond to the requirement profile. High-quality sounds are characterized by being rather gentle, soft and reserved, rich, a bit dark, and not too rough. For those sounds assessed worse by the customers, recommendations for improvement can be derived. Copyright © 2017 Elsevier Ltd. All rights reserved.
Presystolic tricuspid valve closure: an alternative mechanism of diastolic sound genesis.
Lee, C H; Xiao, H B; Gibson, D G
1990-01-01
We describe a previously unrecognised cause of an added diastolic heart sound. The patient had first-degree heart block and diastolic tricuspid regurgitation, leading to presystolic closure of the tricuspid valve and the production of a loud diastolic sound. Unlike previously described mechanisms for diastolic sounds, this sound was generated by the sudden acceleration of retrograde AV flow in late diastole.
NASA Astrophysics Data System (ADS)
Mironov, M. A.
2011-11-01
A method of allowing for the spatial sound field structure in designing the sound-absorbing structures for turbojet aircraft engine ducts is proposed. The acoustic impedance of a duct should be chosen so as to prevent the reflection of the primary sound field, which is generated by the sound source in the absence of the duct, from the duct walls.
Assessment of sound quality perception in cochlear implant users during music listening.
Roy, Alexis T; Jiradejvong, Patpong; Carver, Courtney; Limb, Charles J
2012-04-01
Although cochlear implant (CI) users frequently report deterioration of sound quality when listening to music, few methods exist to quantify these subjective claims. The aims were: 1) to design a novel research method for quantifying sound quality perception in CI users during music listening; and 2) to validate this method by assessing one attribute of music perception, bass frequency perception, which is hypothesized to be relevant to overall musical sound quality perception. Limitations in bass frequency perception are hypothesized to contribute to CI-mediated sound quality deteriorations. The proposed method quantifies this deterioration by measuring CI users' impaired ability to make sound quality discriminations among musical stimuli with variable amounts of bass frequency removal. A method commonly used in the audio industry (multiple stimulus with hidden reference and anchor [MUSHRA]) was adapted for CI users, referred to as CI-MUSHRA. CI users and normal-hearing controls were presented with 7 sound quality versions of a musical segment: 5 high-pass filter cutoff versions (200, 400, 600, 800, and 1000 Hz) with decreasing amounts of bass information, an unaltered version ("hidden reference"), and a highly altered version (1,000-1,200 Hz band-pass filter; "anchor"). Participants provided sound quality ratings between 0 (very poor) and 100 (excellent) for each version; ratings reflected differences in perceived sound quality among stimuli. CI users had greater difficulty making overall sound quality discriminations as a function of bass frequency loss than normal-hearing controls, as demonstrated by a significantly weaker correlation between bass frequency content and sound quality ratings. In particular, CI users could not perceive sound quality differences among stimuli missing up to 400 Hz of bass frequency information. Bass frequency impairments contribute to sound quality deteriorations during music listening for CI users.
CI-MUSHRA provided a systematic and quantitative assessment of this reduced sound quality. Although the effects of bass frequency removal were studied here, we advocate CI-MUSHRA as a user-friendly and versatile research tool to measure the effects of a wide range of acoustic manipulations on sound quality perception in CI users.
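To illustrate how such band-limited stimulus versions can be generated, here is a minimal sketch using a first-order high-pass filter as a stand-in; the study's actual filter design is not specified in the abstract, so the filter order and implementation are assumptions:

```python
import numpy as np

def highpass(x, fs, cutoff_hz):
    """First-order high-pass filter (discrete RC differentiator).
    Attenuates content below cutoff_hz, passing higher frequencies."""
    rc = 1.0 / (2.0 * np.pi * cutoff_hz)
    dt = 1.0 / fs
    alpha = rc / (rc + dt)
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y

def rms(x):
    return np.sqrt(np.mean(x**2))
```

Applying `highpass` with cutoffs of 200 to 1000 Hz to the same musical segment would yield a graded family of bass-removed versions analogous to the CI-MUSHRA stimuli, though the study's filters were presumably steeper than first order.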
Characterizing the 3-D atmosphere with NUCAPS sounding products from multiple platforms
NASA Astrophysics Data System (ADS)
Barnet, C. D.; Smith, N.; Gambacorta, A.; Wheeler, A. A.; Sjoberg, W.; Goldberg, M.
2017-12-01
The JPSS Proving Ground and Risk Reduction (PGRR) Program launched the Sounding Initiative in 2014 to develop operational applications that use 3-D satellite soundings. These are near-global daily swaths of vertical atmospheric profiles of temperature, moisture and trace gas species. When high vertical resolution satellite soundings first became available, their assimilation into user applications was slow: forecasters familiar with 2-D satellite imagery or 1-D radiosondes did not have the technical capability or product knowledge to readily ingest satellite soundings. Similarly, the satellite sounding developer community lacked the wherewithal to understand the many challenges forecasters face in their real-time decision-making. It took the PGRR Sounding Initiative to bring these two communities together and develop novel applications that now depend on NUCAPS soundings. NUCAPS - the NOAA Unique Combined Atmospheric Processing System - is platform agnostic and generates satellite soundings from measurements made by infrared and microwave sounder pairs on the MetOp (IASI/AMSU) and Suomi NPP (CrIS/ATMS) polar-orbiting platforms. We highlight here three new applications developed under the PGRR Sounding Initiative: (i) aviation: NUCAPS identifies cold air "blobs" that cause jet fuel to freeze; (ii) severe weather: NUCAPS identifies areas of convective initiation; and (iii) air quality: NUCAPS identifies stratospheric intrusions and tracks long-range transport of biomass burning plumes. The value of NUCAPS being platform agnostic will become apparent with the JPSS-1 launch. NUCAPS soundings from Suomi NPP and JPSS-1, being about 50 min apart, could capture fast-changing weather events and, together with NUCAPS soundings from the two MetOp platforms (about 4 hours earlier in the day than JPSS), could characterize diurnal cycles. 
In this paper, we will summarize key accomplishments and assess whether NUCAPS maintains enough continuity in its sounding products from multiple platforms to sufficiently characterize atmospheric evolution at localized scales. With this we will address one of the primary data requirements that emerged in the Sounding Initiative, namely the need for a time sequence of satellite sounding products.
Ruhland, Janet L.; Yin, Tom C. T.; Tollin, Daniel J.
2013-01-01
Sound localization accuracy in elevation can be affected by sound spectrum alteration. Correspondingly, any stimulus manipulation that causes a change in the peripheral representation of the spectrum may degrade localization ability in elevation. The present study examined the influence of sound duration and level on localization performance in cats with the head unrestrained. Two cats were trained using operant conditioning to indicate the apparent location of a sound via gaze shift, which was measured with a search-coil technique. Overall, neither sound level nor duration had a notable effect on localization accuracy in azimuth, except at near-threshold levels. In contrast, localization accuracy in elevation improved as sound duration increased, and sound level also had a large effect on localization in elevation. For short-duration noise, the performance peaked at intermediate levels and deteriorated at low and high levels; for long-duration noise, this “negative level effect” at high levels was not observed. Simulations based on an auditory nerve model were used to explain the above observations and to test several hypotheses. Our results indicated that neither the flatness of sound spectrum (before the sound reaches the inner ear) nor the peripheral adaptation influences spectral coding at the periphery for localization in elevation, whereas neural computation that relies on “multiple looks” of the spectral analysis is critical in explaining the effect of sound duration, but not level. The release of negative level effect observed for long-duration sound could not be explained at the periphery and, therefore, is likely a result of processing at higher centers. PMID:23657278
Harris, Debra D
2015-01-01
Three flooring materials, terrazzo, rubber, and carpet tile, in patient unit corridors were compared for absorption of sound, comfort, light reflectance, employee perceptions and preferences, and patient satisfaction. Environmental stressors, such as noise and ergonomic factors, affect healthcare workers and patients, contributing to increased fatigue, anxiety and stress, and decreased productivity, patient safety, and satisfaction. A longitudinal comparative cohort study comparing three types of flooring assessed sound levels, healthcare worker responses, and patient Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) ratings over 42 weeks. A linear mixed model analysis was conducted to determine significant differences between the means for participant responses and objective sound meter data during all three phases of the study. A significant difference was found for sound levels between flooring types for equivalent continuous sound levels. Carpet tile performed better for sound attenuation by absorption, reducing sound levels by 3.14 dBA. Preferences for flooring materials changed over the course of the study. The HCAHPS ratings aligned with the sound meter data, showing that patients perceived the noise levels to be lower with carpet tiles, improving patient satisfaction ratings. Perceptions of healthcare staff and patients were aligned with the sound meter data. Carpet tile provides sound absorption that affects sound levels and influences occupants' perceptions of environmental factors that contribute to the quality of the indoor environment. Flooring that provides comfort underfoot, easy cleanability, and sound absorption influences healthcare worker job satisfaction and patient satisfaction with their patient experience. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Zuo, Zhifeng; Maekawa, Hiroshi
2014-02-01
The interaction between a moderate-strength shock wave and a near-wall vortex is studied numerically by solving the two-dimensional, unsteady compressible Navier-Stokes equations using a weighted compact nonlinear scheme with a simple low-dissipation advection upstream splitting method for flux splitting. Our main purpose is to clarify the development of the flow field and the generation of sound waves resulting from the interaction. The effects of the vortex-wall distance on the sound generation associated with variations in the flow structures are also examined. The computational results show that three sound sources are involved in this problem: (i) a quadrupolar sound source due to the shock-vortex interaction; (ii) a dipolar sound source due to the vortex-wall interaction; and (iii) a dipolar sound source due to unsteady wall shear stress. The sound field is the combination of the sound waves produced by all three sound sources. In addition to the interaction of the incident shock with the vortex, a secondary shock-vortex interaction is caused by the reflection of the reflected shock (MR2) from the wall. The flow field is dominated by the primary and secondary shock-vortex interactions. The generation mechanism of the newly discovered third sound, due to the MR2-vortex interaction, is presented. The pressure variations generated by (ii) become significant with decreasing vortex-wall distance. The sound waves caused by (iii) are extremely weak compared with those caused by (i) and (ii) and are negligible in the computed sound field.
L-type calcium channels refine the neural population code of sound level
Grimsley, Calum Alex; Green, David Brian
2016-01-01
The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. PMID:27605536
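The multiplicative gain change described for monotonic RLFs can be sketched with a toy sigmoidal rate-level function; all parameter values and the 1.5x gain factor below are illustrative assumptions, not the study's measurements:

```python
import numpy as np

def rlf(level_db, threshold=20.0, slope=0.15, max_rate=100.0):
    """Toy monotonic rate-level function: firing rate (spikes/s)
    rising sigmoidally with sound level (dB SPL)."""
    return max_rate / (1.0 + np.exp(-slope * (level_db - threshold)))

levels = np.arange(0.0, 90.0, 10.0)
control = rlf(levels)          # RLF with CaL blocked (hypothetical baseline)
with_cal = 1.5 * control       # CaL scaling rates multiplicatively
```

Under a purely multiplicative change the shape of the RLF is preserved while the maximum rate grows, which is one way to visualize "multiplying the gain" as opposed to an additive shift that would move the whole curve up by a constant.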
An Inexpensive and Versatile Version of Kundt's Tube for Measuring the Speed of Sound in Air
NASA Astrophysics Data System (ADS)
Papacosta, Pangratios; Linscheid, Nathan
2016-01-01
Experiments that measure the speed of sound in air are common in high schools and colleges. In the Kundt's tube experiment, a horizontal air column is adjusted until a resonance mode is achieved for a specific frequency of sound. When this happens, the cork dust in the tube is disturbed at the displacement antinode regions. The location of the displacement antinodes enables the measurement of the wavelength of the sound that is being used. This paper describes a design that uses a speaker instead of the traditional aluminum rod as the sound source. This allows the use of multiple sound frequencies that yield a much more accurate speed of sound in air.
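The underlying calculation is simple: adjacent displacement antinodes sit half a wavelength apart, so the wavelength is twice the antinode spacing and v = f * lambda. A one-function sketch:

```python
def speed_of_sound(frequency_hz, antinode_spacing_m):
    """Kundt's tube: dust piles mark displacement antinodes, which are
    half a wavelength apart, so wavelength = 2 * spacing and v = f * wavelength."""
    return frequency_hz * 2.0 * antinode_spacing_m
```

Driving the tube with several frequencies from the speaker and averaging the resulting estimates is what makes this variant more accurate than the single-frequency aluminum-rod version.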
Real time speech formant analyzer and display
Holland, George E.; Struve, Walter S.; Homer, John F.
1987-01-01
A speech analyzer for interpretation of sound includes a sound input which converts the sound into a signal representing the sound. The signal is passed through a plurality of frequency pass filters to derive a plurality of frequency formants. These formants are converted to voltage signals by frequency-to-voltage converters and then are prepared for visual display in continuous real time. Parameters from the inputted sound are also derived and displayed. The display may then be interpreted by the user. The preferred embodiment includes a microprocessor which is interfaced with a television set for display of the sound formants. The microprocessor software enables the sound analyzer to present a variety of display modes for interpretive and therapeutic use by the user.
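In software terms, the patent's filter-bank stage can be approximated by summing spectral energy over formant-range bands; the band edges below are illustrative assumptions, not the patent's values:

```python
import numpy as np

# Rough formant regions (Hz) for F1, F2, F3 -- illustrative assumptions.
BANDS = ((200, 900), (900, 2300), (2300, 3500))

def band_levels(frame, fs):
    """Energy in each formant band of one windowed speech frame,
    a software analogue of the patent's bank of frequency pass filters."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    return [spec[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in BANDS]

def dominant_band(frame, fs):
    """Reduce each frame to a single scalar per band, analogous to the
    frequency-to-voltage stage feeding the real-time display."""
    return int(np.argmax(band_levels(frame, fs)))
```

Plotting `band_levels` frame by frame would give a crude real-time formant display of the kind the patent drives on a television set.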
Real time speech formant analyzer and display
Holland, G.E.; Struve, W.S.; Homer, J.F.
1987-02-03
A speech analyzer for interpretation of sound includes a sound input which converts the sound into a signal representing the sound. The signal is passed through a plurality of frequency pass filters to derive a plurality of frequency formants. These formants are converted to voltage signals by frequency-to-voltage converters and then are prepared for visual display in continuous real time. Parameters from the inputted sound are also derived and displayed. The display may then be interpreted by the user. The preferred embodiment includes a microprocessor which is interfaced with a television set for display of the sound formants. The microprocessor software enables the sound analyzer to present a variety of display modes for interpretive and therapeutic use by the user. 19 figs.
Ladich, Friedrich
2014-10-01
Bony fishes have evolved a diversity of sound generating mechanisms and produce a variety of sounds. In contrast to sound generating mechanisms, which are lacking in several taxa, all fish species possess inner ears for sound detection. Fishes may also have various accessory structures, such as auditory ossicles, to improve hearing. The distribution of sound generating mechanisms and accessory hearing structures among fishes indicates that acoustic communication was not the driving force in their evolution. It is proposed here that different constraints influenced hearing and sound production during fish evolution: certain life history traits (territoriality, mate attraction) in the case of sound generating mechanisms, and adaptation to different soundscapes (ambient noise conditions) in the case of accessory hearing structures (Ecoacoustical constraints hypothesis). Copyright © 2014 Elsevier Ltd. All rights reserved.
75 FR 76079 - Sound Incentive Compensation Guidance
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-07
... DEPARTMENT OF THE TREASURY Office of Thrift Supervision Sound Incentive Compensation Guidance... on the following information collection. Title of Proposal: Sound Incentive Compensation Guidance... Sound Compensation Practices adopted by the Financial Stability Board (FSB) in April 2009, as well as...
A description of externally recorded womb sounds in human subjects during gestation
Daland, Robert; Kesavan, Kalpashri; Macey, Paul M.; Zeltzer, Lonnie; Harper, Ronald M.
2018-01-01
Objective Reducing environmental noise benefits premature infants in neonatal intensive care units (NICU), but excessive reduction may lead to sensory deprivation, compromising development. Instead of minimal noise levels, environments that mimic intrauterine soundscapes may facilitate infant development by providing a sound environment reflecting fetal life. This soundscape may support autonomic and emotional development in preterm infants. We aimed to assess the efficacy and feasibility of external non-invasive recordings in pregnant women, endeavoring to capture intra-abdominal or womb sounds during pregnancy with electronic stethoscopes and build a womb sound library to assess sound trends with gestational development. We also compared these sounds to popular commercial womb sounds marketed to new parents. Study design Intra-abdominal sounds from 50 mothers in their second and third trimester (13 to 40 weeks) of pregnancy were recorded for 6 minutes in a quiet clinic room with 4 electronic stethoscopes, placed in the right upper and lower quadrants, and left upper and lower quadrants of the abdomen. These recordings were partitioned into 2-minute intervals in three different positions: standing, sitting and lying supine. Maternal and gestational age, Body Mass Index (BMI) and time since last meal were collected during recordings. Recordings were analyzed using long-term average spectral and waveform analysis, and compared to sounds from non-pregnant abdomens and commercially-marketed womb sounds selected for their availability, popularity, and claims they mimic the intrauterine environment. Results Maternal sounds shared certain common characteristics, but varied with gestational age. With fetal development, the maternal abdomen filtered high (500–5,000 Hz) and mid-frequency (100–500 Hz) energy bands, but no change appeared in contributions from low-frequency signals (10–100 Hz) with gestational age. 
Variation appeared between mothers, suggesting a resonant chamber role for intra-abdominal space. Compared to commercially-marketed sounds, womb signals were dominated by bowel sounds, were of lower frequency, and showed more variation in intensity. Conclusions High-fidelity intra-abdominal or womb sounds during pregnancy can be recorded non-invasively. Recordings vary with gestational age, and show a predominance of low frequency noise and bowel sounds which are distinct from popular commercial products. Such recordings may be utilized to determine whether sounds influence preterm infant development in the NICU. PMID:29746604
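The band-wise long-term average spectral analysis can be sketched as follows; computing a single whole-recording magnitude spectrum is an assumption for illustration, not necessarily the authors' exact method:

```python
import numpy as np

def band_energy(signal, fs, bands=((10, 100), (100, 500), (500, 5000))):
    """Fraction of total spectral energy in low-, mid-, and high-frequency
    bands (Hz), estimated from the magnitude spectrum of the recording."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    total = spec.sum()
    return [spec[(freqs >= lo) & (freqs < hi)].sum() / total
            for lo, hi in bands]
```

Tracking these three fractions against gestational age would reproduce the trend reported here: mid- and high-band fractions fall with fetal development while the low-frequency fraction stays roughly constant.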
A description of externally recorded womb sounds in human subjects during gestation.
Parga, Joanna J; Daland, Robert; Kesavan, Kalpashri; Macey, Paul M; Zeltzer, Lonnie; Harper, Ronald M
2018-01-01
Reducing environmental noise benefits premature infants in neonatal intensive care units (NICU), but excessive reduction may lead to sensory deprivation, compromising development. Instead of minimal noise levels, environments that mimic intrauterine soundscapes may facilitate infant development by providing a sound environment reflecting fetal life. This soundscape may support autonomic and emotional development in preterm infants. We aimed to assess the efficacy and feasibility of external non-invasive recordings in pregnant women, endeavoring to capture intra-abdominal or womb sounds during pregnancy with electronic stethoscopes and build a womb sound library to assess sound trends with gestational development. We also compared these sounds to popular commercial womb sounds marketed to new parents. Intra-abdominal sounds from 50 mothers in their second and third trimester (13 to 40 weeks) of pregnancy were recorded for 6 minutes in a quiet clinic room with 4 electronic stethoscopes, placed in the right upper and lower quadrants, and left upper and lower quadrants of the abdomen. These recordings were partitioned into 2-minute intervals in three different positions: standing, sitting and lying supine. Maternal and gestational age, Body Mass Index (BMI) and time since last meal were collected during recordings. Recordings were analyzed using long-term average spectral and waveform analysis, and compared to sounds from non-pregnant abdomens and commercially-marketed womb sounds selected for their availability, popularity, and claims they mimic the intrauterine environment. Maternal sounds shared certain common characteristics, but varied with gestational age. With fetal development, the maternal abdomen filtered high (500-5,000 Hz) and mid-frequency (100-500 Hz) energy bands, but no change appeared in contributions from low-frequency signals (10-100 Hz) with gestational age.
Variation appeared between mothers, suggesting a resonant chamber role for intra-abdominal space. Compared to commercially-marketed sounds, womb signals were dominated by bowel sounds, were of lower frequency, and showed more variation in intensity. High-fidelity intra-abdominal or womb sounds during pregnancy can be recorded non-invasively. Recordings vary with gestational age, and show a predominance of low frequency noise and bowel sounds which are distinct from popular commercial products. Such recordings may be utilized to determine whether sounds influence preterm infant development in the NICU. PMID:29746604
Analysis of sound pressure levels emitted by children's toys.
Sleifer, Pricila; Gonçalves, Maiara Santos; Tomasi, Marinês; Gomes, Erissandra
2013-06-01
To measure the sound pressure levels emitted by non-certified children's toys. Cross-sectional study of sound-producing toys available at popular retail stores of the so-called informal sector. Electronic, mechanical, and musical toys were analyzed. The measurement of each product was carried out by an acoustic engineer in an acoustically isolated booth using a decibel meter. To obtain the sound parameters of intensity and frequency, the toys were set to produce sounds at distances of 10 and 50 cm from the researcher's ear. The intensity of sound pressure [dB(A)] and the frequency in hertz (Hz) were measured. A total of 48 toys were evaluated. The mean sound pressure level at 10 cm from the ear was 102±10 dB(A), and at 50 cm, 94±8 dB(A), with p<0.05. The sound pressure level emitted by the majority of toys was above 85 dB(A). The frequency ranged from 413 to 6,635 Hz, with 56.3% of toys emitting frequencies higher than 2,000 Hz. The majority of toys assessed in this research emitted a high sound pressure level.
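As a rough sketch of the arithmetic behind such measurements: the dB(A) figures above are logarithmic levels relative to the 20 µPa hearing-threshold reference. Assuming ideal free-field point-source spreading (an assumption for illustration; a toy held near the ear is not a free-field source), the expected drop from 10 cm to 50 cm can be computed:

```python
import math

P_REF = 20e-6  # reference pressure in air, 20 micropascals

def spl_db(p_rms):
    """Sound pressure level in dB re 20 uPa."""
    return 20 * math.log10(p_rms / P_REF)

def pressure_from_spl(spl):
    """Inverse of spl_db: rms pressure in pascals."""
    return P_REF * 10 ** (spl / 20)

def spl_at_distance(spl_ref, r_ref, r):
    """Free-field point source: -20*log10(r/r_ref), i.e. -6 dB per doubling."""
    return spl_ref - 20 * math.log10(r / r_ref)

# An ideal point source would drop ~14 dB from 10 cm to 50 cm; the toys
# in the study showed ~8 dB, consistent with near-field effects.
print(round(spl_at_distance(102, 0.10, 0.50), 1))  # ≈ 88.0
```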
Stridulatory sound-production and its function in females of the cicada Subpsaltria yangi.
Luo, Changqing; Wei, Cong
2015-01-01
Acoustic behavior plays a crucial role in many aspects of cicada biology, such as reproduction and intrasexual competition. Although female sound production has been reported in some cicada species, the acoustic behavior of female cicadas has received little attention. In the cicada Subpsaltria yangi, females possess a pair of unusually well-developed stridulatory organs. Here, sound production and its function in females of this remarkable species were investigated. We revealed that females produce sounds by a stridulatory mechanism during pair formation, and that these sounds elicit both acoustic and phonotactic responses from males. In addition, the forewings strike the body during stridulatory sound-producing movements, generating impact sounds. Acoustic playback experiments indicated that the impact sounds play no role in the behavioral context of pair formation. This study provides the first experimental evidence that females of a cicada species can generate sounds by a stridulatory mechanism. We anticipate that our results will promote acoustic studies on females of other cicada species that also possess a stridulatory system.
Captive Bottlenose Dolphins Do Discriminate Human-Made Sounds Both Underwater and in the Air.
Lima, Alice; Sébilleau, Mélissa; Boye, Martin; Durand, Candice; Hausberger, Martine; Lemasson, Alban
2018-01-01
Bottlenose dolphins (Tursiops truncatus) spontaneously emit individual acoustic signals that identify them to group members. We tested whether these cetaceans could learn artificial individual sound cues played underwater and whether they would generalize this learning to airborne sounds. Dolphins are thought to perceive only underwater sounds and their training depends largely on visual signals. We investigated the behavioral responses of seven dolphins in a group to learned human-made individual sound cues, played underwater and in the air. Dolphins recognized their own sound cue after hearing it underwater as they immediately moved toward the source, whereas when it was airborne they gazed more at the source of their own sound cue but did not approach it. We hypothesize that they perhaps detected modifications of the sound induced by air or were confused by the novelty of the situation, but nevertheless recognized they were being "targeted." They did not respond when hearing another group member's cue in either situation. This study provides further evidence that dolphins respond to individual-specific sounds and that these marine mammals possess some capacity for processing airborne acoustic signals.
Active localization of virtual sounds
NASA Technical Reports Server (NTRS)
Loomis, Jack M.; Hebert, C.; Cicinelli, J. G.
1991-01-01
We describe a virtual sound display built around a 12 MHz 80286 microcomputer and special purpose analog hardware. The display implements most of the primary cues for sound localization in the ear-level plane. Static information about direction is conveyed by interaural time differences and, for frequencies above 1800 Hz, by head sound shadow (interaural intensity differences) and pinna sound shadow. Static information about distance is conveyed by variation in sound pressure (first power law) for all frequencies, by additional attenuation in the higher frequencies (simulating atmospheric absorption), and by the proportion of direct to reverberant sound. When the user actively locomotes, the changing angular position of the source occasioned by head rotations provides further information about direction and the changing angular velocity produced by head translations (motion parallax) provides further information about distance. Judging both from informal observations by users and from objective data obtained in an experiment on homing to virtual and real sounds, we conclude that simple displays such as this are effective in creating the perception of external sounds to which subjects can home with accuracy and ease.
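The localization cues listed above lend themselves to simple closed-form sketches. Below is a hedged illustration using the standard spherical-head (Woodworth) approximation for interaural time difference and the inverse-distance pressure law; the display's actual analog implementation is not described in enough detail to reproduce, so the head radius and formulas here are conventional textbook assumptions:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air
HEAD_RADIUS = 0.0875    # m, a typical adult value (assumed)

def itd_woodworth(azimuth_deg):
    """Woodworth approximation of interaural time difference (seconds)."""
    az = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (az + math.sin(az))

def distance_attenuation_db(r, r_ref=1.0):
    """Inverse-distance (first power) pressure law: -6 dB per doubling."""
    return -20 * math.log10(r / r_ref)

print(round(itd_woodworth(90) * 1e6))          # ≈ 656 microseconds
print(round(distance_attenuation_db(2.0), 1))  # -6.0
```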
Neuro-cognitive aspects of "OM" sound/syllable perception: A functional neuroimaging study.
Kumar, Uttam; Guleria, Anupam; Khetrapal, Chunni Lal
2015-01-01
The sound "OM" is believed to bring mental peace and calm. The cortical activation associated with listening to the sound "OM", in contrast to a similar non-meaningful sound (TOM) and to a meaningful Hindi word (AAM), was investigated using functional magnetic resonance imaging (fMRI). The behavior-interleaved gradients technique was employed in order to avoid interference from scanner noise. The results reveal that listening to the "OM" sound, in contrast to the meaningful Hindi word condition, activates areas of the bilateral cerebellum, left middle frontal gyrus (dorsolateral middle frontal/BA 9), right precuneus (BA 5) and right supramarginal gyrus (SMG). Listening to the "OM" sound in contrast to the non-meaningful sound condition leads to cortical activation in the bilateral middle frontal (BA 9), right middle temporal (BA 37), right angular gyrus (BA 40), right SMG and right superior middle frontal gyrus (BA 8). The conjunction analysis reveals that the common neural regions activated when listening to the "OM" sound in both conditions are the middle frontal (left dorsolateral middle frontal cortex) and right SMG. The results correspond to the fact that listening to the "OM" sound recruits neural systems implicated in emotional empathy.
Sound source localization identification accuracy: Envelope dependencies.
Yost, William A
2017-07-01
Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.
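The sinusoidally amplitude-modulated (SAM) stimuli referenced above follow a standard construction; a minimal sketch with generic parameters (not the study's exact stimulus generation or calibration):

```python
import numpy as np

def sam_tone(fc, fm, dur, fs=44100, depth=1.0):
    """Sinusoidally amplitude-modulated (SAM) tone: carrier fc, modulator fm."""
    t = np.arange(int(dur * fs)) / fs
    carrier = np.sin(2 * np.pi * fc * t)
    envelope = 1.0 + depth * np.sin(2 * np.pi * fm * t)
    # Normalize by (1 + depth) so the peak stays within [-1, 1]
    return envelope / (1.0 + depth) * carrier

# 4 kHz carrier with 64 Hz modulation, as in one of the conditions above
sig = sam_tone(fc=4000, fm=64, dur=0.5)
print(sig.shape)  # (22050,)
```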
Vermeulen, L; Van de Perre, V; Permentier, L; De Bie, S; Verbeke, G; Geers, R
2016-06-01
This study investigates the relationship between sound levels, pre-slaughter handling during loading, and pork quality. Pre-slaughter variables were investigated from loading to slaughter. A total of 3213 pigs were measured 30 min post-mortem for pH(30LT) (M. longissimus thoracis). First, a sound level model for the risk of developing PSE meat was established. The difference between maximum and mean sound level during loading, mean sound level during lairage, and mean sound level prior to stunning remained significant within the model. This indicates that sound levels during loading add significant value to earlier sound models. Moreover, this study completed the global classification checklist (Vermeulen et al., 2015a) by developing a linear mixed model for pH(30LT) and PSE prevalence, with the difference between maximum and mean sound level measured during loading, the feed withdrawal period, and the difference in temperature during loading and lairage. Hence, this study provides new insights over previous research, where loading procedures were not included.
Exploring positive hospital ward soundscape interventions.
Mackrill, J; Jennings, P; Cain, R
2014-11-01
Sound is often considered a negative aspect of an environment that needs mitigating, particularly in hospitals. It is worthwhile, however, to consider how subjective responses to hospital sounds can be made more positive. The authors identified natural sound, steady-state sound, and written sound source information as having the potential to do this. Listening evaluations were conducted with 24 participants who rated their emotional (Relaxation) and cognitive (Interest and Understanding) responses to a variety of hospital ward soundscape clips across these three interventions. A repeated measures ANOVA revealed that the 'Relaxation' response was significantly affected (η² = 0.05, p = 0.001) by the interventions, with natural sound producing a 10.1% more positive response. Most interestingly, written sound source information produced a 4.7% positive change in response. The authors conclude that exploring different ways to improve the sounds of a hospital offers subjective benefits that move beyond sound level reduction. This is an area for future work to focus upon in an effort to achieve more positively experienced hospital soundscapes and environments.
Georgia Basin-Puget Sound Airshed Characterization Report 2014
The Georgia Basin - Puget Sound Airshed Characterization Report, 2012 was undertaken to characterize the air quality within the Georgia Basin/Puget Sound region, a vibrant, rapidly growing, urbanized area of the Pacific Northwest. The Georgia Basin - Puget Sound Airshed Characteri...
ERIC Educational Resources Information Center
Amrani, D.
2013-01-01
This paper deals with the comparison of sound speed measurements in air using two types of sensor that are widely employed in physics and engineering education, namely a pressure sensor and a sound sensor. A computer-based laboratory with pressure and sound sensors was used to carry out measurements of air through a 60 ml syringe. The fast Fourier…
Sound Explorations from the Ages of 10 to 37 Months: The Ontogenesis of Musical Conducts
ERIC Educational Resources Information Center
Delalande, Francois; Cornara, Silvia
2010-01-01
One of the forms of first musical conduct is the exploration of sound sources. When young children produce sounds with any object, these sounds may surprise them and so they make the sounds again--not exactly the same, but introducing some variation. A process of repetition with slight changes is set in motion which can be analysed, as did Piaget,…
Series expansions of rotating two and three dimensional sound fields.
Poletti, M A
2010-12-01
The cylindrical and spherical harmonic expansions of oscillating sound fields rotating at a constant rate are derived. These expansions are a generalized form of the stationary sound field expansions. The derivations are based on the representation of interior and exterior sound fields using the simple source approach and determination of the simple source solutions with uniform rotation. Numerical simulations of rotating sound fields are presented to verify the theory.
Monitoring the Ocean Using High Frequency Ambient Sound
2008-10-01
even identify specific groups within the resident killer whale type (Puget Sound Southern Resident pods J, K and L) because these groups have... particular, the different populations of killer whales in the NE Pacific Ocean. This has been accomplished by detecting transient sounds with short... high sea state (the sound of spray), general shipping - close and distant, clanking and whale calls and clicking. These sound sources form the basis
NASA Technical Reports Server (NTRS)
Atwal, Mahabir S.; Heitman, Karen E.; Crocker, Malcolm J.
1986-01-01
The validity of the room equation of Crocker and Price (1982) for predicting the cabin interior sound pressure level was experimentally tested using a specially constructed setup for simultaneous measurements of transmitted sound intensity and interior sound pressure levels. Using measured values of the reverberation time and transmitted intensities, the equation was used to predict the space-averaged interior sound pressure level for three different fuselage conditions. The general agreement between the room equation and experimental test data is considered good enough for this equation to be used for preliminary design studies.
Prediction of light aircraft interior sound pressure level using the room equation
NASA Technical Reports Server (NTRS)
Atwal, M.; Bernhard, R.
1984-01-01
The room equation is investigated for predicting the interior sound pressure level. The method makes use of an acoustic power balance, equating the net power flow into the cabin volume to the power dissipated within the cabin using the room equation. The sound power level transmitted through the panels was calculated by multiplying the measured space-averaged transmitted intensity for each panel by its surface area. The sound pressure level was obtained by summing the mean square sound pressures radiated from each panel. The data obtained supported the room equation model in predicting the cabin interior sound pressure level.
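As a hedged sketch of the power balance described above: in a diffuse reverberant field, the room equation relates the transmitted sound power W to the mean-square interior pressure through the room constant R = S·ᾱ/(1 − ᾱ). The surface area, absorption coefficient, and power below are illustrative assumptions, not values from the paper, and only the reverberant-field term is shown:

```python
import math

RHO_C = 415.0  # characteristic impedance of air (rayls), approx. at 20 C
P_REF = 20e-6  # reference pressure, 20 uPa

def room_constant(surface_area, alpha_bar):
    """Room constant R = S*a / (1 - a) from the mean absorption coefficient."""
    return surface_area * alpha_bar / (1.0 - alpha_bar)

def interior_spl(sound_power_w, surface_area, alpha_bar):
    """Reverberant-field SPL from the room equation: <p^2> = 4*rho*c*W / R."""
    R = room_constant(surface_area, alpha_bar)
    p_sq = 4.0 * RHO_C * sound_power_w / R
    return 10 * math.log10(p_sq / P_REF ** 2)

# e.g. 1 mW transmitted into a cabin of 10 m^2 with mean absorption 0.2
print(round(interior_spl(1e-3, 10.0, 0.2), 1))  # ≈ 92.2 dB
```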
Toward Inverse Control of Physics-Based Sound Synthesis
NASA Astrophysics Data System (ADS)
Pfalz, A.; Berdahl, E.
2017-05-01
Long Short-Term Memory networks (LSTMs) can be trained to realize inverse control of physics-based sound synthesizers. Physics-based sound synthesizers simulate the laws of physics to produce output sound according to input gesture signals. When a user's gestures are measured in real time, she or he can use them to control physics-based sound synthesizers, thereby creating simulated virtual instruments. An intriguing question is how to program a computer to learn to play such physics-based models. This work demonstrates that LSTMs can be trained to accomplish this inverse control task with four physics-based sound synthesizers.
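A generic LSTM cell step (not the trained networks from this work) illustrates the gated recurrence that makes such inverse mappings learnable. The weight packing below is one common convention, assumed here for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. W, U, b stack the input (i), forget (f),
    output (o), and candidate (g) gates along the first axis."""
    n = h.shape[0]
    z = W @ x + U @ h + b
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2*n]), sigmoid(z[2*n:3*n])
    g = np.tanh(z[3*n:])
    c_new = f * c + i * g          # update the long-term cell state
    h_new = o * np.tanh(c_new)     # expose a gated view as the output
    return h_new, c_new

# Tiny smoke test: 2-dim input, 3-dim hidden state, random weights
rng = np.random.default_rng(0)
W, U, b = rng.normal(size=(12, 2)), rng.normal(size=(12, 3)), np.zeros(12)
h, c = np.zeros(3), np.zeros(3)
h, c = lstm_step(rng.normal(size=2), h, c, W, U, b)
print(h.shape, c.shape)  # (3,) (3,)
```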
AVE-SESAME IV: 25 mb sounding data
NASA Technical Reports Server (NTRS)
Sienkiewicz, M. E.; Gilchrist, L. P.; Turner, R. E.
1980-01-01
The rawinsonde sounding program for the AVE-SESAME 4 experiment is described, and tabulated data at 25 mb for the 23 National Weather Service and 20 special stations participating in the experiment are presented. Soundings were taken at 3 hr intervals beginning at 1200 GMT on May 9, 1979, and ending at 1200 GMT on May 10, 1979 (nine sounding times). The method of processing is discussed, estimates of the rms errors in the data are presented, and an example of contact data is given. Reasons are given for the termination of soundings below 100 mb, and soundings that exhibit abnormal characteristics are listed.
Kastelein, Ronald A; van Heerden, Dorianne; Gransier, Robin; Hoek, Lean
2013-12-01
The high underwater sound pressure levels (SPLs) produced during pile driving to build offshore wind turbines may affect harbor porpoises. To estimate the discomfort threshold of pile driving sounds, a porpoise in a quiet pool was exposed to playbacks (46 strikes/min) at five SPLs (6 dB steps: 130-154 dB re 1 μPa). The spectrum of the impulsive sound resembled the spectrum of pile driving sound at tens of kilometers from the pile driving location in shallow water such as that found in the North Sea. The animal's behavior during test and baseline periods was compared. At and above a received broadband SPL of 136 dB re 1 μPa [zero-peak sound pressure level: 151 dB re 1 μPa; t90: 126 ms; sound exposure level of a single strike (SELss): 127 dB re 1 μPa²·s] the porpoise's respiration rate increased in response to the pile driving sounds. At higher levels, he also jumped out of the water more often. Wild porpoises are expected to move tens of kilometers away from offshore pile driving locations; response distances will vary with context, the sounds' source level, parameters influencing sound propagation, and background noise levels.
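The SEL and SPL metrics quoted above are linked by signal duration: for a steady signal, SEL = SPL + 10·log10(duration/1 s). A minimal sketch of the definitions, using the underwater 1 µPa reference and an illustrative 1 Pa rms tone (not the study's pile driving playbacks):

```python
import numpy as np

P_REF = 1e-6  # 1 micropascal, the underwater reference pressure

def sel_db(pressure, fs):
    """Sound exposure level: 10*log10( integral(p^2 dt) / (p_ref^2 * 1 s) )."""
    exposure = np.sum(pressure ** 2) / fs  # Pa^2 * s
    return 10 * np.log10(exposure / (P_REF ** 2 * 1.0))

def spl_rms_db(pressure):
    """Broadband rms sound pressure level in dB re 1 uPa."""
    return 20 * np.log10(np.sqrt(np.mean(pressure ** 2)) / P_REF)

fs = 48000
t = np.arange(fs) / fs                           # exactly 1 s of signal
p = np.sqrt(2) * np.sin(2 * np.pi * 500 * t)     # 1 Pa rms, 500 Hz tone
print(round(float(spl_rms_db(p)), 1))  # 120.0 dB re 1 uPa
print(round(float(sel_db(p, fs)), 1))  # 120.0 dB re 1 uPa^2 s (1 s duration)
```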
NASA Technical Reports Server (NTRS)
Seybert, A. F.; Wu, X. F.; Oswald, Fred B.
1992-01-01
Analytical and experimental validation of methods to predict structural vibration and radiated noise are presented. A rectangular box excited by a mechanical shaker was used as a vibrating structure. Combined finite element method (FEM) and boundary element method (BEM) models of the apparatus were used to predict the noise radiated from the box. The FEM was used to predict the vibration, and the surface vibration was used as input to the BEM to predict the sound intensity and sound power. Vibration predicted by the FEM model was validated by experimental modal analysis. Noise predicted by the BEM was validated by sound intensity measurements. Three types of results are presented for the total radiated sound power: (1) sound power predicted by the BEM modeling using vibration data measured on the surface of the box; (2) sound power predicted by the FEM/BEM model; and (3) sound power measured by a sound intensity scan. The sound power predicted from the BEM model using measured vibration data yields an excellent prediction of radiated noise. The sound power predicted by the combined FEM/BEM model also gives a good prediction of radiated noise except for a shift of the natural frequencies that are due to limitations in the FEM model.
Kuwada, Shigeyuki; Bishop, Brian; Kim, Duck O.
2012-01-01
The major functions of the auditory system are recognition (what is the sound) and localization (where is the sound). Although each of these has received considerable attention, rarely are they studied in combination. Furthermore, the stimuli used in the bulk of studies did not represent sound location in real environments and ignored the effects of reverberation. Another ignored dimension is the distance of a sound source. Finally, there is a scarcity of studies conducted in unanesthetized animals. We illustrate a set of efficient methods that overcome these shortcomings. We use the virtual auditory space (VAS) method to efficiently present sounds at different azimuths, different distances and in different environments. Additionally, this method allows for efficient switching between binaural and monaural stimulation and alteration of acoustic cues singly or in combination to elucidate neural mechanisms underlying localization and recognition. Such procedures cannot be performed with real sound field stimulation. Our research is designed to address the following questions: Are inferior colliculus (IC) neurons specialized to process what and where auditory information? How do reverberation and distance of the sound source affect this processing? How do IC neurons represent sound source distance? Are neural mechanisms underlying envelope processing binaural or monaural? PMID:22754505
An auditory analog of the picture superiority effect.
Crutcher, Robert J; Beer, Jenay M
2011-01-01
Previous research has found that pictures (e.g., a picture of an elephant) are remembered better than words (e.g., the word "elephant"), an empirical finding called the picture superiority effect (Paivio & Csapo. Cognitive Psychology 5(2):176-206, 1973). However, very little research has investigated such memory differences for other types of sensory stimuli (e.g. sounds or odors) and their verbal labels. Four experiments compared recall of environmental sounds (e.g., ringing) and spoken verbal labels of those sounds (e.g., "ringing"). In contrast to earlier studies that have shown no difference in recall of sounds and spoken verbal labels (Philipchalk & Rowe. Journal of Experimental Psychology 91(2):341-343, 1971; Paivio, Philipchalk, & Rowe. Memory & Cognition 3(6):586-590, 1975), the experiments reported here yielded clear evidence for an auditory analog of the picture superiority effect. Experiments 1 and 2 showed that sounds were recalled better than the verbal labels of those sounds. Experiment 2 also showed that verbal labels are recalled as well as sounds when participants imagine the sound that the word labels. Experiments 3 and 4 extended these findings to incidental-processing task paradigms and showed that the advantage of sounds over words is enhanced when participants are induced to label the sounds.
Chen, Xuhai; Yang, Jianfeng; Gan, Shuzhen; Yang, Yufang
2012-01-01
Although its role is frequently stressed in acoustic profile for vocal emotion, sound intensity is frequently regarded as a control parameter in neurocognitive studies of vocal emotion, leaving its role and neural underpinnings unclear. To investigate these issues, we asked participants to rate the angry level of neutral and angry prosodies before and after sound intensity modification in Experiment 1, and recorded electroencephalogram (EEG) for mismatching emotional prosodies with and without sound intensity modification and for matching emotional prosodies while participants performed emotional feature or sound intensity congruity judgment in Experiment 2. It was found that sound intensity modification had significant effect on the rating of angry level for angry prosodies, but not for neutral ones. Moreover, mismatching emotional prosodies, relative to matching ones, induced enhanced N2/P3 complex and theta band synchronization irrespective of sound intensity modification and task demands. However, mismatching emotional prosodies with reduced sound intensity showed prolonged peak latency and decreased amplitude in N2/P3 complex and smaller theta band synchronization. These findings suggest that though it cannot categorically affect emotionality conveyed in emotional prosodies, sound intensity contributes to emotional significance quantitatively, implying that sound intensity should not simply be taken as a control parameter and its unique role needs to be specified in vocal emotion studies. PMID:22291928
Pectoral sound generation in the blue catfish Ictalurus furcatus.
Mohajer, Yasha; Ghahramani, Zachary; Fine, Michael L
2015-03-01
Catfishes produce pectoral stridulatory sounds by "jerk" movements that rub ridges on the dorsal process against the cleithrum. We recorded sound synchronized with high-speed video to investigate the hypothesis that blue catfish Ictalurus furcatus produce sounds by a slip-stick mechanism, previously described only in invertebrates. Blue catfish produce a variably paced series of sound pulses during abduction sweeps (pulsers) although some individuals (sliders) form longer duration sound units (slides) interspersed with pulses. Typical pulser sounds are evoked by short 1-2 ms movements with a rotation of 2°-3°. Jerks excite sounds that increase in amplitude after motion stops, suggesting constructive interference, which decays before the next jerk. Longer contact of the ridges produces a more steady-state sound in slides. Pulse pattern during stridulation is determined by pauses without movement: the spine moves during about 14 % of the abduction sweep in pulsers (~45 % in sliders) although movement appears continuous to the human eye. Spine rotation parameters do not predict pulse amplitude, but amplitude correlates with pause duration suggesting that force between the dorsal process and cleithrum increases with longer pauses. Sound production, stimulated by a series of rapid movements that set the pectoral girdle into resonance, is caused by a slip-stick mechanism.
Mooney, T Aran; Samson, Julia E; Schlunk, Andrea D; Zacarias, Samantha
2016-07-01
Sound is an abundant cue in the marine environment, yet we know little regarding the frequency range and levels which induce behavioral responses in ecologically key marine invertebrates. Here we address the range of sounds that elicit unconditioned behavioral responses in the squid Doryteuthis pealeii, the types of responses generated, and how responses change over multiple sound exposures. A variety of response types were evoked, from inking and jetting to body pattern changes and fin movements. Squid responded to sounds from 80 to 1000 Hz, with response rates diminishing at the higher and lower ends of this frequency range. Animals responded to the lowest sound levels in the 200-400 Hz range. Inking, an escape response, was confined to the lower frequencies and highest sound levels; jetting was more widespread. Response latencies were variable but typically occurred after 0.36 s (mean) for jetting and 0.14 s for body pattern changes; pattern changes occurred significantly faster. These results demonstrate that squid can exhibit a range of behavioral responses to sound, including fleeing, deimatic and protean behaviors, all of which are associated with predator evasion. Response types were frequency and sound level dependent, reflecting a relative-loudness aspect of sound perception in squid.
Bevelhimer, Mark S; Deng, Z Daniel; Scherelis, Constantin
2016-01-01
Underwater noise associated with the installation and operation of hydrokinetic turbines in rivers and tidal zones presents a potential environmental concern for fish and marine mammals. Comparing the spectral quality of sounds emitted by hydrokinetic turbines to natural and other anthropogenic sound sources is an initial step at understanding potential environmental impacts. Underwater recordings were obtained from passing vessels and natural underwater sound sources in static and flowing waters. Static water measurements were taken in a lake with minimal background noise. Flowing water measurements were taken at a previously proposed deployment site for hydrokinetic turbines on the Mississippi River, where sounds created by flowing water are part of all measurements, both natural ambient and anthropogenic sources. Vessel sizes ranged from a small fishing boat with 60 hp outboard motor to an 18-unit barge train being pushed upstream by tugboat. As expected, large vessels with large engines created the highest sound levels, which were, on average, 40 dB greater than the sound created by an operating hydrokinetic turbine. A comparison of sound levels from the same sources at different distances using both spherical and cylindrical sound attenuation functions suggests that spherical model results more closely approximate observed sound attenuation.
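The spherical and cylindrical attenuation models compared above differ only in their spreading exponent (20·log10 r vs. 10·log10 r). A brief sketch; the 170 dB source level is an assumed illustrative value, not a measurement from the study:

```python
import math

def tl_spherical(r, r_ref=1.0):
    """Spherical spreading: 20*log10(r/r_ref) dB of transmission loss."""
    return 20 * math.log10(r / r_ref)

def tl_cylindrical(r, r_ref=1.0):
    """Cylindrical spreading (shallow-water limit): 10*log10(r/r_ref)."""
    return 10 * math.log10(r / r_ref)

def received_level(source_level, r, model=tl_spherical):
    """Received level = source level minus model transmission loss."""
    return source_level - model(r)

# A hypothetical 170 dB re 1 uPa source heard at 100 m under each model:
print(received_level(170, 100))                  # 130.0
print(received_level(170, 100, tl_cylindrical))  # 150.0
```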
A New Mechanism of Sound Generation in Songbirds
NASA Astrophysics Data System (ADS)
Goller, Franz; Larsen, Ole N.
1997-12-01
Our current understanding of the sound-generating mechanism in the songbird vocal organ, the syrinx, is based on indirect evidence and theoretical treatments. The classical avian model of sound production postulates that the medial tympaniform membranes (MTM) are the principal sound generators. We tested the role of the MTM in sound generation and studied the songbird syrinx more directly by filming it endoscopically. After we surgically incapacitated the MTM as a vibratory source, zebra finches and cardinals were not only able to vocalize, but sang nearly normal song. This result shows clearly that the MTM are not the principal sound source. The endoscopic images of the intact songbird syrinx during spontaneous and brain stimulation-induced vocalizations illustrate the dynamics of syringeal reconfiguration before phonation and suggest a different model for sound production. Phonation is initiated by rostrad movement and stretching of the syrinx. At the same time, the syrinx is closed through movement of two soft tissue masses, the medial and lateral labia, into the bronchial lumen. Sound production always is accompanied by vibratory motions of both labia, indicating that these vibrations may be the sound source. However, because of the low temporal resolution of the imaging system, the frequency and phase of labial vibrations could not be assessed in relation to that of the generated sound. Nevertheless, in contrast to the previous model, these observations show that both labia contribute to aperture control and strongly suggest that they play an important role as principal sound generators.
NASA Astrophysics Data System (ADS)
Lau, S. F.; Zainulabidin, M. H.; Yahya, M. N.; Zaman, I.; Azmir, N. A.; Madlan, M. A.; Ismon, M.; Kasron, M. Z.; Ismail, A. E.
2017-10-01
Giving a room proper acoustic treatment is both art and science. Acoustic design brings comfort to the built environment and reduces noise levels through the use of sound absorbers. A room needs acoustic treatment with installed absorbers in order to decrease reverberant sound. However, absorbers are usually expensive to purchase and install, and there is no system for locating the optimum number and placement of them. Over-treating a room with absorbers is wasteful, while treating it with insufficient absorbers yields improper treatment. This study aims to determine the amount of sound absorbers needed and the optimum placement of those absorbers in order to reduce the overall sound pressure level in a specified room, using ANSYS APDL software. The required absorber area is found to be 11 m² by using the Sabine equation, and different unit sets of absorbers, each with the same total area, are applied on the walls to investigate the best configurations. All three sets (a single absorber, 11 absorbers, and 44 absorbers) successfully treated the room by reducing the overall sound pressure level. The greatest reduction was achieved by 44 absorbers evenly distributed around the walls, which lowered the overall sound pressure level by as much as 24.2 dB; the least effective configuration was the single absorber, which lowered it by 18.4 dB.
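The Sabine relation used above to size the absorber area can be sketched as follows (the room volume, target reverberation time, and absorption coefficient in the examples are placeholder inputs; the abstract does not give the room's dimensions):

```python
def sabine_rt60(volume_m3, absorption_m2_sabins):
    """Sabine reverberation time: RT60 = 0.161 * V / A, with A in m^2 sabins."""
    return 0.161 * volume_m3 / absorption_m2_sabins

def required_absorber_area(volume_m3, target_rt60_s, alpha):
    """Absorber area needed to hit a target RT60, given the material's
    average absorption coefficient alpha (0 < alpha <= 1)."""
    required_sabins = 0.161 * volume_m3 / target_rt60_s
    return required_sabins / alpha
```

Because the Sabine equation depends only on total absorption, any set of panels with the same total area predicts the same RT60; the differing sound pressure level reductions reported for the three configurations come from placement effects that the equation does not capture.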
Assessment of sound levels in a neonatal intensive care unit in tabriz, iran.
Valizadeh, Sousan; Bagher Hosseini, Mohammad; Alavi, Nasrinsadat; Asadollahi, Malihe; Kashefimehr, Siamak
2013-03-01
High levels of sound have several negative effects, such as noise-induced hearing loss and delayed growth and development, on premature infants in neonatal intensive care units (NICUs). In order to reduce sound levels, they should first be measured. This study was performed to assess sound levels and determine sources of noise in the NICU of Alzahra Teaching Hospital (Tabriz, Iran). In a descriptive study, 24 hours in 4 workdays were randomly selected. Equivalent continuous sound level (Leq), sound level that is exceeded only 10% of the time (L10), maximum sound level (Lmax), and peak instantaneous sound pressure level (Lzpeak) were measured by CEL-440 sound level meter (SLM) at 6 fixed locations in the NICU. Data was collected using a questionnaire. SPSS13 was then used for data analysis. Mean values of Leq, L10, and Lmax were determined as 63.46 dBA, 65.81 dBA, and 71.30 dBA, respectively. They were all higher than standard levels (Leq < 45 dB, L10 ≤50 dB, and Lmax ≤65 dB). The highest Leq was measured at the time of nurse rounds. Leq was directly correlated with the number of staff members present in the ward. Finally, sources of noise were ordered based on their intensity. Considering that sound levels were higher than standard levels in our studied NICU, it is necessary to adopt policies to reduce sound.
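The Leq and L10 statistics reported above can be computed from a series of sampled sound levels; a minimal sketch, with the sampling interval and frequency weighting abstracted away:

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level: the level of a steady sound carrying
    the same total energy as the fluctuating samples (energy average)."""
    mean_energy = sum(10 ** (level / 10) for level in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

def l10(levels_db):
    """Level exceeded 10% of the time: the 90th percentile of the samples
    (nearest-rank, sufficient for a sketch)."""
    ordered = sorted(levels_db)
    return ordered[int(0.9 * (len(ordered) - 1))]
```

Because Leq is an energy average, brief loud events dominate it: a minute at 70 dBA and a minute at 50 dBA average to about 67 dBA, not 60.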
Sound levels and their effects on children in a German primary school.
Eysel-Gosepath, Katrin; Daut, Tobias; Pinger, Andreas; Lehmacher, Walter; Erren, Thomas
2012-12-01
Considerable sound levels are produced in primary schools by children's voices and resonance effects. As a consequence, hearing loss and mental impairment may occur. In a Cologne primary school, sound levels were measured in three different classrooms, each with 24 children, 8-10 years old, and one teacher. Sound dosimeters were positioned in the room and near the teacher's ear. Additional measurements were done in one classroom fully equipped with sound-absorbing materials. A questionnaire containing 12 questions about noise at school was distributed to 100 children, 8-10 years old. Measurements were repeated after children had been taught about noise damage and while "noise lights" were used. Mean sound levels over the 5-h daily measuring period were 78 dB (A) near the teacher's ear and 70 dB (A) in the room. The average of the maximum 1-s sound levels was 105 dB (A) for teachers and 100 dB (A) for rooms. In the soundproofed classroom, Leq was 66 dB (A). The questionnaire revealed that the children could judge situations with high sound levels and develop ideas for noise reduction. However, no clear sound level reduction was identified after noise education and use of "noise lights" during lessons. Children and their teachers are equally exposed to high sound levels at school. Early sensitization to noise and the installation of sound-absorbing materials can be important means to prevent noise-associated hearing loss and mental impairment.
Zheng, Y.
2013-01-01
Temporal sound cues are essential for sound recognition and for pitch, rhythm, and timbre perception, yet how auditory neurons encode such cues is the subject of ongoing debate. Rate coding theories propose that temporal sound features are represented by rate-tuned modulation filters. However, overwhelming evidence also suggests that precise spike timing is an essential attribute of the neural code. Here we demonstrate that single neurons in the auditory midbrain employ a proportional code in which spike-timing precision and firing reliability covary with the sound envelope cues to provide an efficient representation of the stimulus. Spike-timing precision varied systematically with the timescale and shape of the sound envelope and yet was largely independent of the sound modulation frequency, a prominent cue for pitch. In contrast, spike-count reliability was strongly affected by the modulation frequency. Spike-timing precision extends from sub-millisecond for brief transient sounds up to tens of milliseconds for sounds with slowly varying envelopes. Information-theoretic analysis further confirms that spike-timing precision depends strongly on the sound envelope shape, while firing reliability was strongly affected by the sound modulation frequency. Both the information efficiency and total information were limited by the firing reliability and spike-timing precision in a manner that reflected the sound structure. This result supports a temporal coding strategy in the auditory midbrain where proportional changes in spike-timing precision and firing reliability can efficiently signal shape and periodicity temporal cues. PMID:23636724
Auditory Localization: An Annotated Bibliography
1983-11-01
...transverse plane, natural sound localization in both horizontal and vertical planes can be performed with nearly the same accuracy as real sound sources...important for unscrambling the competing sounds which so often occur in natural environments. A workable sound sensor has been constructed and empirical...
76 FR 44893 - Prince William Sound Resource Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-27
... DEPARTMENT OF AGRICULTURE Forest Service Prince William Sound Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Prince William Sound Resource Advisory... District, 145 Forest Station Road, Girdwood, AK; Prince William Sound Community College, 303 Lowe Street...
77 FR 45331 - Prince William Sound Resource Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-31
... DEPARTMENT OF AGRICULTURE Forest Service Prince William Sound Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Prince William Sound Resource Advisory... Prince William Sound Resource Advisory Committee (RAC) will be discussing and voting on proposals that...
76 FR 1130 - Prince William Sound Resource Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-07
... DEPARTMENT OF AGRICULTURE Forest Service Prince William Sound Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Prince William Sound Resource Advisory... conducted: The Prince William Sound Resource Advisory Committee (RAC) will be discussing and voting on...
NASA Astrophysics Data System (ADS)
Ipatov, M. S.; Ostroumov, M. N.; Sobolev, A. F.
2012-07-01
Experimental results are presented on the effect of both the sound pressure level and the type of spectrum of a sound source on the impedance of an acoustic lining. The spectra under study include those of white noise, a narrow-band signal, and a signal with a preset waveform. It is found that, to obtain reliable data on the impedance of an acoustic lining from the results of interferometric measurements, the total sound pressure level of white noise or the maximal sound pressure level of a pure tone (at every oscillation frequency) needs to be identical to the total sound pressure level of the actual source at the site of acoustic lining on the channel wall.
Sapienza, C M; Crandell, C C; Curtis, B
1999-09-01
Voice problems are a frequent difficulty that teachers experience. Common complaints by teachers include vocal fatigue and hoarseness. One possible explanation for these symptoms is prolonged elevations in vocal loudness within the classroom. This investigation examined the effectiveness of sound-field frequency modulation (FM) amplification on reducing the sound pressure level (SPL) of the teacher's voice during classroom instruction. Specifically, SPL was examined during speech produced in a classroom lecture by 10 teachers with and without the use of sound-field amplification. Results indicated a significant 2.42-dB decrease in SPL with the use of sound-field FM amplification. These data support the use of sound-field amplification in the vocal hygiene regimen recommended to teachers by speech-language pathologists.
Monitoring the state of the human airways by analysis of respiratory sound
NASA Technical Reports Server (NTRS)
Hardin, J. C.; Patterson, J. L., Jr.
1978-01-01
A mechanism whereby sound is generated by the motion of vortices in the human lung is described. This mechanism is believed to be responsible for most of the sound which is generated both on inspiration and expiration in normal lungs. Mathematical expressions for the frequencies of sound generated, which depend only upon the axial flow velocity and diameters of the bronchi, are derived. This theory allows the location within the bronchial tree from which particular sounds emanate to be determined. Redistribution of pulmonary blood volume following transition from earth gravity to the weightless state probably alters the caliber of certain airways and doubtless alters sound transmission properties of the lung. We believe that these changes can be monitored effectively and non-invasively by spectral analysis of pulmonary sound.
NASA Technical Reports Server (NTRS)
Embleton, Tony F. W.; Daigle, Gilles A.
1991-01-01
Reviewed here is the current state of knowledge with respect to each basic mechanism of sound propagation in the atmosphere and how each mechanism changes the spectral or temporal characteristics of the sound received at a distance from the source. Some of the basic processes affecting sound wave propagation which are present in any situation are discussed. They are geometrical spreading, molecular absorption, and turbulent scattering. In geometrical spreading, sound levels decrease with increasing distance from the source; there is no frequency dependence. In molecular absorption, sound energy is converted into heat as the sound wave propagates through the air; there is a strong dependence on frequency. In turbulent scattering, local variations in wind velocity and temperature induce fluctuations in phase and amplitude of the sound waves as they propagate through an inhomogeneous medium; there is a moderate dependence on frequency.
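The two distance-dependent mechanisms described above (frequency-independent geometric spreading and frequency-dependent molecular absorption) combine additively in decibels. A minimal sketch, with the absorption coefficient passed in directly rather than computed from the full temperature/humidity model:

```python
import math

def received_level(source_level_db, range_m, alpha_db_per_km):
    """Level at a distance from a point source: subtract spherical spreading
    (no frequency dependence) and molecular absorption (alpha rises steeply
    with frequency, so high frequencies fade faster with distance)."""
    spreading_loss = 20 * math.log10(range_m)        # dB re the level at 1 m
    absorption_loss = alpha_db_per_km * range_m / 1000.0
    return source_level_db - spreading_loss - absorption_loss
```

Turbulent scattering, the third mechanism, produces fluctuations about this mean level rather than a systematic loss, so it is omitted from the sketch.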
3D Sound Techniques for Sound Source Elevation in a Loudspeaker Listening Environment
NASA Astrophysics Data System (ADS)
Kim, Yong Guk; Jo, Sungdong; Kim, Hong Kook; Jang, Sei-Jin; Lee, Seok-Pil
In this paper, we propose several 3D sound techniques for sound source elevation in stereo loudspeaker listening environments. The proposed method integrates a head-related transfer function (HRTF) for sound positioning and early reflections for adding a reverberant circumstance. In addition, spectral notch filtering and directional band boosting techniques are included to increase elevation perception capability. In order to evaluate the elevation performance of the proposed method, subjective listening tests are conducted using several kinds of sound sources, such as white noise, sound effects, speech, and music samples. The tests show that the degrees of perceived elevation achieved by the proposed method are around 17° to 21° when the stereo loudspeakers are located on the horizontal plane.
Early sound symbolism for vowel sounds.
Spector, Ferrinne; Maurer, Daphne
2013-01-01
Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound-shape mapping. In this study, we investigated the influence of vowels on sound-shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded-jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape.
Turbine Sound May Influence the Metamorphosis Behaviour of Estuarine Crab Megalopae
Pine, Matthew K.; Jeffs, Andrew G.; Radford, Craig A.
2012-01-01
It is now widely accepted that a shift towards renewable energy production is needed in order to avoid further anthropogenically induced climate change. The ocean provides a largely untapped source of renewable energy. As a result, harvesting electrical power from the wind and tides has sparked immense government and commercial interest but with relatively little detailed understanding of the potential environmental impacts. This study investigated how the sound emitted from an underwater tidal turbine and an offshore wind turbine would influence the settlement and metamorphosis of the pelagic larvae of estuarine brachyuran crabs which are ubiquitous in most coastal habitats. In a laboratory experiment the median time to metamorphosis (TTM) for the megalopae of the crabs Austrohelice crassa and Hemigrapsus crenulatus was significantly increased by at least 18 h when exposed to either tidal turbine or sea-based wind turbine sound, compared to silent control treatments. In contrast, when either species was subjected to natural habitat sound, observed median TTM decreased by approximately 21–31% compared to silent control treatments, 38–47% compared to tidal turbine sound treatments, and 46–60% compared to wind turbine sound treatments. A lack of difference in median TTM in A. crassa between two different source levels of tidal turbine sound suggests that the frequency composition of turbine sound, rather than its intensity, explains such responses. These results show that estuarine mudflat sound mediates natural metamorphosis behaviour in two common species of estuarine crabs, and that exposure to continuous turbine sound interferes with this natural process. These results raise concerns about the potential ecological impacts of sound generated by renewable energy generation systems placed in the nearshore environment. PMID:23240063
Oyster Larvae Settle in Response to Habitat-Associated Underwater Sounds
Lillis, Ashlee; Eggleston, David B.; Bohnenstiehl, DelWayne R.
2013-01-01
Following a planktonic dispersal period of days to months, the larvae of benthic marine organisms must locate suitable seafloor habitat in which to settle and metamorphose. For animals that are sessile or sedentary as adults, settlement onto substrates that are adequate for survival and reproduction is particularly critical, yet represents a challenge since patchily distributed settlement sites may be difficult to find along a coast or within an estuary. Recent studies have demonstrated that the underwater soundscape, the distinct sounds that emanate from habitats and contain information about their biological and physical characteristics, may serve as a broad-scale environmental cue for marine larvae to find satisfactory settlement sites. Here, we contrast the acoustic characteristics of oyster reef and off-reef soft bottoms, and investigate the effect of habitat-associated estuarine sound on the settlement patterns of an economically and ecologically important reef-building bivalve, the Eastern oyster (Crassostrea virginica). Subtidal oyster reefs in coastal North Carolina, USA show distinct acoustic signatures compared to adjacent off-reef soft bottom habitats, characterized by consistently higher levels of sound in the 1.5–20 kHz range. Manipulative laboratory playback experiments found increased settlement in larval oyster cultures exposed to oyster reef sound compared to unstructured soft bottom sound or no sound treatments. In field experiments, ambient reef sound produced higher levels of oyster settlement in larval cultures than did off-reef sound treatments. The results suggest that oyster larvae have the ability to respond to sounds indicative of optimal settlement sites, and this is the first evidence that habitat-related differences in estuarine sounds influence the settlement of a mollusk. 
Habitat-specific sound characteristics may represent an important settlement and habitat selection cue for estuarine invertebrates and could play a role in driving settlement and recruitment patterns in marine communities. PMID:24205381
L-type calcium channels refine the neural population code of sound level.
Grimsley, Calum Alex; Green, David Brian; Sivaramakrishnan, Shobhana
2016-12-01
The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1-1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. Copyright © 2016 the American Physiological Society.
The effect of a crunchy pseudo-chewing sound on perceived texture of softened foods.
Endo, Hiroshi; Ino, Shuichi; Fujisaki, Waka
2016-12-01
Elderly individuals whose ability to chew and swallow has declined are often restricted to unpleasant diets of very soft food, leading to a poor appetite. To address this problem, we aimed to investigate the influence of altered auditory input of chewing sounds on the perception of food texture. The modified chewing sound was reported to influence the perception of food texture in normal foods. We investigated whether the perceived sensations of nursing care foods could be altered by providing altered auditory feedback of chewing sounds, even if the actual food texture is dull. Chewing sounds were generated using electromyogram (EMG) of the masseter. When the frequency properties of the EMG signal are modified and it is heard as a sound, it resembles a "crunchy" sound, much like that emitted by chewing, for example, root vegetables (EMG chewing sound). Thirty healthy adults took part in the experiment. In two conditions (with/without the EMG chewing sound), participants rated the taste, texture and evoked feelings of five kinds of nursing care foods using two questionnaires. When the "crunchy" EMG chewing sound was present, participants were more likely to evaluate food as having the property of stiffness. Moreover, foods were perceived as rougher and to have a greater number of ingredients in the condition with the EMG chewing sound, and satisfaction and pleasantness were also greater. In conclusion, the "crunchy" pseudo-chewing sound could influence the perception of food texture, even if the actual "crunchy" oral sensation is lacking. Considering the effect of altered auditory feedback while chewing, we can suppose that such a tool would be a useful technique to help people on texture-modified diets to enjoy their food. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Discrimination of speech and non-speech sounds following theta-burst stimulation of the motor cortex
Rogers, Jack C.; Möttönen, Riikka; Boyles, Rowan; Watkins, Kate E.
2014-01-01
Perceiving speech engages parts of the motor system involved in speech production. The role of the motor cortex in speech perception has been demonstrated using low-frequency repetitive transcranial magnetic stimulation (rTMS) to suppress motor excitability in the lip representation and disrupt discrimination of lip-articulated speech sounds (Möttönen and Watkins, 2009). Another form of rTMS, continuous theta-burst stimulation (cTBS), can produce longer-lasting disruptive effects following a brief train of stimulation. We investigated the effects of cTBS on motor excitability and discrimination of speech and non-speech sounds. cTBS was applied for 40 s over either the hand or the lip representation of motor cortex. Motor-evoked potentials recorded from the lip and hand muscles in response to single pulses of TMS revealed no measurable change in motor excitability due to cTBS. This failure to replicate previous findings may reflect the unreliability of measurements of motor excitability related to inter-individual variability. We also measured the effects of cTBS on a listener’s ability to discriminate: (1) lip-articulated speech sounds from sounds not articulated by the lips (“ba” vs. “da”); (2) two speech sounds not articulated by the lips (“ga” vs. “da”); and (3) non-speech sounds produced by the hands (“claps” vs. “clicks”). Discrimination of lip-articulated speech sounds was impaired between 20 and 35 min after cTBS over the lip motor representation. Specifically, discrimination of across-category ba–da sounds presented with an 800-ms inter-stimulus interval was reduced to chance level performance. This effect was absent for speech sounds that do not require the lips for articulation and non-speech sounds. Stimulation over the hand motor representation did not affect discrimination of speech or non-speech sounds. These findings show that stimulation of the lip motor representation disrupts discrimination of speech sounds in an articulatory feature-specific way. 
PMID:25076928
[Swallowing sound signal: description in normal and laryngectomized subjects].
Morinière, S; Boiron, M; Beutter, P
2008-02-01
Recently, we described three sound components in the pharyngeal swallowing sound. The aim of the present study was to identify the origin of these components using modern techniques providing numeric, synchronized acoustic-radiological data in a normal population and in a partial supracricoid laryngectomized population (SCL group) and a total laryngectomized (TL group) population in pre- and postoperative situations. We enrolled 15 normal subjects (10 men and five women; mean age, 29.5 ± 8 years), 11 patients in the SCL group (11 men; mean age, 62; range, 45-75 years), and nine patients in the TL group (three women, six men; mean age, 56; range, 39-73). An X-ray camera was connected to a video acquisition card to obtain acoustic-radiological data (2 images/s). The microphone was attached to each subject's skin overlying the lateral border of the cricoid. The subjects were asked to swallow 10 ml of a barium suspension. We performed the acoustic-radiological analysis using Visualisation and Cool Edit Pro software. Each sound component was associated with a specific position of the bolus and the moving anatomic structure. Three sound components were identified: the laryngeal ascension sound (LAS), the upper sphincter opening sound (USOS), and the laryngeal release sound (LRS). We quantified the total duration of the pharyngeal sound and its components, as well as the duration of the interval. The average duration of the normal pharyngeal sound was 690 ± 162 ms and was significantly decreased in the TL group (296 ± 105 ms) and increased in the SCL group (701 ± 186 ms). The USOS was present in 100% of the recordings. A typical profile of the swallowing sound for each group was obtained. This study allowed us to determine the origin of the three main sound components of the pharyngeal swallowing sound with respect to movements in anatomic structures and the different positions of the bolus, and to describe the main variations induced by a partial and a total laryngectomy.
Rogers, Jack C; Möttönen, Riikka; Boyles, Rowan; Watkins, Kate E
2014-01-01
Perceiving speech engages parts of the motor system involved in speech production. The role of the motor cortex in speech perception has been demonstrated using low-frequency repetitive transcranial magnetic stimulation (rTMS) to suppress motor excitability in the lip representation and disrupt discrimination of lip-articulated speech sounds (Möttönen and Watkins, 2009). Another form of rTMS, continuous theta-burst stimulation (cTBS), can produce longer-lasting disruptive effects following a brief train of stimulation. We investigated the effects of cTBS on motor excitability and discrimination of speech and non-speech sounds. cTBS was applied for 40 s over either the hand or the lip representation of motor cortex. Motor-evoked potentials recorded from the lip and hand muscles in response to single pulses of TMS revealed no measurable change in motor excitability due to cTBS. This failure to replicate previous findings may reflect the unreliability of measurements of motor excitability related to inter-individual variability. We also measured the effects of cTBS on a listener's ability to discriminate: (1) lip-articulated speech sounds from sounds not articulated by the lips ("ba" vs. "da"); (2) two speech sounds not articulated by the lips ("ga" vs. "da"); and (3) non-speech sounds produced by the hands ("claps" vs. "clicks"). Discrimination of lip-articulated speech sounds was impaired between 20 and 35 min after cTBS over the lip motor representation. Specifically, discrimination of across-category ba-da sounds presented with an 800-ms inter-stimulus interval was reduced to chance level performance. This effect was absent for speech sounds that do not require the lips for articulation and non-speech sounds. Stimulation over the hand motor representation did not affect discrimination of speech or non-speech sounds. These findings show that stimulation of the lip motor representation disrupts discrimination of speech sounds in an articulatory feature-specific way.
Sound stream segregation: a neuromorphic approach to solve the “cocktail party problem” in real-time
Thakur, Chetan Singh; Wang, Runchun M.; Afshar, Saeed; Hamilton, Tara J.; Tapson, Jonathan C.; Shamma, Shihab A.; van Schaik, André
2015-01-01
The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the “cocktail party effect.” It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). 
This system may easily be extended to the segregation of complex speech signals and may thus find applications in electronic devices for tasks such as sound segregation and speech recognition. PMID:26388721
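The correlation-based grouping principle described above can be sketched numerically. This is a toy illustration of temporal coherence, not the authors' FPGA implementation: channel envelopes, modulation rates, and the correlation threshold are all assumptions.

```python
import numpy as np

# Toy temporal-coherence grouping: channels whose envelopes are strongly
# positively correlated are assigned to the same stream; uncorrelated
# channels belong to different streams.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)

# Two underlying sources with different (assumed) envelope modulation rates.
env_a = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))   # 4 Hz modulation
env_b = 0.5 * (1 + np.sin(2 * np.pi * 7 * t))   # 7 Hz modulation

# Four "cochlear" channels: two driven by each source, plus sensor noise.
channels = np.stack([
    env_a + 0.05 * rng.standard_normal(t.size),
    env_a + 0.05 * rng.standard_normal(t.size),
    env_b + 0.05 * rng.standard_normal(t.size),
    env_b + 0.05 * rng.standard_normal(t.size),
])

# Pairwise envelope correlations; strongly correlated pairs share a stream.
corr = np.corrcoef(channels)
same_stream = corr > 0.8   # threshold chosen for illustration only
```

Channels 0-1 and 2-3 end up grouped together, while cross-source pairs fall below the threshold, mirroring the coherent-versus-uncorrelated distinction the abstract describes.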
75 FR 39910 - Prince William Sound Resource Advisory Committee; Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-13
... DEPARTMENT OF AGRICULTURE Forest Service Prince William Sound Resource Advisory Committee; Meeting AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Prince William Sound Resource..., Anchorage, Alaska 99503. Send written comments to Prince William Sound Resource Advisory Committee, c/o USDA...
77 FR 19301 - Prince William Sound Regional Citizens' Advisory Council Charter Renewal
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-30
... DEPARTMENT OF HOMELAND SECURITY Coast Guard [USCG-2012-0099] Prince William Sound Regional... Prince William Sound Regional Citizens' Advisory Council (PWSRCAC) as an alternative voluntary advisory group for Prince William Sound, Alaska. This certification allows the PWSRCAC to monitor the activities...
76 FR 18715 - Prince William Sound Resource Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-05
... DEPARTMENT OF AGRICULTURE Forest Service Prince William Sound Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Prince William Sound Resource Advisory... meeting is open to the public. The following business will be conducted: The Prince William Sound Resource...
33 CFR 167.1700 - In Prince William Sound: General.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) PORTS AND WATERWAYS SAFETY OFFSHORE TRAFFIC SEPARATION SCHEMES Description of Traffic Separation Schemes and Precautionary Areas Pacific West Coast § 167.1700 In Prince William Sound: General. The Prince William Sound Traffic Separation Scheme consists of four parts: Prince William Sound Traffic Separation...
Auditory-Motor Processing of Speech Sounds
Möttönen, Riikka; Dutton, Rebekah; Watkins, Kate E.
2013-01-01
The motor regions that control movements of the articulators activate during listening to speech and contribute to performance in demanding speech recognition and discrimination tasks. Whether the articulatory motor cortex modulates auditory processing of speech sounds is unknown. Here, we aimed to determine whether the articulatory motor cortex affects the auditory mechanisms underlying discrimination of speech sounds in the absence of demanding speech tasks. Using electroencephalography, we recorded responses to changes in sound sequences, while participants watched a silent video. We also disrupted the lip or the hand representation in left motor cortex using transcranial magnetic stimulation. Disruption of the lip representation suppressed responses to changes in speech sounds, but not piano tones. In contrast, disruption of the hand representation had no effect on responses to changes in speech sounds. These findings show that disruptions within, but not outside, the articulatory motor cortex impair automatic auditory discrimination of speech sounds. The findings provide evidence for the importance of auditory-motor processes in efficient neural analysis of speech sounds. PMID:22581846
Steerable sound transport in a 3D acoustic network
NASA Astrophysics Data System (ADS)
Xia, Bai-Zhan; Jiao, Jun-Rui; Dai, Hong-Qing; Yin, Sheng-Wen; Zheng, Sheng-Jie; Liu, Ting-Ting; Chen, Ning; Yu, De-Jie
2017-10-01
Quasi-lossless and asymmetric sound transports, which are exceedingly desirable in various modern physical systems, are almost always based on nonlinear or angular momentum biasing effects with extremely high power levels and complex modulation schemes. A practical route to steerable sound transport along any arbitrary acoustic pathway, especially in a three-dimensional (3D) acoustic network, could revolutionize sound power propagation and sound communication. Here, we design an acoustic device containing a regular-tetrahedral cavity with four cylindrical waveguides. A smaller regular-tetrahedral solid in this cavity is eccentrically emplaced to break the spatial symmetry of the acoustic device. The numerical and experimental results show that the sound power flow can transport unimpeded between two waveguides away from the eccentric solid within a wide frequency range. Based on the quasi-lossless and asymmetric transport characteristic of the single acoustic device, we construct a 3D acoustic network, in which the sound power flow can flexibly propagate along arbitrary sound pathways defined by our acoustic devices with eccentrically emplaced regular-tetrahedral solids.
Aquatic Acoustic Metrics Interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-12-18
Fishes and marine mammals may suffer a range of potential effects from exposure to intense underwater sound generated by anthropogenic activities such as pile driving, shipping, sonars, and underwater blasting. Several underwater sound recording (USR) devices have been built to acquire samples of the underwater sound generated by anthropogenic activities. Software becomes indispensable for processing and analyzing the audio files recorded by these USRs. The new Aquatic Acoustic Metrics Interface Utility Software (AAMI) is specifically designed for analysis of underwater sound recordings to provide data in metrics that facilitate evaluation of the potential impacts of the sound on aquatic animals. In addition to the basic functions, such as loading and editing audio files recorded by USRs and batch processing of sound files, the software utilizes recording system calibration data to compute important parameters in physical units. The software also facilitates comparison of the noise sound sample metrics with biological measures such as audiograms of the sensitivity of aquatic animals to the sound, integrating various components into a single analytical frame.
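The calibration-to-physical-units step the abstract describes might look like the following sketch. This is not AAMI's actual code; the hydrophone sensitivity and ADC parameters are assumed values for illustration.

```python
import numpy as np

# Hedged sketch: apply recording-system calibration to convert raw ADC
# counts to pressure in micropascals, then compute the RMS sound pressure
# level in dB re 1 uPa (a typical underwater-acoustics metric).
SENS_DB = -170.0          # assumed hydrophone sensitivity, dB re 1 V/uPa
ADC_FULL_SCALE_V = 2.5    # assumed ADC full-scale voltage
ADC_BITS = 16             # assumed ADC resolution

def spl_db_re_1upa(counts):
    volts = counts * (ADC_FULL_SCALE_V / 2 ** (ADC_BITS - 1))
    pressure_upa = volts / 10 ** (SENS_DB / 20.0)   # volts -> uPa via sensitivity
    rms = np.sqrt(np.mean(pressure_upa ** 2))
    return 20.0 * np.log10(rms)                      # dB re 1 uPa

# Synthetic 440 Hz test tone occupying half of ADC full scale.
t = np.arange(48000) / 48000.0
counts = 0.5 * 2 ** (ADC_BITS - 1) * np.sin(2 * np.pi * 440.0 * t)
level = spl_db_re_1upa(counts)
```

A level computed this way can then be compared directly against species audiograms, as the abstract suggests.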
NASA Astrophysics Data System (ADS)
di Nisi, J.; Muzet, A.; Weber, L. D.
1987-04-01
Eighty subjects of both sexes were selected according to their self-estimated high or low sensitivity to noise. Noise exposure took place during a mental task ("sound" condition) or during a video film illustrating the noises ("sound and video" condition). The experiments were conducted between 0900 and 1100 hours or between 1500 and 1700 hours. Heart rate response and finger pulse response amplitudes were averaged separately for "sound" and "sound and video" conditions. In the "sound" condition, the average amplitude of the heart rate response differed significantly between noise-sensitivity groups: the low sensitivity group showed a lower average amplitude of heart rate response than the high sensitivity group. A significant interaction between sex and time of the day (morning or afternoon) was observed in both "sound" and "sound and video" conditions. In the "sound" condition, the percentage of noises inducing a finger pulse response appeared higher in female than in male subjects.
The Specificity of Sound Symbolic Correspondences in Spoken Language.
Tzeng, Christina Y; Nygaard, Lynne C; Namy, Laura L
2017-11-01
Although language has long been regarded as a primarily arbitrary system, sound symbolism, or non-arbitrary correspondences between the sound of a word and its meaning, also exists in natural language. Previous research suggests that listeners are sensitive to sound symbolism. However, little is known about the specificity of these mappings. This study investigated whether sound symbolic properties correspond to specific meanings, or whether these properties generalize across semantic dimensions. In three experiments, native English-speaking adults heard sound symbolic foreign words for dimensional adjective pairs (big/small, round/pointy, fast/slow, moving/still) and for each foreign word, selected a translation among English antonyms that either matched or mismatched with the correct meaning dimension. Listeners agreed more reliably on the English translation for matched relative to mismatched dimensions, though reliable cross-dimensional mappings did occur. These findings suggest that although sound symbolic properties generalize to meanings that may share overlapping semantic features, sound symbolic mappings offer semantic specificity. Copyright © 2016 Cognitive Science Society, Inc.
Artificial intelligence techniques used in respiratory sound analysis--a systematic review.
Palaniappan, Rajkumar; Sundaraj, Kenneth; Sundaraj, Sebastian
2014-02-01
Artificial intelligence (AI) has recently been established as an alternative method to many conventional methods. The implementation of AI techniques for respiratory sound analysis can assist medical professionals in the diagnosis of lung pathologies. This article highlights the importance of AI techniques in the implementation of computer-based respiratory sound analysis. Articles on computer-based respiratory sound analysis using AI techniques were identified by searches conducted on various electronic resources, such as the IEEE, Springer, Elsevier, PubMed, and ACM digital library databases. Brief descriptions of the types of respiratory sounds and their respective characteristics are provided. We then analyzed each of the previous studies to determine the specific respiratory sounds/pathology analyzed, the number of subjects, the signal processing method used, the AI techniques used, and the performance of the AI technique used in the analysis of respiratory sounds. A detailed description of each of these studies is provided. In conclusion, this article provides recommendations for further advancements in respiratory sound analysis.
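Template matching between feature sequences recurs throughout this literature, with dynamic time warping (DTW) a common distance measure for respiratory sound classification. A minimal DTW sketch follows; it is illustrative only (feature extraction such as MFCC is omitted, and this is not any surveyed system's implementation).

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping between 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# DTW tolerates tempo differences: a time-stretched copy of a sequence
# stays much closer than an unrelated sequence of the same length.
ref = np.sin(np.linspace(0, 2 * np.pi, 50))
stretched = np.sin(np.linspace(0, 2 * np.pi, 80))
other = np.cos(np.linspace(0, 6 * np.pi, 80))
```

This tolerance to local time stretching is why DTW suits breath-cycle comparisons, where inhalation and exhalation durations vary between recordings.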
Memory for product sounds: the effect of sound and label type.
Ozcan, Elif; van Egmond, René
2007-11-01
The (mnemonic) interactions between auditory, visual, and the semantic systems have been investigated using structurally complex auditory stimuli (i.e., product sounds). Six types of product sounds (air, alarm, cyclic, impact, liquid, mechanical) that vary in spectral-temporal structure were presented in four label type conditions: self-generated text, text, image, and pictogram. A memory paradigm that incorporated free recall, recognition, and matching tasks was employed. The results for the sound type suggest that the amount of spectral-temporal structure in a sound can be indicative for memory performance. Findings related to label type suggest that 'self' creates a strong bias for the retrieval and the recognition of sounds that were self-labeled; the density and the complexity of the visual information (i.e., pictograms) hinders the memory performance ('visual' overshadowing effect); and image labeling has an additive effect on the recall and matching tasks (dual coding). Thus, the findings suggest that the memory performances for product sounds are task-dependent.
Sound texture perception via statistics of the auditory periphery: Evidence from sound synthesis
McDermott, Josh H.; Simoncelli, Eero P.
2014-01-01
Rainstorms, insect swarms, and galloping horses produce “sound textures” – the collective result of many similar acoustic events. Sound textures are distinguished by temporal homogeneity, suggesting they could be recognized with time-averaged statistics. To test this hypothesis, we processed real-world textures with an auditory model containing filters tuned for sound frequencies and their modulations, and measured statistics of the resulting decomposition. We then assessed the realism and recognizability of novel sounds synthesized to have matching statistics. Statistics of individual frequency channels, capturing spectral power and sparsity, generally failed to produce compelling synthetic textures. However, combining them with correlations between channels produced identifiable and natural-sounding textures. Synthesis quality declined if statistics were computed from biologically implausible auditory models. The results suggest that sound texture perception is mediated by relatively simple statistics of early auditory representations, presumably computed by downstream neural populations. The synthesis methodology offers a powerful tool for their further investigation. PMID:21903084
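The notion of "time-averaged statistics" of frequency channels can be sketched concretely. This toy code is not the authors' auditory model: the envelope signals and the particular statistics (marginal moments plus cross-channel correlations) are a simplified stand-in for the decomposition the abstract describes.

```python
import numpy as np

# Illustrative texture statistics: per-channel marginal moments plus
# pairwise correlations between channel envelopes, of the kind matched
# during texture synthesis.
def texture_statistics(envelopes):
    mean = envelopes.mean(axis=1)
    var = envelopes.var(axis=1)
    # Skewness captures the sparsity/asymmetry of each channel's envelope.
    centered = envelopes - mean[:, None]
    skew = (centered ** 3).mean(axis=1) / np.maximum(var, 1e-12) ** 1.5
    corr = np.corrcoef(envelopes)      # cross-channel correlations
    return mean, var, skew, corr

rng = np.random.default_rng(1)
env = np.abs(rng.standard_normal((4, 5000)))   # stand-in for channel envelopes
mean, var, skew, corr = texture_statistics(env)
```

Because these statistics are averaged over time, two different recordings of the same texture (rain, a swarm) yield similar values even though their waveforms differ sample by sample, which is the premise behind synthesizing new exemplars by statistics matching.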
Prediction of far-field wind turbine noise propagation with parabolic equation.
Lee, Seongkyu; Lee, Dongjai; Honhoff, Saskia
2016-08-01
Sound propagation from wind farms is typically simulated with engineering tools that neglect some atmospheric conditions and terrain effects. Wind and temperature profiles, however, can affect the propagation of sound and thus the perceived sound in the far field. A better understanding and application of those effects would allow more optimized farm operation toward meeting noise regulations and optimizing energy yield. This paper presents the parabolic equation (PE) model development for accurate wind turbine noise propagation. The model is validated against analytic solutions for a uniform sound speed profile, benchmark problems for nonuniform sound speed profiles, and field sound test data for real environmental acoustics. It is shown that PE provides good agreement with the measured data, except in upwind propagation cases in which turbulence scattering is important. Finally, the PE model uses computational fluid dynamics results as input to accurately predict sound propagation for complex flows such as wake flows. It is demonstrated that wake flows significantly modify the sound propagation characteristics.
Acoustic Performance of a Real-Time Three-Dimensional Sound-Reproduction System
NASA Technical Reports Server (NTRS)
Faller, Kenneth J., II; Rizzi, Stephen A.; Aumann, Aric R.
2013-01-01
The Exterior Effects Room (EER) is a 39-seat auditorium at the NASA Langley Research Center and was built to support psychoacoustic studies of aircraft community noise. The EER has a real-time simulation environment which includes a three-dimensional sound-reproduction system. This system requires real-time application of equalization filters to compensate for spectral coloration of the sound reproduction due to installation and room effects. This paper describes the efforts taken to develop the equalization filters for use in the real-time sound-reproduction system and the subsequent analysis of the system's acoustic performance. The acoustic performance of the compensated and uncompensated sound-reproduction system is assessed for its crossover performance, its performance under stationary and dynamic conditions, the maximum spatialized sound pressure level it can produce from a single virtual source, and the spatial uniformity of a generated sound field. Additionally, application examples are given to illustrate the compensated sound-reproduction system performance using recorded aircraft flyovers.
NASA Astrophysics Data System (ADS)
KAWAI, K.; YANO, T.
2002-02-01
This paper reports an experimental study determining the effects of the type and loudness of individual sounds on the overall impression of the sound environment. Field and laboratory experiments were carried out. In each experiment, subjects evaluated the sound environment presented, which consisted of combinations of three individual sounds of road traffic, singing crickets and the murmuring of a river, with five bipolar adjective scales such as Good-Bad, Active-Calm and Natural-Artificial. Overall loudness had the strongest effect on most types of evaluations; relative SPL had a greater effect than overall loudness on one particular evaluation, the Natural-Artificial scale. The test sounds in the field experiment were generally evaluated as more good and more natural than those in the laboratory. The results of comparisons between laboratory and field sounds indicate a difference in the trend between them. This difference may be explained in terms of selective listening, but that needs further investigation.
Amplitude and Wavelength Measurement of Sound Waves in Free Space using a Sound Wave Phase Meter
NASA Astrophysics Data System (ADS)
Ham, Sounggil; Lee, Kiwon
2018-05-01
We developed a sound wave phase meter (SWPM) and measured the amplitude and wavelength of sound waves in free space. The SWPM consists of two parallel metal plates, where the front plate operates as a diaphragm. An aluminum perforated plate was additionally installed in front of the diaphragm, and the same signal as that applied to the sound source was applied to the perforated plate. The SWPM measures both the sound wave signal due to the diaphragm vibration and the induction signal due to the electric field of the aluminum perforated plate. The two measured signals therefore interfere with each other, with a phase difference that depends on the distance between the sound source and the SWPM, so the amplitude of the resulting composite signal varies periodically with distance. We obtained the wavelength of the sound wave from this periodic amplitude change measured in free space and compared it with theoretically calculated values.
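The measurement principle lends itself to a short numerical sketch. The sound speed, source frequency, and equal signal amplitudes below are assumptions for illustration, not the paper's experimental values: the acoustic path contributes a phase that grows with distance while the induced electrical signal does not, so the composite amplitude repeats every wavelength.

```python
import numpy as np

c = 343.0            # assumed speed of sound in air, m/s
f = 2000.0           # assumed source frequency, Hz
wavelength = c / f   # 0.1715 m

# Composite of a fixed induction signal and a sound signal whose phase
# advances by 2*pi*d/wavelength with source distance d.
d = np.linspace(0.0, 1.0, 20001)
composite = np.abs(1.0 + np.exp(1j * 2 * np.pi * d / wavelength))

# Spacing between successive amplitude maxima recovers the wavelength.
peaks = d[np.flatnonzero((composite[1:-1] > composite[:-2]) &
                         (composite[1:-1] >= composite[2:])) + 1]
measured = np.mean(np.diff(peaks))
```

Reading the maxima spacing off the amplitude curve is exactly the inference the SWPM enables, with the periodic amplitude change standing in for the instrument's output.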
Directional Hearing and Sound Source Localization in Fishes.
Sisneros, Joseph A; Rogers, Peter H
2016-01-01
Evidence suggests that the capacity for sound source localization is common to mammals, birds, reptiles, and amphibians, but surprisingly it is not known whether fish locate sound sources in the same manner (e.g., combining binaural and monaural cues) or what computational strategies they use for successful source localization. Directional hearing and sound source localization in fishes continue to be important topics in neuroethology and in the hearing sciences, but the empirical and theoretical work on these topics has been contradictory and obscure for decades. This chapter reviews the previous behavioral work on directional hearing and sound source localization in fishes, including the most recent experiments on sound source localization by the plainfin midshipman fish (Porichthys notatus), which has proven to be an exceptional species for fish studies of sound localization. In addition, the theoretical models of directional hearing and sound source localization for fishes are reviewed, including a new model that uses a time-averaged intensity approach for source localization that has wide applicability with regard to source type, acoustic environment, and time waveform.
Modal sound transmission loss of a single leaf panel: Effects of inter-modal coupling.
Wang, Chong
2015-06-01
Sound transmission through a single leaf panel has mostly been discussed and explained by using the approaching wave concept, from which the well-known mass law can be derived. In this paper, the modal behavior in sound transmission coefficients is explored, and it is shown that the mutual modal radiation impedances in modal sound transmission coefficients may not be ignored even for a panel immersed in a light fluid. By introducing the equivalent modal impedance which incorporates the inter-modal coupling effect, an analytical expression for the modal sound transmission coefficient is derived, and the overall sound transmission coefficient is simply a modal superposition of modal sound transmission coefficients. A good correlation is obtained between analytical calculation and boundary element method. In addition, it is found that inter-modal coupling has noticeable effects in modal sound transmission coefficients in the subsonic region but may be ignored as modes become supersonic. It is also shown that the well-known mass law performance is attributed to all the supersonic modes.
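The mass law the abstract invokes can be sketched numerically. One common field-incidence approximation (constants differ slightly between references, so treat the -47 dB offset as an assumption) is TL ≈ 20 log10(f·m) − 47 dB, with frequency f in Hz and surface mass m in kg/m²:

```python
import numpy as np

# Field-incidence mass law sketch: transmission loss of a single leaf
# panel governed only by its surface mass and the frequency.
def mass_law_tl(freq_hz, surface_mass_kg_m2):
    return 20.0 * np.log10(freq_hz * surface_mass_kg_m2) - 47.0

# Doubling either frequency or surface mass adds about 6 dB.
tl_1k = mass_law_tl(1000.0, 10.0)   # 10 kg/m^2 panel at 1 kHz -> 33 dB
tl_2k = mass_law_tl(2000.0, 10.0)
```

The 6 dB-per-doubling behavior is the baseline against which the paper's modal coupling effects are measured.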
Visualizing Sound: Demonstrations to Teach Acoustic Concepts
NASA Astrophysics Data System (ADS)
Rennoll, Valerie
Interference, a phenomenon in which two sound waves superpose to form a resultant wave of greater or lower amplitude, is a key concept when learning about the physics of sound waves. Typical interference demonstrations involve students listening for changes in sound level as they move throughout a room. Here, new tools are developed to teach this concept that provide a visual component, allowing individuals to see changes in sound level on a light display. This is accomplished using a microcontroller that analyzes sound levels collected by a microphone and displays the sound level in real-time on an LED strip. The light display is placed on a sliding rail between two speakers to show the interference occurring between two sound waves. When a long-exposure photograph is taken of the light display being slid from one end of the rail to the other, a wave of the interference pattern can be captured. By providing a visual component, these tools will help students and the general public to better understand interference, a key concept in acoustics.
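The interference pattern the LED display visualizes can be sketched with the standard superposition formula. The speaker separation, tone frequency, and equal source amplitudes below are assumptions for illustration, not the demonstration's actual setup.

```python
import numpy as np

# Two equal-amplitude coherent sources at x=0 and x=L; along the rail the
# resultant amplitude is 2*A*|cos(pi * path_difference / wavelength)|.
c = 343.0            # assumed speed of sound, m/s
f = 1000.0           # assumed tone frequency, Hz
lam = c / f
L = 2.0              # assumed speaker separation, m

x = np.linspace(0.0, L, 501)      # position on the sliding rail
path_diff = np.abs(x - (L - x))   # |d1 - d2| for positions on the axis
amplitude = 2.0 * np.abs(np.cos(np.pi * path_diff / lam))
```

At the midpoint the path difference vanishes and the waves add constructively (amplitude 2A); every half-wavelength of path difference later the display would dim to a null, which is the alternating bright/dark pattern a long-exposure photograph captures.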
NASA Astrophysics Data System (ADS)
Brickman, Jon; Tanchez, Erin; Thomas, Jeanette
2005-09-01
Diel patterns in underwater sounds from five beluga whales (Delphinapterus leucas) and five Pacific white-sided dolphins (Lagenorhynchus obliquidens) housed at John G. Shedd Aquarium in Chicago, IL were studied. Underwater sounds were sampled systematically over 24-h periods by using a battery-operated cassette recorder and an Ithaco 605C hydrophone controlled by a digital timer, which activated every hour and then shut off after 2.5 min. Belugas produced 14 sound types and Pacific white-sided dolphins produced 5. For each species, the use of some sounds was correlated with that of others. The diel pattern for both species was similar and mostly affected by the presence of humans. Sound production gradually increased after the staff and visitors arrived, peaked during midday, gradually decreased as closing of the aquarium approached, and was minimal overnight. These data can help identify the best time of day to make recordings and perhaps could be used to examine social, reproductive, or health changes in these captive cetaceans.
Johnson, Nicholas S.; Higgs, Dennis; Binder, Thomas R.; Marsden, J. Ellen; Buchinger, Tyler John; Brege, Linnea; Bruning, Tyler; Farha, Steve A.; Krueger, Charles C.
2018-01-01
Two sounds associated with spawning lake trout (Salvelinus namaycush) in lakes Huron and Champlain were characterized by comparing sound recordings to behavioral data collected using acoustic telemetry and video. These sounds were named growls and snaps, and were heard on lake trout spawning reefs, but not on a non-spawning reef, and were more common at night than during the day. Growls also occurred more often during the spawning period than the pre-spawning period, while the trend for snaps was reversed. In a laboratory flume, sounds occurred when male lake trout were displaying spawning behaviors: growls when males were quivering and parallel swimming, and snaps when males moved their jaws. Combining our results with the observation of possible sound production by spawning splake (Salvelinus fontinalis × Salvelinus namaycush hybrid) provides rare evidence for spawning-related sound production by a salmonid, or any other fish in the superorder Protacanthopterygii. Further characterization of these sounds could be useful for lake trout assessment, restoration, and control.
Pitch features of environmental sounds
NASA Astrophysics Data System (ADS)
Yang, Ming; Kang, Jian
2016-07-01
A number of soundscape studies have suggested the need for suitable parameters for soundscape measurement, in addition to the conventional acoustic parameters. This paper explores the applicability of pitch features that are often used in music analysis and their algorithms to environmental sounds. Based on the existing alternative pitch algorithms for simulating the perception of the auditory system and simplified algorithms for practical applications in the areas of music and speech, the applicable algorithms have been determined, considering common types of sound in everyday soundscapes. Considering a number of pitch parameters, including pitch value, pitch strength, and percentage of audible pitches over time, different pitch characteristics of various environmental sounds have been shown. Among the four sound categories, i.e. water, wind, birdsongs, and urban sounds, generally speaking, both water and wind sounds have low pitch values and pitch strengths; birdsongs have high pitch values and pitch strengths; and urban sounds have low pitch values and a relatively wide range of pitch strengths.
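One simple way to compute a pitch value and pitch strength for a sound is the normalized autocorrelation peak; this is an illustrative stand-in, since the paper evaluates several alternative pitch algorithms rather than this exact one.

```python
import numpy as np

# Autocorrelation pitch sketch: the lag of the strongest peak gives a
# pitch estimate, and its normalized height (0..1) is a crude pitch-
# strength proxy. Search range and sample rate are assumptions.
def pitch_autocorr(x, sr, fmin=50.0, fmax=1000.0):
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac /= ac[0]                          # normalize so lag 0 == 1
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag, ac[lag]             # (pitch in Hz, pitch strength)

sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220.0 * t)     # strongly pitched, like birdsong
rng = np.random.default_rng(2)
noise = rng.standard_normal(sr)          # broadband, like wind or water
f_tone, s_tone = pitch_autocorr(tone, sr)
f_noise, s_noise = pitch_autocorr(noise, sr)
```

The contrast matches the abstract's categories: a tonal signal yields a high pitch strength at a well-defined pitch value, while broadband water- or wind-like noise yields a low pitch strength.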
NASA Astrophysics Data System (ADS)
O'Donnell, Michael J.; Bisnovatyi, Ilia
2000-11-01
Computing practice today depends on visual output to drive almost all user interaction. Other senses, such as audition, may be totally neglected, or used tangentially, or used in highly restricted specialized ways. We have excellent audio rendering through D-A conversion, but we lack rich general facilities for modeling and manipulating sound comparable in quality and flexibility to graphics. We need coordinated research in several disciplines to improve the use of sound as an interactive information channel. Incremental and separate improvements in synthesis, analysis, speech processing, audiology, acoustics, music, etc. will not alone produce the radical progress that we seek in sonic practice. We also need to create a new central topic of study in digital audio research. The new topic will assimilate the contributions of different disciplines on a common foundation. The key central concept that we lack is sound as a general-purpose information channel. We must investigate the structure of this information channel, which is driven by the cooperative development of auditory perception and physical sound production. Particular audible encodings, such as speech and music, illuminate sonic information by example, but they are no more sufficient for a characterization than typography is sufficient for characterization of visual information. To develop this new conceptual topic of sonic information structure, we need to integrate insights from a number of different disciplines that deal with sound. In particular, we need to coordinate central and foundational studies of the representational models of sound with specific applications that illuminate the good and bad qualities of these models. Each natural or artificial process that generates informative sound, and each perceptual mechanism that derives information from sound, will teach us something about the right structure to attribute to the sound itself. 
The new Sound topic will combine the work of computer scientists with that of numerical mathematicians studying sonification, psychologists, linguists, bioacousticians, and musicians to illuminate the structure of sound from different angles. Each of these disciplines deals with the use of sound to carry a different sort of information, under different requirements and constraints. By combining their insights, we can come to understand the structure of sound in general.
Nachtigall, Paul E; Supin, Alexander Y
2016-01-01
Stranded whales and dolphins have sometimes been associated with loud anthropogenic sounds. Echolocating whales produce very loud sounds themselves and have developed the ability to protect their hearing from their own signals. A false killer whale's hearing sensitivity was measured when a faint warning sound was given just before the presentation of an increase in intensity to 170 dB. If the warning occurred within 1-9 s, as opposed to 20-40 s, the whale showed a 13-dB reduction in hearing sensitivity. Warning sounds before loud pulses may help mitigate the effects of loud anthropogenic sounds on wild animals.
The warm, rich sound of valve guitar amplifiers
NASA Astrophysics Data System (ADS)
Keeports, David
2017-03-01
Practical solid state diodes and transistors have made glass valve technology nearly obsolete. Nevertheless, valves survive largely because electric guitar players much prefer the sound of valve amplifiers to the sound of transistor amplifiers. This paper discusses the introductory-level physics behind that preference. Overdriving an amplifier adds harmonics to an input sound. While a moderately overdriven valve amplifier produces strong even harmonics that enhance a sound, an overdriven transistor amplifier creates strong odd harmonics that can cause dissonance. The functioning of a triode valve explains its creation of even and odd harmonics. Music production software enables the examination of both the wave shape and the harmonic content of amplified sounds.
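The even-versus-odd harmonic distinction can be demonstrated numerically. The sketch below is not from the paper, just a minimal illustration: a symmetric hard clipper stands in for an idealized push-pull transistor stage, and a biased tanh-style curve is a hypothetical stand-in for a single-ended triode stage.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 100 * t)  # 100 Hz test tone, 1 s long

# Symmetric hard clipping (idealized transistor stage): odd harmonics only.
sym = np.clip(3 * x, -1.0, 1.0)
# Asymmetric soft clipping (hypothetical stand-in for a biased triode):
# the bias offset breaks the symmetry and introduces even harmonics.
asym = np.tanh(3 * x + 0.5) - np.tanh(0.5)

def harmonic_level(y, n, f0=100):
    """Spectral magnitude at the n-th harmonic (1 Hz bin spacing here)."""
    spec = np.abs(np.fft.rfft(y)) / len(y)
    return spec[n * f0]

print(harmonic_level(sym, 2), harmonic_level(asym, 2))
```

The symmetric clipper's half-wave antisymmetry cancels even harmonics exactly, while the offset in the asymmetric curve leaves a clearly measurable second harmonic.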
AVE-Sesame 3: 25-MB sounding data
NASA Technical Reports Server (NTRS)
Williams, S. T.; Gerhard, M. L.; Gilchrist, L. P.; Turner, R. E.
1980-01-01
The rawinsonde sounding program for the AVE-SESAME 3 experiment is described, and tabulated data at 25-mb intervals from the surface to 25 mb are presented for the 23 National Weather Service and 19 special stations participating in the experiment. Soundings were taken at 3 hr intervals beginning at 1200 GMT on April 25, 1979, and ending at 1200 GMT on April 26, 1979 (nine sounding times). The method of processing is discussed briefly, estimates of the rms errors in the data are presented, an example of contact data is given, reasons are given for the termination of soundings below 100 mb, and soundings that exhibit abnormal characteristics are listed.
Sound absorption study of raw and expanded particulate vermiculites
NASA Astrophysics Data System (ADS)
Vašina, Martin; Plachá, Daniela; Mikeska, Marcel; Hružík, Lumír; Martynková, Gražyna Simha
2016-12-01
Expanded and raw vermiculite minerals were studied for their ability to absorb sound. Phase and structural characterization of the investigated vermiculites was similar for both types, while their morphology and surface properties differed. Sound waves reflect repeatedly within the wedge-like structure, are progressively attenuated, and are eventually absorbed entirely. We found that, thanks to the porous character of expanded vermiculite, the principle of sound absorption into the layered vermiculite morphology is analogous to the principle of sound minimization in "anechoic chambers." The best sound damping properties of the investigated vermiculites were in general obtained at greater powder bed heights and higher excitation frequencies.
Effects of Sound on the Behavior of Wild, Unrestrained Fish Schools.
Roberts, Louise; Cheesman, Samuel; Hawkins, Anthony D
2016-01-01
To assess and manage the impact of man-made sounds on fish, we need information on how behavior is affected. Here, wild unrestrained pelagic fish schools were observed under quiet conditions using sonar. Fish were exposed to synthetic piling sounds at different levels using custom-built sound projectors, and behavioral changes were examined. In some cases, the depth of schools changed after noise playback; full dispersal of schools was also evident. The methods we developed for examining the behavior of unrestrained fish to sound exposure have proved successful and may allow further testing of the relationship between responsiveness and sound level.
Sounds produced by Australian Irrawaddy dolphins, Orcaella brevirostris.
Van Parijs, S M; Parra, G J; Corkeron, P J
2000-10-01
Sounds produced by Irrawaddy dolphins, Orcaella brevirostris, were recorded in coastal waters off northern Australia. They exhibit a varied repertoire, consisting of broadband clicks, pulsed sounds and whistles. Broad-band clicks, "creaks" and "buzz" sounds were recorded during foraging, while "squeaks" were recorded only during socializing. Both whistle types were recorded during foraging and socializing. The sounds produced by Irrawaddy dolphins do not resemble those of their nearest taxonomic relative, the killer whale, Orcinus orca. Pulsed sounds appear to resemble those produced by Sotalia and nonwhistling delphinids (e.g., Cephalorhynchus spp.). Irrawaddy dolphins exhibit a vocal repertoire that could reflect the acoustic specialization of this species to its environment.
NASA Astrophysics Data System (ADS)
Kawai, Keiji; Kojima, Takaya; Hirate, Kotaroh; Yasuoka, Masahito
2004-10-01
In this study, we conducted an experiment to investigate the evaluation structure that underlies people's psychological evaluation of environmental sounds. In the experiment, subjects were given cards, on each of which the name of one of the environmental sounds in the specified context was written. They then performed three tasks: (1) sorting the cards into groups by the similarity of their impressions of the imagined sounds; (2) naming each group with the word that best represented their overall impression of the group; and (3) evaluating all sounds on the cards using the words obtained in the previous task. These tasks were done twice: once assuming they heard the sounds at ease inside their homes, and once while walking outside in a resort theme park. We analysed the similarity of imagined impressions between the sounds with a cluster analysis, which produced clusters of sounds labelled "natural," "transportation," and so on. A principal component analysis revealed three major factors of the evaluation structure for both contexts, interpreted as preference, activity and sense of daily life.
The influence of sea ice, wind speed and marine mammals on Southern Ocean ambient sound.
Menze, Sebastian; Zitterbart, Daniel P; van Opzeeland, Ilse; Boebel, Olaf
2017-01-01
This paper describes the natural variability of ambient sound in the Southern Ocean, an acoustically pristine marine mammal habitat. Over a 3-year period, two autonomous recorders were moored along the Greenwich meridian to collect underwater passive acoustic data. Ambient sound levels were strongly affected by the annual variation of the sea-ice cover, which decouples local wind speed and sound levels during austral winter. With increasing sea-ice concentration, area and thickness, sound levels decreased while the contribution of distant sources increased. Marine mammal sounds formed a substantial part of the overall acoustic environment, comprising calls produced by Antarctic blue whales ( Balaenoptera musculus intermedia ), fin whales ( Balaenoptera physalus ), Antarctic minke whales ( Balaenoptera bonaerensis ) and leopard seals ( Hydrurga leptonyx ). The combined sound energy of a group or population vocalizing during extended periods contributed species-specific peaks to the ambient sound spectra. The temporal and spatial variation in the contribution of marine mammals to ambient sound suggests annual patterns in migration and behaviour. The Antarctic blue and fin whale contributions were loudest in austral autumn, whereas the Antarctic minke whale contribution was loudest during austral winter and repeatedly showed a diel pattern that coincided with the diel vertical migration of zooplankton.
A unified approach for the spatial enhancement of sound
NASA Astrophysics Data System (ADS)
Choi, Joung-Woo; Jang, Ji-Ho; Kim, Yang-Hann
2005-09-01
This paper aims to control the sound field spatially, so that a desired or target acoustic variable is enhanced within a zone where a listener is located. This is somewhat analogous to having manipulators that can draw sounds to any place. It also means that one can, in effect, see the controlled shape of sound in frequency or in real time. The former assures practical applicability, for example, listening-zone control for music. The latter provides a means of analyzing the sound field. With these considerations in mind, a unified approach is proposed that can enhance selected acoustic variables using multiple sources. Three kinds of acoustic variables that have to do with the magnitude and direction of the sound field are formulated and enhanced. The first, which concerns the spatial control of acoustic potential energy, enables one to create a zone of loud sound over an area. Alternatively, one can control the directional characteristics of the sound field by controlling directional energy density, or enhance the magnitude and direction of sound at the same time by controlling acoustic intensity. Through various examples, it is shown that these acoustic variables can be controlled successfully by the proposed approach.
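The potential-energy case admits a compact linear-algebra sketch. Assuming hypothetical monopole sources and a sampled control zone (none of the geometry below is from the paper), maximizing zone energy per unit source effort is a Rayleigh-quotient problem solved by the dominant eigenvector:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 2 * np.pi * 500 / 343.0            # wavenumber at 500 Hz

# Hypothetical geometry: 4 source positions and 10 control points
# sampling the listening zone (coordinates in metres).
sources = rng.uniform(0.0, 2.0, (4, 3))
zone = rng.uniform(3.0, 4.0, (10, 3))

# Free-field monopole transfer matrix H (zone points x sources).
r = np.linalg.norm(zone[:, None, :] - sources[None, :, :], axis=2)
H = np.exp(-1j * k * r) / (4 * np.pi * r)

# Zone potential energy per unit source effort is the Rayleigh quotient
# q^H (H^H H) q / q^H q; it is maximized by the dominant eigenvector.
R = H.conj().T @ H
w, V = np.linalg.eigh(R)
q_opt = V[:, -1]                       # optimal (unit-norm) source strengths

def energy(q):
    return np.real(q.conj() @ R @ q) / np.real(q.conj() @ q)
```

Any other choice of source strengths yields a lower energy ratio than `q_opt`, whose ratio equals the largest eigenvalue.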
Sound quality indicators for urban places in Paris cross-validated by Milan data.
Ricciardi, Paola; Delaitre, Pauline; Lavandier, Catherine; Torchia, Francesca; Aumond, Pierre
2015-10-01
A specific smartphone application was developed to collect perceptive and acoustic data in Paris. About 3400 questionnaires were analyzed, regarding the global sound environment characterization, the perceived loudness of some emergent sources and the presence time ratio of sources that do not emerge from the background. Sound pressure level was recorded each second from the mobile phone's microphone during a 10-min period. The aim of this study is to propose indicators of urban sound quality based on linear regressions with perceptive variables. A cross validation of the quality models extracted from Paris data was carried out by conducting the same survey in Milan. The proposed general sound quality model is 72% correlated with the real perceived sound quality. Another model, without visual amenity and familiarity, is 58% correlated with perceived sound quality. In order to improve the sound quality indicator, a site classification was performed with Kohonen's Artificial Neural Network algorithm, and seven specific class models were developed. These specific models attribute more importance to source events and are slightly closer to the individual data than the global model. In general, the Parisian models underestimate the sound quality of Milan environments assessed by Italian people.
Active room compensation for sound reinforcement using sound field separation techniques.
Heuchel, Franz M; Fernandez-Grande, Efren; Agerkvist, Finn T; Shabalina, Elena
2018-03-01
This work investigates how the sound field created by a sound reinforcement system can be controlled at low frequencies. An indoor control method is proposed which actively absorbs the sound incident on a reflecting boundary using an array of secondary sources. The sound field is separated into incident and reflected components by a microphone array close to the secondary sources, enabling the minimization of reflected components by means of optimal signals for the secondary sources. The method is purely feed-forward and assumes constant room conditions. Three different sound field separation techniques for the modeling of the reflections are investigated based on plane wave decomposition, equivalent sources, and the Spatial Fourier transform. Simulations and an experimental validation are presented, showing that the control method performs similarly well at enhancing low frequency responses with the three sound separation techniques. Resonances in the entire room are reduced, although the microphone array and secondary sources are confined to a small region close to the reflecting wall. Unlike previous control methods based on the creation of a plane wave sound field, the investigated method works in arbitrary room geometries and primary source positions.
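A minimal sketch of the separation idea, reduced to one dimension with two hypothetical microphone positions (the paper's array-based decompositions are more general): the field is written as incident plus reflected plane waves, and the two complex amplitudes are solved from two pressure measurements.

```python
import numpy as np

c, f = 343.0, 80.0                 # speed of sound (m/s), frequency (Hz)
k = 2 * np.pi * f / c

# Synthetic 1D field with known incident (A) and reflected (B) amplitudes.
A_true = 1.0
B_true = 0.6 * np.exp(1j * 0.8)
def p(x):                          # complex pressure at position x
    return A_true * np.exp(-1j * k * x) + B_true * np.exp(1j * k * x)

# Two hypothetical microphone positions near the reflecting boundary;
# the two complex amplitudes follow from a 2x2 linear system.
x1, x2 = 0.10, 0.30                # metres; spacing well below half a wavelength
M = np.array([[np.exp(-1j * k * x1), np.exp(1j * k * x1)],
              [np.exp(-1j * k * x2), np.exp(1j * k * x2)]])
A_est, B_est = np.linalg.solve(M, np.array([p(x1), p(x2)]))
```

Recovering B separately from A is what allows the secondary sources to target only the reflected component rather than the whole field.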
Ocean noise and marine mammals: A tutorial lecture
NASA Astrophysics Data System (ADS)
D'Spain, Gerald; Wartzok, Douglas
2004-10-01
The effect of man-made sound on marine mammals has been surrounded by controversy over the past decade. Much of this controversy stems from our lack of knowledge of the effects of noise on marine life. Ocean sound is produced during activities of great benefit to humans: commerce, exploration for energy reserves, national defense, and the study of the ocean environment itself. However, some recent strandings of marine mammals have been associated with the occurrence of human-generated sound. The documented increase of man-made sound in the ocean suggests the potential for more extensive though subtler effects than those observed in the mass strandings. The purpose of this tutorial is to present the scientific issues pertaining to ocean noise and marine mammals. Basic physics of sound in the ocean and long term trends of ocean sound will be presented. The biology of marine mammals, particularly their production, reception and use of sound in monitoring their environment, social interactions, and echolocation, will be reviewed. This background information sets the stage for understanding the effects of man-made sound on marine mammals. The extensive gaps in current knowledge with respect to marine mammal distribution and behavioral and physiological responses to sound will highlight research needs.
Captive Bottlenose Dolphins Do Discriminate Human-Made Sounds Both Underwater and in the Air
Lima, Alice; Sébilleau, Mélissa; Boye, Martin; Durand, Candice; Hausberger, Martine; Lemasson, Alban
2018-01-01
Bottlenose dolphins (Tursiops truncatus) spontaneously emit individual acoustic signals that identify them to group members. We tested whether these cetaceans could learn artificial individual sound cues played underwater and whether they would generalize this learning to airborne sounds. Dolphins are thought to perceive only underwater sounds and their training depends largely on visual signals. We investigated the behavioral responses of seven dolphins in a group to learned human-made individual sound cues, played underwater and in the air. Dolphins recognized their own sound cue after hearing it underwater as they immediately moved toward the source, whereas when it was airborne they gazed more at the source of their own sound cue but did not approach it. We hypothesize that they perhaps detected modifications of the sound induced by air or were confused by the novelty of the situation, but nevertheless recognized they were being “targeted.” They did not respond when hearing another group member’s cue in either situation. This study provides further evidence that dolphins respond to individual-specific sounds and that these marine mammals possess some capacity for processing airborne acoustic signals. PMID:29445350
NASA Astrophysics Data System (ADS)
Chen, Xiaol; Guo, Bei; Tuo, Jinliang; Zhou, Ruixin; Lu, Yang
2017-08-01
Noise reduction in household refrigerator compressors is receiving increasing attention. This paper establishes a sound field bounded by the compressor shell and the ISO 3744 standard field points. The acoustic transfer vectors (ATVs) of the sound field radiated by a refrigerator compressor shell were calculated and agree well with test results. The compressor shell surface was then divided into several parts. Based on the ATV approach, the sound pressure contribution of each part to the field points and its sound power contribution to the sound field were calculated. To characterize the noise radiation in the sound field, sound pressure cloud charts were analyzed and contribution curves of each part at different frequencies were obtained. The sound power contribution of each part at different frequencies was also analyzed to determine which parts contribute the most sound power. Through this acoustic contribution analysis, the parts of the compressor shell that radiate the most noise were identified. This paper provides a credible and effective approach to the structural optimization of refrigerator compressor shells, which is meaningful for noise and vibration reduction.
Sound pressure distribution within natural and artificial human ear canals: forward stimulation.
Ravicz, Michael E; Tao Cheng, Jeffrey; Rosowski, John J
2014-12-01
This work is part of a study of the interaction of sound pressure in the ear canal (EC) with tympanic membrane (TM) surface displacement. Sound pressures were measured with 0.5-2 mm spacing at three locations within the shortened natural EC or an artificial EC in human temporal bones: near the TM surface, within the tympanic ring plane, and in a plane transverse to the long axis of the EC. Sound pressure was also measured at 2-mm intervals along the long EC axis. The sound field is described well by the size and direction of planar sound pressure gradients, the location and orientation of standing-wave nodal lines, and the location of longitudinal standing waves along the EC axis. Standing-wave nodal lines perpendicular to the long EC axis are present on the TM surface above 11-16 kHz in the natural or artificial EC. The range of sound pressures was larger in the tympanic ring plane than at the TM surface or in the transverse EC plane. Longitudinal standing-wave patterns were stretched. The tympanic-ring sound field is a useful approximation of the TM sound field, and the artificial EC approximates the natural EC.
Perception of touch quality in piano tones.
Goebl, Werner; Bresin, Roberto; Fujinaga, Ichiro
2014-11-01
Both timbre and dynamics of isolated piano tones are determined exclusively by the speed with which the hammer hits the strings. This physical view has been challenged by pianists who emphasize the importance of the way the keyboard is touched. This article presents empirical evidence from two perception experiments showing that touch-dependent sound components make sounds with identical hammer velocities but produced with different touch forms clearly distinguishable. The first experiment focused on finger-key sounds: musicians could identify pressed and struck touches. When the finger-key sounds were removed from the sounds, the effect vanished, suggesting that these sounds were the primary identification cue. The second experiment looked at key-keyframe sounds that occur when the key reaches key-bottom. Key-bottom impact was identified from key motion measured by a computer-controlled piano. Musicians were able to discriminate between piano tones that contain a key-bottom sound from those that do not. However, this effect might be attributable to sounds associated with the mechanical components of the piano action. In addition to the demonstrated acoustical effects of different touch forms, visual and tactile modalities may play important roles during piano performance that influence the production and perception of musical expression on the piano.
Melbye, Hasse; Garcia-Marcos, Luis; Brand, Paul; Everard, Mark; Priftis, Kostas; Pasterkamp, Hans
2016-01-01
Background The European Respiratory Society (ERS) lung sounds repository contains 20 audiovisual recordings of children and adults. The present study aimed at determining the interobserver variation in the classification of sounds into detailed and broader categories of crackles and wheezes. Methods Recordings from 10 children and 10 adults were classified into 10 predefined sounds by 12 observers, 6 paediatricians and 6 doctors for adult patients. Multirater kappa (Fleiss' κ) was calculated for each of the 10 adventitious sounds and for combined categories of sounds. Results The majority of observers agreed on the presence of at least one adventitious sound in 17 cases. Poor to fair agreement (κ<0.40) was usually found for the detailed descriptions of the adventitious sounds, whereas moderate to good agreement was reached for the combined categories of crackles (κ=0.62) and wheezes (κ=0.59). The paediatricians did not reach better agreement on the child cases than the family physicians and specialists in adult medicine. Conclusions Descriptions of auscultation findings in broader terms were more reliably shared between observers compared to more detailed descriptions. PMID:27158515
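Fleiss' κ itself is straightforward to compute from a table of rating counts. A self-contained sketch, with invented toy data rather than the study's ratings:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an (items x categories) matrix of rating counts;
    every row must sum to the same number of raters."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                  # raters per item
    p_cat = counts.sum(axis=0) / counts.sum()  # overall category proportions
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    P_bar = P_i.mean()                         # mean observed agreement
    P_e = np.square(p_cat).sum()               # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Toy data: 4 recordings, 6 raters, 3 adventitious-sound categories.
ratings = [[6, 0, 0],
           [0, 6, 0],
           [3, 3, 0],
           [2, 2, 2]]
print(round(fleiss_kappa(ratings), 3))
```

Values near 0.4, as in the detailed-description condition of the study, sit at the boundary between fair and moderate agreement on the usual interpretation scales.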
Lugli, Marco; Fine, Michael L
2007-11-01
The most sensitive hearing and peak frequencies of courtship calls of the stream goby, Padogobius martensii, fall within a quiet window at around 100 Hz in the ambient noise spectrum. Acoustic pressure was previously measured although Padogobius likely responds to particle motion. In this study a combination pressure (p) and particle velocity (u) detector was utilized to describe ambient noise of the habitat, the characteristics of the goby's sounds and their attenuation with distance. The ambient noise (AN) spectrum is generally similar for p and u (including the quiet window at noisy locations), although the energy distribution of the u spectrum is shifted up by 50-100 Hz. The energy distribution of the goby's sounds is similar for p and u spectra of the Tonal sound, whereas the pulse-train sound exhibits larger p-u differences. Transmission loss was high for sound p and u: energy decays 6-10 dB per 10 cm, and the sound p/u ratio does not change with distance from the source in the nearfield. The measurement of particle velocity of stream AN and P. martensii sounds indicates that this species is well adapted to communicate acoustically in a complex noisy shallow-water environment.
Sound Fields in Complex Listening Environments
2011-01-01
The conditions of sound fields used in research, especially testing and fitting of hearing aids, are usually simplified or reduced to fundamental physical fields, such as the free or the diffuse sound field. The concepts of such ideal conditions are easily introduced in theoretical and experimental investigations and in models for directional microphones, for example. When it comes to real-world application of hearing aids, however, the field conditions are more complex with regard to specific stationary and transient properties in room transfer functions and the corresponding impulse responses and binaural parameters. Sound fields can be categorized into outdoor rural and urban and indoor environments. Furthermore, sound fields in closed spaces of various sizes and shapes and in situations of transport in vehicles, trains, and aircraft are compared with regard to the binaural signals. In laboratory tests, sources of uncertainty include individual differences in binaural cues and insufficiently controlled sound field conditions. Furthermore, laboratory sound fields do not cover the variety of complex sound environments. Spatial audio formats such as higher-order ambisonics are candidates for sound field references not only in room acoustics and audio engineering but also in audiology. PMID:21676999
Inside-in, alternative paradigms for sound spatialization
NASA Astrophysics Data System (ADS)
Bahn, Curtis; Moore, Stephan
2003-04-01
Arrays of widely spaced mono-directional loudspeakers (P.A.-style stereo configurations or ``outside-in'' surround-sound systems) have long provided the dominant paradigms for electronic sound diffusion. So prevalent are these models that alternatives have largely been ignored, and electronic sound, regardless of musical aesthetic, has come to be inseparably associated with single-channel speakers or headphones. We recognize the value of these familiar paradigms, but believe that electronic sound can and should have many alternative, idiosyncratic voices. Through the design and construction of unique sound diffusion structures, one can reinvent the nature of electronic sound; when allied with new sensor technologies, these structures offer alternative modes of interaction with techniques of sonic computation. This paper describes several recent applications of spherical speakers (multichannel, outward-radiating geodesic speaker arrays) and Sensor-Speaker-Arrays (SenSAs: combinations of various sensor devices with outward-radiating multi-channel speaker arrays). This presentation introduces the development of four generations of spherical speakers (over a hundred individual speakers of various configurations) and their use in many different musical situations including live performance, recording, and sound installation. We describe the design and construction of these systems and, more generally, the new ``voices'' they give to electronic sound.
Aquatic Acoustic Metrics Interface Utility for Underwater Sound Monitoring and Analysis
Ren, Huiying; Halvorsen, Michele B.; Deng, Zhiqun Daniel; Carlson, Thomas J.
2012-01-01
Fishes and marine mammals may suffer a range of potential effects from exposure to intense underwater sound generated by anthropogenic activities such as pile driving, shipping, sonars, and underwater blasting. Several underwater sound recording (USR) devices have been built to acquire samples of the underwater sound generated by anthropogenic activities. Software becomes indispensable for processing and analyzing the audio files recorded by these USRs. In this paper, we provide a detailed description of a new software package, the Aquatic Acoustic Metrics Interface (AAMI), specifically designed for analysis of underwater sound recordings to provide data in metrics that facilitate evaluation of the potential impacts of the sound on aquatic animals. In addition to the basic functions, such as loading and editing audio files recorded by USRs and batch processing of sound files, the software utilizes recording system calibration data to compute important parameters in physical units. The software also facilitates comparison of the noise sound sample metrics with biological measures such as audiograms of the sensitivity of aquatic animals to the sound, integrating various components into a single analytical frame. The features of the AAMI software are discussed, and several case studies are presented to illustrate its functionality. PMID:22969353
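The exact metric definitions used by AAMI are not given here, but a minimal sketch of the standard underwater quantities such software typically reports (rms SPL, SEL and peak level, re 1 µPa) might look like:

```python
import numpy as np

def sound_metrics(pressure_pa, fs):
    """rms SPL, SEL and peak level for an underwater pressure time
    series, in dB re 1 uPa (the underwater reference convention)."""
    p_ref = 1e-6                               # 1 uPa expressed in Pa
    rms = np.sqrt(np.mean(pressure_pa ** 2))
    spl = 20 * np.log10(rms / p_ref)           # dB re 1 uPa
    sel = 10 * np.log10(np.sum(pressure_pa ** 2) / fs / p_ref ** 2)
    peak = 20 * np.log10(np.max(np.abs(pressure_pa)) / p_ref)
    return spl, sel, peak

# 1 s of a 1 kHz tone with 1 Pa amplitude: rms SPL is about 117 dB re 1 uPa,
# and for a 1 s signal the SEL numerically equals the rms SPL.
fs = 8000
t = np.arange(fs) / fs
spl, sel, peak = sound_metrics(np.sin(2 * np.pi * 1000 * t), fs)
```

Applying a recording system's calibration (converting raw samples to pascals) before computing these quantities is what the abstract refers to as reporting "important parameters in physical units".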
A quasi two-dimensional model for sound attenuation by the sonic crystals.
Gupta, A; Lim, K M; Chew, C H
2012-10-01
Sound propagation in the sonic crystal (SC) along the symmetry direction is modeled by sound propagation through a variable cross-sectional area waveguide. A one-dimensional (1D) model based on the Webster horn equation is used to obtain sound attenuation through the SC. This model is compared with two-dimensional (2D) finite element simulation and experiment. The 1D model prediction of the frequency band for sound attenuation is found to be shifted by around 500 Hz with respect to the finite element simulation. This shift is due to the assumptions involved in the 1D model. A quasi 2D model is developed for sound propagation through the waveguide. Sound pressure profiles from the quasi 2D model are compared with the finite element simulation and the 1D model. The result shows significant improvement over the 1D model and is in good agreement with the 2D finite element simulation. Finally, sound attenuation through the SC is computed based on the quasi 2D model and is found to be in good agreement with the finite element simulation. The quasi 2D model provides an improved method to calculate sound attenuation through the SC.
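A 1D variable-area waveguide of the kind the paper models can be sketched with plane-wave transfer matrices (a standard textbook method, not the paper's Webster-horn or quasi-2D formulation; the geometry below is invented). A periodic area modulation then shows band-limited attenuation analogous to a sonic crystal's band gap:

```python
import numpy as np

rho, c = 1.21, 343.0                   # air density (kg/m^3), sound speed (m/s)

def segment_matrix(k, L, S):
    """Plane-wave transfer matrix of a constant-area duct segment,
    relating (pressure, volume velocity) at its two ends."""
    Z = rho * c / S
    return np.array([[np.cos(k * L), 1j * Z * np.sin(k * L)],
                     [1j * np.sin(k * L) / Z, np.cos(k * L)]])

def transmission_loss(f, areas, lengths, S0):
    """TL (dB) through the chained segments between ducts of area S0."""
    k = 2 * np.pi * f / c
    T = np.eye(2, dtype=complex)
    for S, L in zip(areas, lengths):
        T = T @ segment_matrix(k, L, S)
    Z0 = rho * c / S0
    return 20 * np.log10(abs(T[0, 0] + T[0, 1] / Z0
                             + T[1, 0] * Z0 + T[1, 1]) / 2)

# Hypothetical periodic narrow/wide modulation, 8 periods of 0.08 m:
# strong attenuation appears near the Bragg frequency c/(2*0.08) ~ 2.1 kHz,
# but not at low frequency.
areas = [1e-2, 2e-3] * 8               # alternating cross-sections (m^2)
lengths = [0.04, 0.04] * 8             # segment lengths (m)
tl_low = transmission_loss(200.0, areas, lengths, 1e-2)
tl_gap = transmission_loss(2100.0, areas, lengths, 1e-2)
```

This lumped-segment picture is the crudest member of the model family the paper discusses; the Webster horn equation refines it with a smoothly varying area, and the quasi-2D model adds transverse variation.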
The Role of Soundscape in Nature-Based Rehabilitation: A Patient Perspective.
Cerwén, Gunnar; Pedersen, Eja; Pálsdóttir, Anna-María
2016-12-11
Nature-based rehabilitation (NBR) has convincing support in research, yet the underlying mechanisms are not fully understood. The present study sought to increase understanding of the role of soundscapes in NBR, an aspect that has received little attention thus far. Transcribed interviews with 59 patients suffering from stress-related mental disorders and undergoing a 12-week therapy programme in the rehabilitation garden in Alnarp, Sweden, were analysed using Interpretative Phenomenology Analysis (IPA). Described sounds were categorised as natural, technological or human. The results showed that patients frequently referred to natural sounds as being part of a pleasant and "quiet" experience that supported recovery and induced "soft fascination". Technological sounds were experienced as disturbing, while perception of human sounds varied depending on loudness and the social context. The study further uncovered how sound influenced patients' behaviour and experiences in the garden, through examination of three cross-theme dimensions that materialised in the study: sound in relation to overall perception, sound in relation to garden usage, and increased susceptibility to sound. The findings are discussed in relation to NBR; the need for a more nuanced understanding of susceptibility to sound among people suffering from mental fatigue was identified and design considerations for future rehabilitation gardens were formulated.
The influence of crowd density on the sound environment of commercial pedestrian streets.
Meng, Qi; Kang, Jian
2015-04-01
Commercial pedestrian streets are very common in China and Europe, with many situated in historic or cultural centres. The environments of these streets are important, including their sound environments. The objective of this study is to explore the relationships between the crowd density and the sound environments of commercial pedestrian streets. On-site measurements were performed at the case study site in Harbin, China, and a questionnaire was administered. The sound pressure measurements showed that the crowd density has an insignificant effect on sound pressure below 0.05 persons/m2, whereas when the crowd density is greater than 0.05 persons/m2, the sound pressure increases with crowd density. The sound sources were analysed, showing that several typical sound sources, such as traffic noise, can be masked by the sounds resulting from dense crowds. The acoustic analysis showed that crowd densities outside the range of 0.10 to 0.25 persons/m2 exhibited lower acoustic comfort evaluation scores. In terms of audiovisual characteristics, the subjective loudness increases with greater crowd density, while the acoustic comfort decreases. The results for an indoor underground shopping street are also presented for comparison.
21 CFR 870.2860 - Heart sound transducer.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Heart sound transducer. 870.2860 Section 870.2860...) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Monitoring Devices § 870.2860 Heart sound transducer. (a) Identification. A heart sound transducer is an external transducer that exhibits a change in...
78 FR 18616 - Prince William Sound Regional Citizens' Advisory Council (PWSRCAC) Charter Renewal
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-27
... DEPARTMENT OF HOMELAND SECURITY Coast Guard [Docket No. USCG-2013-0088] Prince William Sound... the Prince William Sound Regional Citizens' Advisory Council (PWSRCAC) as an alternative voluntary advisory group for Prince William Sound, Alaska. This certification allows the PWSRCAC to monitor the...
75 FR 16159 - Prince William Sound Regional Citizens' Advisory Council (PWSRCAC) Charter Renewal
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-31
... DEPARTMENT OF HOMELAND SECURITY Coast Guard [USCG-2010-0121] Prince William Sound Regional... the Prince William Sound Regional Citizens' Advisory Council (PWSRCAC) as an alternative voluntary advisory group for Prince William Sound, Alaska. This certification allows the PWSRCAC to monitor the...
75 FR 35651 - Safety Zone, Long Island Sound Annual Fireworks Displays
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-23
... Zone, Long Island Sound Annual Fireworks Displays AGENCY: Coast Guard, DHS. ACTION: Notice of... thirteen fireworks displays taking place throughout the Sector Long Island Sound Captain of the Port Zone... Sector Long Island Sound (203) 468 4454 [email protected] . SUPPLEMENTARY INFORMATION: The Coast...
76 FR 24506 - Prince William Sound Regional Citizens' Advisory Council (PWSRCAC) Charter Renewal
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-02
... DEPARTMENT OF HOMELAND SECURITY Coast Guard [Docket No. USCG-2011-0142] Prince William Sound... the Prince William Sound Regional Citizens' Advisory Council (PWSRCAC) as an alternative voluntary advisory group for Prince William Sound, Alaska. This certification allows the PWSRCAC to monitor the...
21 CFR 870.2860 - Heart sound transducer.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Heart sound transducer. 870.2860 Section 870.2860...) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Monitoring Devices § 870.2860 Heart sound transducer. (a) Identification. A heart sound transducer is an external transducer that exhibits a change in...
21 CFR 870.2860 - Heart sound transducer.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Heart sound transducer. 870.2860 Section 870.2860...) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Monitoring Devices § 870.2860 Heart sound transducer. (a) Identification. A heart sound transducer is an external transducer that exhibits a change in...
21 CFR 870.2860 - Heart sound transducer.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Heart sound transducer. 870.2860 Section 870.2860...) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Monitoring Devices § 870.2860 Heart sound transducer. (a) Identification. A heart sound transducer is an external transducer that exhibits a change in...
21 CFR 870.2860 - Heart sound transducer.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Heart sound transducer. 870.2860 Section 870.2860...) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Monitoring Devices § 870.2860 Heart sound transducer. (a) Identification. A heart sound transducer is an external transducer that exhibits a change in...
Spectral analysis of bowel sounds in intestinal obstruction using an electronic stethoscope.
Ching, Siok Siong; Tan, Yih Kai
2012-09-07
To determine the value of bowel sounds analysis using an electronic stethoscope to support a clinical diagnosis of intestinal obstruction. Subjects were patients who presented with a diagnosis of possible intestinal obstruction based on symptoms, signs, and radiological findings. A 3M™ Littmann(®) Model 4100 electronic stethoscope was used in this study. With the patients lying supine, six 8-second recordings of bowel sounds were taken from each patient from the lower abdomen. The recordings were analysed for sound duration, sound-to-sound interval, dominant frequency, and peak frequency. Clinical and radiological data were reviewed and the patients were classified as having either acute, subacute, or no bowel obstruction. Comparison of bowel sound characteristics was made between these subgroups of patients. In the presence of an obstruction, the site of obstruction was identified and bowel calibre was also measured to correlate with bowel sounds. A total of 71 patients were studied during the period July 2009 to January 2011. Forty patients had acute bowel obstruction (27 small bowel obstruction and 13 large bowel obstruction), 11 had subacute bowel obstruction (eight in the small bowel and three in large bowel) and 20 had no bowel obstruction (diagnoses of other conditions were made). Twenty-five patients received surgical intervention (35.2%) during the same admission for acute abdominal conditions. A total of 426 recordings were made and 420 recordings were used for analysis. There was no significant difference in sound-to-sound interval, dominant frequency, and peak frequency among patients with acute bowel obstruction, subacute bowel obstruction, and no bowel obstruction. In acute large bowel obstruction, the sound duration was significantly longer (median 0.81 s vs 0.55 s, P = 0.021) and the dominant frequency was significantly higher (median 440 Hz vs 288 Hz, P = 0.003) when compared to acute small bowel obstruction. 
No significant difference was seen between acute large bowel obstruction and large bowel pseudo-obstruction. For patients with small bowel obstruction, the sound-to-sound interval was significantly longer in those who subsequently underwent surgery compared with those treated non-operatively (median 1.29 s vs 0.63 s, P < 0.001). There was no correlation between bowel calibre and bowel sound characteristics in both acute small bowel obstruction and acute large bowel obstruction. Auscultation of bowel sounds is non-specific for diagnosing bowel obstruction. Differences in sound characteristics between large bowel and small bowel obstruction may help determine the likely site of obstruction.
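The "dominant frequency" feature that separated large from small bowel obstruction in this study can be approximated from a recording with a single FFT. The sketch below is an illustrative stand-in (assuming NumPy and a mono signal array), not the actual analysis software used with the electronic stethoscope:

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) with the largest spectral magnitude.

    A simple stand-in for the 'dominant frequency' feature described in
    the abstract; the windowing choice here is illustrative.
    """
    windowed = signal * np.hanning(len(signal))   # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[0] = 0.0                             # ignore the DC component
    return freqs[np.argmax(spectrum)]

# Synthetic check: a 440 Hz tone in an 8-second recording at 8 kHz
fs = 8000
t = np.arange(0, 8.0, 1.0 / fs)
tone = np.sin(2 * np.pi * 440 * t)
print(round(dominant_frequency(tone, fs)))  # → 440
```

With an 8-second window the frequency resolution is 0.125 Hz, more than enough to distinguish the reported medians of 288 Hz and 440 Hz.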
Spectral analysis of bowel sounds in intestinal obstruction using an electronic stethoscope
Ching, Siok Siong; Tan, Yih Kai
2012-01-01
AIM: To determine the value of bowel sounds analysis using an electronic stethoscope to support a clinical diagnosis of intestinal obstruction. METHODS: Subjects were patients who presented with a diagnosis of possible intestinal obstruction based on symptoms, signs, and radiological findings. A 3M™ Littmann® Model 4100 electronic stethoscope was used in this study. With the patients lying supine, six 8-second recordings of bowel sounds were taken from each patient from the lower abdomen. The recordings were analysed for sound duration, sound-to-sound interval, dominant frequency, and peak frequency. Clinical and radiological data were reviewed and the patients were classified as having either acute, subacute, or no bowel obstruction. Comparison of bowel sound characteristics was made between these subgroups of patients. In the presence of an obstruction, the site of obstruction was identified and bowel calibre was also measured to correlate with bowel sounds. RESULTS: A total of 71 patients were studied during the period July 2009 to January 2011. Forty patients had acute bowel obstruction (27 small bowel obstruction and 13 large bowel obstruction), 11 had subacute bowel obstruction (eight in the small bowel and three in large bowel) and 20 had no bowel obstruction (diagnoses of other conditions were made). Twenty-five patients received surgical intervention (35.2%) during the same admission for acute abdominal conditions. A total of 426 recordings were made and 420 recordings were used for analysis. There was no significant difference in sound-to-sound interval, dominant frequency, and peak frequency among patients with acute bowel obstruction, subacute bowel obstruction, and no bowel obstruction. In acute large bowel obstruction, the sound duration was significantly longer (median 0.81 s vs 0.55 s, P = 0.021) and the dominant frequency was significantly higher (median 440 Hz vs 288 Hz, P = 0.003) when compared to acute small bowel obstruction. 
No significant difference was seen between acute large bowel obstruction and large bowel pseudo-obstruction. For patients with small bowel obstruction, the sound-to-sound interval was significantly longer in those who subsequently underwent surgery compared with those treated non-operatively (median 1.29 s vs 0.63 s, P < 0.001). There was no correlation between bowel calibre and bowel sound characteristics in both acute small bowel obstruction and acute large bowel obstruction. CONCLUSION: Auscultation of bowel sounds is non-specific for diagnosing bowel obstruction. Differences in sound characteristics between large bowel and small bowel obstruction may help determine the likely site of obstruction. PMID:22969233
Doubé, Wendy; Carding, Paul; Flanagan, Kieran; Kaufman, Jordy; Armitage, Hannah
2018-01-01
Children with speech sound disorders benefit from feedback about the accuracy of sounds they make. Home practice can reinforce feedback received from speech pathologists. Games in mobile device applications could encourage home practice, but those currently available are of limited value because they are unlikely to elaborate "Correct"/"Incorrect" feedback with information that can assist in improving the accuracy of the sound. This protocol proposes a "Wizard of Oz" experiment that aims to provide evidence for the provision of effective multimedia feedback for speech sound development. Children with two common speech sound disorders will play a game on a mobile device and make speech sounds when prompted by the game. A human "Wizard" will provide feedback on the accuracy of the sound but the children will perceive the feedback as coming from the game. Groups of 30 young children will be randomly allocated to one of five conditions: four types of feedback and a control which does not play the game. The results of this experiment will inform not only speech sound therapy, but also other types of language learning, both in general, and in multimedia applications. This experiment is a cost-effective precursor to the development of a mobile application that employs pedagogically and clinically sound processes for speech development in young children.
Doubé, Wendy; Carding, Paul; Flanagan, Kieran; Kaufman, Jordy; Armitage, Hannah
2018-01-01
Children with speech sound disorders benefit from feedback about the accuracy of sounds they make. Home practice can reinforce feedback received from speech pathologists. Games in mobile device applications could encourage home practice, but those currently available are of limited value because they are unlikely to elaborate “Correct”/”Incorrect” feedback with information that can assist in improving the accuracy of the sound. This protocol proposes a “Wizard of Oz” experiment that aims to provide evidence for the provision of effective multimedia feedback for speech sound development. Children with two common speech sound disorders will play a game on a mobile device and make speech sounds when prompted by the game. A human “Wizard” will provide feedback on the accuracy of the sound but the children will perceive the feedback as coming from the game. Groups of 30 young children will be randomly allocated to one of five conditions: four types of feedback and a control which does not play the game. The results of this experiment will inform not only speech sound therapy, but also other types of language learning, both in general, and in multimedia applications. This experiment is a cost-effective precursor to the development of a mobile application that employs pedagogically and clinically sound processes for speech development in young children. PMID:29674986
Design and Implementation of Sound Searching Robots in Wireless Sensor Networks
Han, Lianfu; Shen, Zhengguang; Fu, Changfeng; Liu, Chao
2016-01-01
A sound target-searching robot system which includes a 4-channel microphone array for sound collection, a magneto-resistive sensor for declination measurement, and a wireless sensor network (WSN) for exchanging information is described. It has an embedded sound signal enhancement, recognition and location method, and a sound searching strategy based on a digital signal processor (DSP). As the wireless network nodes, three robots form the WSN with a personal computer (PC) in order to search for three different sound targets in task-oriented collaboration. An improved spectral subtraction method is used for noise reduction. The Mel-frequency cepstral coefficient (MFCC) is extracted as the feature of the audio signal. Based on the K-nearest neighbor classification method, we match the trained feature template to recognize the sound signal type. This paper utilizes an improved generalized cross correlation method to estimate the time delay of arrival (TDOA), and then employs spherical interpolation for sound location according to the TDOA and the geometrical position of the microphone array. A new mapping has been proposed to direct the motor to search sound targets flexibly. As the sink node, the PC receives and displays the results processed in the WSN, and it also has the ultimate power to make decisions on the received results in order to improve their accuracy. The experimental results show that the designed three-robot system implements the sound target searching function without collisions and performs well. PMID:27657088
Design of Alarm Sound of Home Care Equipment Based on Age-related Auditory Sense
NASA Astrophysics Data System (ADS)
Shibano, Jun-Ichi; Tadano, Shigeru; Kaneko, Hirotaka
A wide variety of home care equipment has been developed to support the independent lifestyle and care of elderly persons. Almost all of this equipment has an alarm designed to alert a caregiver or to sound a warning in case of an emergency. Because human senses physiologically weaken and deteriorate with age, each alarm sound must be designed to account for the full range of elderly persons' hearing loss. Since the alarms are usually heard indoors, it is also necessary to evaluate the relationship between the basic characteristics of the sounds and the layout of the living area. In this study, we investigated the sounds of various alarms of home care equipment based on both the age-related hearing characteristics of elderly persons and the indoor propagation properties of the sounds. As a result, it was determined that the hearing of elderly persons is best attuned to sounds with frequencies from 700 Hz to 1 kHz, and that the indoor absorption ratio of sound is smallest at 1 kHz. Therefore, a frequency of 1 kHz is suitable for the alarm sound of home care equipment. A flow chart for designing the alarm sound of home care equipment was proposed, taking into account the extent of age-related deterioration of the auditory sense.
Biological Effect of Audible Sound Control on Mung Bean (Vigna radiate) Sprout
Cai, W.; He, H.; Zhu, S.; Wang, N.
2014-01-01
Audible sound (20–20000 Hz) is ubiquitous in the natural world. However, the interaction between audible sound and the growth of plants is usually neglected in biophysics research, and little effort has been devoted to studying the relationship between plants and audible sound. In this work, the effect of audible sound on the germination and growth of mung bean (Vigna radiate) was studied under laboratory conditions. Audible sound in the frequency ranges 1000–1500 Hz, 1500–2000 Hz, and 2000–2500 Hz, at intensities of 80 dB(A), 90 dB(A), and 100 dB(A), was used to stimulate mung bean for 72 hours. The growth of mung bean was evaluated in terms of mean germination time, total length, and total fresh weight. Experimental results indicated that the sound wave can reduce the germination period of mung bean, and that mung bean treated with sound of intensity around 90 dB and frequency around 2000 Hz showed a significant increase in growth. Audible sound treatment can promote the growth of mung bean differently for distinct frequencies and intensities. The study provides a way to understand the effects and rules of sound fields on plant growth and a new way to improve the production of mung bean. PMID:25170517
The sound intensity and characteristics of variable-pitch pulse oximeters.
Yamanaka, Hiroo; Haruna, Junichi; Mashimo, Takashi; Akita, Takeshi; Kinouchi, Keiko
2008-06-01
Various studies worldwide have found that sound levels in hospitals significantly exceed the World Health Organization (WHO) guidelines, and that this noise is associated with audible signals from various medical devices. The pulse oximeter is now widely used in health care; however the health effects associated with the noise from this equipment remain largely unclarified. Here, we analyzed the sounds of variable-pitch pulse oximeters, and discussed the possible associated risk of sleep disturbance, annoyance, and hearing loss. The Nellcor N 595 and Masimo SET Radical pulse oximeters were measured for equivalent continuous A-weighted sound pressure levels (L(Aeq)), loudness levels, and loudness. Pulse beep pitches were also identified using Fast Fourier Transform (FFT) analysis and compared with musical pitches as controls. Almost all alarm sounds and pulse beeps from the instruments tested exceeded 30 dBA, a level that may induce sleep disturbance and annoyance. Several alarm sounds emitted by the pulse oximeters exceeded 70 dBA, which is known to induce hearing loss. The loudness of the alarm sound of each pulse oximeter did not change in proportion to the sound volume level. The pitch of each pulse beep did not correspond to musical pitch levels. The results indicate that sounds from pulse oximeters pose a potential risk of not only sleep disturbance and annoyance but also hearing loss, and that these sounds are unnatural for human auditory perception.
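The comparison of pulse-beep pitches with musical pitches reported above can be made concrete with equal-tempered arithmetic. A minimal sketch, assuming the A4 = 440 Hz reference; the function and note-naming helper are illustrative, not taken from the study:

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_pitch(freq_hz):
    """Nearest equal-tempered pitch (A4 = 440 Hz) and the deviation in cents.

    A deviation far from 0 cents indicates a beep pitch that does not
    correspond to a musical pitch level.
    """
    midi = 69 + 12 * math.log2(freq_hz / 440.0)   # fractional MIDI number
    nearest = round(midi)
    cents = 100 * (midi - nearest)                # 100 cents per semitone
    name = NOTE_NAMES[nearest % 12] + str(nearest // 12 - 1)
    return name, cents

print(nearest_pitch(440.0))   # → ('A4', 0.0)
```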
The acoustic performance of double-skin facades: A design support tool for architects
NASA Astrophysics Data System (ADS)
Batungbakal, Aireen
This study assesses the measurement of sound in the urban environment and validates the influence of glass facade components in reducing sound transmission to the indoor environment. Noise is among the most frequently reported issues affecting workspaces, and increased awareness of the need to minimize it has led building designers to reconsider the design of building envelopes and their site environments. Outdoor sound conditions, such as traffic noise, challenge designers to accurately estimate the capability of glass facades to achieve appropriate indoor sound quality. To characterize the density of the urban environment, field tests captured existing sound levels in areas of high commercial development, employment, and traffic activity, establishing a baseline for sound levels common in urban work areas. Data on the direct sound transmission loss of glass facades, simulated with the sound insulation software INSUL, are used as an informative tool correlating the response of glass facade components to the existing outdoor sound levels of a project site in order to achieve desired indoor sound levels. The study thereby links the validation of the acoustic performance of glass facades early in a project's design, from controlled settings such as field testing and simulation, to project completion. Results from the facade simulations and facade comparison support the conclusion that acoustic comfort is not limited to a single solution, but admits multiple design options responsive to the environment.
Park, H K; Bradley, J S
2009-07-01
This paper reports the results of an evaluation of the merits of standard airborne sound insulation measures with respect to subjective ratings of the annoyance and loudness of transmitted sounds. Subjects listened to speech and music sounds modified to represent transmission through 20 different walls with sound transmission class (STC) ratings from 34 to 58. A number of variations in the standard measures were also considered. These included variations in the 8-dB rule for the maximum allowed deficiency in the STC measure as well as variations in the standard 32-dB total allowed deficiency. Several spectrum adaptation terms were considered in combination with weighted sound reduction index (R(w)) values as well as modifications to the range of included frequencies in the standard rating contour. A STC measure without an 8-dB rule and an R(w) rating with a new spectrum adaptation term were better predictors of annoyance and loudness ratings of speech sounds. R(w) ratings with one of two modified C(tr) spectrum adaptation terms were better predictors of annoyance and loudness ratings of transmitted music sounds. Although some measures were much better predictors of responses to one type of sound than were the standard STC and R(w) values, no measure was remarkably improved for predicting annoyance and loudness ratings of both music and speech sounds.
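The deficiency rules that the study varies can be made concrete with a small rating routine. The sketch below follows the standard STC fitting procedure over the 16 one-third-octave bands from 125 Hz to 4 kHz (32 dB total deficiency, 8 dB single-band deficiency); the function name and the option for dropping the 8-dB rule are illustrative assumptions, not the authors' code:

```python
# Reference contour shape (dB relative to the 500 Hz value) for the 16
# one-third-octave bands from 125 Hz to 4 kHz, as in ASTM E413.
CONTOUR = [-16, -13, -10, -7, -4, -1, 0, 1, 2, 3, 4, 4, 4, 4, 4, 4]

def stc(tl, max_single=8, max_total=32):
    """Highest rating whose shifted contour the measured transmission
    loss (tl, 16 values in dB) satisfies.

    Passing max_single=None drops the 8-dB rule, one of the variants
    examined in the study.
    """
    for rating in range(150, -1, -1):
        deficiencies = [max(0, rating + c - t) for c, t in zip(CONTOUR, tl)]
        if sum(deficiencies) <= max_total and \
           (max_single is None or max(deficiencies) <= max_single):
            return rating
    return None

flat = [50] * 16                 # uniform 50 dB transmission loss
dip = [50] * 15 + [40]           # one weak high-frequency band
print(stc(flat))                             # → 50
print(stc(dip), stc(dip, max_single=None))   # → 44 49
```

The dip example shows why the 8-dB rule matters: a single weak band caps the rating at 44, whereas the total-deficiency rule alone would allow 49.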
Moderate acoustic changes can disrupt the sleep of very preterm infants in their incubators.
Kuhn, Pierre; Zores, Claire; Langlet, Claire; Escande, Benoît; Astruc, Dominique; Dufour, André
2013-10-01
To evaluate the impact of moderate noise on the sleep of very preterm infants (VPI). Observational study of 26 VPI of 26-31 weeks' gestation, with prospective measurements of sound pressure level and concomitant video records. Sound peaks were identified and classified according to their signal-to-noise ratio (SNR) above background noise. Prechtl's arousal states during sound peaks were assessed by two observers blinded to the purpose of the study. Changes in sleep/arousal states following sound peaks were compared with spontaneous changes during randomly selected periods without sound peaks. We identified 598 isolated sound peaks (5 ≤ SNR < 10 dBA (A-weighted, slow response), n = 518; 10 ≤ SNR < 15 dBA, n = 80) during sleep. Awakenings were observed during 33.8% (95% CI, 24-43.7%) of exposures to sound peaks of 5-10 dBA SNR and 39.7% (95% CI, 26-53.3%) of exposures to sound peaks of SNR 10-15 dBA, but only 11.7% (95% CI, 6.2-17.1%) of control periods. The proportions of awakenings following sound peaks were higher than the proportions of arousals during control periods (p < 0.005). Moderate acoustic changes can disrupt the sleep of VPI, and efficient sound abatement measures are needed. ©2013 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
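Classifying sound peaks into SNR bands above background, as in this study, can be sketched as follows. Estimating the background as the median of the recorded levels is an assumption made here for illustration; the study measured background noise directly:

```python
import statistics

def classify_peaks(levels_dba, background=None):
    """Count samples that rise above background, grouped into the two
    SNR bands used in the abstract (5-10 dBA and 10-15 dBA).

    If no background level is supplied, the median of the recording is
    used as a rough estimate (an assumption for this sketch).
    """
    if background is None:
        background = statistics.median(levels_dba)
    bands = {"5-10 dBA": 0, "10-15 dBA": 0}
    for level in levels_dba:
        snr = level - background
        if 5 <= snr < 10:
            bands["5-10 dBA"] += 1
        elif 10 <= snr < 15:
            bands["10-15 dBA"] += 1
    return bands

# Ten background samples at 50 dBA plus two peaks
levels = [50] * 10 + [56, 62]
print(classify_peaks(levels))   # → {'5-10 dBA': 1, '10-15 dBA': 1}
```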
NASA Astrophysics Data System (ADS)
Su, Guoshao; Shi, Yanjiong; Feng, Xiating; Jiang, Jianqing; Zhang, Jie; Jiang, Quan
2018-02-01
Rockbursts are markedly characterized by the ejection of rock fragments from host rocks at certain speeds. The rockburst process is always accompanied by acoustic signals that include acoustic emissions (AE) and sounds. A deep insight into the evolutionary features of AE and sound signals is important to improve the accuracy of rockburst prediction. To investigate the evolutionary features of AE and sound signals, rockburst tests on granite rock specimens under true-triaxial loading conditions were performed using an improved rockburst testing system, and the AE and sounds during rockburst development were recorded and analyzed. The results show that the evolutionary features of the AE and sound signals were obvious and similar. On the eve of a rockburst, a "quiescent period" could be observed in both the evolutionary process of the AE hits and the sound waveform. Furthermore, the time-dependent fractal dimensions of the AE hits and sound amplitude both showed a tendency to continuously decrease on the eve of the rockbursts. In addition, on the eve of the rockbursts, the main frequency of the AE and sound signals both showed decreasing trends, and the frequency spectrum distributions were both characterized by low amplitudes, wide frequency bands and multiple peak shapes. Thus, the evolutionary features of sound signals on the eve of rockbursts, as well as that of AE signals, can be used as beneficial information for rockburst prediction.
Sound pressure distribution and power flow within the gerbil ear canal from 100 Hz to 80 kHz
Ravicz, Michael E.; Olson, Elizabeth S.; Rosowski, John J.
2008-01-01
Sound pressure was mapped in the bony ear canal of gerbils during closed-field sound stimulation at frequencies from 0.1 to 80 kHz. A 1.27-mm-diam probe-tube microphone or a 0.17-mm-diam fiber-optic miniature microphone was positioned along approximately longitudinal trajectories within the 2.3-mm-diam ear canal. Substantial spatial variations in sound pressure, sharp minima in magnitude, and half-cycle phase changes occurred at frequencies >30 kHz. The sound frequencies of these transitions increased with decreasing distance from the tympanic membrane (TM). Sound pressure measured orthogonally across the surface of the TM showed only small variations at frequencies below 60 kHz. Hence, the ear canal sound field can be described fairly well as a one-dimensional standing wave pattern. Ear-canal power reflectance estimated from longitudinal spatial variations was roughly constant at 0.2–0.5 at frequencies between 30 and 45 kHz. In contrast, reflectance increased at higher frequencies to at least 0.8 above 60 kHz. Sound pressure was also mapped in a microphone-terminated uniform tube—an “artificial ear.” Comparison with ear canal sound fields suggests that an artificial ear or “artificial cavity calibration” technique may underestimate the in situ sound pressure by 5–15 dB between 40 and 60 kHz. PMID:17902852
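Estimating power reflectance from a longitudinal standing-wave pattern can be illustrated with the textbook standing-wave-ratio relation. This is a simplified one-dimensional sketch under the paper's own observation that the canal behaves like a one-dimensional standing wave, not the authors' actual estimation procedure:

```python
def power_reflectance(p_max, p_min):
    """Power reflectance from standing-wave pressure extrema.

    For a one-dimensional standing wave, SWR = p_max / p_min (linear
    pressure amplitudes), |R| = (SWR - 1) / (SWR + 1), and the power
    reflectance is |R|**2.
    """
    swr = p_max / p_min
    r = (swr - 1) / (swr + 1)
    return r ** 2

# A 3:1 ratio between pressure maxima and minima along the canal
print(power_reflectance(3.0, 1.0))   # → 0.25
```

A power reflectance of 0.25 sits inside the 0.2-0.5 range the study reports for 30-45 kHz; the sharper the pressure minima, the closer the reflectance approaches 1.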
Knauert, Melissa; Jeon, Sangchoon; Murphy, Terrence E.; Yaggi, H. Klar; Pisani, Margaret A.; Redeker, Nancy S.
2016-01-01
Purpose: Sound levels in the intensive care unit (ICU) are universally elevated and are believed to contribute to sleep and circadian disruption. The purpose of this study is to compare overnight ICU sound levels and peak occurrence on A- versus C-weighted scales. Materials and Methods: This was a prospective observational study of overnight sound levels in 59 medical ICU patient rooms. Sound level was recorded every 10 seconds on A- and C-weighted decibel scales. Equivalent sound level (Leq) and sound peaks were reported for full and partial night periods. Results: The overnight A-weighted Leq of 53.6 dBA was well above World Health Organization (WHO) recommendations; overnight C-weighted Leq was 63.1 dBC (no WHO recommendations). Peak sound occurrence ranged from 1.8 to 23.3 times per hour. Illness severity, mechanical ventilation and delirium were not associated with Leq or peak occurrence. Leq and peak measures for A- and C-weighted decibel scales were significantly different from each other. Conclusions: Sound levels in the medical ICU are high throughout the night. Patient factors were not associated with Leq or peak occurrence. Significant discordance between A- and C-weighted values suggests that low frequency sound is a meaningful factor in the medical ICU environment. PMID:27546739
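The equivalent sound level Leq reported above is an energy average of the periodic samples, not an arithmetic mean of decibel values. A minimal sketch of the standard formula (the function name is illustrative):

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level from periodic dB samples.

    Energy-averages the samples: Leq = 10 * log10(mean(10 ** (L / 10))).
    Works identically for A- or C-weighted inputs.
    """
    mean_energy = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

# Half the night at 50 dBA, half at 60 dBA: the louder half dominates
print(round(leq([50, 60]), 1))   # → 57.4
```

Note that the result (57.4 dBA) is well above the arithmetic mean of 55 dBA; this is why brief loud peaks raise overnight Leq so strongly.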
Design and Implementation of Sound Searching Robots in Wireless Sensor Networks.
Han, Lianfu; Shen, Zhengguang; Fu, Changfeng; Liu, Chao
2016-09-21
A sound target-searching robot system which includes a 4-channel microphone array for sound collection, a magneto-resistive sensor for declination measurement, and a wireless sensor network (WSN) for exchanging information is described. It has an embedded sound signal enhancement, recognition and location method, and a sound searching strategy based on a digital signal processor (DSP). As the wireless network nodes, three robots form the WSN with a personal computer (PC) in order to search for three different sound targets in task-oriented collaboration. An improved spectral subtraction method is used for noise reduction. The Mel-frequency cepstral coefficient (MFCC) is extracted as the feature of the audio signal. Based on the K-nearest neighbor classification method, we match the trained feature template to recognize the sound signal type. This paper utilizes an improved generalized cross correlation method to estimate the time delay of arrival (TDOA), and then employs spherical interpolation for sound location according to the TDOA and the geometrical position of the microphone array. A new mapping has been proposed to direct the motor to search sound targets flexibly. As the sink node, the PC receives and displays the results processed in the WSN, and it also has the ultimate power to make decisions on the received results in order to improve their accuracy. The experimental results show that the designed three-robot system implements the sound target searching function without collisions and performs well.
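The TDOA step can be illustrated with the classic GCC-PHAT weighting of the generalized cross correlation. This is a textbook baseline, not the paper's improved variant, and the test signal below is synthetic:

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """Estimate the delay (seconds) of `sig` relative to `ref` with the
    phase-transform (PHAT) weighting of the generalized cross correlation.
    """
    n = len(sig) + len(ref)
    S = np.fft.rfft(sig, n=n)
    R = np.fft.rfft(ref, n=n)
    cross = S * np.conj(R)
    cross /= np.abs(cross) + 1e-12          # PHAT: keep phase only
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    # Re-centre so negative lags precede positive ones
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

# Synthetic check: white noise delayed by 40 samples at 16 kHz
fs = 16000
rng = np.random.default_rng(0)
ref = rng.standard_normal(2048)
sig = np.concatenate((np.zeros(40), ref))[:2048]
print(gcc_phat(sig, ref, fs) * fs)          # delay expressed in samples
```

Pairing such delays across the 4-channel array with the known microphone geometry is what the spherical-interpolation step then turns into a source location.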
Auditory stimuli elicit hippocampal neuronal responses during sleep
Vinnik, Ekaterina; Antopolskiy, Sergey; Itskov, Pavel M.; Diamond, Mathew E.
2012-01-01
To investigate how hippocampal neurons code behaviorally salient stimuli, we recorded from neurons in the CA1 region of hippocampus in rats while they learned to associate the presence of sound with water reward. Rats learned to alternate between two reward ports at which, in 50% of the trials, sound stimuli were presented followed by water reward after a 3-s delay. Sound at the water port predicted subsequent reward delivery in 100% of the trials and the absence of sound predicted reward omission. During this task, 40% of recorded neurons fired differently according to which of the two reward ports the rat was visiting. A smaller fraction of neurons demonstrated onset response to sound/nosepoke (19%) and reward delivery (24%). When the sounds were played during passive wakefulness, 8% of neurons responded with short latency onset responses; 25% of neurons responded to sounds when they were played during sleep. During sleep the short-latency responses in hippocampus are intermingled with long lasting responses which in the current experiment could last for 1–2 s. Based on the current findings and the results of previous experiments we described the existence of two types of hippocampal neuronal responses to sounds: sound-onset responses with very short latency and longer-lasting sound-specific responses that are likely to be present when the animal is actively engaged in the task. PMID:22754507
Topological phononic states of underwater sound based on coupled ring resonators
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Cheng; Li, Zheng; Ni, Xu
We report a design of topological phononic states for underwater sound using arrays of acoustic coupled ring resonators. In each individual ring resonator, two degenerate acoustic modes, corresponding to clockwise and counter-clockwise propagation, are treated as opposite pseudospins. The gapless edge states arise in the bandgap, resulting in protected pseudospin-dependent sound transportation, which is a phononic analogue of the quantum spin Hall effect. We also investigate the robustness of the topological sound state, finding that the observed pseudospin-dependent sound transportation remains unless the introduced defects facilitate coupling between the clockwise and counter-clockwise modes (in other words, unless the original mode degeneracy is broken). The topological engineering of sound transportation promises unique designs for the next generation of acoustic devices in sound guiding and switching, especially for underwater acoustic devices.
Selective Listening Point Audio Based on Blind Signal Separation and Stereophonic Technology
NASA Astrophysics Data System (ADS)
Niwa, Kenta; Nishino, Takanori; Takeda, Kazuya
A sound field reproduction method is proposed that uses blind source separation and a head-related transfer function. In the proposed system, multichannel acoustic signals captured at distant microphones are decomposed into a set of location/signal pairs of virtual sound sources based on frequency-domain independent component analysis. After the locations and signals of the virtual sources are estimated, the spatial sound at the selected point is constructed by convolving the controlled acoustic transfer functions with each signal. In experiments, a sound field made by six sound sources was captured using 48 distant microphones and decomposed into sets of virtual sound sources. Since subjective evaluation showed no significant difference between natural and reconstructed sound when six virtual sources were used, the effectiveness of the decomposition algorithm as well as the virtual source representation was confirmed.
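The separation step in this abstract rests on independent component analysis. As a rough illustration of the idea only (not the paper's frequency-domain, 48-microphone pipeline), the following numpy-only sketch implements a minimal symmetric FastICA with a tanh nonlinearity and unmixes two toy signals; the function name, toy sources, and mixing matrix are all hypothetical:

```python
import numpy as np

def fastica_two(X, iters=200, seed=0):
    """Minimal symmetric FastICA for 2 mixed signals (tanh nonlinearity).

    X: (2, n) array of mixtures. Returns a (2, n) array of estimated
    sources (up to permutation, sign, and scale). Illustrative sketch only.
    """
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening: decorrelate and normalize via the covariance eigendecomposition
    cov = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(cov)
    Z = (E @ np.diag(d ** -0.5) @ E.T) @ X
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((2, 2))
    for _ in range(iters):
        G = np.tanh(W @ Z)
        Gp = 1.0 - G ** 2                      # derivative of tanh
        # Fixed-point update: E[g(wZ) Z^T] - E[g'(wZ)] W
        W = (G @ Z.T) / Z.shape[1] - np.diag(Gp.mean(axis=1)) @ W
        # Symmetric decorrelation keeps the two unmixing rows orthogonal
        U, _, Vt = np.linalg.svd(W)
        W = U @ Vt
    return W @ Z

# Two toy sources (a sine and a sawtooth), mixed by an "unknown" matrix
t = np.linspace(0.0, 1.0, 4000)
s1 = np.sin(2 * np.pi * 7 * t)
s2 = 2.0 * (t * 13 % 1.0) - 1.0
S = np.vstack([s1, s2])
A = np.array([[0.7, 0.3], [0.4, 0.6]])  # hypothetical mixing matrix
Y = fastica_two(A @ S)                  # each row matches one source
```

Recovered components come back in arbitrary order and sign, which is why ICA-based systems like the one above must separately estimate which virtual source is which.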
NASA Technical Reports Server (NTRS)
Atlas, R.
1980-01-01
In January of 1978, a panel of experts recommended that a 'special effort' be made to enhance and edit satellite soundings and cloud tracked winds in data sparse regions. It was felt that these activities would be necessary to obtain maximum benefits from an evaluation of satellite data during the Global Weather Experiment (FGGE). The 'special effort' is being conducted for the two special observing periods of FGGE. More than sixty cases have been selected for enhancement on the basis of meteorological interest. These cases include situations of blocking, cutoff low development, cyclogenesis, and tropical circulations. The sounding data enhancement process consists of supplementing the operational satellite sounding data set with higher resolution soundings in meteorologically active regions, and with new soundings where data voids or soundings of questionable quality exist.
Toward blind removal of unwanted sound from orchestrated music
NASA Astrophysics Data System (ADS)
Chang, Soo-Young; Chun, Joohwan
2000-11-01
The problem addressed in this paper is the removal of unwanted sounds, such as a cough, from music. We present some preliminary results on this problem using statistical properties of the signals. Our approach consists of three steps. We first estimate the fundamental frequencies and partials of the noise-corrupted music, which gives us an autoregressive (AR) model of the music. We then filter the noise-corrupted sound using the AR parameters and subtract the filtered signal from the original noise-corrupted signal to obtain the disturbance. Finally, the obtained disturbance is used as a reference signal to cancel the disturbance from the noise-corrupted music signal. The three steps are carried out recursively using a sliding window or an infinitely growing window with an appropriate forgetting factor.
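The first two steps of this approach can be illustrated with a simple batch least-squares AR fit (a stand-in for the paper's recursive, forgetting-factor estimator): the one-step prediction residual of the AR model isolates samples the music model cannot explain, i.e. candidate disturbances. All names, orders, and the synthetic signal below are hypothetical:

```python
import numpy as np

def ar_fit(x, p):
    """Fit AR(p) coefficients a[0..p-1] by least squares, so that
    x[n] is approximated by a[0]*x[n-1] + ... + a[p-1]*x[n-p]."""
    X = np.column_stack([x[p - k - 1:-k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

def ar_residual(x, a):
    """One-step prediction residual; large values flag samples that the
    AR model of the music cannot explain (candidate disturbances)."""
    p = len(a)
    res = np.zeros_like(x)
    for n in range(p, len(x)):
        res[n] = x[n] - a @ x[n - p:n][::-1]
    return res

# Synthesize an AR(2) "music-like" signal plus a cough-like impulse at n=500
rng = np.random.default_rng(1)
x = np.zeros(2000)
e = 0.01 * rng.standard_normal(2000)
for n in range(2, 2000):
    x[n] = 1.5 * x[n - 1] - 0.8 * x[n - 2] + e[n]
x[500] += 1.0                        # the disturbance to be detected
res = ar_residual(x, ar_fit(x, 2))   # residual spikes near n = 500
```

In the paper's third step, an estimate like `res` would then drive an adaptive canceller as the reference signal.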
Sounds and meanings working together: Word learning as a collaborative effort
Saffran, Jenny
2014-01-01
Over the past several decades, researchers have discovered a great deal of information about the processes underlying language acquisition. From as early as they can be studied, infants are sensitive to the nuances of native-language sound structure. Similarly, infants are attuned to the visual and conceptual structure of their environments starting in the early postnatal period. Months later, they become adept at putting these two arenas of experience together, mapping sounds to meanings. How might learning sounds influence learning meanings, and vice versa? In this paper, I will describe several recent lines of research suggesting that knowledge concerning the sound structure of language facilitates subsequent mapping of sounds to meanings. I will also discuss recent findings suggesting that from its beginnings, the lexicon incorporates relationships amongst the sounds and meanings of newly learned words. PMID:25202163
AVE/VAS 3: 25-mb sounding data
NASA Technical Reports Server (NTRS)
Sienkiewicz, M. E.
1982-01-01
The rawinsonde sounding program for the AVE/VAS 3 experiment is described. Tabulated data are presented at 25-mb intervals for the 24 National Weather Service stations and 14 special stations participating in the experiment. Soundings were taken at 3-hr intervals, beginning at 1200 GMT on March 27, 1982, and ending at 0600 GMT on March 28, 1982 (7 sounding times). An additional sounding was taken at the National Weather Service stations at 1200 GMT on March 28, 1982, at the normal synoptic observation time. The method of processing soundings is briefly discussed, estimates of the RMS errors in the data are presented, and an example of contact data is given. Termination pressures of soundings taken in the mesos-beta-scale network are tabulated, as are observations of ground temperature at a depth of 2 cm.
Gender Gaps in Letter-Sound Knowledge Persist Across the First School Year
Sigmundsson, Hermundur; Dybfest Eriksen, Adrian; Ofteland, Greta S.; Haga, Monika
2018-01-01
Literacy is the cornerstone of a primary school education and enables the intellectual and social development of young children. Letter-sound knowledge has been identified as critical for developing proficiency in reading. This study explored the development of letter-sound knowledge in relation to gender during the first year of primary school. 485 Norwegian children aged 5–6 years completed assessments of letter-sound knowledge: uppercase letter name, uppercase letter sound, lowercase letter name, and lowercase letter sound. The children were tested at the beginning, middle, and end of their first school year. The results revealed a clear gender difference in all four variables in favor of the girls, which was relatively constant over time. Implications for understanding the role of gender and letter-sound knowledge for later reading performance are discussed. PMID:29662461
Schramm, Michael P.; Bevelhimer, Mark; Scherelis, Constantin
2017-02-04
The development of hydrokinetic energy technologies (e.g., tidal turbines) has raised concern over the potential impacts of underwater sound produced by hydrokinetic turbines on fish species likely to encounter these turbines. To assess the potential for behavioral impacts, we exposed four species of fish to varying intensities of recorded hydrokinetic turbine sound in a semi-natural environment. Although we tested freshwater species (redhorse suckers [Moxostoma spp.], freshwater drum [Aplodinotus grunniens], largemouth bass [Micropterus salmoides], and rainbow trout [Oncorhynchus mykiss]), these species are also representative of the hearing physiology and sensitivity of estuarine species that would be affected at tidal energy sites. Here, we evaluated changes in fish position relative to different intensities of turbine sound, as well as trends in location over time, with linear mixed-effects and generalized additive mixed models. We also evaluated changes in the proportion of near-source detections relative to sound intensity and exposure time with generalized linear mixed models and generalized additive models. The models indicated that redhorse suckers may respond to sustained turbine sound by increasing their distance from the sound source. Freshwater drum models suggested a mixed response to turbine sound, and largemouth bass and rainbow trout models did not indicate any likely responses to turbine sound. Lastly, the findings highlight the importance for future research of utilizing accurate localization systems, different species, and validated sound transmission distances, and of considering behavioral responses to different turbine designs and to the cumulative sound of arrays of multiple turbines.
Kaganovich, Natalya; Kim, Jihyun; Herring, Caryn; Schumaker, Jennifer; Macpherson, Megan; Weber-Fox, Christine
2013-04-01
Using electrophysiology, we examined two questions in relation to musical training: whether it enhances sensory encoding of the human voice, and whether it improves the ability to ignore irrelevant auditory change. Participants performed an auditory distraction task in which they identified each sound as either short (350 ms) or long (550 ms) and ignored a change in timbre of the sounds. Sounds consisted of a male and a female voice saying a neutral sound [a], and of a cello and a French horn playing an F3 note. In some blocks, musical sounds occurred on 80% of trials and voice sounds on 20% of trials; in other blocks, the reverse was true. Participants heard naturally recorded sounds in half of the experimental blocks and their spectrally rotated versions in the other half. Regarding voice perception, we found that musicians had a larger N1 event-related potential component not only to vocal sounds but also to their never-before-heard spectrally rotated versions. We therefore conclude that musical training is associated with a general improvement in the early neural encoding of complex sounds. Regarding the ability to ignore irrelevant auditory change, musicians' accuracy tended to suffer less from the change in timbre of the sounds, especially when the deviants were musical notes. This behavioral finding was accompanied by a marginally larger reorienting negativity in musicians, suggesting that their advantage may lie in a more efficient disengagement of attention from the distracting auditory dimension. © 2013 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Tanaka, Tagayasu; Inaba, Ryoichi; Aoyama, Atsuhito
2016-01-01
Objectives: This study investigated the actual situation of noise and low-frequency sounds in firework events and their impact on pyrotechnicians. Methods: Data on firework noise and low-frequency sounds were obtained at a point located approximately 100 m away from the launch site of a firework display held in "A" City in 2013. We obtained the data by continuously measuring and analyzing the equivalent continuous sound level (Leq) and the one-third octave band of the noise and low-frequency sounds emanating from the major firework detonations, and predicted sound levels at the original launch site. Results: Sound levels of 100-115 dB and low-frequency sounds of 100-125 dB were observed at night. The maximum and mean Leq values were 97 and 95 dB, respectively. The launching noise level predicted from the sounds (85 dB) at the noise measurement point was 133 dB. Occupational exposure to noise for pyrotechnicians at the remote operation point (located 20-30 m away from the launch site) was estimated to be below 100 dB. Conclusions: Pyrotechnicians are exposed to very loud noise (>100 dB) at the launch point. We believe that it is necessary to implement measures such as fixing earplugs or earmuffs, posting a warning at the workplace, and executing a remote launching operation to prevent hearing loss caused by occupational exposure of pyrotechnicians to noise. It is predicted that both sound levels and low-frequency sounds would be reduced by approximately 35 dB at the remote operation site. PMID:27725489
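The launch-site prediction in this abstract involves extrapolating a measured level over distance. As a generic illustration only (not the paper's propagation model, whose predicted difference clearly exceeds geometric spreading alone), the free-field spherical-spreading rule adds 20·log10(r_ref/r) dB when moving from distance r_ref to distance r; the function name and the 25 m example distance are hypothetical:

```python
import math

def level_at(L_ref, r_ref, r):
    """Sound pressure level (dB) at distance r, given level L_ref measured
    at distance r_ref, assuming free-field spherical spreading
    (a 20 dB change per decade of distance). Illustrative only."""
    return L_ref + 20.0 * math.log10(r_ref / r)

# e.g. 95 dB measured 100 m from the launch site, estimated at a
# hypothetical 25 m remote-operation distance:
print(round(level_at(95.0, 100.0, 25.0), 1))  # → 107.0
```

Real outdoor predictions, as in this study, must additionally account for source directivity, ground and air absorption, and frequency content.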
Guo, Bin; Huang, Jing; Guo, Xin-biao
2015-06-18
To evaluate the preventive effects of sound insulation windows against traffic noise, indoor noise levels of residential rooms (on both the North 4th Ring Road side and the campus side) with closed sound insulation windows were measured using a sound level meter and compared with simultaneously measured outdoor noise levels. In addition, indoor noise levels were compared between rooms with closed and open sound insulation windows. The average outdoor noise level on the North 4th Ring Road side was higher than 70 dB(A), which exceeded the limit stated in the "Environmental Quality Standard for Noise" (GB 3096-2008) in our country. However, with the sound insulation windows closed, the indoor noise levels were reduced significantly, to below 35 dB(A) (P<0.05), which complied with the indoor noise level standards in our country. The closed or open state of the sound insulation windows had a significant influence on the indoor noise levels (P<0.05). Compared with the open state, when the sound insulation windows were closed, the indoor noise levels were reduced by 18.8 dB(A) and 8.3 dB(A) in residential rooms facing the North 4th Ring Road side and the campus side, respectively. The results indicated that installation of sound insulation windows had significant noise reduction effects in street residential buildings, especially in rooms facing major traffic roads. Installation of sound insulation windows has significant preventive effects on indoor noise in street residential buildings.
Contribution of the AIRS Shortwave Sounding Channels to Retrieval Accuracy
NASA Technical Reports Server (NTRS)
Susskind, Joel; Kouvaris, Louis
2006-01-01
AIRS contains 2376 high spectral resolution channels between 650/cm and 2665/cm, including channels in both the 15 micron (near 667/cm) and 4.2 micron (near 2400/cm) CO2 sounding bands. Use of temperature sounding channels in the 15 micron CO2 band has considerable heritage in infrared remote sensing. Channels in the 4.2 micron CO2 band have potential advantages for temperature sounding purposes because they are essentially insensitive to absorption by water vapor and ozone, and also have considerably sharper lower-tropospheric temperature sounding weighting functions than do the 15 micron temperature sounding channels. Potential drawbacks with regard to the use of 4.2 micron channels arise from the effects on the observed radiances of solar radiation reflected by the surface and clouds, as well as the effects of non-local thermodynamic equilibrium on shortwave observations during the day. These are of no practical consequence, however, when properly accounted for. We show results of experiments utilizing different spectral regions of AIRS, conducted with the AIRS Science Team candidate Version 5 algorithm. Experiments were performed using temperature sounding channels within the entire AIRS spectral coverage, within only the spectral region 650/cm to 1614/cm, and within only the spectral region 1000/cm to 2665/cm. These show the relative importance, with regard to sounding accuracy, of utilizing only the 15 micron temperature sounding channels, only the 4.2 micron temperature sounding channels, or both. The spectral region 2380/cm to 2400/cm is shown to contribute significantly to improved sounding accuracy in the lower troposphere, both day and night.
The Effects of Phonetic Similarity and List Length on Children's Sound Categorization Performance.
ERIC Educational Resources Information Center
Snowling, Margaret J.; And Others
1994-01-01
Examined the phonological analysis and verbal working memory components of the sound categorization task and their relationships to reading skill differences. Children were tested on sound categorization by having them identify odd words in sequences. Sound categorization performance was sensitive to individual differences in speech perception…
ERIC Educational Resources Information Center
Piasta, Shayne B.; Phillips, Beth M.; Williams, Jeffrey M.; Bowles, Ryan P.; Anthony, Jason L.
2016-01-01
Early childhood teachers are increasingly encouraged to support children's development of letter-sound abilities. Assessment of letter-sound knowledge is key in planning for effective instruction, yet the letter-sound knowledge assessments currently available and suitable for preschool-age children demonstrate significant limitations. The purpose…
Designing Trend-Monitoring Sounds for Helicopters: Methodological Issues and an Application
ERIC Educational Resources Information Center
Edworthy, Judy; Hellier, Elizabeth; Aldrich, Kirsteen; Loxley, Sarah
2004-01-01
This article explores methodological issues in sonification and sound design arising from the design of helicopter monitoring sounds. Six monitoring sounds (each with 5 levels) were tested for similarity and meaning with 3 different techniques: hierarchical cluster analysis, linkage analysis, and multidimensional scaling. In Experiment 1,…
24 CFR 51.106 - Implementation.
Code of Federal Regulations, 2010 CFR
2010-04-01
... day-night average sound level data are not available may be evaluated from NEF or CNEL analyses using.... The day-night average sound level may be estimated from the design hour L10 or Leq values by the.... The Department of Defense uses day-night average sound level based on C-weighted sound level...
Code of Federal Regulations, 2014 CFR
2014-07-01
...; Pacific Sound Resources and LockheedShipyard Superfund Sites, Elliott Bay, Seattle, WA. 165.1336 Section... Area; Pacific Sound Resources and LockheedShipyard Superfund Sites, Elliott Bay, Seattle, WA. (a... Pacific Sound Resources and Lockheed Shipyard EPA superfund sites. Vessels may otherwise transit or...
Code of Federal Regulations, 2013 CFR
2013-07-01
...; Pacific Sound Resources and LockheedShipyard Superfund Sites, Elliott Bay, Seattle, WA. 165.1336 Section... Area; Pacific Sound Resources and LockheedShipyard Superfund Sites, Elliott Bay, Seattle, WA. (a... Pacific Sound Resources and Lockheed Shipyard EPA superfund sites. Vessels may otherwise transit or...
Code of Federal Regulations, 2012 CFR
2012-07-01
...; Pacific Sound Resources and LockheedShipyard Superfund Sites, Elliott Bay, Seattle, WA. 165.1336 Section... Area; Pacific Sound Resources and LockheedShipyard Superfund Sites, Elliott Bay, Seattle, WA. (a... Pacific Sound Resources and Lockheed Shipyard EPA superfund sites. Vessels may otherwise transit or...
A Lexical Analysis of Environmental Sound Categories
ERIC Educational Resources Information Center
Houix, Olivier; Lemaitre, Guillaume; Misdariis, Nicolas; Susini, Patrick; Urdapilleta, Isabel
2012-01-01
In this article we report on listener categorization of meaningful environmental sounds. A starting point for this study was the phenomenological taxonomy proposed by Gaver (1993b). In the first experimental study, 15 participants classified 60 environmental sounds and indicated the properties shared by the sounds in each class. In a second…
LONG ISLAND SOUND STUDY COMPREHENSIVE CONSERVATION AND MANAGEMENT PLAN
Long Island Sound is an estuary, a place where salt water from the ocean mixes with fresh water from rivers and the land. Like other estuaries, Long Island Sound (the Sound) abounds in fish, shellfish, and waterfowl. It provides feeding, breeding, nesting, and nursery areas for d...
Sounds Alive: A Noise Workbook.
ERIC Educational Resources Information Center
Dickman, Donna McCord
Sarah Screech, Danny Decibel, Sweetie Sound and Neil Noisy describe their experiences in the world of sound and noise to elementary students. Presented are their reports, games and charts which address sound measurement, the effects of noise on people, methods of noise control, and related areas. The workbook is intended to stimulate students'…
The Early Years: Becoming Attuned to Sound
ERIC Educational Resources Information Center
Ashbrook, Peggy
2014-01-01
Exploration of making and changing sounds is part of the first-grade performance expectation 1-PS4-1, "Plan and conduct investigations to provide evidence that vibrating materials can make sound and that sound can make materials vibrate" (NGSS Lead States 2013, p. 10; see Internet Resource). Early learning experiences build toward…
40 CFR 81.32 - Puget Sound Intrastate Air Quality Control Region.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 18 2014-07-01 2014-07-01 false Puget Sound Intrastate Air Quality...) AIR PROGRAMS (CONTINUED) DESIGNATION OF AREAS FOR AIR QUALITY PLANNING PURPOSES Designation of Air Quality Control Regions § 81.32 Puget Sound Intrastate Air Quality Control Region. The Puget Sound...
40 CFR 81.32 - Puget Sound Intrastate Air Quality Control Region.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 17 2011-07-01 2011-07-01 false Puget Sound Intrastate Air Quality...) AIR PROGRAMS (CONTINUED) DESIGNATION OF AREAS FOR AIR QUALITY PLANNING PURPOSES Designation of Air Quality Control Regions § 81.32 Puget Sound Intrastate Air Quality Control Region. The Puget Sound...
40 CFR 81.32 - Puget Sound Intrastate Air Quality Control Region.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 18 2012-07-01 2012-07-01 false Puget Sound Intrastate Air Quality...) AIR PROGRAMS (CONTINUED) DESIGNATION OF AREAS FOR AIR QUALITY PLANNING PURPOSES Designation of Air Quality Control Regions § 81.32 Puget Sound Intrastate Air Quality Control Region. The Puget Sound...
40 CFR 81.32 - Puget Sound Intrastate Air Quality Control Region.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 18 2013-07-01 2013-07-01 false Puget Sound Intrastate Air Quality...) AIR PROGRAMS (CONTINUED) DESIGNATION OF AREAS FOR AIR QUALITY PLANNING PURPOSES Designation of Air Quality Control Regions § 81.32 Puget Sound Intrastate Air Quality Control Region. The Puget Sound...
The Specificity of Sound Symbolic Correspondences in Spoken Language
ERIC Educational Resources Information Center
Tzeng, Christina Y.; Nygaard, Lynne C.; Namy, Laura L.
2017-01-01
Although language has long been regarded as a primarily arbitrary system, "sound symbolism," or non-arbitrary correspondences between the sound of a word and its meaning, also exists in natural language. Previous research suggests that listeners are sensitive to sound symbolism. However, little is known about the specificity of these…
33 CFR 161.60 - Vessel Traffic Service Prince William Sound.
Code of Federal Regulations, 2010 CFR
2010-07-01
... William Sound. 161.60 Section 161.60 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND... Movement Reporting System Areas and Reporting Points § 161.60 Vessel Traffic Service Prince William Sound... Cape Hinchinbrook Light to Schooner Rock Light, comprising that portion of Prince William Sound between...
32 CFR 552.64 - Sound insurance underwriting and programing.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 32 National Defense 3 2010-07-01 2010-07-01 true Sound insurance underwriting and programing. 552... Reservations § 552.64 Sound insurance underwriting and programing. The Department of the Army encourages the acquisition of a sound insurance program that is suitably underwritten to meet the varying needs of the...
33 CFR 110.230 - Puget Sound Area, Wash.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Puget Sound Area, Wash. 110.230... ANCHORAGE REGULATIONS Anchorage Grounds § 110.230 Puget Sound Area, Wash. (a) The anchorage grounds—(1... shores of Whidbey Island. (4) Port Gardner General Anchorage, Possession Sound. Beginning at a point...
76 FR 14279 - Drawbridge Operation Regulation; Grassy Sound Channel, Middle Township, NJ
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-16
... Operation Regulation; Grassy Sound Channel, Middle Township, NJ AGENCY: Coast Guard, DHS. ACTION: Notice of... temporary deviation from the regulations governing the operation of the Grassy Sound Channel Bridge across the Grassy Sound Channel, mile 1.0, at Middle Township, NJ. The deviation is necessary to facilitate...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-07
... Prince William Sound Regional Citizens' Advisory Council AGENCY: Coast Guard, DHS. ACTION: Notice of... on, the application for recertification submitted by the Prince William Sound Regional Citizen's... advisory group in lieu of a Regional Citizens' Advisory Council for Prince William Sound, Alaska. This...
Sound production in the clownfish Amphiprion clarkii.
Parmentier, Eric; Colleye, Orphal; Fine, Michael L; Frédérich, Bruno; Vandewalle, Pierre; Herrel, Anthony
2007-05-18
Although clownfish sounds were recorded as early as 1930, the mechanism of sound production has remained obscure. Yet, clownfish are prolific "singers" that produce a wide variety of sounds, described as "chirps" and "pops" in both reproductive and agonistic behavioral contexts. Here, we describe the sonic mechanism of the clownfish Amphiprion clarkii.
Sound-Symbolism Boosts Novel Word Learning
ERIC Educational Resources Information Center
Lockwood, Gwilym; Dingemanse, Mark; Hagoort, Peter
2016-01-01
The existence of sound-symbolism (or a non-arbitrary link between form and meaning) is well-attested. However, sound-symbolism has mostly been investigated with nonwords in forced choice tasks, neither of which are representative of natural language. This study uses ideophones, which are naturally occurring sound-symbolic words that depict sensory…
33 CFR 161.60 - Vessel Traffic Service Prince William Sound.
Code of Federal Regulations, 2013 CFR
2013-07-01
... William Sound. 161.60 Section 161.60 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND... Movement Reporting System Areas and Reporting Points § 161.60 Vessel Traffic Service Prince William Sound... Cape Hinchinbrook Light to Schooner Rock Light, comprising that portion of Prince William Sound between...
32 CFR 552.64 - Sound insurance underwriting and programing.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 32 National Defense 3 2012-07-01 2009-07-01 true Sound insurance underwriting and programing. 552... Reservations § 552.64 Sound insurance underwriting and programing. The Department of the Army encourages the acquisition of a sound insurance program that is suitably underwritten to meet the varying needs of the...
32 CFR 552.64 - Sound insurance underwriting and programing.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 32 National Defense 3 2014-07-01 2014-07-01 false Sound insurance underwriting and programing. 552... Reservations § 552.64 Sound insurance underwriting and programing. The Department of the Army encourages the acquisition of a sound insurance program that is suitably underwritten to meet the varying needs of the...
33 CFR 110.146 - Long Island Sound.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 33 Navigation and Navigable Waters 1 2013-07-01 2013-07-01 false Long Island Sound. 110.146... ANCHORAGE REGULATIONS Anchorage Grounds § 110.146 Long Island Sound. (a) Anchorage grounds. (1) Bridgeport Anchorage Ground. That portion of Long Island Sound enclosed by a line connecting the following points...
33 CFR 161.60 - Vessel Traffic Service Prince William Sound.
Code of Federal Regulations, 2014 CFR
2014-07-01
... William Sound. 161.60 Section 161.60 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND... Movement Reporting System Areas and Reporting Points § 161.60 Vessel Traffic Service Prince William Sound... Cape Hinchinbrook Light to Schooner Rock Light, comprising that portion of Prince William Sound between...
32 CFR 552.64 - Sound insurance underwriting and programing.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 32 National Defense 3 2013-07-01 2013-07-01 false Sound insurance underwriting and programing. 552... Reservations § 552.64 Sound insurance underwriting and programing. The Department of the Army encourages the acquisition of a sound insurance program that is suitably underwritten to meet the varying needs of the...
33 CFR 110.146 - Long Island Sound.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 1 2014-07-01 2014-07-01 false Long Island Sound. 110.146... ANCHORAGE REGULATIONS Anchorage Grounds § 110.146 Long Island Sound. (a) Anchorage grounds. (1) Bridgeport Anchorage Ground. That portion of Long Island Sound enclosed by a line connecting the following points...
33 CFR 161.60 - Vessel Traffic Service Prince William Sound.
Code of Federal Regulations, 2012 CFR
2012-07-01
... William Sound. 161.60 Section 161.60 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND... Movement Reporting System Areas and Reporting Points § 161.60 Vessel Traffic Service Prince William Sound... Cape Hinchinbrook Light to Schooner Rock Light, comprising that portion of Prince William Sound between...
33 CFR 110.146 - Long Island Sound.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 33 Navigation and Navigable Waters 1 2012-07-01 2012-07-01 false Long Island Sound. 110.146... ANCHORAGE REGULATIONS Anchorage Grounds § 110.146 Long Island Sound. (a) Anchorage grounds. (1) Bridgeport Anchorage Ground. That portion of Long Island Sound enclosed by a line connecting the following points...
Noise Attenuation Performance Assessment of the Joint Helmet Mounted Cueing System (JHMCS)
2010-08-01
Flash Drive (CFD) memory (Figure 9) and Sound Professionals SP-TFB-2 Miniature Binaural Microphones with the Sound Professionals SP-SPSB-1 Slim-line...flight noise. Sound Professionals binaural microphones were placed to record both internal and external sounds. One microphone was attached to the