NASA Astrophysics Data System (ADS)
Nishiura, Takanobu; Nakamura, Satoshi
2003-10-01
Humans communicate with each other through speech by focusing on the target speech among environmental sounds in real acoustic environments. We can easily identify the target sound from other environmental sounds. For hands-free speech recognition, the identification of the target speech from environmental sounds is imperative. This mechanism may also be important for a self-moving robot to sense the acoustic environments and communicate with humans. Therefore, this paper first proposes hidden Markov model (HMM)-based environmental sound source identification. Environmental sounds are modeled by three states of HMMs and evaluated using 92 kinds of environmental sounds. The identification accuracy was 95.4%. This paper also proposes a new HMM composition method that composes speech HMMs and an HMM of categorized environmental sounds for robust environmental sound-added speech recognition. As a result of the evaluation experiments, we confirmed that the proposed HMM composition outperforms the conventional HMM composition with speech HMMs and a noise (environmental sound) HMM trained using noise periods prior to the target speech in a captured signal. [Work supported by Ministry of Public Management, Home Affairs, Posts and Telecommunications of Japan.]
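A minimal sketch of the general recipe described above (one small HMM per environmental sound class, identification by maximum log-likelihood), using the hmmlearn package; the feature representation, model sizes, and training settings here are illustrative assumptions, not the authors' configuration, and the HMM composition step for noise-added speech recognition is not shown.

```python
# Per-class 3-state HMMs for environmental sound identification (illustrative sketch).
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_class_models(features_by_class, n_states=3):
    """features_by_class: dict mapping class name -> list of (n_frames, n_dims) arrays."""
    models = {}
    for label, sequences in features_by_class.items():
        X = np.vstack(sequences)                    # stack all training frames
        lengths = [len(seq) for seq in sequences]   # per-sequence frame counts
        model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[label] = model
    return models

def identify(models, features):
    """Return the class whose HMM gives the highest log-likelihood for one sound."""
    return max(models, key=lambda label: models[label].score(features))
```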
Conversion of environmental data to a digital-spatial database, Puget Sound area, Washington
Uhrich, M.A.; McGrath, T.S.
1997-01-01
Data and maps from the Puget Sound Environmental Atlas, compiled for the U.S. Environmental Protection Agency, the Puget Sound Water Quality Authority, and the U.S. Army Corps of Engineers, have been converted into a digital-spatial database using a geographic information system. Environmental data for the Puget Sound area, collected from sources other than the Puget Sound Environmental Atlas by different Federal, State, and local agencies, also have been converted into this digital-spatial database. Background on the geographic-information-system planning process, the design and implementation of the geographic-information-system database, and the reasons for conversion to this digital-spatial database are included in this report. The Puget Sound Environmental Atlas data layers include information about seabird nesting areas, eelgrass and kelp habitat, marine mammal and fish areas, and shellfish resources and bed certification. Data layers from sources other than the Puget Sound Environmental Atlas include the Puget Sound shoreline, the water-body system, shellfish growing areas, recreational shellfish beaches, sewage-treatment outfalls, upland hydrography, watershed and political boundaries, and geographic names. The sources of data, descriptions of the data layers, and the steps and errors of processing associated with conversion to a digital-spatial database used in development of the Puget Sound Geographic Information System also are included in this report. The appendixes contain data dictionaries for each of the resource layers and error values for the conversion of Puget Sound Environmental Atlas data.
Bevelhimer, Mark S; Deng, Z Daniel; Scherelis, Constantin
2016-01-01
Underwater noise associated with the installation and operation of hydrokinetic turbines in rivers and tidal zones presents a potential environmental concern for fish and marine mammals. Comparing the spectral quality of sounds emitted by hydrokinetic turbines to natural and other anthropogenic sound sources is an initial step at understanding potential environmental impacts. Underwater recordings were obtained from passing vessels and natural underwater sound sources in static and flowing waters. Static water measurements were taken in a lake with minimal background noise. Flowing water measurements were taken at a previously proposed deployment site for hydrokinetic turbines on the Mississippi River, where sounds created by flowing water are part of all measurements, both natural ambient and anthropogenic sources. Vessel sizes ranged from a small fishing boat with 60 hp outboard motor to an 18-unit barge train being pushed upstream by tugboat. As expected, large vessels with large engines created the highest sound levels, which were, on average, 40 dB greater than the sound created by an operating hydrokinetic turbine. A comparison of sound levels from the same sources at different distances using both spherical and cylindrical sound attenuation functions suggests that spherical model results more closely approximate observed sound attenuation.
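As a rough illustration of the attenuation comparison mentioned above, received levels at several ranges can be fit against spherical (20 log10 r) and cylindrical (10 log10 r) spreading laws; the distances and levels below are invented for the example, not measurements from the study.

```python
# Compare spherical vs cylindrical spreading against received levels at several ranges.
import numpy as np

def fit_spreading(distances_m, received_db, n):
    """Least-squares source level (re 1 m) for transmission loss TL = n*log10(r)."""
    tl = n * np.log10(distances_m)
    sl = np.mean(received_db + tl)                 # best-fit level referenced to 1 m
    rms = np.sqrt(np.mean((sl - tl - received_db) ** 2))
    return sl, rms

distances = np.array([10.0, 50.0, 200.0, 500.0])   # hypothetical ranges (m)
levels = np.array([140.0, 126.0, 114.0, 106.0])    # hypothetical received levels (dB)

for name, n in [("spherical", 20.0), ("cylindrical", 10.0)]:
    sl, rms = fit_spreading(distances, levels, n)
    print(f"{name}: source level ~ {sl:.1f} dB re 1 m, residual {rms:.1f} dB")
```

The model with the smaller residual is the one that more closely approximates the observed attenuation, which is the comparison the abstract describes.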
Bevelhimer, Mark S.; Deng, Z. Daniel; Scherelis, Constantin C.
2016-01-06
Underwater noise associated with the installation and operation of hydrokinetic turbines in rivers and tidal zones presents a potential environmental concern for fish and marine mammals. Comparing the spectral quality of sounds emitted by hydrokinetic turbines to natural and other anthropogenic sound sources is an initial step at understanding potential environmental impacts. Underwater recordings were obtained from passing vessels and natural underwater sound sources in static and flowing waters. Static water measurements were taken in a lake with minimal background noise. Flowing water measurements were taken at a previously proposed deployment site for hydrokinetic turbines on the Mississippi River, where sounds created by flowing water are part of all measurements, both natural ambient and anthropogenic sources. Vessel sizes ranged from a small fishing boat with 60 hp outboard motor to an 18-unit barge train being pushed upstream by tugboat. As expected, large vessels with large engines created the highest sound levels, which were, on average, 40 dB greater than the sound created by an operating hydrokinetic turbine. A comparison of sound levels from the same sources at different distances using both spherical and cylindrical sound attenuation functions suggests that spherical model results more closely approximate observed sound attenuation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevelhimer, Mark S.; Deng, Z. Daniel; Scherelis, Constantin
2016-01-01
Underwater noise associated with the installation and operation of hydrokinetic turbines in rivers and tidal zones presents a potential environmental concern for fish and marine mammals. Comparing the spectral quality of sounds emitted by hydrokinetic turbines to natural and other anthropogenic sound sources is an initial step at understanding potential environmental impacts. Underwater recordings were obtained from passing vessels of different sizes and other underwater sound sources in both static and flowing waters. Static water measurements were taken in a lake with minimal background noise. Flowing water measurements were taken at a previously proposed deployment site for hydrokinetic turbines on the Mississippi River, where the sound of flowing water is included in background measurements. The size of vessels measured ranged from a small fishing boat with a 60 HP outboard motor to an 18-unit barge train being pushed upstream by tugboat. As expected, large vessels with large engines created the highest sound levels, and when compared to the sound created by an operating HK turbine were many times greater. A comparison of sound levels from the same sources at different distances using both spherical and cylindrical sound attenuation functions suggests that spherical model results more closely approximate observed values.
NASA Technical Reports Server (NTRS)
Lucas, Michael J.; Marcolini, Michael A.
1997-01-01
The Rotorcraft Noise Model (RNM) is an aircraft noise impact modeling computer program being developed for NASA-Langley Research Center which calculates sound levels at receiver positions either on a uniform grid or at specific defined locations. The basic computational model calculates a variety of metrics. Acoustic properties of the noise source are defined by two sets of sound pressure hemispheres, each hemisphere being centered on a noise source of the aircraft. One set of sound hemispheres provides the broadband data in the form of one-third octave band sound levels. The other set of sound hemispheres provides narrowband data in the form of pure-tone sound pressure levels and phase. Noise contours on the ground are output graphically or in tabular format, and are suitable for inclusion in Environmental Impact Statements or Environmental Assessments.
Perception of environmental sounds by experienced cochlear implant patients.
Shafiro, Valeriy; Gygi, Brian; Cheng, Min-Yu; Vachhani, Jay; Mulvey, Megan
2011-01-01
Environmental sound perception serves an important ecological function by providing listeners with information about objects and events in their immediate environment. Environmental sounds such as car horns, baby cries, or chirping birds can alert listeners to imminent dangers as well as contribute to one's sense of awareness and well-being. Perception of environmental sounds as acoustically and semantically complex stimuli may also involve some factors common to the processing of speech. However, very limited research has investigated the abilities of cochlear implant (CI) patients to identify common environmental sounds, despite patients' general enthusiasm about them. This project (1) investigated the ability of patients with modern-day CIs to perceive environmental sounds, (2) explored associations among speech, environmental sounds, and basic auditory abilities, and (3) examined acoustic factors that might be involved in environmental sound perception. Seventeen experienced postlingually deafened CI patients participated in the study. Environmental sound perception was assessed with a large-item test composed of 40 sound sources, each represented by four different tokens. The relationship between speech and environmental sound perception and the role of working memory and some basic auditory abilities were examined based on patient performance on a battery of speech tests (HINT, CNC, and individual consonant and vowel tests), tests of basic auditory abilities (audiometric thresholds, gap detection, temporal pattern, and temporal order for tones tests), and a backward digit recall test. The results indicated substantially reduced ability to identify common environmental sounds in CI patients (45.3%). Except for vowels, all speech test scores significantly correlated with the environmental sound test scores: r = 0.73 for HINT in quiet, r = 0.69 for HINT in noise, r = 0.70 for CNC, r = 0.64 for consonants, and r = 0.48 for vowels. HINT and CNC scores in quiet moderately correlated with the temporal order for tones. However, the correlation between speech and environmental sounds changed little after partialling out the variance due to other variables. Present findings indicate that environmental sound identification is difficult for CI patients. They further suggest that speech and environmental sounds may overlap considerably in their perceptual processing. Certain spectrotemporal processing abilities are separately associated with speech and environmental sound performance. However, they do not appear to mediate the relationship between speech and environmental sounds in CI patients. Environmental sound rehabilitation may be beneficial to some patients. Environmental sound testing may have potential diagnostic applications, especially with difficult-to-test populations and might also be predictive of speech performance for prelingually deafened patients with cochlear implants.
The effect of spatial distribution on the annoyance caused by simultaneous sounds
NASA Astrophysics Data System (ADS)
Vos, Joos; Bronkhorst, Adelbert W.; Fedtke, Thomas
2004-05-01
A considerable part of the population is exposed to simultaneous and/or successive environmental sounds from different sources. In many cases, these sources are different with respect to their locations also. In a laboratory study, it was investigated whether the annoyance caused by the multiple sounds is affected by the spatial distribution of the sources. There were four independent variables: (1) sound category (stationary or moving), (2) sound type (stationary: lawn-mower, leaf-blower, and chain saw; moving: road traffic, railway, and motorbike), (3) spatial location (left, right, and combinations), and (4) A-weighted sound exposure level (ASEL of single sources equal to 50, 60, or 70 dB). In addition to the individual sounds in isolation, various combinations of two or three different sources within each sound category and sound level were presented for rating. The annoyance was mainly determined by sound level and sound source type. In most cases there were neither significant main effects of spatial distribution nor significant interaction effects between spatial distribution and the other variables. It was concluded that for rating the spatially distributed sounds investigated, the noise dose can simply be determined by a summation of the levels for the left and right channels. [Work supported by CEU.]
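The summation rule suggested in the conclusion corresponds to an energy (incoherent) addition of the left- and right-channel levels; a one-line sketch:

```python
import numpy as np

def sum_levels_db(levels_db):
    """Energy (incoherent) summation of sound levels given in dB."""
    return 10 * np.log10(np.sum(10 ** (np.asarray(levels_db) / 10)))

# e.g., two channels at 60 dB each combine to roughly 63 dB
print(sum_levels_db([60.0, 60.0]))
```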
Da Costa, Sandra; Bourquin, Nathalie M.-P.; Knebel, Jean-François; Saenz, Melissa; van der Zwaag, Wietske; Clarke, Stephanie
2015-01-01
Environmental sounds are highly complex stimuli whose recognition depends on the interaction of top-down and bottom-up processes in the brain. Their semantic representations were shown to yield repetition suppression effects, i.e., a decrease in activity during exposure to a sound that is perceived as belonging to the same source as a preceding sound. Making use of the high spatial resolution of 7T fMRI, we have investigated the representations of sound objects within early-stage auditory areas on the supratemporal plane. The primary auditory cortex was identified by means of tonotopic mapping and the non-primary areas by comparison with previous histological studies. Repeated presentations of different exemplars of the same sound source, as compared to the presentation of different sound sources, yielded significant repetition suppression effects within a subset of early-stage areas. This effect was found within the right hemisphere in primary areas A1 and R as well as two non-primary areas on the antero-medial part of the planum temporale, and within the left hemisphere in A1 and a non-primary area on the medial part of Heschl’s gyrus. Thus, several, but not all, early-stage auditory areas encode the meaning of environmental sounds. PMID:25938430
Bender, Christopher M; Ballard, Megan S; Wilson, Preston S
2014-06-01
The overall goal of this work is to quantify the effects of environmental variability and spatial sampling on the accuracy and uncertainty of estimates of the three-dimensional ocean sound-speed field. In this work, ocean sound speed estimates are obtained with acoustic data measured by a sparse autonomous observing system using a perturbative inversion scheme [Rajan, Lynch, and Frisk, J. Acoust. Soc. Am. 82, 998-1017 (1987)]. The vertical and horizontal resolution of the solution depends on the bandwidth of acoustic data and on the quantity of sources and receivers, respectively. Thus, for a simple, range-independent ocean sound speed profile, a single source-receiver pair is sufficient to estimate the water-column sound-speed field. On the other hand, an environment with significant variability may not be fully characterized by a large number of sources and receivers, resulting in uncertainty in the solution. This work explores the interrelated effects of environmental variability and spatial sampling on the accuracy and uncertainty of the inversion solution through a set of case studies. Synthetic data representative of the ocean variability on the New Jersey shelf are used.
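A heavily simplified sketch of a perturbative (linearized) inversion of the kind cited above: data perturbations d are related to sound-speed perturbations m through a kernel matrix G, and m is recovered by regularized least squares. The matrix, data, and regularization weight are placeholders and do not reproduce the scheme of Rajan et al. in detail.

```python
# Tikhonov-regularized linear inversion for a sound-speed perturbation (sketch).
import numpy as np

def perturbative_inversion(G, d, alpha=1e-2):
    """Solve (G^T G + alpha*I) m = G^T d for the perturbation vector m."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ d)
```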
Underwater sound of rigid-hulled inflatable boats.
Erbe, Christine; Liong, Syafrin; Koessler, Matthew Walter; Duncan, Alec J; Gourlay, Tim
2016-06-01
Underwater sound of rigid-hulled inflatable boats was recorded 142 times in total, over 3 sites: 2 in southern British Columbia, Canada, and 1 off Western Australia. Underwater sound peaked between 70 and 400 Hz, exhibiting strong tones in this frequency range related to engine and propeller rotation. Sound propagation models were applied to compute monopole source levels, with the source assumed 1 m below the sea surface. Broadband source levels (10-48 000 Hz) increased from 134 to 171 dB re 1 μPa @ 1 m with speed from 3 to 16 m/s (10-56 km/h). Source power spectral density percentile levels and 1/3 octave band levels are given for use in predictive modeling of underwater sound of these boats as part of environmental impact assessments.
Human brain regions involved in recognizing environmental sounds.
Lewis, James W; Wightman, Frederic L; Brefczynski, Julie A; Phinney, Raymond E; Binder, Jeffrey R; DeYoe, Edgar A
2004-09-01
To identify the brain regions preferentially involved in environmental sound recognition (comprising portions of a putative auditory 'what' pathway), we collected functional imaging data while listeners attended to a wide range of sounds, including those produced by tools, animals, liquids and dropped objects. These recognizable sounds, in contrast to unrecognizable, temporally reversed control sounds, evoked activity in a distributed network of brain regions previously associated with semantic processing, located predominantly in the left hemisphere, but also included strong bilateral activity in posterior portions of the middle temporal gyri (pMTG). Comparisons with earlier studies suggest that these bilateral pMTG foci partially overlap cortex implicated in high-level visual processing of complex biological motion and recognition of tools and other artifacts. We propose that the pMTG foci process multimodal (or supramodal) information about objects and object-associated motion, and that this may represent 'action' knowledge that can be recruited for purposes of recognition of familiar environmental sound-sources. These data also provide a functional and anatomical explanation for the symptoms of pure auditory agnosia for environmental sounds reported in human lesion studies.
Different categories of living and non-living sound-sources activate distinct cortical networks
Engel, Lauren R.; Frum, Chris; Puce, Aina; Walker, Nathan A.; Lewis, James W.
2009-01-01
With regard to hearing perception, it remains unclear as to whether, or the extent to which, different conceptual categories of real-world sounds and related categorical knowledge are differentially represented in the brain. Semantic knowledge representations are reported to include the major divisions of living versus non-living things, plus more specific categories including animals, tools, biological motion, faces, and places—categories typically defined by their characteristic visual features. Here, we used functional magnetic resonance imaging (fMRI) to identify brain regions showing preferential activity to four categories of action sounds, which included non-vocal human and animal actions (living), plus mechanical and environmental sound-producing actions (non-living). The results showed a striking antero-posterior division in cortical representations for sounds produced by living versus non-living sources. Additionally, there were several significant differences by category, depending on whether the task was category-specific (e.g. human or not) versus non-specific (detect end-of-sound). In general, (1) human-produced sounds yielded robust activation in the bilateral posterior superior temporal sulci independent of task. Task demands modulated activation of left-lateralized fronto-parietal regions, bilateral insular cortices, and subcortical regions previously implicated in observation-execution matching, consistent with “embodied” and mirror-neuron network representations subserving recognition. (2) Animal action sounds preferentially activated the bilateral posterior insulae. (3) Mechanical sounds activated the anterior superior temporal gyri and parahippocampal cortices. (4) Environmental sounds preferentially activated dorsal occipital and medial parietal cortices. Overall, this multi-level dissociation of networks for preferentially representing distinct sound-source categories provides novel support for grounded cognition models that may underlie organizational principles for hearing perception. PMID:19465134
Accuracy of assessing the level of impulse sound from distant sources.
Wszołek, Tadeusz; Kłaczyński, Maciej
2007-01-01
Impulse sound events are characterised by ultra high pressures and low frequencies. Lower frequency sounds are generally less attenuated over a given distance in the atmosphere than higher frequencies. Thus, impulse sounds can be heard over greater distances and will be more affected by the environment. To calculate a long-term average immission level it is necessary to apply weighting factors like the probability of the occurrence of each weather condition during the relevant time period. This means that when measuring impulse noise at a long distance it is necessary to follow environmental parameters in many points along the way sound travels and also to have a database of sound transfer functions in the long term. The paper analyses the uncertainty of immission measurement results of impulse sound from cladding and destroying explosive materials. The influence of environmental conditions on the way sound travels is the focus of this paper.
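The long-term averaging described above amounts to a probability-weighted energy average over weather classes; a short sketch, with invented levels and occurrence probabilities:

```python
import numpy as np

def long_term_level(levels_db, probabilities):
    """Probability-weighted energy average of immission levels measured under
    different weather (propagation) conditions. Values are illustrative."""
    p = np.asarray(probabilities, dtype=float)
    p = p / p.sum()                               # normalise occurrence probabilities
    return 10 * np.log10(np.sum(p * 10 ** (np.asarray(levels_db) / 10)))

# e.g., downwind 55 dB for 30% of the period, upwind 40 dB for the remaining 70%
print(long_term_level([55.0, 40.0], [0.3, 0.7]))
```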
Zipf's Law in Short-Time Timbral Codings of Speech, Music, and Environmental Sound Signals
Haro, Martín; Serrà, Joan; Herrera, Perfecto; Corral, Álvaro
2012-01-01
Timbre is a key perceptual feature that allows discrimination between different sounds. Timbral sensations are highly dependent on the temporal evolution of the power spectrum of an audio signal. In order to quantitatively characterize such sensations, the shape of the power spectrum has to be encoded in a way that preserves certain physical and perceptual properties. Therefore, it is common practice to encode short-time power spectra using psychoacoustical frequency scales. In this paper, we study and characterize the statistical properties of such encodings, here called timbral code-words. In particular, we report on rank-frequency distributions of timbral code-words extracted from 740 hours of audio coming from disparate sources such as speech, music, and environmental sounds. Analogously to text corpora, we find a heavy-tailed Zipfian distribution with exponent close to one. Importantly, this distribution is found independently of different encoding decisions and regardless of the audio source. Further analysis of the intrinsic characteristics of the most and least frequent code-words reveals that the most frequent code-words tend to have a more homogeneous structure. We also find that speech and music databases have specific, distinctive code-words while, in the case of the environmental sounds, these database-specific code-words are not present. Finally, we find that a Yule-Simon process with memory provides a reasonable quantitative approximation for our data, suggesting the existence of a common simple generative mechanism for all considered sound sources. PMID:22479497
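A small sketch of the rank-frequency analysis described above: count code-word occurrences, rank them, and estimate the Zipf exponent from a log-log fit. The synthetic code-words below stand in for real timbral codings.

```python
# Rank-frequency distribution of code-words and a rough Zipf exponent estimate.
import numpy as np
from collections import Counter

def zipf_exponent(codewords):
    counts = np.array(sorted(Counter(codewords).values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(counts) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(counts), 1)  # log-log linear fit
    return -slope                 # Zipf's law predicts an exponent close to one

rng = np.random.default_rng(0)
toy_codewords = rng.zipf(2.0, size=10_000)   # synthetic Zipf-like "code-words"
print(zipf_exponent(toy_codewords))
```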
Auditory performance in an open sound field
NASA Astrophysics Data System (ADS)
Fluitt, Kim F.; Letowski, Tomasz; Mermagen, Timothy
2003-04-01
Detection and recognition of acoustic sources in an open field are important elements of situational awareness on the battlefield. They are affected by many technical and environmental conditions such as type of sound, distance to a sound source, terrain configuration, meteorological conditions, hearing capabilities of the listener, level of background noise, and the listener's familiarity with the sound source. A limited body of knowledge about auditory perception of sources located over long distances makes it difficult to develop models predicting auditory behavior on the battlefield. The purpose of the present study was to determine the listener's abilities to detect, recognize, localize, and estimate distances to sound sources located from 25 to 800 m from the listening position. Data were also collected for meteorological conditions (wind direction and strength, temperature, atmospheric pressure, humidity) and background noise level for each experimental trial. Forty subjects (men and women, ages 18 to 25) participated in the study. Nine types of sounds were presented from six loudspeakers in random order; each series was presented four times. Partial results indicate that both detection and recognition declined at distances greater than approximately 200 m, and distances were grossly underestimated by listeners. Specific results will be presented.
Visual Presentation Effects on Identification of Multiple Environmental Sounds
Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio
2016-01-01
This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478
Grimm, Giso; Hohmann, Volker; Laugesen, Søren; Neher, Tobias
2017-01-01
In contrast to static sounds, spatially dynamic sounds have received little attention in psychoacoustic research so far. This holds true especially for acoustically complex (reverberant, multisource) conditions and impaired hearing. The current study therefore investigated the influence of reverberation and the number of concurrent sound sources on source movement detection in young normal-hearing (YNH) and elderly hearing-impaired (EHI) listeners. A listening environment based on natural environmental sounds was simulated using virtual acoustics and rendered over headphones. Both near-far (‘radial’) and left-right (‘angular’) movements of a frontal target source were considered. The acoustic complexity was varied by adding static lateral distractor sound sources as well as reverberation. Acoustic analyses confirmed the expected changes in stimulus features that are thought to underlie radial and angular source movements under anechoic conditions and suggested a special role of monaural spectral changes under reverberant conditions. Analyses of the detection thresholds showed that, with the exception of the single-source scenarios, the EHI group was less sensitive to source movements than the YNH group, despite adequate stimulus audibility. Adding static sound sources clearly impaired the detectability of angular source movements for the EHI (but not the YNH) group. Reverberation, on the other hand, clearly impaired radial source movement detection for the EHI (but not the YNH) listeners. These results illustrate the feasibility of studying factors related to auditory movement perception with the help of the developed test setup. PMID:28675088
Lundbeck, Micha; Grimm, Giso; Hohmann, Volker; Laugesen, Søren; Neher, Tobias
2017-01-01
In contrast to static sounds, spatially dynamic sounds have received little attention in psychoacoustic research so far. This holds true especially for acoustically complex (reverberant, multisource) conditions and impaired hearing. The current study therefore investigated the influence of reverberation and the number of concurrent sound sources on source movement detection in young normal-hearing (YNH) and elderly hearing-impaired (EHI) listeners. A listening environment based on natural environmental sounds was simulated using virtual acoustics and rendered over headphones. Both near-far ('radial') and left-right ('angular') movements of a frontal target source were considered. The acoustic complexity was varied by adding static lateral distractor sound sources as well as reverberation. Acoustic analyses confirmed the expected changes in stimulus features that are thought to underlie radial and angular source movements under anechoic conditions and suggested a special role of monaural spectral changes under reverberant conditions. Analyses of the detection thresholds showed that, with the exception of the single-source scenarios, the EHI group was less sensitive to source movements than the YNH group, despite adequate stimulus audibility. Adding static sound sources clearly impaired the detectability of angular source movements for the EHI (but not the YNH) group. Reverberation, on the other hand, clearly impaired radial source movement detection for the EHI (but not the YNH) listeners. These results illustrate the feasibility of studying factors related to auditory movement perception with the help of the developed test setup.
Kogan, Pablo; Arenas, Jorge P; Bermejo, Fernando; Hinalaf, María; Turra, Bruno
2018-06-13
Urban soundscapes are dynamic and complex multivariable environmental systems. Soundscapes can be organized into three main entities containing the multiple variables: Experienced Environment (EE), Acoustic Environment (AE), and Extra-Acoustic Environment (XE). This work applies a multidimensional and synchronic data-collecting methodology at eight urban environments in the city of Córdoba, Argentina. The EE was assessed by means of surveys, the AE by acoustic measurements and audio recordings, and the XE by photos, video, and complementary sources. In total, 39 measurement locations were considered, where data corresponding to 61 AE and 203 EE were collected. Multivariate analysis and GIS techniques were used for data processing. The types of sound sources perceived and their extents are among the collected variables that belong to the EE, i.e., traffic, people, natural sounds, and others. The sources explaining most of the variance were traffic noise and natural sounds. Thus, a Green Soundscape Index (GSI) is defined here as the ratio of the perceived extent of natural sounds to that of traffic noise. Collected data were divided into three ranges according to GSI value: 1) perceptual predominance of traffic noise, 2) balanced perception, and 3) perceptual predominance of natural sounds. For each group, three additional variables from the EE and three from the AE were analyzed, which showed significant differences, especially between ranges 1 and 2 with 3. These results confirm the key role of perceiving natural sounds in a town environment and also support the proposal of the GSI as a valuable indicator to classify urban soundscapes. In addition, the collected GSI-related data significantly help to assess the overall soundscape. It is noted that this proposed simple perceptual index not only allows one to assess and classify urban soundscapes but also contributes greatly toward a technique for separating environmental sound sources.
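A minimal sketch of the Green Soundscape Index as defined above (perceived extent of natural sounds divided by that of traffic noise) with a three-range classification; the numeric cut-offs are placeholders, not the boundaries used by the authors.

```python
def green_soundscape_index(natural_extent, traffic_extent):
    """GSI = perceived extent of natural sounds / perceived extent of traffic noise."""
    return natural_extent / traffic_extent

def classify_gsi(gsi, low=0.8, high=1.2):
    # The three perceptual ranges come from the paper; these cut-offs are placeholders.
    if gsi < low:
        return "1: perceptual predominance of traffic noise"
    if gsi <= high:
        return "2: balanced perception"
    return "3: perceptual predominance of natural sounds"

print(classify_gsi(green_soundscape_index(natural_extent=4.0, traffic_extent=2.0)))
```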
Lercher, Peter; De Coensel, Bert; Dekonink, Luc; Botteldooren, Dick
2017-01-01
Sufficient data refer to the relevant prevalence of sound exposure by mixed traffic sources in many nations. Furthermore, consideration of the potential effects of combined sound exposure is required in legal procedures such as environmental health impact assessments. Nevertheless, current practice still uses single exposure-response functions. It is silently assumed that those standard exposure-response curves accommodate also for mixed exposures—although some evidence from experimental and field studies casts doubt on this practice. The ALPNAP study population (N = 1641) shows sufficient subgroups with combinations of rail-highway, highway-main road and rail-highway-main road sound exposure. In this paper we apply a few suggested approaches from the literature to investigate exposure-response curves and their major determinants in the case of exposure to multiple traffic sources. High/moderate annoyance and full-scale mean annoyance served as outcomes. The results show several limitations of the current approaches. Even facing the inherent methodological limitations (energy-equivalent summation of sound, rating of overall annoyance), the consideration of main contextual factors jointly occurring with the sources (such as vibration, air pollution) or coping activities and judgments of the wider area soundscape increases the variance explanation from up to 8% (bivariate), up to 15% (base adjustments), up to 55% (full contextual model). The added predictors vary significantly, depending on the source combination (e.g., significant vibration effects with main road/railway, not highway). Although no significant interactions were found, the observed additive effects are of public health importance. Especially in the case of a three-source exposure situation, the overall annoyance is already high at lower levels and the contribution of the acoustic indicators is small compared with the non-acoustic and contextual predictors. Noise mapping needs to go down to levels of 40 dBA (Lden) to ensure the protection of quiet areas and prohibit the silent “filling up” of these areas with new sound sources. Eventually, to better predict annoyance in the exposure range between 40 and 60 dBA and to support the protection of quiet areas in city and rural areas in planning, sound indicators need to be oriented at the noticeability of sound and consider other traffic-related by-products (air quality, vibration, coping strain) in future studies and environmental impact assessments. PMID:28632198
By the sound of it. An ERP investigation of human action sound processing in 7-month-old infants
Geangu, Elena; Quadrelli, Ermanno; Lewis, James W.; Macchi Cassia, Viola; Turati, Chiara
2015-01-01
Recent evidence suggests that human adults perceive human action sounds as a distinct category from human vocalizations, environmental, and mechanical sounds, activating different neural networks (Engel et al., 2009; Lewis et al., 2011). Yet, little is known about the development of such specialization. Using event-related potentials (ERP), this study investigated neural correlates of 7-month-olds’ processing of human action (HA) sounds in comparison to human vocalizations (HV), environmental (ENV), and mechanical (MEC) sounds. Relative to the other categories, HA sounds led to increased positive amplitudes between 470 and 570 ms post-stimulus onset at left anterior temporal locations, while HV led to increased negative amplitudes at the more posterior temporal locations in both hemispheres. Collectively, human produced sounds (HA + HV) led to significantly different response profiles compared to non-living sound sources (ENV + MEC) at parietal and frontal locations in both hemispheres. Overall, by 7 months of age human action sounds are being differentially processed in the brain, consistent with a dichotomy for processing living versus non-living things. This provides novel evidence regarding the typical categorical processing of socially relevant sounds. PMID:25732377
Mapping Underwater Sound in the Dutch Part of the North Sea.
Sertlek, H Özkan; Aarts, Geert; Brasseur, Sophie; Slabbekoorn, Hans; ten Cate, Carel; von Benda-Beckmann, Alexander M; Ainslie, Michael A
2016-01-01
The European Union requires member states to achieve or maintain good environmental status for their marine territorial waters and explicitly mentions potentially adverse effects of underwater sound. In this study, we focused on producing maps of underwater sound from various natural and anthropogenic origins in the Dutch North Sea. The source properties and sound propagation are simulated by mathematical methods. These maps could be used to assess and predict large-scale effects on behavior and distribution of underwater marine life and therefore become a valuable tool in assessing and managing the impact of underwater sound on marine life.
NASA Technical Reports Server (NTRS)
Lehnert, H.; Blauert, Jens; Pompetzki, W.
1991-01-01
In every-day listening the auditory event perceived by a listener is determined not only by the sound signal that a sound source emits but also by a variety of environmental parameters. These parameters are the position, orientation and directional characteristics of the sound source, the listener's position and orientation, the geometrical and acoustical properties of surfaces which affect the sound field, and the sound propagation properties of the surrounding fluid. A complete set of these parameters can be called an Acoustic Environment. If the auditory event perceived by a listener is manipulated in such a way that the listener is shifted acoustically into a different acoustic environment without moving himself physically, a Virtual Acoustic Environment has been created. Here, we deal with a special technique to set up nearly arbitrary Virtual Acoustic Environments, the Binaural Room Simulation. The purpose of the Binaural Room Simulation is to compute the binaural impulse response related to a virtual acoustic environment taking into account all parameters mentioned above. One possible way to describe a Virtual Acoustic Environment is the concept of virtual sound sources. Each of the virtual sources emits a certain signal which is correlated but not necessarily identical with the signal emitted by the direct sound source. If source and receiver are not moving, the acoustic environment becomes a linear time-invariant system. Then, the Binaural Impulse Response from the source to a listener's eardrums contains all relevant auditory information related to the Virtual Acoustic Environment. Listening into the simulated environment can easily be achieved by convolving the Binaural Impulse Response with dry signals and presenting the results via headphones.
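The final auralization step described above can be sketched as a convolution of a dry signal with the left and right binaural room impulse responses; the function below is generic and assumes the impulse responses are already available as arrays.

```python
# Auralisation by convolving a dry (anechoic) signal with a binaural room
# impulse response (BRIR). Input arrays are assumed to share one sample rate.
import numpy as np
from scipy.signal import fftconvolve

def auralize(dry, brir_left, brir_right):
    """Return a stereo signal suitable for headphone playback."""
    left = fftconvolve(dry, brir_left)
    right = fftconvolve(dry, brir_right)
    out = np.stack([left, right], axis=-1)
    return out / np.max(np.abs(out))   # normalise to avoid clipping
```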
Sex differences present in auditory looming perception, absent in auditory recession
NASA Astrophysics Data System (ADS)
Neuhoff, John G.; Seifritz, Erich
2005-04-01
When predicting the arrival time of an approaching sound source, listeners typically exhibit an anticipatory bias that affords a margin of safety in dealing with looming objects. The looming bias has been demonstrated behaviorally in the laboratory and in the field (Neuhoff 1998, 2001), neurally in fMRI studies (Seifritz et al., 2002), and comparatively in non-human primates (Ghazanfar, Neuhoff, and Logothetis, 2002). In the current work, male and female listeners were presented with three-dimensional looming sound sources and asked to press a button when the source was at the point of closest approach. Females exhibited a significantly greater anticipatory bias than males. Next, listeners were presented with sounds that either approached or receded and then stopped at three different terminal distances. Consistent with the time-to-arrival judgments, female terminal distance judgments for looming sources were significantly closer than male judgments. However, there was no difference between male and female terminal distance judgments for receding sounds. Taken together with the converging behavioral, neural, and comparative evidence, the current results illustrate the environmental salience of looming sounds and suggest that the anticipatory bias for auditory looming may have been shaped by evolution to provide a selective advantage in dealing with looming objects.
Test Area C-64 Range Environmental Assessment, Revision 1
2010-10-01
Acronyms and excerpts from the assessment: DOI = U.S. Department of the Interior; DNL = Day–Night Average Sound Level; DU = Depleted Uranium; EBD = Environmental Baseline Document; EIAP = Environmental… [truncated]; ERP = Environmental Restoration Program; LUC = land use control. Testing described includes vulnerability, burning sensitivity, drop, bullet impact, sympathetic detonation, and advanced warhead design tests, and depleted uranium (DU)… [truncated] land back to range use. Source: U.S. Air Force, 2009.
A precedence effect resolves phantom sound source illusions in the parasitoid fly Ormia ochracea
Lee, Norman; Elias, Damian O.; Mason, Andrew C.
2009-01-01
Localizing individual sound sources under reverberant environmental conditions can be a challenge when the original source and its acoustic reflections arrive at the ears simultaneously from different paths that convey ambiguous directional information. The acoustic parasitoid fly Ormia ochracea (Diptera: Tachinidae) relies on a pair of ears exquisitely sensitive to sound direction to localize the 5-kHz tone pulsatile calling song of their host crickets. In nature, flies are expected to encounter a complex sound field with multiple sources and their reflections from acoustic clutter potentially masking temporal information relevant to source recognition and localization. In field experiments, O. ochracea were lured onto a test arena and subjected to small random acoustic asymmetries between 2 simultaneous sources. Most flies successfully localize a single source but some localize a ‘phantom’ source that is a summed effect of both source locations. Such misdirected phonotaxis can be elicited reliably in laboratory experiments that present symmetric acoustic stimulation. By varying onset delay between 2 sources, we test whether hyperacute directional hearing in O. ochracea can function to exploit small time differences to determine source location. Selective localization depends on both the relative timing and location of competing sources. Flies preferred phonotaxis to a forward source. With small onset disparities within a 10-ms temporal window of attention, flies selectively localize the leading source while the lagging source has minimal influence on orientation. These results demonstrate the precedence effect as a mechanism to overcome phantom source illusions that arise from acoustic reflections or competing sources. PMID:19332794
200 kHz Commercial Sonar Systems Generate Lower Frequency Side Lobes Audible to Some Marine Mammals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, Zhiqun; Southall, Brandon; Carlson, Thomas J.
2014-04-15
The spectral properties of pulses transmitted by three commercially available 200 kHz echo sounders were measured to assess the possibility that sound energy below the center (carrier) frequency might be heard by marine mammals. The study found that all three sounders generated sound at frequencies below the center frequency and within the hearing range of some marine mammals, and that this sound was likely detectable by the animals over limited ranges. However, at standard operating source levels for the sounders, the sound below the center frequency was well below potentially harmful levels. It was concluded that the sounds generated by the sounders could affect the behavior of marine mammals within fairly close proximity to the sources and that the blanket exclusion of echo sounders from environmental impact analysis based solely on the center frequency output in relation to the range of marine mammal hearing should be reconsidered.
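The below-carrier side lobes discussed above can be illustrated by the spectrum of a rectangular-gated tone burst; the pulse duration and sample rate below are arbitrary assumptions rather than the measured sounder parameters.

```python
# Spectrum of a rectangular-gated 200 kHz tone burst, illustrating how gating
# spreads energy into side lobes below the carrier frequency.
import numpy as np

fs = 2_000_000                         # 2 MHz sampling rate (assumed)
dur = 0.5e-3                           # 0.5 ms pulse (assumed)
t = np.arange(int(fs * dur)) / fs
pulse = np.sin(2 * np.pi * 200_000 * t)

n_fft = 1 << 16
spectrum = np.fft.rfft(pulse, n=n_fft)
freqs = np.fft.rfftfreq(n_fft, d=1 / fs)
level_db = 20 * np.log10(np.abs(spectrum) / np.max(np.abs(spectrum)) + 1e-12)

# Strongest side-lobe level in a band well below the carrier (100-150 kHz)
band = (freqs > 100_000) & (freqs < 150_000)
print(f"max level 100-150 kHz: {level_db[band].max():.1f} dB re carrier peak")
```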
Analysis of Nitrogen Loads From Long Island Sound Watersheds, 1988-98
NASA Astrophysics Data System (ADS)
Mullaney, J. R.; Trench, E. C.
2001-05-01
The U.S. Geological Survey (USGS) recently estimated annual nonpoint-source nitrogen loads from watersheds that drain to Long Island Sound. The study was conducted in cooperation with the Connecticut Department of Environmental Protection, the New York State Department of Environmental Conservation, and the U.S. Environmental Protection Agency, to assist these agencies with the issue of low concentrations of dissolved oxygen in Long Island Sound caused by nitrogen enrichment. A regression model was used to determine annual nitrogen loads at 27 streams monitored by the USGS during 1988-98. Estimates of nitrogen loads from municipal wastewater-treatment plants (where applicable) were subtracted from the total nitrogen loads to determine the nonpoint-source nitrogen load for each water-quality monitoring station. The nonpoint-source load information was applied to unmonitored areas by comparing the land-use and land-cover characteristics of monitored areas with unmonitored areas, and selecting basins that were most similar. In extrapolating load estimates to unmonitored areas, regional differences in mean annual runoff between monitored and unmonitored areas also were considered, using flow information from nearby USGS gaging stations. Estimates of nonpoint nitrogen loads from monitored areas with point sources of nitrogen discharge and estimates from unmonitored areas are subject to uncertainty. These estimates could be improved with additional data collection in coastal basins and in basins with a large percentage of urbanized land, measurements of instream transformation or losses of nitrogen, improved reporting of total nitrogen concentrations from municipal wastewater-treatment facilities, and tracking of intrabasin and (or) interbasin diversion of water.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanson, C.E.; Abbot, P.; Dyer, I.
1993-01-01
Noise levels from magnetically-levitated trains (maglev) at very high speed may be high enough to cause environmental noise impact in residential areas. Aeroacoustic sources dominate the sound at high speeds and guideway vibrations generate noticeable sound at low speed. In addition to high noise levels, the startle effect as a result of sudden onset of sound from a rapidly moving nearby maglev vehicle may lead to increased annoyance to neighbors of a maglev system. The report provides a base for determining the noise consequences and potential mitigation for a high speed maglev system in populated areas of the United States. Four areas are included in the study: (1) definition of noise sources; (2) development of noise criteria; (3) development of design guidelines; and (4) recommendations for a noise testing facility.
Environmentally Sound Small-Scale Agricultural Projects: Guidelines for Planning. Revised Edition.
ERIC Educational Resources Information Center
Altieri, Miguel; Vukasin, Helen L., Ed.
Environmental planning requires more than finding the right technology and a source of funds. Planning involves consideration of the social, cultural, economic, and natural environments in which the project occurs. The challenge is to develop sustainable food systems that have reasonable production but do not degrade the resource base and upset…
Soundscapes and the sense of hearing of fishes.
Fay, Richard
2009-03-01
Underwater soundscapes have probably played an important role in the adaptation of ears and auditory systems of fishes throughout evolutionary time, and for all species. These sounds probably contain important information about the environment and about most objects and events that confront the receiving fish so that appropriate behavior is possible. For example, the sounds from reefs appear to be used by at least some fishes for their orientation and migration. These sorts of environmental sounds should be considered much like "acoustic daylight" that continuously bathes all environments and contains information that all organisms can potentially use to form a sort of image of the environment. At present, however, we are generally ignorant of the nature of ambient sound fields impinging on fishes, and the adaptive value of processing these fields to resolve the multiple sources of sound. Our field has focused almost exclusively on the adaptive value of processing species-specific communication sounds, and has not considered the informational value of ambient "noise." Since all fishes can detect and process acoustic particle motion, including the directional characteristics of this motion, underwater sound fields are potentially more complex and information-rich than terrestrial acoustic environments. The capacities of one fish species (goldfish) to receive and make use of such sound source information have been demonstrated (sound source segregation and auditory scene analysis), and it is suggested that all vertebrate species have this capacity. A call is made to better understand underwater soundscapes, and the associated behaviors they determine in fishes.
Kastelein, Ronald A; van der Heul, Sander; Verboom, Willem C; Triesscheijn, Rob J V; Jennings, Nancy V
2006-02-01
To prevent grounding of ships and collisions between ships in shallow coastal waters, an underwater data collection and communication network (ACME) using underwater sounds to encode and transmit data is currently under development. Marine mammals might be affected by ACME sounds since they may use sound of a similar frequency (around 12 kHz) for communication, orientation, and prey location. If marine mammals tend to avoid the vicinity of the acoustic transmitters, they may be kept away from ecologically important areas by ACME sounds. One marine mammal species that may be affected in the North Sea is the harbour seal (Phoca vitulina). No information is available on the effects of ACME-like sounds on harbour seals, so this study was carried out as part of an environmental impact assessment program. Nine captive harbour seals were subjected to four sound types, three of which may be used in the underwater acoustic data communication network. The effect of each sound was judged by comparing the animals' location in a pool during test periods to that during baseline periods, during which no sound was produced. Each of the four sounds could be made into a deterrent by increasing its amplitude. The seals reacted by swimming away from the sound source. The sound pressure level (SPL) at the acoustic discomfort threshold was established for each of the four sounds. The acoustic discomfort threshold is defined as the boundary between the areas that the animals generally occupied during the transmission of the sounds and the areas that they generally did not enter during transmission. The SPLs at the acoustic discomfort thresholds were similar for each of the sounds (107 dB re 1 microPa). Based on this discomfort threshold SPL, discomfort zones at sea for several source levels (130-180 dB re 1 microPa) of the sounds were calculated, using a guideline sound propagation model for shallow water. The discomfort zone is defined as the area around a sound source that harbour seals are expected to avoid. The definition of the discomfort zone is based on behavioural discomfort, and does not necessarily coincide with the physical discomfort zone. Based on these results, source levels can be selected that have an acceptable effect on harbour seals in particular areas. The discomfort zone of a communication sound depends on the sound, the source level, and the propagation characteristics of the area in which the sound system is operational. The source level of the communication system should be adapted to each area (taking into account the width of a sea arm, the local sound propagation, and the importance of an area to the affected species). The discomfort zone should not coincide with ecologically important areas (for instance resting, breeding, suckling, and feeding areas), or routes between these areas.
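Using the 107 dB re 1 µPa discomfort threshold reported above, the radius of the discomfort zone for a given source level can be sketched with a simple logarithmic propagation-loss model; the spreading coefficient is a placeholder, since the study applied a guideline shallow-water propagation model rather than a single spreading law.

```python
# Discomfort-zone radius around a sound source, given a source level and the
# 107 dB re 1 uPa discomfort threshold from the abstract. The propagation loss
# TL = n*log10(r) with n = 15 is a placeholder, not the study's guideline model.
def discomfort_radius_m(source_level_db, threshold_db=107.0, n=15.0):
    return 10 ** ((source_level_db - threshold_db) / n)

for sl in (130, 150, 170):
    print(f"SL {sl} dB re 1 uPa @ 1 m -> discomfort radius ~ {discomfort_radius_m(sl):.0f} m")
```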
Material sound source localization through headphones
NASA Astrophysics Data System (ADS)
Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada
2012-09-01
In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of a future implementation of the acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. On the other hand, the Delta sound (click) is generated by using the Adobe Audition software, considering a frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound listened through the headphones, by using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.
Sheft, Stanley; Gygi, Brian; Ho, Kim Thien N.
2012-01-01
Perceptual training with spectrally degraded environmental sounds results in improved environmental sound identification, with benefits shown to extend to untrained speech perception as well. The present study extended those findings to examine longer-term training effects as well as effects of mere repeated exposure to sounds over time. Participants received two pretests (1 week apart) prior to a week-long environmental sound training regimen, which was followed by two posttest sessions, separated by another week without training. Spectrally degraded stimuli, processed with a four-channel vocoder, consisted of a 160-item environmental sound test, word and sentence tests, and a battery of basic auditory abilities and cognitive tests. Results indicated significant improvements in all speech and environmental sound scores between the initial pretest and the last posttest with performance increments following both exposure and training. For environmental sounds (the stimulus class that was trained), the magnitude of positive change that accompanied training was much greater than that due to exposure alone, with improvement for untrained sounds roughly comparable to the speech benefit from exposure. Additional tests of auditory and cognitive abilities showed that speech and environmental sound performance were differentially correlated with tests of spectral and temporal-fine-structure processing, whereas working memory and executive function were correlated with speech, but not environmental sound perception. These findings indicate generalizability of environmental sound training and provide a basis for implementing environmental sound training programs for cochlear implant (CI) patients. PMID:22891070
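A minimal sketch of a noise vocoder of the kind used to spectrally degrade the stimuli (four analysis bands, envelope extraction, noise carriers); the band edges and filter settings are illustrative assumptions, not the study's processing parameters.

```python
# Four-channel noise vocoder sketch: band-pass analysis, envelope extraction,
# and envelope-modulated band-limited noise carriers. Band edges are illustrative
# and assume a sample rate comfortably above twice the highest edge (e.g. 16 kHz).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, band_edges=(100, 500, 1500, 3500, 7000)):
    out = np.zeros(len(signal), dtype=float)
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))                          # band envelope
        carrier = sosfiltfilt(sos, np.random.randn(len(signal)))  # band-limited noise
        out += envelope * carrier
    return out / np.max(np.abs(out))
```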
NASA Technical Reports Server (NTRS)
Conner, David A.; Page, Juliet A.
2002-01-01
To improve aircraft noise impact modeling capabilities and to provide a tool to aid in the development of low noise terminal area operations for rotorcraft and tiltrotors, the Rotorcraft Noise Model (RNM) was developed by the NASA Langley Research Center and Wyle Laboratories. RNM is a simulation program that predicts how sound will propagate through the atmosphere and accumulate at receiver locations located on flat ground or varying terrain, for single and multiple vehicle flight operations. At the core of RNM are the vehicle noise sources, input as sound hemispheres. As the vehicle "flies" along its prescribed flight trajectory, the source sound propagation is simulated and accumulated at the receiver locations (single points of interest or multiple grid points) in a systematic time-based manner. These sound signals at the receiver locations may then be analyzed to obtain single event footprints, integrated noise contours, time histories, or numerous other features. RNM may also be used to generate spectral time history data over a ground mesh for the creation of single event sound animation videos. Acoustic properties of the noise source(s) are defined in terms of sound hemispheres that may be obtained from theoretical predictions, wind tunnel experimental results, flight test measurements, or a combination of the three. The sound hemispheres may contain broadband data (source levels as a function of one-third octave band) and pure-tone data (in the form of specific frequency sound pressure levels and phase). A PC executable version of RNM is publicly available and has been adopted by a number of organizations for Environmental Impact Assessment studies of rotorcraft noise. This paper provides a review of the required input data, the theoretical framework of RNM's propagation model and the output results. Code validation results are provided from a NATO helicopter noise flight test as well as a tiltrotor flight test program that used the RNM as a tool to aid in the development of low noise approach profiles.
Kastelein, R A; Verboom, W C; Muijsers, M; Jennings, N V; van der Heul, S
2005-05-01
To prevent grounding of ships and collisions between ships in shallow coastal waters, an underwater data collection and communication network is currently under development: Acoustic Communication network for Monitoring of underwater Environment in coastal areas (ACME). Marine mammals might be affected by ACME sounds since they use sounds of similar frequencies (around 12 kHz) for communication, orientation, and prey location. If marine mammals tend to avoid the vicinity of the transmitters, they may be kept away from ecologically important areas by ACME sounds. One marine mammal species that may be affected in the North Sea is the harbour porpoise. Therefore, as part of an environmental impact assessment program, two captive harbour porpoises were subjected to four sounds, three of which may be used in the underwater acoustic data communication network. The effect of each sound was judged by comparing the animals' positions and respiration rates during a test period with those during a baseline period. Each of the four sounds could be made a deterrent by increasing the amplitude of the sound. The porpoises reacted by swimming away from the sounds and by slightly, but significantly, increasing their respiration rate. From the sound pressure level distribution in the pen, and the distribution of the animals during test sessions, discomfort sound level thresholds were determined for each sound. In combination with information on sound propagation in the areas where the communication system may be deployed, the extent of the 'discomfort zone' can be estimated for several source levels (SLs). The discomfort zone is defined as the area around a sound source that harbour porpoises are expected to avoid. Based on these results, SLs can be selected that have an acceptable effect on harbour porpoises in particular areas. The discomfort zone of a communication sound depends on the selected sound, the selected SL, and the propagation characteristics of the area in which the sound system is operational. In shallow, winding coastal water courses, with sandbanks, etc., the type of habitat in which the ACME sounds will be produced, propagation loss cannot be accurately estimated by using a simple propagation model, but should be measured on site. The SL of the communication system should be adapted to each area (taking into account bounding conditions created by narrow channels, sound propagation variability due to environmental factors, and the importance of an area to the affected species). The discomfort zone should not prevent harbour porpoises from spending sufficient time in ecologically important areas (for instance feeding areas), or routes towards these areas.
Study of the Acoustic Effects of Hydrokinetic Tidal Turbines in Admiralty Inlet, Puget Sound
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brian Polagye; Jim Thomson; Chris Bassett
2012-03-30
Hydrokinetic turbines will be a source of noise in the marine environment - both during operation and during installation/removal. High intensity sound can cause injury or behavioral changes in marine mammals and may also affect fish and invertebrates. These noise effects are, however, highly dependent on the individual marine animals; the intensity, frequency, and duration of the sound; and context in which the sound is received. In other words, production of sound is a necessary, but not sufficient, condition for an environmental impact. At a workshop on the environmental effects of tidal energy development, experts identified sound produced by turbines as an area of potentially significant impact, but also high uncertainty. The overall objectives of this project are to improve our understanding of the potential acoustic effects of tidal turbines by: (1) Characterizing sources of existing underwater noise; (2) Assessing the effectiveness of monitoring technologies to characterize underwater noise and marine mammal responsiveness to noise; (3) Evaluating the sound profile of an operating tidal turbine; and (4) Studying the effect of turbine sound on surrogate species in a laboratory environment. This study focuses on a specific case study for tidal energy development in Admiralty Inlet, Puget Sound, Washington (USA), but the methodologies and results are applicable to other turbine technologies and geographic locations. The project succeeded in achieving the above objectives and, in doing so, substantially contributed to the body of knowledge around the acoustic effects of tidal energy development in several ways: (1) Through collection of data from Admiralty Inlet, established the sources of sound generated by strong currents (mobilizations of sediment and gravel) and determined that low-frequency sound recorded during periods of strong currents is non-propagating pseudo-sound. This helped to advance the debate within the marine and hydrokinetics acoustic community as to whether strong currents produce propagating sound. (2) Analyzed data collected from a tidal turbine operating at the European Marine Energy Center to develop a profile of turbine sound and developed a framework to evaluate the acoustic effects of deploying similar devices in other locations. This framework has been applied to Public Utility District No. 1 of Snohomish County's demonstration project in Admiralty Inlet to inform post-installation acoustic and marine mammal monitoring plans. (3) Demonstrated passive acoustic techniques to characterize the ambient noise environment at tidal energy sites (fixed, long-term observations recommended) and characterize the sound from anthropogenic sources (drifting, short-term observations recommended). (4) Demonstrated the utility and limitations of instrumentation, including bottom mounted instrumentation packages, infrared cameras, and vessel monitoring systems. In doing so, also demonstrated how this type of comprehensive information is needed to interpret observations from each instrument (e.g., hydrophone data can be combined with vessel tracking data to evaluate the contribution of vessel sound to ambient noise). (5) Conducted a study that suggests harbor porpoise in Admiralty Inlet may be habituated to high levels of ambient noise due to omnipresent vessel traffic. The inability to detect behavioral changes associated with a high intensity source of opportunity (passenger ferry) has informed the approach for post-installation marine mammal monitoring.
(6) Conducted laboratory exposure experiments of juvenile Chinook salmon and showed that exposure to a worse-than-worst-case acoustic dose of turbine sound does not result in changes to hearing thresholds or biologically significant tissue damage. Collectively, this means that Chinook salmon may be at a relatively low risk of injury from sound produced by tidal turbines located in or near their migration path. In achieving these accomplishments, the project has significantly advanced the District's goals of developing a demonstration-scale tidal energy project in Admiralty Inlet. Pilot demonstrations of this type are an essential step in the development of commercial-scale tidal energy in the United States. This is a renewable resource capable of producing electricity in a highly predictable manner.
Schäffer, Beat; Pieren, Reto; Schlittmeier, Sabine J; Brink, Mark
2018-05-19
Environmental noise from transportation or industrial infrastructure typically has a broad frequency range. Different sources may have disparate acoustical characteristics, which may in turn affect noise annoyance. However, knowledge of the relative contribution of the different acoustical characteristics of broadband noise to annoyance is still scarce. In this study, the subjectively perceived short-term (acute) annoyance reactions to different broadband sounds (namely, realistic outdoor wind turbine and artificial, generic sounds) at 40 dBA were investigated in a controlled laboratory listening experiment. Combined with the factorial design of the experiment, the sounds allowed for separation of the effects of three acoustical characteristics on annoyance, namely, spectral shape, depth of periodic amplitude modulation (AM), and occurrence (or absence) of random AM. Fifty-two participants rated their annoyance with the sounds. Annoyance increased with increasing energy content in the low-frequency range as well as with depth of periodic AM, and was higher in situations with random AM than without. Similar annoyance changes would be evoked by sound pressure level changes of up to 8 dB. The results suggest that besides standard sound pressure level metrics, other acoustical characteristics of (broadband) noise should also be considered in environmental impact assessments, e.g., in the context of wind turbine installations.
Spacecraft Internal Acoustic Environment Modeling
NASA Technical Reports Server (NTRS)
Chu, Shao-Sheng R.; Allen, Christopher S.
2010-01-01
Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. This paper describes the implementation of acoustic modeling for design purposes by incrementally increasing model fidelity and validating the accuracy of the model while predicting the noise of sources under various conditions. During FY 07, a simple-geometry Statistical Energy Analysis (SEA) model was developed and validated using a physical mockup and acoustic measurements. A process for modeling the effects of absorptive wall treatments and the resulting reverberation environment was developed. During FY 08, a model with more complex and representative geometry of the Orion Crew Module (CM) interior was built, and noise predictions based on input noise sources were made. A corresponding physical mockup was also built. Measurements were made inside this mockup, and comparisons with the model showed excellent agreement. During FY 09, the fidelity of the mockup and corresponding model were increased incrementally by including a simple ventilation system. The airborne noise contribution of the fans was measured using a sound intensity technique, since the sound power levels were not known beforehand. This is opposed to earlier studies where Reference Sound Sources (RSS) with known sound power level were used. Comparisons of the model results with measurements in the mockup again showed excellent agreement. During FY 10, the fidelity of the mockup and the model were further increased by including an ECLSS (Environmental Control and Life Support System) wall, associated closeout panels, and the gap between the ECLSS wall and the mockup wall. The effects of sealing the gap and adding sound-absorptive treatment to the ECLSS wall were also modeled and validated.
Neural plasticity associated with recently versus often heard objects.
Bourquin, Nathalie M-P; Spierer, Lucas; Murray, Micah M; Clarke, Stephanie
2012-09-01
In natural settings the same sound source is often heard repeatedly, with variations in spectro-temporal and spatial characteristics. We investigated how such repetitions influence sound representations and in particular how auditory cortices keep track of recently vs. often heard objects. A set of 40 environmental sounds was presented twice, i.e. as prime and as repeat, while subjects categorized the corresponding sound sources as living vs. non-living. Electrical neuroimaging analyses were applied to auditory evoked potentials (AEPs) comparing primes vs. repeats (effect of presentation) and the four experimental sections. Dynamic analysis of distributed source estimations revealed i) a significant main effect of presentation within the left temporal convexity at 164-215 ms post-stimulus onset; and ii) a significant main effect of section in the right temporo-parietal junction at 166-213 ms. A 3-way repeated measures ANOVA (hemisphere×presentation×section) applied to neural activity of the above clusters during the common time window confirmed the specificity of the left hemisphere for the effect of presentation, but not that of the right hemisphere for the effect of section. In conclusion, spatio-temporal dynamics of neural activity encode the temporal history of exposure to sound objects. Rapidly occurring plastic changes within the semantic representations of the left hemisphere keep track of objects heard a few seconds before, independent of the more general sound exposure history. Progressively occurring and more long-lasting plastic changes occurring predominantly within right hemispheric networks, which are known to code for perceptual, semantic and spatial aspects of sound objects, keep track of multiple exposures. Copyright © 2012 Elsevier Inc. All rights reserved.
Predicting Anthropogenic Noise Contributions to US Waters.
Gedamke, Jason; Ferguson, Megan; Harrison, Jolie; Hatch, Leila; Henderson, Laurel; Porter, Michael B; Southall, Brandon L; Van Parijs, Sofie
2016-01-01
To increase understanding of the potential effects of chronic underwater noise in US waters, the National Oceanic and Atmospheric Administration (NOAA) organized two working groups in 2011, collectively called "CetSound," to develop tools to map the density and distribution of cetaceans (CetMap) and predict the contribution of human activities to underwater noise (SoundMap). The SoundMap effort utilized data on density, distribution, acoustic signatures of dominant noise sources, and environmental descriptors to map estimated temporal, spatial, and spectral contributions to background noise. These predicted soundscapes are an initial step toward assessing chronic anthropogenic noise impacts on the ocean's varied acoustic habitats and the animals utilizing them.
Turbine Sound May Influence the Metamorphosis Behaviour of Estuarine Crab Megalopae
Pine, Matthew K.; Jeffs, Andrew G.; Radford, Craig A.
2012-01-01
It is now widely accepted that a shift towards renewable energy production is needed in order to avoid further anthropogenically induced climate change. The ocean provides a largely untapped source of renewable energy. As a result, harvesting electrical power from the wind and tides has sparked immense government and commercial interest but with relatively little detailed understanding of the potential environmental impacts. This study investigated how the sound emitted from an underwater tidal turbine and an offshore wind turbine would influence the settlement and metamorphosis of the pelagic larvae of estuarine brachyuran crabs which are ubiquitous in most coastal habitats. In a laboratory experiment the median time to metamorphosis (TTM) for the megalopae of the crabs Austrohelice crassa and Hemigrapsus crenulatus was significantly increased by at least 18 h when exposed to either tidal turbine or sea-based wind turbine sound, compared to silent control treatments. Contrastingly, when either species were subjected to natural habitat sound, observed median TTM decreased by approximately 21–31% compared to silent control treatments, 38–47% compared to tidal turbine sound treatments, and 46–60% compared to wind turbine sound treatments. A lack of difference in median TTM in A. crassa between two different source levels of tidal turbine sound suggests the frequency composition of turbine sound is more relevant in explaining such responses rather than sound intensity. These results show that estuarine mudflat sound mediates natural metamorphosis behaviour in two common species of estuarine crabs, and that exposure to continuous turbine sound interferes with this natural process. These results raise concerns about the potential ecological impacts of sound generated by renewable energy generation systems placed in the nearshore environment. PMID:23240063
DOT National Transportation Integrated Search
2012-08-19
Excessive anthropogenic noise has been associated with annoyance, disruption of sleep and cognitive processes, hearing impairment, and adverse impacts on cardiovascular and endocrine systems. Although transportation is a major source of noise, nation...
Psychoacoustical evaluation of natural and urban sounds in soundscapes.
Yang, Ming; Kang, Jian
2013-07-01
Among various sounds in the environment, natural sounds, such as water sounds and birdsongs, have proven to be highly preferred by humans, but the reasons for these preferences have not been thoroughly researched. This paper explores differences between various natural and urban environmental sounds from the viewpoint of objective measures, especially psychoacoustical parameters. The sound samples used in this study include the recordings of single sound source categories of water, wind, birdsongs, and urban sounds including street music, mechanical sounds, and traffic noise. The samples are analyzed with a number of existing psychoacoustical parameter algorithmic models. Based on hierarchical cluster and principal components analyses of the calculated results, a series of differences has been shown among different sound types in terms of key psychoacoustical parameters. While different sound categories cannot be identified using any single acoustical and psychoacoustical parameter, identification can be made with a group of parameters, as analyzed with artificial neural networks and discriminant functions in this paper. For artificial neural networks, correlations between network predictions and targets using the average and standard deviation data of psychoacoustical parameters as inputs are above 0.95 for the three natural sound categories and above 0.90 for the urban sound category. For sound identification/classification, key parameters are fluctuation strength, loudness, and sharpness.
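To make the classification step concrete, the sketch below shows how a small set of psychoacoustic summary features could feed a discriminant-function classifier, as mentioned above. The feature matrix and labels are random placeholders, not the study's data, and the specific loudness/sharpness/fluctuation-strength algorithms are not reproduced.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: mean and standard deviation of loudness, sharpness,
# and fluctuation strength per recording (random placeholders for illustration).
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 6))
y = rng.integers(0, 4, size=120)   # assumed labels: 0=water, 1=wind, 2=birdsong, 3=urban

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```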
Environmental Assessment of Installation Development at McConnell Air Force Base, Kansas
2007-05-01
characteristics of the noise source, distance between source and receptor, receptor sensitivity, weather, and time of day. Sound is measured with...bulk fuel storage and transfer, fuel dispensing, service stations, solvent degreasing, surface coating, and chemical usage/fugitive emissions. The...and weathered Permian bedrock. The deeper aquifer is within calcareous shales of the Wellington Formation. Groundwater flow follows the local
NASA Technical Reports Server (NTRS)
Wightman, Frederic L.; Jenison, Rick
1995-01-01
All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.
What the R & D Man Needs in Scientific Furniture and Why.
ERIC Educational Resources Information Center
Hallenberg, E.X.
1964-01-01
The complexity of today's research laboratories requires a completely controllable environmental surrounding. Research facilities require carefully controlled temperatures, relative humidity, and moisture, and must be free of interference from air-borne sound, mechanical vibrations, and electrical sources. Also desirable are special power…
López-Pacheco, María G; Sánchez-Fernández, Luis P; Molina-Lozano, Herón
2014-01-15
Noise levels of common sources such as vehicles, whistles, sirens, car horns and crowd sounds are mixed in urban soundscapes. Nowadays, environmental acoustic analysis is performed based on mixture signals recorded by monitoring systems. These mixed signals make individual analysis difficult, although such analysis is useful for taking actions to reduce and control environmental noise. This paper aims to separate the individual noise sources from recorded mixtures in order to evaluate the noise level of each estimated source. A method based on blind deconvolution and blind source separation in the wavelet domain is proposed. This approach provides a basis to improve results obtained in monitoring and analysis of common noise sources in urban areas. The method is validated through experiments based on knowledge of the predominant noise sources in urban soundscapes. Actual recordings of common noise sources are used to acquire mixture signals using a microphone array in semi-controlled environments. The developed method has demonstrated great performance improvements in identification, analysis and evaluation of common urban sources. © 2013 Elsevier B.V. All rights reserved.
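The paper's method works with blind deconvolution and wavelet-domain separation; the sketch below illustrates only the simpler instantaneous-mixture case with FastICA and then reports a relative level for each estimated source. It is a simplified stand-in, not the authors' algorithm, and the absolute levels are meaningless because ICA scaling is ambiguous.

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_and_level(mixtures):
    """mixtures: (n_samples, n_channels) array of microphone signals.
    Returns estimated sources and their RMS levels in dB relative to the
    strongest source, assuming an instantaneous (non-convolutive) mixture."""
    ica = FastICA(n_components=mixtures.shape[1], random_state=0)
    sources = ica.fit_transform(mixtures)          # (n_samples, n_sources)
    rms = np.sqrt(np.mean(sources ** 2, axis=0))
    levels_db = 20.0 * np.log10(rms / (np.max(rms) + 1e-12))
    return sources, levels_db
```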
Behavioral response of manatees to variations in environmental sound levels
Miksis-Olds, Jennifer L.; Wagner, Tyler
2011-01-01
Florida manatees (Trichechus manatus latirostris) inhabit coastal regions because they feed on the aquatic vegetation that grows in shallow waters, which are the same areas where human activities are greatest. Noise produced from anthropogenic and natural sources has the potential to affect these animals by eliciting responses ranging from mild behavioral changes to extreme aversion. Sound levels were calculated from recordings made throughout behavioral observation periods. An information theoretic approach was used to investigate the relationship between behavior patterns and sound level. Results indicated that elevated sound levels affect manatee activity and are a function of behavioral state. The proportion of time manatees spent feeding and milling changed in response to sound level. When ambient sound levels were highest, more time was spent in the directed, goal-oriented behavior of feeding, whereas less time was spent engaged in undirected behavior such as milling. This work illustrates how shifts in activity of individual manatees may be useful parameters for identifying impacts of noise on manatees and might inform population level effects.
Environmental Sound Training in Cochlear Implant Users
Sheft, Stanley; Kuvadia, Sejal; Gygi, Brian
2015-01-01
Purpose The study investigated the effect of a short computer-based environmental sound training regimen on the perception of environmental sounds and speech in experienced cochlear implant (CI) patients. Method Fourteen CI patients with an average of 5 years of CI experience participated. The protocol consisted of 2 pretests, 1 week apart, followed by 4 environmental sound training sessions conducted on separate days in 1 week, and concluded with 2 posttest sessions, separated by another week without training. Each testing session included an environmental sound test, which consisted of 40 familiar everyday sounds, each represented by 4 different tokens, as well as the Consonant Nucleus Consonant (CNC) word test and the Revised Speech Perception in Noise (SPIN-R) sentence test. Results Environmental sound scores were lower than those for either of the speech tests. Following training, there was a significant average improvement of 15.8 points in environmental sound perception, which persisted 1 week later after training was discontinued. No significant improvements were observed for either speech test. Conclusions The findings demonstrate that environmental sound perception, which remains problematic even for experienced CI patients, can be improved with a home-based computer training regimen. Such computer-based training may thus provide an effective low-cost approach to rehabilitation for CI users, and, potentially, other hearing-impaired populations. PMID:25633579
Yost, William A; Zhong, Xuan; Najam, Anbar
2015-11-01
In four experiments listeners were rotated or were stationary. Sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuth array. When either sounds or listeners rotate the auditory cues used for sound source localization change, but in the everyday world listeners perceive sound rotation only when sounds rotate not when listeners rotate. In the everyday world sound source locations are referenced to positions in the environment (a world-centric reference system). The auditory cues for sound source location indicate locations relative to the head (a head-centric reference system), not locations relative to the world. This paper deals with a general hypothesis that the world-centric location of sound sources requires the auditory system to have information about auditory cues used for sound source location and cues about head position. The use of visual and vestibular information in determining rotating head position in sound rotation perception was investigated. The experiments show that sound rotation perception when sources and listeners rotate was based on acoustic, visual, and, perhaps, vestibular information. The findings are consistent with the general hypotheses and suggest that sound source localization is not based just on acoustics. It is a multisystem process.
Worldwide Emerging Environmental Issues Affecting the U.S. Military. November 2005 Report
2005-11-01
rapid development. At the program's launch festivity, the need for developing an international e-waste recycling system along with transparent...electronic equipment. Sources: Roadmap Set for the Environmentally Sound Management of Electronic Waste in Asia-Pacific under the Basel Convention...34 Tom Dunne, of the agency's Office of Solid Waste and Emergency Response, wrote in an e-mail message. 4.5 Sunk Weapons Represent a Growing
A Lexical Analysis of Environmental Sound Categories
ERIC Educational Resources Information Center
Houix, Olivier; Lemaitre, Guillaume; Misdariis, Nicolas; Susini, Patrick; Urdapilleta, Isabel
2012-01-01
In this article we report on listener categorization of meaningful environmental sounds. A starting point for this study was the phenomenological taxonomy proposed by Gaver (1993b). In the first experimental study, 15 participants classified 60 environmental sounds and indicated the properties shared by the sounds in each class. In a second…
Mathematically trivial control of sound using a parametric beam focusing source.
Tanaka, Nobuo; Tanaka, Motoki
2011-01-01
By exploiting a case usually regarded as trivial, this paper presents global active noise control using a parametric beam focusing source (PBFS). In a dipole model, where one monopole serves as the primary sound source and the other as the control sound source, the control effect obtained by minimizing the total acoustic power depends on the distance between the two. When the distance becomes zero, the total acoustic power becomes null, hence the trivial case. Because of practical constraints, it is difficult to place a control source close enough to a primary source. However, by projecting the sound beam of a parametric array loudspeaker onto the target sound source (the primary source), a virtual sound source may be created on the target sound source, thereby enabling the collocation of the sources. To further ensure the feasibility of the trivial case, a PBFS is then introduced in an effort to match the sizes of the two sources. The reflected sound wave of the PBFS, which is tantamount to the virtual sound source output, suppresses the primary sound. Finally, a numerical analysis as well as an experiment is conducted, verifying the validity of the proposed methodology.
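The "trivial case" argument can be quantified with the standard textbook result for two point monopoles (see, e.g., active-noise-control texts): with an optimally driven secondary source at distance d from the primary, the minimum total radiated power falls off as 1 - sinc^2(kd), collapsing to zero as d approaches zero. The sketch below evaluates this relation; it is a generic illustration, not the PBFS analysis itself.

```python
import numpy as np

def min_power_ratio(k, d):
    """Minimum total radiated power relative to the primary source alone when a
    single secondary monopole at distance d cancels a primary monopole:
    W_min / W_p = 1 - sinc^2(k d), with sinc(x) = sin(x)/x."""
    s = np.sinc(k * d / np.pi)     # numpy's sinc is sin(pi x)/(pi x)
    return 1.0 - s ** 2

f, c = 500.0, 343.0                # assumed frequency and sound speed
k = 2.0 * np.pi * f / c
for d in (0.01, 0.05, 0.1, 0.5):
    reduction_db = 10.0 * np.log10(min_power_ratio(k, d))
    print(f"d = {d:4.2f} m -> total power relative to primary: {reduction_db:6.1f} dB")
```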
A Flexible 360-Degree Thermal Sound Source Based on Laser Induced Graphene
Tao, Lu-Qi; Liu, Ying; Ju, Zhen-Yi; Tian, He; Xie, Qian-Yi; Yang, Yi; Ren, Tian-Ling
2016-01-01
A flexible sound source is essential in a whole flexible system. It’s hard to integrate a conventional sound source based on a piezoelectric part into a whole flexible system. Moreover, the sound pressure from the back side of a sound source is usually weaker than that from the front side. With the help of direct laser writing (DLW) technology, the fabrication of a flexible 360-degree thermal sound source becomes possible. A 650-nm low-power laser was used to reduce the graphene oxide (GO). The stripped laser induced graphene thermal sound source was then attached to the surface of a cylindrical bottle so that it could emit sound in a 360-degree direction. The sound pressure level and directivity of the sound source were tested, and the results were in good agreement with the theoretical results. Because of its 360-degree sound field, high flexibility, high efficiency, low cost, and good reliability, the 360-degree thermal acoustic sound source will be widely applied in consumer electronics, multi-media systems, and ultrasonic detection and imaging. PMID:28335239
Environmental issues: the challenge for the chief executive.
Hutchinson, C
1992-06-01
Businesses are under pressure to adopt environmental policies and incorporate them into their strategic business planning as a matter of routine. These pressures are coming from at least five sources--stricter legislation, consumer demand, competitive advantage, staff concerns and community pressure. The challenge is enormous, but there is growing evidence that sound environmental management provides a pay-off in bottom-line results. Business organizations have a vital role to play, and it is good for them. There are opportunities for new business as well as threats to those organizations which continue to ignore the trends.
Assessment and control design for steam vent noise in an oil refinery.
Monazzam, Mohammad Reza; Golmohammadi, Rostam; Nourollahi, Maryam; Momen Bellah Fard, Samaneh
2011-06-13
Noise is one of the most important harmful agents in the work environment. Noise pollution in oil refinery industries is related to workers' health. This study aimed to determine the overall noise pollution of an oil refinery operation and to perform its frequency analysis in order to develop a control plan for vent noise in these industries. This experimental study was performed in the control unit of Tehran Oil Refinery in 2008. To determine the noise distribution, environmental noise measurements were carried out by the lattice method according to basic information and the technical process. The sound pressure level and frequency distribution were measured for each studied source individually. According to the vent's specification, the measured steam noise characteristics were reviewed and compared with the theoretical results of steam noise estimation. Eventually, a double expansion muffler was designed. Data analysis and graphical design were carried out using Excel software. The results of environmental noise measurements indicated that the sound pressure level was above the national permitted level (85 dB(A)). The mean sound pressure level of the studied steam jet was 90.3 dB(L). The results of noise frequency analysis for the steam vents showed that the dominant frequency was 4000 Hz. To obtain a 17 dB noise reduction, a double-chamber aluminum muffler, 500 mm in length and 200 mm in diameter and consisting of a drilled pipe, was designed. After the characteristics of the steam vent noise were separated from those of other sources, a double expansion muffler was designed using a new method based on the steam noise level and the principal sound frequency.
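For orientation, the classical transmission-loss formula for a single expansion chamber shows how chamber dimensions are tuned to a dominant frequency such as the 4000 Hz reported above. The sketch uses the chamber diameter and length quoted in the abstract but assumes the inlet pipe diameter; the actual design was a double-expansion muffler whose details are not reproduced here.

```python
import numpy as np

def expansion_chamber_tl(f, d_pipe, d_chamber, length, c=343.0):
    """Transmission loss (dB) of a single expansion chamber:
    TL = 10*log10(1 + 0.25*(m - 1/m)^2 * sin^2(k*L)), with m the area ratio."""
    m = (d_chamber / d_pipe) ** 2
    k = 2.0 * np.pi * f / c
    return 10.0 * np.log10(1.0 + 0.25 * (m - 1.0 / m) ** 2 * np.sin(k * length) ** 2)

# 200 mm chamber, 500 mm long (from the abstract); 50 mm inlet pipe is an assumption.
print(expansion_chamber_tl(4000.0, 0.05, 0.2, 0.5))
```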
Spatio-Temporal Evolution of Sound Speed Channels on the Chukchi Shelf
NASA Astrophysics Data System (ADS)
Eickmeier, J.; Badiey, M.; Wan, L.
2017-12-01
The physics of an acoustic waveguide is influenced by various boundary conditions as well as by spatial and temporal fluctuations in the temperature and salinity profiles of the water column. The shallow-water Canadian Basin Acoustic Propagation Experiment (CANAPE) was designed to study the effect of oceanographic variability on the acoustic field. A pilot study was conducted in the summer of 2015, full deployment of acoustic and environmental moorings took place in 2016, and recovery will occur in late 2017. An example of strong oceanographic variability in the SW region is depicted in Figure 1. Over the course of 7 days, warm Bering Sea water arrived on the Chukchi Shelf and sank in the water column to between 25 m and 125 m depth. This warm water spread to a range of 10 km, and a potential eddy of warm water formed, causing an increase in sound speed between 15 km and 20 km range in Fig. 1(b). Due to the increased sound speed, a strong sound channel evolved between 100 m and 200 m for acoustic waves arriving from off-shelf, deep-water sources. In Fig. 1(a), the initial formation of the acoustic channel is only evident in 50 m to 100 m of water out to a range of 5 km. Recorded environmental data will be used to study fluctuations in sound speed channel formation on the Chukchi Shelf. Data collected in 2015 and 2016 have shown sound duct evolution over 7 days and over a one-month period. Analysis is projected to show sound channel formation over a new range of spatio-temporal scales. This analysis will show a cycle of sound channels opening and closing on the shelf, where this cycle strongly influences the propagation path, range and attenuation of acoustic waves.
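The link between the environmental moorings and the acoustic ducts is the sound speed profile computed from temperature, salinity, and depth. The sketch below uses a simplified empirical formula of the Medwin type as an illustration; it is an assumption that the experimenters used this particular expression, and more accurate equations exist for survey-grade work.

```python
def sound_speed(T, S, z):
    """Approximate sound speed in seawater (m/s) from temperature T (deg C),
    salinity S (psu), and depth z (m), using a simplified Medwin-type formula
    (illustrative only; valid for typical shelf conditions)."""
    return (1449.2 + 4.6 * T - 0.055 * T ** 2 + 0.00029 * T ** 3
            + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)

# Warm Bering Sea water sinking to mid-depth raises the local sound speed:
print(sound_speed(6.0, 32.5, 75.0), sound_speed(0.0, 32.5, 75.0))
```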
Characterizing, synthesizing, and/or canceling out acoustic signals from sound sources
Holzrichter, John F [Berkeley, CA; Ng, Lawrence C [Danville, CA
2007-03-13
A system for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate and animate sound sources. Electromagnetic sensors monitor excitation sources in sound-producing systems, such as the human voice, machines, musical instruments, and various other structures. Acoustical output from these sound-producing systems is also monitored. From such information, a transfer function characterizing the sound-producing system is generated. From the transfer function, acoustical output from the sound-producing system may be synthesized or canceled. The systems disclosed enable accurate calculation of transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
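A conventional way to estimate a transfer function relating a measured excitation to a measured acoustic output is the H1 estimator from cross- and auto-spectra; the sketch below shows that generic step with assumed signal names, and is not the patent's specific processing chain.

```python
import numpy as np
from scipy.signal import csd, welch

def estimate_transfer_function(excitation, acoustic_out, fs, nperseg=2048):
    """H1 estimate of the transfer function H(f) = S_xy(f) / S_xx(f) between an
    excitation signal (e.g., from an electromagnetic sensor) and the acoustic output."""
    f, Sxy = csd(excitation, acoustic_out, fs=fs, nperseg=nperseg)
    _, Sxx = welch(excitation, fs=fs, nperseg=nperseg)
    return f, Sxy / Sxx

# Filtering a new excitation through H(f) would synthesize the expected acoustic
# output, which could then be compared with, or subtracted from, the measured sound.
```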
Effects of environmental sounds on the guessability of animated graphic symbols.
Harmon, Ashley C; Schlosser, Ralf W; Gygi, Brian; Shane, Howard C; Kong, Ying-Yee; Book, Lorraine; Macduff, Kelly; Hearn, Emilia
2014-12-01
Graphic symbols are a necessity for pre-literate children who use aided augmentative and alternative communication (AAC) systems (including non-electronic communication boards and speech generating devices), as well as for mobile technologies using AAC applications. Recently, developers of the Autism Language Program (ALP) Animated Graphics Set have added environmental sounds to animated symbols representing verbs in an attempt to enhance their iconicity. The purpose of this study was to examine the effects of environmental sounds (added to animated graphic symbols representing verbs) in terms of naming. Participants included 46 children with typical development between the ages of 3;0 to 3;11 (years;months). The participants were randomly allocated to a condition of symbols with environmental sounds or a condition without environmental sounds. Results indicated that environmental sounds significantly enhanced the naming accuracy of animated symbols for verbs. Implications in terms of symbol selection, symbol refinement, and future symbol development will be discussed.
ERIC Educational Resources Information Center
Giordano, Bruno L.; McDonnell, John; McAdams, Stephen
2010-01-01
The neurocognitive processing of environmental sounds and linguistic stimuli shares common semantic resources and can lead to the activation of motor programs for the generation of the passively heard sound or speech. We investigated the extent to which the cognition of environmental sounds, like that of language, relies on symbolic mental…
Demodulation processes in auditory perception
NASA Astrophysics Data System (ADS)
Feth, Lawrence L.
1994-08-01
The long-range goal of this project is the understanding of human auditory processing of information conveyed by complex, time-varying signals such as speech, music or important environmental sounds. Our work is guided by the assumption that human auditory communication is a 'modulation-demodulation' process. That is, we assume that sound sources produce a complex stream of sound pressure waves with information encoded as variations (modulations) of the signal amplitude and frequency. The listener's task, then, is one of demodulation. Much of past psychoacoustics work has been based on what we characterize as 'spectrum picture processing.' Complex sounds are Fourier analyzed to produce an amplitude-by-frequency 'picture' and the perception process is modeled as if the listener were analyzing the spectral picture. This approach leads to studies such as 'profile analysis' and the power-spectrum model of masking. Our approach leads us to investigate time-varying, complex sounds. We refer to them as dynamic signals and we have developed auditory signal processing models to help guide our experimental work.
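One standard way to demodulate a dynamic signal into its amplitude and frequency variations is via the analytic signal (Hilbert transform), which separates the envelope (AM) from the instantaneous frequency (FM). The sketch below is a generic illustration of that idea, not a model from the project.

```python
import numpy as np
from scipy.signal import hilbert

def demodulate(x, fs):
    """Return the amplitude envelope and instantaneous frequency (Hz) of x
    using the analytic signal."""
    analytic = hilbert(x)
    envelope = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2.0 * np.pi)
    return envelope, inst_freq

# Example: a 440 Hz carrier amplitude-modulated at 4 Hz
fs = 8000
t = np.arange(0, 1.0, 1.0 / fs)
x = (1.0 + 0.5 * np.cos(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 440 * t)
env, fi = demodulate(x, fs)
```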
40 CFR 123.32 - Request by an Indian Tribe for a determination of eligibility.
Code of Federal Regulations, 2010 CFR
2010-07-01
... exercise of police powers affecting (or relating to) the health, safety, and welfare of the affected population; taxation; and the exercise of the power of eminent domain; and (3) Identify the source of the... capability of the Indian Tribe to administer an effective, environmentally sound NPDES permit program. The...
A Survey on the Feasibility of Sound Classification on Wireless Sensor Nodes
Salomons, Etto L.; Havinga, Paul J. M.
2015-01-01
Wireless sensor networks are suitable to gain context awareness for indoor environments. As sound waves form a rich source of context information, equipping the nodes with microphones can be of great benefit. The algorithms to extract features from sound waves are often highly computationally intensive. This can be problematic as wireless nodes are usually restricted in resources. In order to be able to make a proper decision about which features to use, we survey how sound is used in the literature for global sound classification, age and gender classification, emotion recognition, person verification and identification and indoor and outdoor environmental sound classification. The results of the surveyed algorithms are compared with respect to accuracy and computational load. The accuracies are taken from the surveyed papers; the computational loads are determined by benchmarking the algorithms on an actual sensor node. We conclude that for indoor context awareness, the low-cost algorithms for feature extraction perform equally well as the more computationally-intensive variants. As the feature extraction still requires a large amount of processing time, we present four possible strategies to deal with this problem. PMID:25822142
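Two of the cheapest frame-level features commonly used on resource-constrained nodes are short-time energy and zero-crossing rate; the sketch below computes both, with the frame and hop sizes as assumptions. This is a generic illustration of "low-cost feature extraction", not the specific algorithms benchmarked in the survey.

```python
import numpy as np

def frame_features(x, frame_len=256, hop=128):
    """Short-time energy and zero-crossing rate per frame -- low-cost features
    suitable for constrained sensor nodes (frame sizes are assumptions)."""
    feats = []
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len]
        energy = float(np.mean(frame ** 2))
        signs = np.signbit(frame).astype(np.int8)
        zcr = float(np.mean(np.abs(np.diff(signs))))   # fraction of sign changes
        feats.append((energy, zcr))
    return np.array(feats)
```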
Directional Hearing and Sound Source Localization in Fishes.
Sisneros, Joseph A; Rogers, Peter H
2016-01-01
Evidence suggests that the capacity for sound source localization is common to mammals, birds, reptiles, and amphibians, but surprisingly it is not known whether fish locate sound sources in the same manner (e.g., combining binaural and monaural cues) or what computational strategies they use for successful source localization. Directional hearing and sound source localization in fishes continue to be important topics in neuroethology and in the hearing sciences, but the empirical and theoretical work on these topics has been contradictory and obscure for decades. This chapter reviews the previous behavioral work on directional hearing and sound source localization in fishes, including the most recent experiments on sound source localization by the plainfin midshipman fish (Porichthys notatus), which has proven to be an exceptional species for fish studies of sound localization. In addition, the theoretical models of directional hearing and sound source localization for fishes are reviewed, including a new model that uses a time-averaged intensity approach for source localization that has wide applicability with regard to source type, acoustic environment, and time waveform.
Correlation between Identification Accuracy and Response Confidence for Common Environmental Sounds
set of environmental sounds with stimulus control and precision. The present study is one in a series of efforts to provide a baseline evaluation of a...sounds from six broad categories: household items, alarms, animals, human generated, mechanical, and vehicle sounds. Each sound was presented five times
Sound level exposure of high-risk infants in different environmental conditions.
Byers, Jacqueline F; Waugh, W Randolph; Lowman, Linda B
2006-01-01
To provide descriptive information about the sound levels to which high-risk infants are exposed in various actual environmental conditions in the NICU, including the impact of physical renovation on sound levels, and to assess the contributions of various types of equipment, alarms, and activities to sound levels in simulated conditions in the NICU. Descriptive and comparative design. Convenience sample of 134 infants at a southeastern quaternary children's hospital. A-weighted decibel (dBA) sound levels under various actual and simulated environmental conditions. The renovated NICU was, on average, 4-6 dBA quieter across all environmental conditions than a comparable nonrenovated room, representing a significant sound level reduction. Sound levels remained above consensus recommendations despite physical redesign and staff training. Respiratory therapy equipment, alarms, staff talking, and infant fussiness contributed to higher sound levels. Evidence-based sound-reducing strategies are proposed. Findings were used to plan environment management as part of a developmental, family-centered care, performance improvement program and in new NICU planning.
Source and listener directivity for interactive wave-based sound propagation.
Mehra, Ravish; Antani, Lakulish; Kim, Sujeong; Manocha, Dinesh
2014-04-01
We present an approach to model dynamic, data-driven source and listener directivity for interactive wave-based sound propagation in virtual environments and computer games. Our directional source representation is expressed as a linear combination of elementary spherical harmonic (SH) sources. In the preprocessing stage, we precompute and encode the propagated sound fields due to each SH source. At runtime, we perform the SH decomposition of the varying source directivity interactively and compute the total sound field at the listener position as a weighted sum of precomputed SH sound fields. We propose a novel plane-wave decomposition approach based on higher-order derivatives of the sound field that enables dynamic HRTF-based listener directivity at runtime. We provide a generic framework to incorporate our source and listener directivity in any offline or online frequency-domain wave-based sound propagation algorithm. We have integrated our sound propagation system in Valve's Source game engine and use it to demonstrate realistic acoustic effects such as sound amplification, diffraction low-passing, scattering, localization, externalization, and spatial sound, generated by wave-based propagation of directional sources and listener in complex scenarios. We also present results from our preliminary user study.
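The runtime step described above reduces to two operations: project the instantaneous source directivity onto a spherical-harmonic basis, then form the listener pressure as the same weighted sum of per-SH precomputed fields. The sketch below illustrates only that combination step with a least-squares projection; array shapes, the SH order, and the use of scipy's complex spherical harmonics are assumptions, and the paper's plane-wave listener-directivity decomposition is not shown.

```python
import numpy as np
from scipy.special import sph_harm

def sh_basis(order, azimuth, colatitude):
    """Complex spherical-harmonic basis matrix B, one column per (n, m) term
    up to the given order, evaluated at the sampled directions."""
    cols = [sph_harm(m, n, azimuth, colatitude)
            for n in range(order + 1) for m in range(-n, n + 1)]
    return np.column_stack(cols)                   # shape (n_dirs, (order+1)^2)

def listener_pressure(directivity, azimuth, colatitude, precomputed_fields, order=2):
    """directivity: sampled source directivity over directions (n_dirs,).
    precomputed_fields: pressure at the listener due to each SH source ((order+1)^2,).
    Returns the total pressure as the SH-coefficient-weighted sum."""
    B = sh_basis(order, azimuth, colatitude)
    coeffs, *_ = np.linalg.lstsq(B, directivity.astype(complex), rcond=None)
    return coeffs @ precomputed_fields
```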
Bjørgesaeter, Anders; Ugland, Karl Inne; Bjørge, Arne
2004-10-01
The male harbor seal (Phoca vitulina) produces broadband nonharmonic vocalizations underwater during the breeding season. In total, 120 vocalizations from six colonies were analyzed to provide a description of the acoustic structure and for the presence of geographic variation. The complex harbor seal vocalizations may be described by how the frequency bandwidth varies over time. An algorithm that identifies the boundaries between noise and signal from digital spectrograms was developed in order to extract a frequency bandwidth contour. The contours were used as inputs for multivariate analysis. The vocalizations' sound types (e.g., pulsed sound, whistle, and broadband nonharmonic sound) were determined by comparing the vocalizations' spectrographic representations with sound waves produced by known sound sources. Comparison between colonies revealed differences in the frequency contours, as well as some geographical variation in use of sound types. The vocal differences may reflect a limited exchange of individuals between the six colonies due to long distances and strong site fidelity. Geographically different vocal repertoires have potential for identifying discrete breeding colonies of harbor seals, but more information is needed on the nature and extent of early movements of young, the degree of learning, and the stability of the vocal repertoire. A characteristic feature of many vocalizations in this study was the presence of tonal-like introductory phrases that fit into the categories pulsed sound and whistles. The functions of these phrases are unknown but may be important in distance perception and localization of the sound source. The potential behavioral consequences of the observed variability may be indicative of adaptations to different environmental properties influencing determination of distance and direction and plausible different male mating tactics.
Zhang, Xiao-Zheng; Bi, Chuan-Xing; Zhang, Yong-Bin; Xu, Liang
2015-05-01
Planar near-field acoustic holography has been successfully extended to reconstruct the sound field in a moving medium; however, the reconstructed field still contains the convection effect that might lead to incorrect identification of sound sources. In order to accurately identify sound sources in a moving medium, a time-domain equivalent source method is developed. In the method, the real source is replaced by a series of time-domain equivalent sources whose strengths are solved iteratively by utilizing the measured pressure and the known convective time-domain Green's function, and time averaging is used to reduce the instability in the iterative solving process. Since these solved equivalent source strengths are independent of the convection effect, they can be used not only to identify sound sources but also to model sound radiations in both moving and static media. Numerical simulations are performed to investigate the influence of noise on the solved equivalent source strengths and the effect of time averaging on reducing the instability, and to demonstrate the advantages of the proposed method for source identification and sound radiation modeling.
Duda, Timothy F; Lin, Ying-Tsong; Reeder, D Benjamin
2011-09-01
A study of 400 Hz sound focusing and ducting effects in a packet of curved nonlinear internal waves in shallow water is presented. Sound propagation roughly along the crests of the waves is simulated with a three-dimensional parabolic equation computational code, and the results are compared to measured propagation along fixed 3 and 6 km source/receiver paths. The measurements were made on the shelf of the South China Sea northeast of Tung-Sha Island. Construction of the time-varying three-dimensional sound-speed fields used in the modeling simulations was guided by environmental data collected concurrently with the acoustic data. Computed three-dimensional propagation results compare well with field observations. The simulations allow identification of time-dependent sound forward scattering and ducting processes within the curved internal gravity waves. Strong acoustic intensity enhancement was observed during passage of high-amplitude nonlinear waves over the source/receiver paths, and is replicated in the model. The waves were typical of the region (35 m vertical displacement). Two types of ducting are found in the model, which occur asynchronously. One type is three-dimensional modal trapping in deep ducts within the wave crests (shallow thermocline zones). The second type is surface ducting within the wave troughs (deep thermocline zones). © 2011 Acoustical Society of America
Recognition and characterization of unstructured environmental sounds
NASA Astrophysics Data System (ADS)
Chu, Selina
2011-12-01
Environmental sounds are what we hear every day, or more generally the sounds that surround us: ambient or background audio. Humans utilize both vision and hearing to respond to their surroundings, a capability still quite limited in machine processing. The first step toward achieving multimodal input applications is the ability to process unstructured audio and recognize audio scenes (or environments). Such an ability would have applications in content analysis and mining of multimedia data, or in improving robustness in context-aware applications through multi-modality, such as in assistive robotics, surveillance, or mobile device-based services. The goal of this thesis is the characterization of unstructured environmental sounds for understanding and predicting the context surrounding an agent or device. Most research on audio recognition has focused primarily on speech and music; less attention has been paid to the challenges and opportunities of using audio to characterize unstructured environments. My research focuses on investigating challenging issues in characterizing unstructured environmental audio and on developing novel algorithms for modeling the variations of the environment. The first step in building a recognition system for unstructured auditory environments was to investigate techniques and audio features for working with such audio data. We begin with a study that explores suitable features and the feasibility of designing an automatic environment recognition system using audio information. In this initial investigation, I found that traditional recognition and feature extraction approaches for audio were not suitable for environmental sound, which lacks the formantic and harmonic structures of speech and music, thus dispelling the notion that traditional speech and music recognition techniques can simply be reused for realistic environmental sound. Natural unstructured environments contain a large variety of sounds, which are in fact noise-like and are not effectively modeled by Mel-frequency cepstral coefficients (MFCCs) or other commonly used audio features, e.g. energy, zero-crossing rate, etc. Due to the lack of features suitable for environmental audio, and to achieve a more effective representation, I proposed a specialized feature extraction algorithm for environmental sounds that utilizes the matching pursuit (MP) algorithm to learn the inherent structure of each type of sound, which we call MP-features. MP-features have been shown to capture and represent sounds from different sources and different ranges where frequency-domain features (e.g., MFCCs) fail, and they can be advantageous when combined with MFCCs to improve overall performance. The third component of this work is the investigation of modeling and detecting the background audio. One of the goals of this research is to characterize an environment. Since many events blend into the background, I looked for a way to build a general model for any particular environment: once we have a model of the background, we can identify foreground events even if we haven't seen those events before. Therefore, the next step is to investigate learning an audio background model for each environment type, despite the occurrence of different foreground events.
In this work, I presented a framework for robust audio background modeling, which includes learning models for prediction, data knowledge, and persistent characteristics of the environment. This approach has the ability to model the background and detect foreground events, as well as the ability to verify whether the predicted background is indeed the background or a foreground event that persists for a longer period of time. In this work, I also investigated the use of a semi-supervised learning technique to exploit and label new unlabeled audio data. The final components of my thesis involve investigating the learning of sound structures for generalization and applying the proposed ideas to context-aware applications. Environmental sound is inherently noisy and contains relatively large amounts of overlapping events between different environments. Environmental sounds contain large variances even within a single environment type, and frequently there are no divisible or clear boundaries between some types. Traditional methods of classification are generally not robust enough to handle classes with overlaps; such audio hence requires representation by complex models. Using a deep learning architecture provides a way to obtain a generative model-based method for classification. Specifically, I considered the use of Deep Belief Networks (DBNs) to model environmental audio and to investigate their applicability to noisy data in order to improve robustness and generalization. A framework was proposed using composite-DBNs to discover high-level representations and to learn a hierarchical structure for different acoustic environments in a data-driven fashion. Experimental results on real data sets demonstrate its effectiveness over traditional methods, with over 90% recognition accuracy for a large number of environmental sound types.
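The MP-features mentioned above are built from the atoms selected by a matching pursuit decomposition. The sketch below is a minimal greedy matching-pursuit loop over a user-supplied dictionary; the thesis's Gabor dictionary and the specific statistics used to form the feature vector are not reproduced, and unit-norm dictionary columns are assumed.

```python
import numpy as np

def matching_pursuit(x, dictionary, n_atoms=20):
    """Greedy matching pursuit: at each step pick the unit-norm dictionary atom
    (a column of `dictionary`) most correlated with the residual, record its
    index and coefficient, and subtract its contribution."""
    residual = x.astype(float).copy()
    selection = []
    for _ in range(n_atoms):
        corr = dictionary.T @ residual
        idx = int(np.argmax(np.abs(corr)))
        coef = float(corr[idx])
        residual -= coef * dictionary[:, idx]
        selection.append((idx, coef))
    return selection, residual

# A feature vector (e.g., MP-features) can then be built from statistics of the
# selected atoms' parameters (frequency, scale), depending on the dictionary used.
```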
Selective Listening Point Audio Based on Blind Signal Separation and Stereophonic Technology
NASA Astrophysics Data System (ADS)
Niwa, Kenta; Nishino, Takanori; Takeda, Kazuya
A sound field reproduction method is proposed that uses blind source separation and a head-related transfer function. In the proposed system, multichannel acoustic signals captured at distant microphones are decomposed into a set of location/signal pairs of virtual sound sources based on frequency-domain independent component analysis. After estimating the locations and the signals of the virtual sources, the spatial sound at the selected point is constructed by convolving the controlled acoustic transfer functions with each signal. In experiments, a sound field made by six sound sources is captured using 48 distant microphones and decomposed into sets of virtual sound sources. Since subjective evaluation shows no significant difference between natural and reconstructed sound when six virtual sources are used, the effectiveness of the decomposition algorithm, as well as of the virtual source representation, is confirmed.
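The final rendering step amounts to convolving each estimated virtual source with the head-related impulse response for its direction and summing the results into a binaural signal at the selected listening point. The sketch below shows only that resynthesis step; the HRIR lookup per source direction and matching HRIR lengths are assumptions, and the frequency-domain ICA front end is not included.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_selected_point(sources, hrirs_left, hrirs_right):
    """sources: list of 1-D virtual-source signals; hrirs_left/right: matching lists
    of head-related impulse responses for each source direction (assumed given).
    Returns the binaural (stereo) signal at the selected listening point."""
    n = max(len(s) + len(h) - 1 for s, h in zip(sources, hrirs_left))
    left, right = np.zeros(n), np.zeros(n)
    for s, hl, hr in zip(sources, hrirs_left, hrirs_right):
        yl = fftconvolve(s, hl)
        yr = fftconvolve(s, hr)
        left[:len(yl)] += yl
        right[:len(yr)] += yr
    return np.stack([left, right], axis=1)
```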
A method for evaluating the relation between sound source segregation and masking
Lutfi, Robert A.; Liu, Ching-Ju
2011-01-01
Sound source segregation refers to the ability to hear as separate entities two or more sound sources comprising a mixture. Masking refers to the ability of one sound to make another sound difficult to hear. Often in studies, masking is assumed to result from a failure of segregation, but this assumption may not always be correct. Here a method is offered to identify the relation between masking and sound source segregation in studies and an example is given of its application. PMID:21302979
Subjective scaling of spatial room acoustic parameters influenced by visual environmental cues
Valente, Daniel L.; Braasch, Jonas
2010-01-01
Although there have been numerous studies investigating subjective spatial impression in rooms, only a few of those studies have addressed the influence of visual cues on the judgment of auditory measures. In the psychophysical study presented here, video footage of five solo music∕speech performers was shown for four different listening positions within a general-purpose space. The videos were presented in addition to the acoustic signals, which were auralized using binaural room impulse responses (BRIR) that were recorded in the same general-purpose space. The participants were asked to adjust the direct-to-reverberant energy ratio (D∕R ratio) of the BRIR according to their expectation considering the visual cues. They were also directed to rate the apparent source width (ASW) and listener envelopment (LEV) for each condition. Visual cues generated by changing the sound-source position in the multi-purpose space, as well as the makeup of the sound stimuli affected the judgment of spatial impression. Participants also scaled the direct-to-reverberant energy ratio with greater direct sound energy than was measured in the acoustical environment. PMID:20968367
Golden, Hannah L; Downey, Laura E; Fletcher, Philip D; Mahoney, Colin J; Schott, Jonathan M; Mummery, Catherine J; Crutch, Sebastian J; Warren, Jason D
2015-05-15
Recognition of nonverbal sounds in semantic dementia and other syndromes of anterior temporal lobe degeneration may determine clinical symptoms and help to define phenotypic profiles. However, nonverbal auditory semantic function has not been widely studied in these syndromes. Here we investigated semantic processing in two key nonverbal auditory domains - environmental sounds and melodies - in patients with semantic dementia (SD group; n=9) and in patients with anterior temporal lobe atrophy presenting with behavioural decline (TL group; n=7, including four cases with MAPT mutations) in relation to healthy older controls (n=20). We assessed auditory semantic performance in each domain using novel, uniform within-modality neuropsychological procedures that determined sound identification based on semantic classification of sound pairs. Both the SD and TL groups showed comparable overall impairments of environmental sound and melody identification; individual patients generally identified environmental sounds better than melodies, although relative sparing of melody over environmental sound identification also occurred in both groups. Our findings suggest that nonverbal auditory semantic impairment is a common feature of neurodegenerative syndromes with anterior temporal lobe atrophy. However, the profile of auditory domain involvement varies substantially between individuals. Copyright © 2015. Published by Elsevier B.V.
Sound source localization identification accuracy: Envelope dependencies.
Yost, William A
2017-07-01
Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.
Holzrichter, John F.; Burnett, Greg C.; Ng, Lawrence C.
2003-01-01
A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
Holzrichter, John F; Burnett, Greg C; Ng, Lawrence C
2013-05-21
A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
Holzrichter, John F.; Burnett, Greg C.; Ng, Lawrence C.
2007-10-16
A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization
Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah
2014-01-01
Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 Microelectromechanical systems (MEMS) microphones, an inertial measuring unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass’s hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25-m2 anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000-m2 open field. PMID:24463431
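The directionality estimate of a microphone-array sensor of this kind can be illustrated with a generic delay-and-sum (steered response power) sketch; this is not the SoundCompass firmware, and the ring geometry, sample rate, and source simulation below are assumptions.

```python
import numpy as np

fs = 48000                       # sample rate (assumed)
c = 343.0                        # speed of sound, m/s
n_mics = 8                       # simplified ring (the SoundCompass uses 52 MEMS mics)
radius = 0.05                    # array radius in metres (assumed)
angles = 2 * np.pi * np.arange(n_mics) / n_mics
mic_xy = radius * np.column_stack([np.cos(angles), np.sin(angles)])

def simulate(source_deg, n=4096):
    """Far-field broadband source arriving from source_deg (toy simulation)."""
    rng = np.random.default_rng(0)
    s = rng.standard_normal(n + 64)
    u = np.array([np.cos(np.radians(source_deg)), np.sin(np.radians(source_deg))])
    out = np.zeros((n_mics, n))
    for m in range(n_mics):
        advance = mic_xy[m] @ u / c            # mics closer to the source hear it earlier
        shift = int(round(advance * fs))
        out[m] = s[32 + shift: 32 + shift + n]
    return out

def delay_and_sum_power(frames, look_deg):
    """Steered response power for one look direction."""
    u = np.array([np.cos(np.radians(look_deg)), np.sin(np.radians(look_deg))])
    n = frames.shape[1]
    acc = np.zeros(n)
    for m in range(n_mics):
        shift = int(round((mic_xy[m] @ u / c) * fs))   # undo the propagation advance
        acc += np.roll(frames[m], shift)
    return float(np.mean((acc / n_mics) ** 2))

frames = simulate(source_deg=60.0)
grid = np.arange(0, 360, 5)
powers = [delay_and_sum_power(frames, a) for a in grid]
print("estimated direction:", grid[int(np.argmax(powers))], "degrees")
```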
ERIC Educational Resources Information Center
Leech, Robert; Saygin, Ayse Pinar
2011-01-01
Using functional MRI, we investigated whether auditory processing of both speech and meaningful non-linguistic environmental sounds in superior and middle temporal cortex relies on a complex and spatially distributed neural system. We found that evidence for spatially distributed processing of speech and environmental sounds in a substantial…
Environmental Sound Training in Cochlear Implant Users
ERIC Educational Resources Information Center
Shafiro, Valeriy; Sheft, Stanley; Kuvadia, Sejal; Gygi, Brian
2015-01-01
Purpose: The study investigated the effect of a short computer-based environmental sound training regimen on the perception of environmental sounds and speech in experienced cochlear implant (CI) patients. Method: Fourteen CI patients with the average of 5 years of CI experience participated. The protocol consisted of 2 pretests, 1 week apart,…
Effect of real-world sounds on protein crystallization.
Zhang, Chen-Yan; Liu, Yue; Tian, Xu-Hua; Liu, Wen-Jing; Li, Xiao-Yu; Yang, Li-Xue; Jiang, Han-Jun; Han, Chong; Chen, Ke-An; Yin, Da-Chuan
2018-06-01
Protein crystallization is sensitive to the environment, while audible sound, as a physical and environmental factor during the entire process, is always ignored. We have previously reported that protein crystallization can be affected by a computer-generated monotonous sound with fixed frequency and amplitude. However, real-world sounds are not so simple but are complicated by parameters (frequency, amplitude, timbre, etc.) that vary over time. In this work, from three sound categories (music, speech, and environmental sound), we selected 26 different sounds and evaluated their effects on protein crystallization. The correlation between the sound parameters and the crystallization success rate was studied mathematically. The results showed that the real-world sounds, similar to the artificial monotonous sounds, could not only affect protein crystallization, but also improve crystal quality. Crystallization was dependent not only on the frequency, amplitude, volume, irradiation time, and overall energy of the sounds but also on their spectral characteristics. Based on these results, we suggest that intentionally applying environmental sound may be a simple and useful tool to promote protein crystallization. Copyright © 2018. Published by Elsevier B.V.
Spherical loudspeaker array for local active control of sound.
Rafaely, Boaz
2009-05-01
Active control of sound has been employed to reduce noise levels around listeners' head using destructive interference from noise-canceling sound sources. Recently, spherical loudspeaker arrays have been studied as multiple-channel sound sources, capable of generating sound fields with high complexity. In this paper, the potential use of a spherical loudspeaker array for local active control of sound is investigated. A theoretical analysis of the primary and secondary sound fields around a spherical sound source reveals that the natural quiet zones for the spherical source have a shell-shape. Using numerical optimization, quiet zones with other shapes are designed, showing potential for quiet zones with extents that are significantly larger than the well-known limit of a tenth of a wavelength for monopole sources. The paper presents several simulation examples showing quiet zones in various configurations.
Localizing the sources of two independent noises: Role of time varying amplitude differences
Yost, William A.; Brown, Christopher A.
2013-01-01
Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region. PMID:23556597
Localizing the sources of two independent noises: role of time varying amplitude differences.
Yost, William A; Brown, Christopher A
2013-04-01
Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region.
200 kHz Commercial Sonar Systems Generate Lower Frequency Side Lobes Audible to Some Marine Mammals
Deng, Z. Daniel; Southall, Brandon L.; Carlson, Thomas J.; Xu, Jinshan; Martinez, Jayson J.; Weiland, Mark A.; Ingraham, John M.
2014-01-01
The spectral properties of pulses transmitted by three commercially available 200 kHz echo sounders were measured to assess the possibility that marine mammals might hear sound energy below the center (carrier) frequency that may be generated by transmitting short rectangular pulses. All three sounders were found to generate sound at frequencies below the center frequency and within the hearing range of some marine mammals, e.g. killer whales, false killer whales, beluga whales, Atlantic bottlenose dolphins, harbor porpoises, and others. The frequencies of these sub-harmonic sounds ranged from 90 to 130 kHz. These sounds were likely detectable by the animals over distances up to several hundred meters but were well below potentially harmful levels. The sounds generated by the sounders could potentially affect the behavior of marine mammals within fairly close proximity to the sources and therefore the exclusion of echo sounders from environmental impact analysis based solely on the center frequency output in relation to the range of marine mammal hearing should be reconsidered. PMID:24736608
Galindo-Romero, Marta; Lippert, Tristan; Gavrilov, Alexander
2015-12-01
This paper presents an empirical linear equation to predict the peak pressure level of anthropogenic impulsive signals based on its correlation with the sound exposure level. The regression coefficients are shown to be weakly dependent on the environmental characteristics but governed by the source type and parameters. The equation can be applied to values of the sound exposure level predicted with a numerical model, which provides a significant improvement in the prediction of the peak pressure level. Part I presents the analysis for airgun array signals, and Part II considers the application of the empirical equation to offshore impact piling noise.
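A hedged sketch of the kind of empirical linear relation described, fitted with ordinary least squares on synthetic data; the coefficients and the SEL/peak values are illustrative, not the paper's regression.

```python
import numpy as np

# Illustrative only: synthetic (SEL, peak level) pairs standing in for measured
# impulsive signals; the true regression coefficients are source-specific.
rng = np.random.default_rng(2)
sel = rng.uniform(150, 185, size=60)              # sound exposure level, dB re 1 uPa^2 s
peak = 1.2 * sel + 15.0 + rng.normal(0, 1.5, 60)  # assumed linear relation plus scatter

a, b = np.polyfit(sel, peak, deg=1)               # peak ≈ a * SEL + b
print(f"fitted: peak = {a:.2f} * SEL + {b:.1f}")

# Using the fit with a modelled SEL value to predict the peak pressure level:
sel_modelled = 172.0
print("predicted peak level:", round(a * sel_modelled + b, 1), "dB re 1 uPa")
```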
Microbiological quality of Puget Sound Basin streams and identification of contaminant sources
Embrey, S.S.
2001-01-01
Fecal coliforms, Escherichia coli, enterococci, and somatic coliphages were detected in samples from 31 sites on streams draining urban and agricultural regions of the Puget Sound Basin Lowlands. Densities of bacteria in 48 and 71 percent of the samples exceeded U.S. Environmental Protection Agency's freshwater recreation criteria for Escherichia coli and enterococci, respectively, and 81 percent exceeded Washington State fecal coliform standards. Male-specific coliphages were detected in samples from 15 sites. Male-specific F+RNA coliphages isolated from samples taken at South Fork Thornton and Longfellow Creeks were serotyped as Group II, implicating humans as potential contaminant sources. These two sites are located in residential, urban areas. F+RNA coliphages in samples from 10 other sites, mostly in agricultural or rural areas, were serotyped as Group I, implicating non-human animals as likely sources. Chemicals common to wastewater, including fecal sterols, were detected in samples from several urban streams, and also implicate humans, at least in part, as possible sources of fecal bacteria and viruses to the streams.
Auditory object perception: A neurobiological model and prospective review.
Brefczynski-Lewis, Julie A; Lewis, James W
2017-10-01
Interaction with the world is a multisensory experience, but most of what is known about the neural correlates of perception comes from studying vision. Auditory input enters the cortex with its own set of unique qualities and supports oral communication, speech, music, and the understanding of the emotional and intentional states of others, all of which are central to the human experience. To better understand how the auditory system develops, recovers after injury, and how it may have transitioned in its functions over the course of hominin evolution, advances are needed in models of how the human brain is organized to process real-world natural sounds and "auditory objects". This review presents a simple fundamental neurobiological model of hearing perception at a category level that incorporates principles of bottom-up signal processing together with top-down constraints of grounded cognition theories of knowledge representation. Though mostly derived from the human neuroimaging literature, this theoretical framework highlights rudimentary principles of real-world sound processing that may apply to most if not all mammalian species with hearing and acoustic communication abilities. The model encompasses three basic categories of sound-source: (1) action sounds (non-vocalizations) produced by 'living things', with human (conspecific) and non-human animal sources representing two subcategories; (2) action sounds produced by 'non-living things', including environmental sources and human-made machinery; and (3) vocalizations ('living things'), with human versus non-human animals as two subcategories therein. The model is presented in the context of cognitive architectures relating to multisensory, sensory-motor, and spoken language organizations. The model's predictive value is further discussed in the context of anthropological theories of oral communication evolution and the neurodevelopment of spoken language proto-networks in infants/toddlers. These phylogenetic and ontogenetic frameworks both entail cortical network maturations that are proposed to at least in part be organized around a number of universal acoustic-semantic signal attributes of natural sounds, which are addressed herein. Copyright © 2017. Published by Elsevier Ltd.
Issues in Humanoid Audition and Sound Source Localization by Active Audition
NASA Astrophysics Data System (ADS)
Nakadai, Kazuhiro; Okuno, Hiroshi G.; Kitano, Hiroaki
In this paper, we present an active audition system which is implemented on the humanoid robot "SIG the humanoid". The audition system for highly intelligent humanoids localizes sound sources and recognizes auditory events in the auditory scene. Active audition reported in this paper enables SIG to track sources by integrating audition, vision, and motor movements. Given multiple sound sources in the auditory scene, SIG actively moves its head to improve localization by aligning its microphones orthogonal to the sound source and by capturing the possible sound sources by vision. However, such active head movement inevitably creates motor noise. The system adaptively cancels motor noise using motor control signals and the cover acoustics. The experimental results demonstrate that active audition by integration of audition, vision, and motor control attains sound source tracking in a variety of conditions.
Sound source localization method in an environment with flow based on Amiet-IMACS
NASA Astrophysics Data System (ADS)
Wei, Long; Li, Min; Qin, Sheng; Fu, Qiang; Yang, Debin
2017-05-01
A sound source localization method is proposed to localize and analyze the sound source in an environment with airflow. It combines the improved mapping of acoustic correlated sources (IMACS) method and Amiet's method, and is called Amiet-IMACS. It can localize uncorrelated and correlated sound sources with airflow. To implement this approach, Amiet's method is used to correct the sound propagation path in 3D, which improves the accuracy of the array manifold matrix and decreases the position error of the localized source. Then, the mapping of acoustic correlated sources (MACS) method, a high-resolution sound source localization algorithm, is improved by self-adjusting the constraint parameter at each iteration to increase convergence speed. A sound source localization experiment using a pair of loudspeakers in an anechoic wind tunnel under different flow speeds was conducted. The experiment shows that Amiet-IMACS localizes the sound source position more accurately than IMACS alone in an environment with flow. Moreover, the aerodynamic noise produced by a NASA EPPLER 862 STRUT airfoil model in airflow with a velocity of 80 m/s is localized using the proposed method, which further proves its effectiveness in a flow environment. Finally, the relationship between the source position of this airfoil model and its frequency, along with its generation mechanism, is determined and interpreted.
Recording and Calculating Gunshot Sound—Change of the Volume in Reference to the Distance
NASA Astrophysics Data System (ADS)
Nikolaos, Tsiatis E.
2010-01-01
An experiment was conducted at an open practice ground (shooting range) on the recording of the sound of gunshots. Shots were fired using various types of firearms (seven pistols, five revolvers, two submachine guns, one rifle, and one shotgun) in different calibers, from several distances with reference to the recording sources. Both a conventional sound level meter and a measurement microphone were used, placed at a fixed point behind the shooting line. The sound of each shot was recorded by the device. At the same time, the signal received by the microphone was transferred to a connected computer through an appropriate audio interface with a pre-amplifier. Each sound wave was stored and depicted as a wave function. After physico-mathematical analysis of these depictions, the volume was calculated in the accepted engineering units (decibels, dB) of sound pressure level (SPL). The distances from the recording sources were 9.60 meters, 14.40 m, 19.20 m, and 38.40 m. The experiment was carried out using the following calibers: .22 LR, 6.35 mm (.25 AUTO), 7.62 mm Tokarev (7.62×25), 7.65 mm (.32 AUTO), 9 mm Parabellum (9×19), 9 mm Short (9×17), 9 mm Makarov (9×18), .45 AUTO, .32 S&W, .38 S&W, .38 SPECIAL, .357 Magnum, 7.62 mm Kalashnikov (7.62×39), and 12 GA. Tables are given for the environmental conditions (temperature, humidity, altitude, and barometric pressure), the barrel length of each gun, the technical characteristics of the ammunition used, and the volume readings taken from the sound level meter. The data on sound intensity were collected from 168 gunshots (158 single shots and 10 bursts). As expected, the results show that the volume decreases as the distance increases, and the values appear to follow the inverse square law: for every doubling of the distance from the sound source, the sound intensity diminishes by 5.9904±0.2325 decibels on average. In addition, this makes it possible to determine the volume of the gunshot sound produced by a given type of weapon. A further application could be the calculation of the distance to a firing weapon if a recorded volume is available.
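The inverse square law mentioned in the abstract predicts roughly a 6 dB drop in sound pressure level per doubling of distance; a small check using the distances listed above:

```python
import numpy as np

# Spherical spreading (inverse square law) predicts a 20*log10(2) ≈ 6.02 dB drop in
# sound pressure level per doubling of distance; the study measured ~5.99 dB.
distances = np.array([9.60, 14.40, 19.20, 38.40])   # metres, as listed in the abstract
ref = distances[0]
drop = 20 * np.log10(distances / ref)                # dB relative to the 9.60 m position
for d, x in zip(distances, drop):
    print(f"{d:5.2f} m: {x:5.2f} dB below the 9.60 m level")
print("per doubling:", round(20 * np.log10(2), 2), "dB")
```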
Acoustic deterrence of bighead carp (Hypophthalmichthys nobilis) to a broadband sound stimulus
Vetter, Brooke J.; Murchy, Kelsie; Cupp, Aaron R.; Amberg, Jon J.; Gaikowski, Mark P.; Mensinger, Allen F.
2017-01-01
Recent studies have shown the potential of acoustic deterrents against invasive silver carp (Hypophthalmichthys molitrix). This study examined the phonotaxic response of the bighead carp (H. nobilis) to pure tones (500–2000 Hz) and playbacks of broadband sound from an underwater recording of a 100 hp outboard motor (0.06–10 kHz) in an outdoor concrete pond (10 × 5 × 1.2 m) at the U.S. Geological Survey Upper Midwest Environmental Science Center in La Crosse, WI. The number of consecutive times the fish reacted to sound from alternating locations at each end of the pond was assessed. Bighead carp were relatively indifferent to the pure tones with median consecutive responses ranging from 0 to 2 reactions away from the sound source. However, fish consistently exhibited significantly (P < 0.001) greater negative phonotaxis to the broadband sound (outboard motor recording) with an overall median response of 20 consecutive reactions during the 10 min trials. In over 50% of broadband sound tests, carp were still reacting to the stimulus at the end of the trial, implying that fish were not habituating to the sound. This study suggests that broadband sound may be an effective deterrent to bighead carp and provides a basis for conducting studies with wild fish.
Neonatal incubators: a toxic sound environment for the preterm infant?*.
Marik, Paul E; Fuller, Christopher; Levitov, Alexander; Moll, Elizabeth
2012-11-01
High sound pressure levels may be harmful to the maturing newborn. Current guidelines suggest that the sound pressure levels within a neonatal intensive care unit should not exceed 45 dB(A). It is likely that environmental noise as well as the noise generated by the incubator fan and respiratory equipment contribute to the total sound pressure levels. Knowledge of the contribution of each component and source is important to develop effective strategies to reduce noise within the incubator. The objectives of this study were to determine the sound levels, sound spectra, and major sources of sound within a modern neonatal incubator (Giraffe Omnibed; GE Healthcare, Helsinki, Finland) using a sound simulation study to replicate the conditions of a preterm infant undergoing high-frequency jet ventilation (Life Pulse, Bunnell, UT). Using advanced sound data acquisition and signal processing equipment, we measured and analyzed the sound level at a dummy infant's ear and at head level outside the enclosure. The sound data time histories were digitally acquired and processed using a digital Fast Fourier Transform algorithm to provide spectra of the sound and cumulative sound pressure levels (dBA). The simulation was done with the incubator cooling fan and ventilator switched on or off. In addition, tests were carried out with the enclosure sides closed and the hood down, and then with the enclosure sides open and the hood up, to determine the importance of interior incubator reverberance on the interior sound levels. With all the equipment off and the hood down, the sound pressure levels were 53 dB(A) inside the incubator. The sound pressure levels increased to 68 dB(A) with all equipment switched on (approximately 10 times louder than recommended). The sound intensity was 6.0 × 10⁻⁸ watts/m²; this sound level is roughly comparable with that generated by a kitchen exhaust fan on high. Turning the ventilator off reduced the overall sound pressure levels to 64 dB(A), and the sound pressure levels in the low-frequency band of 0 to 100 Hz were reduced by 10 dB(A). The incubator fan generated tones at 200, 400, and 600 Hz that raised the sound level by approximately 2 dB(A)-3 dB(A). Opening the enclosure (with all equipment turned on) reduced the sound levels above 50 Hz by reducing the reverberance within the enclosure. The sound levels, especially at low frequencies, within a modern incubator may reach levels that are likely to be harmful to the developing newborn. Much of the noise is at low frequencies and thus difficult to reduce by conventional means. Therefore, advanced forms of noise control are needed to address this issue.
Active room compensation for sound reinforcement using sound field separation techniques.
Heuchel, Franz M; Fernandez-Grande, Efren; Agerkvist, Finn T; Shabalina, Elena
2018-03-01
This work investigates how the sound field created by a sound reinforcement system can be controlled at low frequencies. An indoor control method is proposed which actively absorbs the sound incident on a reflecting boundary using an array of secondary sources. The sound field is separated into incident and reflected components by a microphone array close to the secondary sources, enabling the minimization of reflected components by means of optimal signals for the secondary sources. The method is purely feed-forward and assumes constant room conditions. Three different sound field separation techniques for the modeling of the reflections are investigated based on plane wave decomposition, equivalent sources, and the Spatial Fourier transform. Simulations and an experimental validation are presented, showing that the control method performs similarly well at enhancing low frequency responses with the three sound separation techniques. Resonances in the entire room are reduced, although the microphone array and secondary sources are confined to a small region close to the reflecting wall. Unlike previous control methods based on the creation of a plane wave sound field, the investigated method works in arbitrary room geometries and primary source positions.
NASA Astrophysics Data System (ADS)
Nishiura, Takanobu; Nakamura, Satoshi
2002-11-01
It is very important to capture distant-talking speech with high quality for a hands-free speech interface. A microphone array is an ideal candidate for this purpose. However, this approach requires localizing the target talker. Conventional talker localization algorithms in multiple sound source environments not only have difficulty localizing the multiple sound sources accurately, but also have difficulty localizing the target talker among known multiple sound source positions. To cope with these problems, we propose a new talker localization algorithm consisting of two algorithms. One is a DOA (direction of arrival) estimation algorithm for multiple sound source localization based on the CSP (cross-power spectrum phase) coefficient addition method. The other is a statistical sound source identification algorithm based on a GMM (Gaussian mixture model) for localizing the target talker position among the localized multiple sound sources. In this paper, we particularly focus on the talker localization performance based on the combination of these two algorithms with a microphone array. We conducted evaluation experiments in real noisy reverberant environments. As a result, we confirmed that multiple sound signals can be classified accurately as "speech" or "non-speech" by the proposed algorithm. [Work supported by ATR and MEXT of Japan.]
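The CSP coefficient used for DOA estimation is closely related to the phase-transform (GCC-PHAT) cross-correlation; below is a minimal two-microphone sketch of that part only (the GMM speech/non-speech stage is omitted), with the sample rate, microphone spacing, and toy delayed signal as assumptions.

```python
import numpy as np

fs = 16000
c = 343.0
mic_spacing = 0.2                      # metres between a single microphone pair (assumed)

def csp(x1, x2):
    """CSP / GCC-PHAT: whitened cross-power spectrum back-transformed to a lag function."""
    n = len(x1)
    X1, X2 = np.fft.rfft(x1, 2 * n), np.fft.rfft(x2, 2 * n)
    cross = X1 * np.conj(X2)
    cross /= np.abs(cross) + 1e-12     # phase transform (whitening)
    cc = np.fft.irfft(cross, 2 * n)
    return np.concatenate([cc[-n:], cc[:n]])   # lags -n .. n-1

# Toy signals: the wavefront reaches microphone 2 four samples before microphone 1.
rng = np.random.default_rng(3)
s = rng.standard_normal(4000)
true_delay = 4
x1 = s[:-true_delay]
x2 = s[true_delay:]

cc = csp(x1, x2)
n = len(x1)
lag = int(np.argmax(cc)) - n           # lag > 0: microphone 1 lags microphone 2
tau = lag / fs
theta = np.degrees(np.arcsin(np.clip(tau * c / mic_spacing, -1, 1)))
print("estimated delay:", lag, "samples; DOA relative to broadside ≈", round(theta, 1), "degrees")
```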
Sound source localization and segregation with internally coupled ears: the treefrog model
Christensen-Dalsgaard, Jakob
2016-01-01
Acoustic signaling plays key roles in mediating many of the reproductive and social behaviors of anurans (frogs and toads). Moreover, acoustic signaling often occurs at night, in structurally complex habitats, such as densely vegetated ponds, and in dense breeding choruses characterized by high levels of background noise and acoustic clutter. Fundamental to anuran behavior is the ability of the auditory system to determine accurately the location from where sounds originate in space (sound source localization) and to assign specific sounds in the complex acoustic milieu of a chorus to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating directions for future research on these animals that will require the collaborative efforts of biologists, physicists, and roboticists. PMID:27730384
Application of acoustic radiosity methods to noise propagation within buildings
NASA Astrophysics Data System (ADS)
Muehleisen, Ralph T.; Beamer, C. Walter
2005-09-01
The prediction of sound pressure levels in rooms from transmitted sound is a difficult problem. The sound energy in the source room incident on the common wall must be accurately predicted. In the receiving room, the propagation of sound from the planar wall source must also be accurately predicted. The radiosity method naturally computes the spatial distribution of sound energy incident on a wall and also naturally predicts the propagation of sound from a planar area source. In this paper, the application of the radiosity method to sound transmission problems is introduced and explained.
Ejectable underwater sound source recovery assembly
NASA Technical Reports Server (NTRS)
Irick, S. C. (Inventor)
1974-01-01
An underwater sound source is described that may be ejectably mounted on any mobile device that travels over water, to facilitate the location and recovery of the device when submerged. A length of flexible line maintains a connection between the mobile device and the sound source. During recovery, the sound source is used to locate the device. The assembly would be particularly useful in the recovery of spent rocket motors that bury themselves in the ocean floor upon impact.
The effect of brain lesions on sound localization in complex acoustic environments.
Zündorf, Ida C; Karnath, Hans-Otto; Lewald, Jörg
2014-05-01
Localizing sound sources of interest in cluttered acoustic environments--as in the 'cocktail-party' situation--is one of the most demanding challenges to the human auditory system in everyday life. In this study, stroke patients' ability to localize acoustic targets in a single-source and in a multi-source setup in the free sound field were directly compared. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources rather than localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.
Changing noise levels in a high CO2/lower pH ocean
NASA Astrophysics Data System (ADS)
Brewer, P. G.; Hester, K. C.; Peltzer, E. T.; Kirkwood, W. J.
2008-12-01
We show that ocean acidification from fossil fuel CO2 invasion and from increased respiration/reduced ventilation has significantly reduced ocean sound absorption and thus increased ocean noise levels in the kHz frequency range. Below 10 kHz, sound absorption occurs due to well known chemical relaxations in the B(OH)3/B(OH)4- and HCO3-/CO32- systems. The pH dependence of these chemical relaxations results in decreased sound absorption (α = dB/km) as the ocean becomes more acidic from increased CO2 levels. The scale of surface ocean pH change today from the +105 ppmv change in atmospheric CO2 is about -0.12 pH, resulting in frequency dependent decreases in sound absorption that now exceed 12% over pre-industrial levels. Under reasonable projections of future fossil fuel CO2 emissions and other sources, a pH change of 0.3 units or more can be anticipated by mid-century, resulting in a decrease in α by almost 40%. Increases in water temperature have a smaller effect but also contribute to decreased sound absorption. Combining a lowering of 0.3 pH units with an increase of 3°C, the total decrease in α reaches almost 45%. Ambient noise levels in the ocean within the auditory range critical for environmental, military, and economic interests are set to increase significantly due to the combined effects of decreased absorption and increasing sources from mankind's activities. Incorporation of sound absorption in modeling future ocean scenarios (R. Zeebe, personal communication) and long-term monitoring, possibly with the aid of modern cabled observatories, can give insights into how ocean noise will continue to change and its effect on groups such as marine mammals which communicate in the affected frequency range.
Source levels of social sounds in migrating humpback whales (Megaptera novaeangliae).
Dunlop, Rebecca A; Cato, Douglas H; Noad, Michael J; Stokes, Dale M
2013-07-01
The source level of an animal sound is important in communication, since it affects the distance over which the sound is audible. Several measurements of source levels of whale sounds have been reported, but the accuracy of many is limited because the distance to the source and the acoustic transmission loss were estimated rather than measured. This paper presents measurements of source levels of social sounds (surface-generated and vocal sounds) of humpback whales from a sample of 998 sounds recorded from 49 migrating humpback whale groups. Sources were localized using a wide baseline five hydrophone array and transmission loss was measured for the site. Social vocalization source levels were found to range from 123 to 183 dB re 1 μPa @ 1 m with a median of 158 dB re 1 μPa @ 1 m. Source levels of surface-generated social sounds ("breaches" and "slaps") were narrower in range (133 to 171 dB re 1 μPa @ 1 m) but slightly higher in level (median of 162 dB re 1 μPa @ 1 m) compared to vocalizations. The data suggest that group composition has an effect on group vocalization source levels in that singletons and mother-calf-singing escort groups tend to vocalize at higher levels compared to other group compositions.
Pitch features of environmental sounds
NASA Astrophysics Data System (ADS)
Yang, Ming; Kang, Jian
2016-07-01
A number of soundscape studies have suggested the need for suitable parameters for soundscape measurement, in addition to the conventional acoustic parameters. This paper explores the applicability to environmental sounds of pitch features, and their algorithms, that are often used in music analysis. Starting from existing pitch algorithms, both those simulating the perception of the auditory system and simplified algorithms used in practical music and speech applications, the applicable algorithms have been determined for common types of sound in everyday soundscapes. Considering a number of pitch parameters, including pitch value, pitch strength, and percentage of audible pitches over time, different pitch characteristics of various environmental sounds have been shown. Among the four sound categories, i.e. water, wind, birdsongs, and urban sounds, generally speaking, both water and wind sounds have low pitch values and pitch strengths; birdsongs have high pitch values and pitch strengths; and urban sounds have low pitch values and a relatively wide range of pitch strengths.
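As a rough stand-in for the pitch value and pitch strength parameters (the paper uses dedicated auditory-model algorithms), the sketch below estimates both from the normalized autocorrelation of a frame; the frame length, search range, and test signals are assumptions.

```python
import numpy as np

fs = 16000

def pitch_and_strength(frame, fmin=50.0, fmax=2000.0):
    """Crude stand-in: pitch from the lag of the autocorrelation peak, strength from its height."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    ac /= ac[0] + 1e-12                                  # normalise so the lag-0 value is 1
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag, float(ac[lag])                      # (pitch in Hz, strength in 0..1)

t = np.arange(int(0.05 * fs)) / fs
rng = np.random.default_rng(4)
birdsong_like = np.sin(2 * np.pi * 1800 * t)             # strongly pitched, high pitch
wind_like = rng.standard_normal(len(t))                  # noise-like, weak pitch

for name, sig in [("birdsong-like tone", birdsong_like), ("wind-like noise", wind_like)]:
    f0, strength = pitch_and_strength(sig)
    print(f"{name}: pitch ≈ {f0:.0f} Hz, strength ≈ {strength:.2f}")
```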
Human emotions track changes in the acoustic environment.
Ma, Weiyi; Thompson, William Forde
2015-11-24
Emotional responses to biologically significant events are essential for human survival. Do human emotions lawfully track changes in the acoustic environment? Here we report that changes in acoustic attributes that are well known to interact with human emotions in speech and music also trigger systematic emotional responses when they occur in environmental sounds, including sounds of human actions, animal calls, machinery, or natural phenomena, such as wind and rain. Three changes in acoustic attributes known to signal emotional states in speech and music were imposed upon 24 environmental sounds. Evaluations of stimuli indicated that human emotions track such changes in environmental sounds just as they do for speech and music. Such changes not only influenced evaluations of the sounds themselves, they also affected the way accompanying facial expressions were interpreted emotionally. The findings illustrate that human emotions are highly attuned to changes in the acoustic environment, and reignite a discussion of Charles Darwin's hypothesis that speech and music originated from a common emotional signal system based on the imitation and modification of environmental sounds.
Langer, William H.
2011-01-01
Although potential sources of aggregate are widespread throughout the United States, many sources may not meet certain physical property requirements, such as soundness, hardness, strength, porosity, and specific gravity, or they may contain contaminants or deleterious materials that render them unusable. Encroachment by conflicting land uses, permitting considerations, environmental issues, and societal pressures can prevent or limit development of otherwise suitable aggregate. The use of sustainable aggregate resource management can help ensure an economically viable supply of aggregate. Sustainable aggregate resource management techniques that have successfully been used include (1) protecting potential resources from encroachment; (2) using marginal-quality local aggregate for applications that do not demand a high-quality resource; (3) using substitute materials such as clinker, scoria, and recycled asphalt and concrete; and (4) using rail and water to transport aggregates from remote sources.
Dynamic Spatial Hearing by Human and Robot Listeners
NASA Astrophysics Data System (ADS)
Zhong, Xuan
This study consisted of several related projects on dynamic spatial hearing by both human and robot listeners. The first experiment investigated the maximum number of sound sources that human listeners could localize at the same time. Speech stimuli were presented simultaneously from different loudspeakers at multiple time intervals. The maximum number of perceived sound sources was close to four. The second experiment asked whether the amplitude modulation of multiple static sound sources could lead to the perception of auditory motion. On the horizontal and vertical planes, four independent noise sound sources with 60° spacing were amplitude modulated with consecutively larger phase delay. At lower modulation rates, motion could be perceived by human listeners in both cases. The third experiment asked whether several sources at static positions could serve as "acoustic landmarks" to improve the localization of other sources. Four continuous speech sound sources were placed on the horizontal plane with 90° spacing and served as the landmarks. The task was to localize a noise that was played for only three seconds while the listener was passively rotated in a chair in the middle of the loudspeaker array. The human listeners were better able to localize the sound sources with landmarks than without. The remaining experiments used an acoustic manikin in an attempt to fuse binaural recordings and motion data to localize sound sources. A dummy head with recording devices was mounted on top of a rotating chair and motion data were collected. The fourth experiment showed that an Extended Kalman Filter could be used to localize sound sources in a recursive manner. The fifth experiment demonstrated the use of a fitting method for separating multiple sound sources.
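A minimal sketch of the recursive estimation idea in the fourth experiment: an Extended Kalman Filter fusing noisy bearing measurements from a moving listener to estimate a static source position. The motion path, noise levels, and measurement model below are illustrative assumptions, not the dissertation's setup.

```python
import numpy as np

rng = np.random.default_rng(5)
source = np.array([2.0, 1.5])                 # true source position in metres (assumed)

x = np.array([1.5, 1.0])                      # state estimate: source (x, y)
P = np.eye(2) * 4.0                           # state covariance
R = np.radians(5.0) ** 2                      # bearing noise variance (5 degrees std)

for step in range(80):
    listener = np.array([0.05 * step, 0.0])   # assumed listener position at this step
    # Noisy bearing measurement from the listener to the source.
    z = np.arctan2(source[1] - listener[1], source[0] - listener[0]) \
        + rng.normal(0, np.sqrt(R))

    # Predict: the source is static, so only inflate the covariance slightly.
    P = P + np.eye(2) * 1e-4

    # Update: linearise h(x) = atan2(y - ly, x - lx) around the current estimate.
    dx, dy = x - listener
    r2 = dx**2 + dy**2
    H = np.array([[-dy / r2, dx / r2]])       # Jacobian of the bearing w.r.t. (x, y)
    innov = z - np.arctan2(dy, dx)
    innov = np.arctan2(np.sin(innov), np.cos(innov))   # wrap residual to [-pi, pi]
    S = (H @ P @ H.T + R).item()
    K = P @ H.T / S                           # Kalman gain, shape (2, 1)
    x = x + K[:, 0] * innov
    P = (np.eye(2) - K @ H) @ P

print("estimated source:", np.round(x, 2), " true source:", source)
```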
Wave field synthesis of moving virtual sound sources with complex radiation properties.
Ahrens, Jens; Spors, Sascha
2011-11-01
An approach to the synthesis of moving virtual sound sources with complex radiation properties in wave field synthesis is presented. The approach exploits the fact that any stationary sound source of finite spatial extent radiates spherical waves at sufficient distance. The angular dependency of the radiation properties of the source under consideration is reflected by the amplitude and phase distribution on the spherical wave fronts. The sound field emitted by a uniformly moving monopole source is derived and the far-field radiation properties of the complex virtual source under consideration are incorporated in order to derive a closed-form expression for the loudspeaker driving signal. The results are illustrated via numerical simulations of the synthesis of the sound field of a sample moving complex virtual source.
A passive noise control approach utilizing air gaps with fibrous materials in the textile industry.
Monazzam-Esmaeelpour, Mohammad Reza; Hashemi, Zahra; Golmohammadi, Rostam; Zaredar, Narges
2014-01-01
Noise pollution is currently a major risk factor in industries in both developed and developing countries. The present study assessed noise pollution in the knitting industry in Iran in 2009 and presents a control method to reduce the rate of noise generation. The overall noise level was estimated using the network environmental noise assessment method at the Sina Poud textile mill in Hamadan. Then, frequency analysis was performed at indicator target stations in the linear network. Finally, a suitable absorbent was recommended for the ceilings, walls, and aerial panels in three phases, according to the results found for the sound source and the destination environment. The results showed that the highest sound pressure level was 98.5 dB and the lowest was 95.1 dB. The dominant frequency for the industry was 500 Hz. The highest sound suppression achieved by the intervention was 14.6 dB, at 4000 Hz, and the lowest was at 250 Hz. When noise control at the source is not available or is insufficient because of the wide distribution of the acoustic field in the workplace, the best option is to increase the absorptive surface of the workplace using absorbents such as polystyrene.
Nordahl, Rolf; Turchet, Luca; Serafin, Stefania
2011-09-01
We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based on physical models. Two experiments were conducted. In the first experiment, the ability of subjects to recognize the surface they were exposed to was assessed. In the second experiment, the sound synthesis engine was enhanced with environmental sounds. Results show that, in some conditions, adding a soundscape significantly improves the recognition of the simulated environment.
Wang, Chong
2018-03-01
In the case of a point source in front of a panel, the wavefront of the incident wave is spherical. This paper discusses spherical sound waves transmitting through a finite sized panel. The focus is the forced sound transmission performance that predominates in the frequency range below the coincidence frequency. Given a point source located along the centerline of the panel, the forced sound transmission coefficient is derived by introducing the sound radiation impedance for spherical incident waves. It is found that, in addition to the panel mass, forced sound transmission loss also depends on the distance from the source to the panel, as determined by the radiation impedance. Unlike the case of plane incident waves, the sound transmission performance of a finite sized panel does not necessarily converge to that of an infinite panel, especially when the source is away from the panel. For practical applications, the normal incidence sound transmission loss expression for plane incident waves can be used if the distance between the source and panel d and the panel surface area S satisfy d/S > 0.5. When d/S ≈ 0.1, the diffuse field sound transmission loss expression may be a good approximation. An empirical expression for d/S = 0 is also given.
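A small worked check of the selection rule stated above, using the d/S thresholds from the abstract with an assumed panel size and two assumed source distances:

```python
# Illustrative check of which approximation the abstract suggests, for an assumed
# 2.0 m x 1.25 m panel (S = 2.5 m^2) and two assumed source distances d.
S = 2.0 * 1.25
for d in (1.5, 0.25):
    ratio = d / S
    if ratio > 0.5:
        rule = "normal-incidence (plane-wave) sound transmission loss expression"
    elif abs(ratio - 0.1) < 0.05:
        rule = "diffuse-field sound transmission loss approximation"
    else:
        rule = "full spherical-wave forced transmission model"
    print(f"d = {d} m, d/S = {ratio:.2f} -> {rule}")
```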
Noise Source Identification in a Reverberant Field Using Spherical Beamforming
NASA Astrophysics Data System (ADS)
Choi, Young-Chul; Park, Jin-Ho; Yoon, Doo-Byung; Kwon, Hyu-Sang
Identification of noise sources, i.e., their locations and strengths, has attracted great attention. Methods for identifying noise sources normally assume that the sources are located in a free field. However, the sound in a reverberant field consists of that coming directly from the source plus sound reflected or scattered by the walls or objects in the field. In contrast to an exterior sound field, reflections are added to the sound field, so the source location estimated by conventional methods may have unacceptable error. In this paper, we explain the effects of a reverberant field on the interior source identification process and propose a method that can identify noise sources in a reverberant field.
Yang, Ming; De Coensel, Bert; Kang, Jian
2015-08-01
1/f noise or pink noise, which has been shown to be universal in nature, has also been observed in the temporal envelope of music, speech, and environmental sound. Moreover, the slope of the spectral density of the temporal envelope of music has been shown to correlate well with its pleasing, dull, or chaotic character. In this paper, the temporal structure of a number of instantaneous psychoacoustic parameters of environmental sound is examined in order to investigate whether a 1/f temporal structure appears in the various types of sound that are generally preferred by people in everyday life. The results show, to some extent, that different categories of environmental sounds have different temporal structure characteristics. Only some of the urban sounds considered, together with birdsong, generally exhibit 1/f behavior on short to medium time scales, i.e., from 0.1 s to 10 s, in instantaneous loudness and sharpness, whereas a more chaotic variation is found in birdsong at longer time scales, i.e., 10 s-200 s. The other sound categories considered exhibit random or monotonic variations over the different time scales. In general, this study shows that a 1/f temporal structure is not necessarily present in environmental sounds that are commonly perceived as pleasant.
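The 1/f behaviour discussed here is usually judged from the slope of the log power spectrum of an instantaneous-parameter time series; a hedged sketch that synthesizes envelopes with known slopes and recovers them (the envelope sampling rate and fitting band are assumptions):

```python
import numpy as np

fs_env = 20.0                             # envelope sampling rate in Hz (assumed)
n = 4096

def synth_power_law(beta, rng):
    """Synthesise a random envelope whose power spectrum falls off as 1/f**beta."""
    freqs = np.fft.rfftfreq(n, d=1.0 / fs_env)
    mags = np.zeros_like(freqs)
    mags[1:] = freqs[1:] ** (-beta / 2.0)             # amplitude ∝ f^(-beta/2)
    phases = np.exp(1j * rng.uniform(0, 2 * np.pi, len(freqs)))
    return np.fft.irfft(mags * phases, n)

def spectral_slope(env, f_lo=0.1, f_hi=10.0):
    """Fit log-power vs log-frequency over the 0.1-10 Hz band (0.1-10 s time scales)."""
    spec = np.abs(np.fft.rfft(env - env.mean())) ** 2
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs_env)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(spec[band]), 1)
    return slope

rng = np.random.default_rng(6)
for beta, label in [(1.0, "1/f ('pink') envelope"),
                    (0.0, "flat ('white') envelope")]:
    env = synth_power_law(beta, rng)
    print(f"{label}: estimated spectral slope ≈ {spectral_slope(env):.2f}")
```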
AUDIS wear: a smartwatch based assistive device for ubiquitous awareness of environmental sounds.
Mielke, Matthias; Bruck, Rainer
2016-08-01
A multitude of assistive devices is available for deaf people (i.e. deaf, deafened, and hard of hearing). Besides hearing and communication aids, devices to access environmental sounds are available commercially. But the devices have two major drawbacks: 1. they are targeted at indoor environments (e.g. home or work), and 2. only specific events are supported (e.g. the doorbell or telephone). Recent research shows that important sounds can occur in all contexts and that the interests in sounds are diverse. These drawbacks can be tackled by using modern information and communication technology that enables the development of new and improved assistive devices. The smartwatch, a new computing platform in the form of a wristwatch, offers new potential for assistive technology. Its design promises a perfect integration into various different social contexts and thus blends perfectly into the user's life. Based on a smartwatch and algorithms from pattern recognition, a prototype for awareness of environmental sounds is presented here. It observes the acoustic environment of the user and detects environmental sounds. A vibration is triggered when a sound is detected and the type of sound is shown on the display. The design of the prototype was discussed with deaf people in semi-structured interviews, leading to a set of implications for the design of such a device.
NASA Astrophysics Data System (ADS)
Maling, George C., Jr.
Recent advances in noise analysis and control theory and technology are discussed in reviews and reports. Topics addressed include noise generation; sound-wave propagation; noise control by external treatments; vibration and shock generation, transmission, isolation, and reduction; multiple sources and paths of environmental noise; noise perception and the physiological and psychological effects of noise; instrumentation, signal processing, and analysis techniques; and noise standards and legal aspects. Diagrams, drawings, graphs, photographs, and tables of numerical data are provided.
A Numerical Experiment on the Role of Surface Shear Stress in the Generation of Sound
NASA Technical Reports Server (NTRS)
Shariff, Karim; Wang, Meng; Merriam, Marshal (Technical Monitor)
1996-01-01
The sound generated due to a localized flow over an infinite flat surface is considered. It is known that the unsteady surface pressure, while appearing in a formal solution to the Lighthill equation, does not constitute a source of sound but rather represents the effect of image quadrupoles. The question of whether a similar surface shear stress term constitutes a true source of dipole sound is less settled. Some have boldly assumed it is a true source while others have argued that, like the surface pressure, it depends on the sound field (via an acoustic boundary layer) and is therefore not a true source. A numerical experiment based on the viscous, compressible Navier-Stokes equations was undertaken to investigate the issue. A small region of a wall was oscillated tangentially. The directly computed sound field was found to agree with an acoustic analogy based calculation which regards the surface shear as an acoustically compact dipole source of sound.
2007-12-01
except for the dive zero time which needed to be programmed during the cruise when the deployment schedule dates were confirmed. ACM - Aanderaa ACM...guards bolted on to complete the frame prior to deployment. Sound Source - Sound sources were scheduled to be redeployed. Sound sources were originally...battery voltages and a vacuum. A +27 second time drift was noted and the time was reset. The sound source was scheduled to go to full power on November
Techniques for Soundscape Retrieval and Synthesis
NASA Astrophysics Data System (ADS)
Mechtley, Brandon Michael
The study of acoustic ecology is concerned with the manner in which life interacts with its environment as mediated through sound. As such, a central focus is that of the soundscape: the acoustic environment as perceived by a listener. This dissertation examines the application of several computational tools in the realms of digital signal processing, multimedia information retrieval, and computer music synthesis to the analysis of the soundscape. Namely, these tools include a) an open source software library, Sirens, which can be used to segment long environmental field recordings into individual sonic events and to compare these events in terms of acoustic content, b) a graph-based retrieval system that can combine these measures of acoustic similarity with measures of semantic similarity, using the lexical database WordNet, to perform both text-based retrieval and automatic annotation of environmental sounds, and c) new techniques for the dynamic, real-time parametric morphing of multiple field recordings, informed by the geographic paths along which they were recorded.
Statistics of natural reverberation enable perceptual separation of sound and space
Traer, James; McDermott, Josh H.
2016-01-01
In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us. PMID:27834730
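Though not part of the study itself, the frequency-dependent decay described above can be illustrated computationally. The following is a minimal sketch, assuming a measured impulse response and illustrative octave-band edges: it band-passes the IR and applies Schroeder backward integration to estimate a per-band decay time.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_decay_time(ir, fs, f_lo, f_hi, fit_db=(-5.0, -25.0)):
    """Estimate the decay time (s) of an impulse response in one band
    via Schroeder backward integration and a linear fit of the decay."""
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, ir)
    # Schroeder curve: backward-integrated energy, in dB relative to total
    edc = np.cumsum(band[::-1] ** 2)[::-1]
    edc_db = 10.0 * np.log10(edc / edc[0] + 1e-20)
    # Fit a line over the chosen decay range and extrapolate to -60 dB
    t = np.arange(len(ir)) / fs
    mask = (edc_db <= fit_db[0]) & (edc_db >= fit_db[1])
    slope, intercept = np.polyfit(t[mask], edc_db[mask], 1)
    return -60.0 / slope

# Example: decay times in three octave bands for a synthetic exponential IR
fs = 16000
t = np.arange(int(0.5 * fs)) / fs
ir = np.random.randn(t.size) * np.exp(-t / 0.1)   # ~0.1 s amplitude time constant
for f_lo, f_hi in [(177, 355), (707, 1414), (2828, 5657)]:
    print(f_lo, f_hi, round(band_decay_time(ir, fs, f_lo, f_hi), 3))
```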
Innovative Approach for Developing Spacecraft Interior Acoustic Requirement Allocation
NASA Technical Reports Server (NTRS)
Chu, S. Reynold; Dandaroy, Indranil; Allen, Christopher S.
2016-01-01
The Orion Multi-Purpose Crew Vehicle (MPCV) is an American spacecraft for carrying four astronauts during deep space missions. This paper describes an innovative application of Power Injection Method (PIM) for allocating Orion cabin continuous noise Sound Pressure Level (SPL) limits to the sound power level (PWL) limits of major noise sources in the Environmental Control and Life Support System (ECLSS) during all mission phases. PIM is simulated using both Statistical Energy Analysis (SEA) and Hybrid Statistical Energy Analysis-Finite Element (SEA-FE) models of the Orion MPCV to obtain the transfer matrix from the PWL of the noise sources to the acoustic energies of the receivers, i.e., the cavities associated with the cabin habitable volume. The goal of the allocation strategy is to control the total energy of cabin habitable volume for maintaining the required SPL limits. Simulations are used to demonstrate that applying the allocated PWLs to the noise sources in the models indeed reproduces the SPL limits in the habitable volume. The effects of Noise Control Treatment (NCT) on allocated noise source PWLs are investigated. The measurement of source PWLs of involved fan and pump development units are also discussed as it is related to some case-specific details of the allocation strategy discussed here.
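As a rough illustration of the allocation idea (not the paper's actual SEA/PIM implementation), suppose the simulations provide a linear transfer matrix T mapping source sound powers W to receiver-cavity acoustic energies E; source powers that respect the cavity energy limits can then be found with a non-negative least-squares fit. The matrix and limits below are placeholders.

```python
import numpy as np
from scipy.optimize import nnls

# Assumed transfer matrix (rows: receiver cavities, columns: ECLSS noise sources),
# giving cavity energy per unit source sound power -- placeholder values.
T = np.array([[0.8, 0.3, 0.1],
              [0.2, 0.6, 0.4]])
E_limit = np.array([1.0, 1.0])   # allowed acoustic energy per cavity (arbitrary units)

# Non-negative least squares keeps the allocated powers physical (W >= 0).
W_alloc, residual = nnls(T, E_limit)
print("allocated source powers:", W_alloc)
print("resulting cavity energies:", T @ W_alloc)
```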
Human emotions track changes in the acoustic environment
Ma, Weiyi; Thompson, William Forde
2015-01-01
Emotional responses to biologically significant events are essential for human survival. Do human emotions lawfully track changes in the acoustic environment? Here we report that changes in acoustic attributes that are well known to interact with human emotions in speech and music also trigger systematic emotional responses when they occur in environmental sounds, including sounds of human actions, animal calls, machinery, or natural phenomena, such as wind and rain. Three changes in acoustic attributes known to signal emotional states in speech and music were imposed upon 24 environmental sounds. Evaluations of stimuli indicated that human emotions track such changes in environmental sounds just as they do for speech and music. Such changes not only influenced evaluations of the sounds themselves, they also affected the way accompanying facial expressions were interpreted emotionally. The findings illustrate that human emotions are highly attuned to changes in the acoustic environment, and reignite a discussion of Charles Darwin’s hypothesis that speech and music originated from a common emotional signal system based on the imitation and modification of environmental sounds. PMID:26553987
NASA Astrophysics Data System (ADS)
Kawai, Keiji; Kojima, Takaya; Hirate, Kotaroh; Yasuoka, Masahito
2004-10-01
In this study, we conducted an experiment to investigate the evaluation structure that underlies people's psychological evaluation of environmental sounds. In the experiment, subjects were given cards, each bearing the name of one of the environmental sounds in a specified context. They then performed three tasks: (1) sorting the cards into groups by the similarity of their impressions of the imagined sounds; (2) naming each group with the word that best represented their overall impression of the group; and (3) evaluating all sounds on the cards using the words obtained in the previous task. These tasks were done twice: once assuming they heard the sounds at ease inside their homes and once while walking outside in a resort theme park. We analysed the similarity of the imagined impressions between the sounds with a cluster analysis, which produced clusters of sounds labelled "natural," "transportation," and so on. A principal component analysis revealed the three major factors of the evaluation structure for both contexts; they were interpreted as preference, activity, and sense of daily life.
NASA Astrophysics Data System (ADS)
Crone, T. J.; Tolstoy, M.; Carton, H. D.
2013-12-01
In the summer of 2012, two multi-channel seismic (MCS) experiments, Cascadia Open-Access Seismic Transects (COAST) and Ridge2Trench, were conducted in the offshore Cascadia region. An area of growing environmental concern with active source seismic experiments is the potential impact of the received sound on marine mammals, but data relating to this issue are limited. For these surveys, sound level 'mitigation radii' are established for the protection of marine mammals, based on direct arrival modeling and previous calibration experiments. Propagation of sound from seismic arrays can be accurately modeled in deep-water environments, but in shallow and sloped environments the complexity of local geology and bathymetry can make it difficult to predict sound levels as a function of distance from the source array. One potential solution to this problem is to measure the received levels in real-time using the ship's streamer (Diebold et al., 2010), which would allow the dynamic determination of suitable mitigation radii. We analyzed R/V Langseth streamer data collected on the shelf and slope off the Washington coast during the COAST experiment to measure received levels in situ up to 8 km away from the ship. Our analysis shows that water depth and bathymetric features can affect received levels in shallow water environments. The establishment of dynamic mitigation radii based on local conditions may help maximize the safety of marine mammals while also maximizing the ability of scientists to conduct seismic research. With increasing scientific and societal focus on subduction zone environments, a better understanding of shallow water sound propagation is essential for allowing seismic exploration of these hazardous environments to continue. Diebold, J. M., M. Tolstoy, L. Doermann, S. Nooner, S. Webb, and T. J. Crone (2010) R/V Marcus G. Langseth Seismic Source: Modeling and Calibration. Geochemistry, Geophysics, Geosystems, 11, Q12012, doi:10.1029/2010GC003216.
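A small sketch of the spreading comparison mentioned above, fitting received levels against spherical (20 log10 r) and cylindrical (10 log10 r) attenuation models; the range and level values are invented for illustration, not the COAST measurements.

```python
import numpy as np

# Hypothetical received levels (dB) at increasing range from the array (m)
r = np.array([200.0, 500.0, 1000.0, 2000.0, 4000.0, 8000.0])
rl = np.array([165.0, 157.5, 151.0, 145.5, 139.0, 133.5])

for name, n in [("spherical (20 log r)", 20.0), ("cylindrical (10 log r)", 10.0)]:
    # Model: RL = SL - n*log10(r); fit the effective source level SL by least squares
    sl = np.mean(rl + n * np.log10(r))
    rms = np.sqrt(np.mean((rl - (sl - n * np.log10(r))) ** 2))
    print(f"{name}: fitted SL = {sl:.1f} dB, rms residual = {rms:.2f} dB")
```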
An integrated system for dynamic control of auditory perspective in a multichannel sound field
NASA Astrophysics Data System (ADS)
Corey, Jason Andrew
An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. All of the parameters are grouped and controlled together to create a perceptually strong impression of source location and movement within a simulated space.
Sound localization and auditory response capabilities in round goby (Neogobius melanostomus)
NASA Astrophysics Data System (ADS)
Rollo, Audrey K.; Higgs, Dennis M.
2005-04-01
A fundamental role in vertebrate auditory systems is determining the direction of a sound source. While fish show directional responses to sound, sound localization remains in dispute. The species used in the current study, Neogobius melanostomus (round goby), uses sound in reproductive contexts, with both male and female gobies showing directed movement towards a calling male. A two-choice laboratory experiment (active versus quiet speaker) was used to analyze the behavior of gobies in response to sound stimuli. When conspecific male spawning sounds were played, gobies moved in a direct path to the active speaker, suggesting true localization to sound. Of the animals that responded to conspecific sounds, 85% of the females and 66% of the males moved directly to the sound source. Auditory playback of natural and synthetic sounds showed differential behavioral specificity. Of gobies that responded, 89% were attracted to the speaker playing Padogobius martensii sounds, 87% to a 100 Hz tone, 62% to white noise, and 56% to Gobius niger sounds. Swimming speed, as well as mean path angle to the speaker, will also be reported. Results suggest strong localization of the round goby to a sound source, with some differential sound specificity.
NASA Astrophysics Data System (ADS)
Gauthier, P.-A.; Camier, C.; Lebel, F.-A.; Pasco, Y.; Berry, A.; Langlois, J.; Verron, C.; Guastavino, C.
2016-08-01
Sound environment reproduction of various flight conditions in aircraft mock-ups is a valuable tool for the study, prediction, demonstration and jury testing of interior aircraft sound quality and annoyance. To provide a faithful reproduced sound environment, time, frequency and spatial characteristics should be preserved. Physical sound field reproduction methods for spatial sound reproduction are mandatory to immerse the listener's body in the proper sound fields so that localization cues are recreated at the listener's ears. Vehicle mock-ups pose specific problems for sound field reproduction. Confined spaces, the need for invisible sound sources, and a very specific acoustical environment make open-loop sound field reproduction technologies such as wave field synthesis (based on free-field models of monopole sources) not ideal. In this paper, experiments in an aircraft mock-up with multichannel least-square methods and equalization are reported. The novelty is the actual implementation of sound field reproduction with 3180 transfer paths and trim panel reproduction sources in laboratory conditions with a synthetic target sound field. The paper presents objective evaluations of reproduced sound fields using various metrics as well as sound field extrapolation and sound field characterization.
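The reported system involves 3180 transfer paths, but the core multichannel least-squares step can be sketched compactly: given a matrix of transfer functions G (microphones x reproduction sources) at one frequency and a target pressure vector, a Tikhonov-regularized solve yields the source strengths. The dimensions and regularization weight below are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
M, L = 8, 4                      # microphones, reproduction sources (toy sizes)
G = rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))  # transfer paths
p_target = rng.standard_normal(M) + 1j * rng.standard_normal(M)     # target pressures

beta = 1e-2                      # Tikhonov regularization weight (assumed)
q = np.linalg.solve(G.conj().T @ G + beta * np.eye(L), G.conj().T @ p_target)

p_rep = G @ q                    # reproduced pressures at the microphones
err_db = 10 * np.log10(np.linalg.norm(p_rep - p_target) ** 2
                       / np.linalg.norm(p_target) ** 2)
print("normalized reproduction error:", round(err_db, 1), "dB")
```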
Lewis, James W.; Talkington, William J.; Tallaksen, Katherine C.; Frum, Chris A.
2012-01-01
Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, meaningful real-world acoustic events and “auditory objects” can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remain poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more “object-like,” independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds—a quantitative measure of change in entropy of the acoustic signals over time—and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages of the auditory system appear to process or extract a number of quantifiable low-order signal attributes that are characteristic of action events perceived as being object-like, representing stages that may begin to dissociate different perceptual dimensions and categories of every-day, real-world action sounds. PMID:22582038
Underwater passive acoustic localization of Pacific walruses in the northeastern Chukchi Sea.
Rideout, Brendan P; Dosso, Stan E; Hannay, David E
2013-09-01
This paper develops and applies a linearized Bayesian localization algorithm based on acoustic arrival times of marine mammal vocalizations at spatially-separated receivers which provides three-dimensional (3D) location estimates with rigorous uncertainty analysis. To properly account for uncertainty in receiver parameters (3D hydrophone locations and synchronization times) and environmental parameters (water depth and sound-speed correction), these quantities are treated as unknowns constrained by prior estimates and prior uncertainties. Unknown scaling factors on both the prior and arrival-time uncertainties are estimated by minimizing Akaike's Bayesian information criterion (a maximum entropy condition). Maximum a posteriori estimates for sound source locations and times, receiver parameters, and environmental parameters are calculated simultaneously using measurements of arrival times for direct and interface-reflected acoustic paths. Posterior uncertainties for all unknowns incorporate both arrival time and prior uncertainties. Monte Carlo simulation results demonstrate that, for the cases considered here, linearization errors are small and the lack of an accurate sound-speed profile does not cause significant biases in the estimated locations. A sequence of Pacific walrus vocalizations, recorded in the Chukchi Sea northwest of Alaska, is localized using this technique, yielding a track estimate and uncertainties with an estimated speed comparable to normal walrus swim speeds.
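The full method is Bayesian, with priors on receiver and environmental parameters; the sketch below strips it down to the core step of estimating a 3D source position and emission time from direct-path arrival times by iterative (Gauss-Newton) linearization. The receiver geometry, sound speed, and noise level are assumed for illustration.

```python
import numpy as np

c = 1480.0                                           # assumed sound speed (m/s)
rx = np.array([[0, 0, 10], [300, 0, 60],
               [0, 300, 110], [300, 300, 35]], float)  # hydrophone positions (assumed)

def forward(x):
    # predicted arrival times: emission time + range / sound speed
    return x[3] + np.linalg.norm(rx - x[:3], axis=1) / c

def locate(t_obs, x0, n_iter=20):
    x = np.array(x0, float)                          # unknowns: [x, y, z, t0]
    for _ in range(n_iter):
        r = np.linalg.norm(rx - x[:3], axis=1)
        # Jacobian of arrival times w.r.t. source position and emission time
        J = np.hstack([-(rx - x[:3]) / (c * r[:, None]), np.ones((len(rx), 1))])
        dx, *_ = np.linalg.lstsq(J, t_obs - forward(x), rcond=None)
        x += dx
    return x

true = np.array([120.0, 80.0, 40.0, 0.0])
t_obs = forward(true) + 1e-4 * np.random.randn(len(rx))   # noisy arrival times
print(locate(t_obs, x0=[150, 150, 50, 0]))
```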
Bai, Mingsian R; Li, Yi; Chiang, Yi-Hao
2017-10-01
A unified framework is proposed for analysis and synthesis of two-dimensional spatial sound field in reverberant environments. In the sound field analysis (SFA) phase, an unbaffled 24-element circular microphone array is utilized to encode the sound field based on the plane-wave decomposition. Depending on the sparsity of the sound sources, the SFA stage can be implemented in two manners. For sparse-source scenarios, a one-stage algorithm based on compressive sensing is utilized. Alternatively, a two-stage algorithm can be used, where the minimum power distortionless response beamformer is used to localize the sources and a Tikhonov regularization algorithm is used to extract the source amplitudes. In the sound field synthesis (SFS) phase, a 32-element rectangular loudspeaker array is employed to decode the target sound field using a pressure-matching technique. To establish the room response model, as required in the pressure matching step of the SFS phase, an SFA technique for nonsparse-source scenarios is utilized. The choice of regularization parameters is vital to the reproduced sound field. In the SFS phase, three SFS approaches are compared in terms of localization performance and voice reproduction quality. Experimental results obtained in a reverberant room are presented and reveal that an accurate room response model is vital to immersive rendering of the reproduced sound field.
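A bare-bones sketch of the two-stage SFA idea for non-sparse sources: scan candidate plane-wave directions with a minimum power distortionless response (MPDR) beamformer on a circular array, then recover the amplitude for the selected direction with a Tikhonov-regularized solve. The array geometry, frequency, diagonal loading, and regularization weight are assumptions.

```python
import numpy as np

c, f = 343.0, 1000.0
k = 2 * np.pi * f / c
M, R = 24, 0.1                                  # 24-element circular array, 0.1 m radius (assumed)
ang = 2 * np.pi * np.arange(M) / M
mics = R * np.c_[np.cos(ang), np.sin(ang)]

def steer(theta):                               # plane-wave steering vector
    return np.exp(1j * k * mics @ np.array([np.cos(theta), np.sin(theta)]))

rng = np.random.default_rng(1)
s = rng.standard_normal(200) + 1j * rng.standard_normal(200)          # source snapshots
X = np.outer(steer(1.2), s) + 0.05 * (rng.standard_normal((M, 200))
                                      + 1j * rng.standard_normal((M, 200)))
Rxx = X @ X.conj().T / X.shape[1] + 1e-3 * np.eye(M)                  # diagonal loading

# Stage 1: MPDR spatial spectrum; its peak gives the source bearing
grid = np.linspace(0, 2 * np.pi, 720, endpoint=False)
Rinv = np.linalg.inv(Rxx)
P = [1.0 / np.real(steer(t).conj() @ Rinv @ steer(t)) for t in grid]
theta_hat = grid[int(np.argmax(P))]

# Stage 2: Tikhonov-regularized amplitude estimate for the located direction
a = steer(theta_hat)
s_hat = (a.conj() @ X) / (a.conj() @ a + 1e-2)
print("estimated bearing (rad):", round(theta_hat, 3))
```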
Sheft, Stanley; Norris, Molly; Spanos, George; Radasevich, Katherine; Formsma, Paige; Gygi, Brian
2016-01-01
Objective: Sounds in everyday environments tend to follow one another as events unfold over time. The tacit knowledge of contextual relationships among environmental sounds can influence their perception. We examined the effect of semantic context on the identification of sequences of environmental sounds by adults of varying age and hearing abilities, with an aim to develop a nonspeech test of auditory cognition. Method: The familiar environmental sound test (FEST) consisted of 25 individual sounds arranged into ten five-sound sequences: five contextually coherent and five incoherent. After hearing each sequence, listeners identified each sound and arranged them in the presentation order. FEST was administered to young normal-hearing, middle-to-older normal-hearing, and middle-to-older hearing-impaired adults (Experiment 1), and to postlingual cochlear-implant users and young normal-hearing adults tested through vocoder-simulated implants (Experiment 2). Results: FEST scores revealed a strong positive effect of semantic context in all listener groups, with young normal-hearing listeners outperforming other groups. FEST scores also correlated with other measures of cognitive ability, and for CI users, with the intelligibility of speech-in-noise. Conclusions: Being sensitive to semantic context effects, FEST can serve as a nonspeech test of auditory cognition for diverse listener populations to assess and potentially improve everyday listening skills. PMID:27893791
Zeitler, Daniel M; Dorman, Michael F; Natale, Sarah J; Loiselle, Louise; Yost, William A; Gifford, Rene H
2015-09-01
To assess improvements in sound source localization and speech understanding in complex listening environments after unilateral cochlear implantation for single-sided deafness (SSD). Nonrandomized, open, prospective case series. Tertiary referral center. Nine subjects with a unilateral cochlear implant (CI) for SSD (SSD-CI) were tested. Reference groups for the task of sound source localization included young (n = 45) and older (n = 12) normal-hearing (NH) subjects and 27 bilateral CI (BCI) subjects. Unilateral cochlear implantation. Sound source localization was tested with 13 loudspeakers in a 180-degree arc in front of the subject. Speech understanding was tested with the subject seated in an 8-loudspeaker sound system arrayed in a 360-degree pattern. Directionally appropriate noise, originally recorded in a restaurant, was played from each loudspeaker. Speech understanding in noise was tested using the AzBio sentence test and sound source localization was quantified using root mean square error. All CI subjects showed poorer-than-normal sound source localization. SSD-CI subjects showed a bimodal distribution of scores: six subjects had scores near the mean of those obtained by BCI subjects, whereas three had scores just outside the 95th percentile of NH listeners. Speech understanding improved significantly in the restaurant environment when the signal was presented to the side of the CI. Cochlear implantation for SSD can offer improved speech understanding in complex listening environments and improved sound source localization in both children and adults. On tasks of sound source localization, SSD-CI patients typically perform as well as BCI patients and, in some cases, achieve scores at the upper boundary of normal performance.
Position-dependent hearing in three species of bushcrickets (Tettigoniidae, Orthoptera)
Lakes-Harlan, Reinhard; Scherberich, Jan
2015-01-01
A primary task of auditory systems is the localization of sound sources in space. Sound source localization in azimuth is usually based on temporal or intensity differences of sounds between the bilaterally arranged ears. In mammals, localization in elevation is possible by transfer functions at the ear, especially the pinnae. Although insects are able to locate sound sources, little attention is given to the mechanisms of acoustic orientation to elevated positions. Here we comparatively analyse the peripheral hearing thresholds of three species of bushcrickets in respect to sound source positions in space. The hearing thresholds across frequencies depend on the location of a sound source in the three-dimensional hearing space in front of the animal. Thresholds differ for different azimuthal positions and for different positions in elevation. This position-dependent frequency tuning is species specific. Largest differences in thresholds between positions are found in Ancylecha fenestrata. Correspondingly, A. fenestrata has a rather complex ear morphology including cuticular folds covering the anterior tympanal membrane. The position-dependent tuning might contribute to sound source localization in the habitats. Acoustic orientation might be a selective factor for the evolution of morphological structures at the bushcricket ear and, speculatively, even for frequency fractioning in the ear. PMID:26543574
NASA Astrophysics Data System (ADS)
Zuo, Zhifeng; Maekawa, Hiroshi
2014-02-01
The interaction between a moderate-strength shock wave and a near-wall vortex is studied numerically by solving the two-dimensional, unsteady compressible Navier-Stokes equations using a weighted compact nonlinear scheme with a simple low-dissipation advection upstream splitting method for flux splitting. Our main purpose is to clarify the development of the flow field and the generation of sound waves resulting from the interaction. The effects of the vortex-wall distance on the sound generation associated with variations in the flow structures are also examined. The computational results show that three sound sources are involved in this problem: (i) a quadrupolar sound source due to the shock-vortex interaction; (ii) a dipolar sound source due to the vortex-wall interaction; and (iii) a dipolar sound source due to unsteady wall shear stress. The sound field is the combination of the sound waves produced by all three sound sources. In addition to the interaction of the incident shock with the vortex, a secondary shock-vortex interaction is caused by the reflection of the reflected shock (MR2) from the wall. The flow field is dominated by the primary and secondary shock-vortex interactions. The generation mechanism of the newly discovered third sound source, arising from the MR2-vortex interaction, is presented. The pressure variations generated by (ii) become significant with decreasing vortex-wall distance. The sound waves caused by (iii) are extremely weak compared with those caused by (i) and (ii) and are negligible in the computed sound field.
Bertucci, Frédéric; Parmentier, Eric; Berten, Laëtitia; Brooker, Rohan M; Lecchini, David
2015-01-01
As environmental sounds are used by larval fish and crustaceans to locate and orientate towards habitat during settlement, variations in the acoustic signature produced by habitats could provide valuable information about habitat quality, helping larvae to differentiate between potential settlement sites. However, very little is known about how acoustic signatures differ between proximate habitats. This study described within- and between-site differences in the sound spectra of five contiguous habitats at Moorea Island, French Polynesia: the inner reef crest, the barrier reef, the fringing reef, a pass and a coastal mangrove forest. Habitats with coral (inner, barrier and fringing reefs) were characterized by a similar sound spectrum with average intensities ranging from 70 to 78 dB re 1 μPa·Hz⁻¹. The mangrove forest had a lower sound intensity of 70 dB re 1 μPa·Hz⁻¹ while the pass was characterized by a higher sound level with an average intensity of 91 dB re 1 μPa·Hz⁻¹. Habitats showed significantly different intensities for most frequencies, and a decreasing intensity gradient was observed from the reef to the shore. While habitats close to the shore showed no significant diel variation in sound intensities, sound levels increased at the pass during the night and barrier reef during the day. These two habitats also appeared to be louder in the North than in the West. These findings suggest that daily variations in sound intensity and across-reef sound gradients could be a valuable source of information for settling larvae. They also provide further evidence that closely related habitats, separated by less than 1 km, can differ significantly in their spectral composition and that these signatures might be typical and conserved along the coast of Moorea.
The Incongruency Advantage for Environmental Sounds Presented in Natural Auditory Scenes
ERIC Educational Resources Information Center
Gygi, Brian; Shafiro, Valeriy
2011-01-01
The effect of context on the identification of common environmental sounds (e.g., dogs barking or cars honking) was tested by embedding them in familiar auditory background scenes (street ambience, restaurants). Initial results with subjects trained on both the scenes and the sounds to be identified showed a significant advantage of about five…
ERIC Educational Resources Information Center
Keller, Peter; Stevens, Catherine
2004-01-01
This article addresses the learnability of auditory icons, that is, environmental sounds that refer either directly or indirectly to meaningful events. Direct relations use the sound made by the target event whereas indirect relations substitute a surrogate for the target. Across 3 experiments, different indirect relations (ecological, in which…
ERIC Educational Resources Information Center
Gygi, Brian; Shafiro, Valeriy
2013-01-01
Purpose: Previously, Gygi and Shafiro (2011) found that when environmental sounds are semantically incongruent with the background scene (e.g., horse galloping in a restaurant), they can be identified more accurately by young normal-hearing listeners (YNH) than sounds congruent with the scene (e.g., horse galloping at a racetrack). This study…
Reconstruction of sound source signal by analytical passive TR in the environment with airflow
NASA Astrophysics Data System (ADS)
Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu
2017-03-01
In the acoustic design of air vehicles, the time-domain signals of noise sources on the surface of air vehicles can serve as data support to reveal the noise source generation mechanism, analyze acoustic fatigue, and take measures for noise insulation and reduction. To rapidly reconstruct the time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of flow on sound wave propagation is suppressed by the shear flow correction, which yields the corrected acoustic propagation time delay and path. The corrected time delays and paths, together with the microphone array signals, are then submitted to the AP-TR, reconstructing more accurate sound source signals in the environment with airflow. As an analytical method, AP-TR offers a supplementary way in 3D space to reconstruct the signal of a sound source in an environment with airflow instead of the numerical TR. Experiments on the reconstruction of the sound source signals of a pair of loudspeakers are conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. Moreover, a theoretical and experimental comparison between AP-TR and time-domain beamforming for reconstructing the sound source signal is also presented.
Localization of sound sources in a room with one microphone
NASA Astrophysics Data System (ADS)
Peić Tukuljac, Helena; Lissek, Hervé; Vandergheynst, Pierre
2017-08-01
Estimation of the location of sound sources is usually done using microphone arrays. Such settings provide an environment in which we know the differences between the signals received at different microphones in terms of phase or attenuation, which enables localization of the sound sources. In our solution we exploit the properties of the room transfer function in order to localize a sound source inside a room with only one microphone. The shape of the room and the position of the microphone are assumed to be known. Design guidelines and limitations of the sensing matrix are given. The implementation is based on sparsity in terms of the voxels of the room that are occupied by a source. What is especially interesting about our solution is that it provides localization of the sound sources not only in the horizontal plane, but in terms of 3D coordinates inside the room.
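A simplified sketch of the sparse-recovery idea: build a dictionary whose columns are room responses for each candidate voxel at the known microphone position, then pick the voxel whose column best explains the observed signal (a single matching-pursuit step). The random dictionary below is only a placeholder for responses computed from an actual room model.

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_voxels = 2048, 500

# Columns: assumed microphone responses for a source placed in each candidate voxel.
# In practice these would come from a room-acoustic model of the known geometry.
D = rng.standard_normal((n_samples, n_voxels))
D /= np.linalg.norm(D, axis=0)

true_voxel = 137
y = 0.8 * D[:, true_voxel] + 0.01 * rng.standard_normal(n_samples)   # observed signal

# One matching-pursuit step: the voxel whose response correlates best with y
scores = np.abs(D.T @ y)
print("estimated voxel:", int(np.argmax(scores)), "true voxel:", true_voxel)
```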
Environmentally sound manufacturing
NASA Technical Reports Server (NTRS)
Caddy, Larry A.; Bowman, Ross; Richards, Rex A.
1994-01-01
The NASA/Thiokol/industry team has developed and started implementation of an environmentally sound manufacturing plan for the continued production of solid rocket motors. They have worked with other industry representatives and the U.S. Environmental Protection Agency to prepare a comprehensive plan to eliminate all ozone depleting chemicals from manufacturing processes and to reduce the use of other hazardous materials used to produce the space shuttle reusable solid rocket motors. The team used a classical approach for problem solving combined with a creative synthesis of new approaches to attack this problem. As our ability to gather data on the state of the Earth's environmental health increases, environmentally sound manufacturing must become an integral part of the business decision making process.
Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation
Oliva, Aude
2017-01-01
Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals. PMID:28451630
Converting a Monopole Emission into a Dipole Using a Subwavelength Structure
NASA Astrophysics Data System (ADS)
Fan, Xu-Dong; Zhu, Yi-Fan; Liang, Bin; Cheng, Jian-chun; Zhang, Likun
2018-03-01
High-efficiency emission of multipoles is unachievable by a source much smaller than the wavelength, preventing compact acoustic devices for generating directional sound beams. Here, we present a primary scheme towards solving this problem by numerically and experimentally enclosing a monopole sound source in a structure with a dimension of around 1/10 of the sound wavelength to emit a dipolar field. The radiated sound power is found to be more than twice that of a bare dipole. Our study of efficient emission of directional low-frequency sound from a monopole source in a subwavelength space may have applications such as focused ultrasound for imaging, directional underwater sound beams, miniaturized sonar, etc.
NASA Astrophysics Data System (ADS)
Ipatov, M. S.; Ostroumov, M. N.; Sobolev, A. F.
2012-07-01
Experimental results are presented on the effect of both the sound pressure level and the type of spectrum of a sound source on the impedance of an acoustic lining. The spectra under study include those of white noise, a narrow-band signal, and a signal with a preset waveform. It is found that, to obtain reliable data on the impedance of an acoustic lining from the results of interferometric measurements, the total sound pressure level of white noise or the maximal sound pressure level of a pure tone (at every oscillation frequency) needs to be identical to the total sound pressure level of the actual source at the site of acoustic lining on the channel wall.
3D Sound Techniques for Sound Source Elevation in a Loudspeaker Listening Environment
NASA Astrophysics Data System (ADS)
Kim, Yong Guk; Jo, Sungdong; Kim, Hong Kook; Jang, Sei-Jin; Lee, Seok-Pil
In this paper, we propose several 3D sound techniques for sound source elevation in stereo loudspeaker listening environments. The proposed method integrates a head-related transfer function (HRTF) for sound positioning and early reflections for adding a reverberant circumstance. In addition, spectral notch filtering and directional band boosting techniques are also included to increase the capability of elevation perception. In order to evaluate the elevation performance of the proposed method, subjective listening tests are conducted using several kinds of sound sources such as white noise, sound effects, speech, and music samples. It is shown from the tests that the degree of perceived elevation achieved by the proposed method is around 17° to 21° when the stereo loudspeakers are located on the horizontal plane.
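A minimal example of the spectral-notch component only (not the full HRTF/early-reflection chain), using SciPy's IIR notch design; the notch frequency and Q below are illustrative stand-ins for elevation-dependent pinna cues.

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

fs = 48000
notch_freq = 8000.0   # assumed elevation-dependent notch frequency (Hz)
Q = 10.0              # notch sharpness

b, a = iirnotch(notch_freq, Q, fs=fs)

# Apply the notch to a test signal (white noise stands in for the source material)
x = np.random.randn(fs)          # 1 s of noise
y = lfilter(b, a, x)
```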
The effects of auditive and visual settings on perceived restoration likelihood
Jahncke, Helena; Eriksson, Karolina; Naula, Sanna
2015-01-01
Research has so far paid little attention to how environmental sounds might affect restorative processes. The aim of the present study was to investigate the effects of auditive and visual stimuli on perceived restoration likelihood and attitudes towards varying environmental resting conditions. Assuming a condition of cognitive fatigue, all participants (N = 40) were presented with images of an open plan office and urban nature, each under four sound conditions (nature sound, quiet, broadband noise, office noise). After the presentation of each setting/sound combination, the participants assessed it according to restorative qualities, restoration likelihood and attitude. The results mainly showed predicted effects of the sound manipulations on the perceived restorative qualities of the settings. Further, significant interactions between auditive and visual stimuli were found for all measures. Both nature sounds and quiet more positively influenced evaluations of the nature setting compared to the office setting. When office noise was present, both settings received poor evaluations. The results agree with expectations that nature sounds and quiet areas support restoration, while office noise and broadband noise (e.g. ventilation, traffic noise) do not. The findings illustrate the significance of environmental sound for restorative experience. PMID:25599752
Techniques and instrumentation for the measurement of transient sound energy flux
NASA Astrophysics Data System (ADS)
Watkinson, P. S.; Fahy, F. J.
1983-12-01
The evaluation of sound intensity distributions, and sound powers, of essentially continuous sources such as automotive engines, electric motors, production-line machinery, furnaces, earth-moving machinery, and various types of process plant was studied. Although such systems are important sources of community disturbance and, to a lesser extent, of industrial health hazard, the most serious sources of hearing hazard in industry are machines operating on an impact principle, such as drop forges, hammers and punches. Controlled experiments to identify major noise source regions and mechanisms are difficult because it is normally impossible to install them in quiet, anechoic environments. The potential for sound intensity measurement to provide a means of overcoming these difficulties has given promising results, indicating the possibility of separating directly radiated and reverberant sound fields. However, because of the complexity of transient sound fields, a fundamental investigation is necessary to establish the practicability of intensity field decomposition, which is basic to source characterization techniques.
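For context, the standard two-microphone (p-p) estimate that underlies such intensity measurements can be sketched as follows: the active intensity along the probe axis follows from the imaginary part of the cross-spectrum between the two pressure signals divided by ρωΔr. The spacing, signals, and sign convention below are assumptions; validity is limited to frequencies where Δr is small compared with the wavelength.

```python
import numpy as np
from scipy.signal import csd

rho = 1.21          # air density (kg/m^3)
dr = 0.012          # microphone spacing (m), assumed
fs = 48000

# p1, p2: pressure signals from the two probe microphones (synthetic placeholders)
t = np.arange(fs) / fs
p1 = np.sin(2 * np.pi * 500 * t)
p2 = np.sin(2 * np.pi * 500 * t - 0.2)     # small phase lag stands in for propagation

f, G12 = csd(p1, p2, fs=fs, nperseg=4096)  # cross-spectral density between the mics
omega = 2 * np.pi * np.maximum(f, 1e-6)    # avoid division by zero at DC
# Active intensity spectrum along the probe axis; overall sign depends on
# the channel ordering / phase convention adopted.
I = np.imag(G12) / (rho * omega * dr)
```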
Perceptual constancy in auditory perception of distance to railway tracks.
De Coensel, Bert; Nilsson, Mats E; Berglund, Birgitta; Brown, A L
2013-07-01
Distance to a sound source can be accurately estimated solely from auditory information. With a sound source such as a train that is passing by at a relatively large distance, the most important auditory information for the listener for estimating its distance consists of the intensity of the sound, spectral changes in the sound caused by air absorption, and the motion-induced rate of change of intensity. However, these cues are relative because prior information/experience of the sound source-its source power, its spectrum and the typical speed at which it moves-is required for such distance estimates. This paper describes two listening experiments that allow investigation of further prior contextual information taken into account by listeners-viz., whether they are indoors or outdoors. Asked to estimate the distance to the track of a railway, it is shown that listeners assessing sounds heard inside the dwelling based their distance estimates on the expected train passby sound level outdoors rather than on the passby sound level actually experienced indoors. This form of perceptual constancy may have consequences for the assessment of annoyance caused by railway noise.
Recent paleoseismicity record in Prince William Sound, Alaska, USA
NASA Astrophysics Data System (ADS)
Kuehl, Steven A.; Miller, Eric J.; Marshall, Nicole R.; Dellapenna, Timothy M.
2017-12-01
Sedimentological and geochemical investigation of sediment cores collected in the deep (>400 m) central basin of Prince William Sound, along with geochemical fingerprinting of sediment source areas, are used to identify earthquake-generated sediment gravity flows. Prince William Sound receives sediment from two distinct sources: from offshore (primarily Copper River) through Hinchinbrook Inlet, and from sources within the Sound (primarily Columbia Glacier). These sources are found to have diagnostic elemental ratios indicative of provenance; Copper River Basin sediments were significantly higher in Sr/Pb and Cu/Pb, whereas Prince William Sound sediments were significantly higher in K/Ca and Rb/Sr. Within the past century, sediment gravity flows deposited within the deep central channel of Prince William Sound have robust geochemical (provenance) signatures that can be correlated with known moderate to large earthquakes in the region. Given the thick Holocene sequence in the Sound (~200 m) and correspondingly high sedimentation rates (>1 cm year⁻¹), this relationship suggests that sediments within the central basin of Prince William Sound may contain an extraordinary high-resolution record of paleoseismicity in the region.
Repeated imitation makes human vocalizations more word-like.
Edmiston, Pierce; Perlman, Marcus; Lupyan, Gary
2018-03-14
People have long pondered the evolution of language and the origin of words. Here, we investigate how conventional spoken words might emerge from imitations of environmental sounds. Does the repeated imitation of an environmental sound gradually give rise to more word-like forms? In what ways do these forms resemble the original sounds that motivated them (i.e. exhibit iconicity)? Participants played a version of the children's game 'Telephone'. The first generation of participants imitated recognizable environmental sounds (e.g. glass breaking, water splashing). Subsequent generations imitated the previous generation of imitations for a maximum of eight generations. The results showed that the imitations became more stable and word-like, and later imitations were easier to learn as category labels. At the same time, even after eight generations, both spoken imitations and their written transcriptions could be matched above chance to the category of environmental sound that motivated them. These results show how repeated imitation can create progressively more word-like forms while continuing to retain a resemblance to the original sound that motivated them, and speak to the possible role of human vocal imitation in explaining the origins of at least some spoken words. © 2018 The Author(s).
NASA Astrophysics Data System (ADS)
Chen, Huaiyu; Cao, Li
2017-06-01
To address multiple sound source localization under room reverberation and background noise, we analyze the shortcomings of traditional broadband MUSIC and of ordinary auditory-filtering-based broadband MUSIC, and then propose a new broadband MUSIC algorithm with gammatone auditory filtering that controls frequency-component selection and detects the ascending segment of the direct-sound component. The proposed algorithm restricts processing to frequency components within the frequency band of interest at the multichannel band-pass filtering stage. Detection of the direct-sound component of each source is also proposed to suppress room reverberation; its merits are fast computation and the avoidance of more complex de-reverberation algorithms. In addition, the pseudo-spectra of the different frequency channels are weighted by their maximum amplitudes for every speech frame. Simulations and experiments in a real reverberant room show that the proposed method performs well. Experimental results for dynamic multiple-source localization indicate that the proposed algorithm yields a smaller average absolute azimuth error and a higher angular resolution in the histogram results.
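A bare-bones sketch of an incoherent broadband MUSIC pseudo-spectrum on a small linear array, averaging narrowband MUSIC spectra over selected frequency components; the gammatone front end, direct-sound detection, and amplitude weighting from the paper are omitted, and the geometry and frequencies are assumed.

```python
import numpy as np

c, d, M = 343.0, 0.05, 4                      # sound speed, mic spacing (m), mics (assumed)
rng = np.random.default_rng(3)

def narrowband_music(X, f, grid):
    """MUSIC pseudo-spectrum at one frequency for a uniform linear array."""
    R = X @ X.conj().T / X.shape[1]
    w, V = np.linalg.eigh(R)
    En = V[:, :-1]                            # noise subspace (one source assumed)
    k = 2 * np.pi * f / c
    P = []
    for theta in grid:
        a = np.exp(1j * k * d * np.arange(M) * np.sin(theta))
        P.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(P)

grid = np.radians(np.linspace(-90, 90, 181))
theta_true = np.radians(25.0)
Pb = np.zeros_like(grid)
for f in np.linspace(500, 2000, 16):          # frequency components kept within the band
    k = 2 * np.pi * f / c
    a = np.exp(1j * k * d * np.arange(M) * np.sin(theta_true))
    s = rng.standard_normal(100) + 1j * rng.standard_normal(100)
    X = np.outer(a, s) + 0.05 * (rng.standard_normal((M, 100))
                                 + 1j * rng.standard_normal((M, 100)))
    Pb += narrowband_music(X, f, grid)        # incoherent average across the band
print("estimated DOA (deg):", np.degrees(grid[int(np.argmax(Pb))]))
```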
Interior sound field control using generalized singular value decomposition in the frequency domain.
Pasco, Yann; Gauthier, Philippe-Aubert; Berry, Alain; Moreau, Stéphane
2017-01-01
The problem of controlling a sound field inside a region surrounded by acoustic control sources is considered. Inspired by the Kirchhoff-Helmholtz integral, the use of double-layer source arrays allows such control and, by approximating the sources as monopole and radial dipole transducers, avoids modification of the external sound field by the control sources. However, the practical implementation of the Kirchhoff-Helmholtz integral in physical space leads to large numbers of control sources and error sensors, along with excessive controller complexity in three dimensions. The present study investigates the potential of the Generalized Singular Value Decomposition (GSVD) to reduce the controller complexity and separate the effects of the control sources on the interior and exterior sound fields, respectively. A proper truncation of the singular basis provided by the GSVD factorization is shown to lead to effective cancellation of the interior sound field at frequencies below the spatial Nyquist frequency of the control source array while leaving the exterior sound field almost unchanged. Proofs of concept are provided for interior problems by simulations in a free-field scenario with circular arrays and in a reflective environment with square arrays.
Contaminant distribution and accumulation in the surface sediments of Long Island Sound
Mecray, E.L.; Buchholtz ten Brink, Marilyn R.
2000-01-01
The distribution of contaminants in surface sediments has been measured and mapped as part of a U.S. Geological Survey study of the sediment quality and dynamics of Long Island Sound. Surface samples from 219 stations were analyzed for trace (Ag, Ba, Cd, Cr, Cu, Hg, Ni, Pb, V, Zn and Zr) and major (Al, Fe, Mn, Ca, and Ti) elements, grain size, and Clostridium perfringens spores. Principal Components Analysis was used to identify metals that may covary as a function of common sources or geochemistry. The metallic elements generally have higher concentrations in fine-grained deposits, and their transport and depositional patterns mimic those of small particles. Fine-grained particles are remobilized and transported from areas of high bottom energy and deposited in less dynamic regions of the Sound. Metal concentrations in bottom sediments are high in the western part of the Sound and low in the bottom-scoured regions of the eastern Sound. The sediment chemistry was compared to model results (Signell et al., 1998) and maps of sedimentary environments (Knebel et al., 1999) to better understand the processes responsible for contaminant distribution across the Sound. Metal concentrations were normalized to grain-size and the resulting ratios are uniform in the depositional basins of the Sound and show residual signals in the eastern end as well as in some local areas. The preferential transport of fine-grained material from regions of high bottom stress is probably the dominant factor controlling the metal concentrations in different regions of Long Island Sound. This physical redistribution has implications for environmental management in the region.
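A short sketch of the covariation analysis described above: standardize the element concentrations and compute principal components; loadings on the leading component group metals that vary together. The tiny matrix below is a placeholder for the 219-station dataset.

```python
import numpy as np

# rows: stations, columns: element concentrations (placeholder values)
elements = ["Cu", "Pb", "Zn", "Cr", "Al"]
X = np.array([[45.0, 30.0, 120.0, 60.0, 6.1],
              [12.0,  8.0,  40.0, 25.0, 4.2],
              [60.0, 41.0, 150.0, 72.0, 6.8],
              [20.0, 12.0,  55.0, 30.0, 4.9],
              [55.0, 35.0, 135.0, 66.0, 6.5]])

Z = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize each element
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
print("variance explained:", np.round(explained, 2))
print("PC1 loadings:", dict(zip(elements, np.round(Vt[0], 2))))
```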
Series expansions of rotating two and three dimensional sound fields.
Poletti, M A
2010-12-01
The cylindrical and spherical harmonic expansions of oscillating sound fields rotating at a constant rate are derived. These expansions are a generalized form of the stationary sound field expansions. The derivations are based on the representation of interior and exterior sound fields using the simple source approach and determination of the simple source solutions with uniform rotation. Numerical simulations of rotating sound fields are presented to verify the theory.
Kuwada, Shigeyuki; Bishop, Brian; Kim, Duck O.
2012-01-01
The major functions of the auditory system are recognition (what is the sound) and localization (where is the sound). Although each of these has received considerable attention, rarely are they studied in combination. Furthermore, the stimuli used in the bulk of studies did not represent sound location in real environments and ignored the effects of reverberation. Another ignored dimension is the distance of a sound source. Finally, there is a scarcity of studies conducted in unanesthetized animals. We illustrate a set of efficient methods that overcome these shortcomings. We use the virtual auditory space method (VAS) to efficiently present sounds at different azimuths, different distances and in different environments. Additionally, this method allows for efficient switching between binaural and monaural stimulation and alteration of acoustic cues singly or in combination to elucidate neural mechanisms underlying localization and recognition. Such procedures cannot be performed with real sound field stimulation. Our research is designed to address the following questions: Are IC neurons specialized to process what and where auditory information? How does reverberation and distance of the sound source affect this processing? How do IC neurons represent sound source distance? Are neural mechanisms underlying envelope processing binaural or monaural? PMID:22754505
A model for the perception of environmental sound based on notice-events.
De Coensel, Bert; Botteldooren, Dick; De Muer, Tom; Berglund, Birgitta; Nilsson, Mats E; Lercher, Peter
2009-08-01
An approach is proposed to shed light on the mechanisms underlying human perception of environmental sound that intrudes in everyday living. Most research on exposure-effect relationships aims at relating overall effects to overall exposure indicators in an epidemiological fashion, without including available knowledge on the possible underlying mechanisms. Here, it is proposed to start from available knowledge on audition and perception to construct a computational framework for the effect of environmental sound on individuals. Obviously, at the individual level additional mechanisms (inter-sensory, attentional, cognitive, emotional) play a role in the perception of environmental sound. As a first step, current knowledge is made explicit by building a model mimicking some aspects of human auditory perception. This model is grounded in the hypothesis that long-term perception of environmental sound is determined primarily by short notice-events. The applicability of the notice-event model is illustrated by simulating a synthetic population exposed to typical Flemish environmental noise. From these simulation results, it is demonstrated that the notice-event model is able to mimic the differences between the annoyance caused by road traffic noise exposure and railway traffic noise exposure that are also observed empirically in other studies and thus could provide an explanation for these differences.
Zhang, Lanyue; Ding, Dandan; Yang, Desen; Wang, Jia; Shi, Jie
2017-01-01
Spherical microphone arrays have received increasing attention for their ability to locate a sound source at an arbitrary incident angle in three-dimensional space. Low-frequency sound sources are usually located by using spherical near-field acoustic holography. In the conventional sound field transformation based on the generalized Fourier transform, the reconstruction surface and the holography surface are conformal surfaces. When the sound source is on a cylindrical surface, it is difficult to locate by using a spherical conformal transform. This paper proposes a non-conformal sound field transformation that constructs a transfer matrix based on spherical harmonic decomposition, which can transform a spherical surface into a cylindrical surface using spherical array data. The theoretical expressions of the proposed method are deduced, and the performance of the method is simulated. Moreover, an experiment on sound source localization using a spherical array with randomly and uniformly distributed elements is carried out. Results show that the non-conformal sound field transformation from a spherical surface to a cylindrical surface is realized by the proposed method. The localization deviation is around 0.01 m, and the resolution is around 0.3 m. The application of the spherical array is extended, and its localization ability is improved. PMID:28489065
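The spherical-to-cylindrical transfer matrix itself is specific to the paper, but the step it builds on, decomposing spherical-array pressures into spherical harmonic coefficients by least squares, can be sketched as follows; the element positions, truncation order, and synthetic field are assumptions.

```python
import numpy as np
from scipy.special import sph_harm

rng = np.random.default_rng(4)
Q, N = 64, 3                                   # array elements, truncation order (assumed)
az = rng.uniform(0, 2 * np.pi, Q)              # random element directions on the sphere
pol = np.arccos(rng.uniform(-1, 1, Q))

# Basis matrix: one column per (n, m) spherical harmonic up to order N
cols, idx = [], []
for n in range(N + 1):
    for m in range(-n, n + 1):
        cols.append(sph_harm(m, n, az, pol))
        idx.append((n, m))
Y = np.column_stack(cols)

# Synthetic pressure on the array: a single Y_1^0 component plus noise
p = 2.0 * sph_harm(0, 1, az, pol) + 0.01 * rng.standard_normal(Q)

coeffs, *_ = np.linalg.lstsq(Y, p, rcond=None)
print({nm: np.round(c, 2) for nm, c in zip(idx, coeffs) if abs(c) > 0.1})
```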
Measures against mechanical noise from large wind turbines: A design guide
NASA Astrophysics Data System (ADS)
Ljunggren, Sten; Johansson, Melker
1991-06-01
The noise generated by the machinery of the two Swedish prototypes contains pure tones which are very important with respect to the environmental impact. The results of noise measurements carried out at these turbines are discussed and are meant to serve as a guide to predicting and controlling the noise around a large wind turbine during the design stage. The design targets are discussed, stressing the importance of the audibility of pure tones and not only the annoyance; a simple criterion is cited. The main noise source is the gearbox, and a simple empirical expression for the sound power level is shown to give good agreement with the measurement results. The influence of the gearbox design on the noise is discussed in some detail. Formulas for the prediction of the airborne sound transmission to the ground outside the nacelle are presented, together with a number of empirical data on the sound reduction indices for single and double constructions. The structure-borne noise transmission is discussed.
Understanding the intentional acoustic behavior of humpback whales: a production-based approach.
Cazau, Dorian; Adam, Olivier; Laitman, Jeffrey T; Reidenberg, Joy S
2013-09-01
Following a production-based approach, this paper deals with the acoustic behavior of humpback whales. This approach investigates various physical factors, which are either internal (e.g., physiological mechanisms) or external (e.g., environmental constraints) to the respiratory tract of the whale, for their implications in sound production. This paper aims to describe a functional scenario of this tract for the generation of vocal sounds. To do so, a division of this tract into three different configurations is proposed, based on the air recirculation process which determines air sources and laryngeal valves. Then, assuming a vocal function (in sound generation or modification) for several specific anatomical components, an acoustic characterization of each of these configurations is proposed to link different spectral features, namely, fundamental frequencies and formant structures, to specific vocal production mechanisms. Finally, the question of whether the whale is able to fully exploit the acoustic potential of its respiratory tract is discussed.
Sound reduction by metamaterial-based acoustic enclosure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Shanshan; Li, Pei; Zhou, Xiaoming
In many practical systems, acoustic radiation control on noise sources contained within a finite volume by an acoustic enclosure is of great importance, but difficult to accomplish at low frequencies due to the enhanced acoustic-structure interaction. In this work, we propose to use acoustic metamaterials as the enclosure to efficiently reduce sound radiation at their negative-mass frequencies. Based on a circularly-shaped metamaterial model, sound radiation properties by either central or eccentric sources are analyzed by numerical simulations for structured metamaterials. The parametric analyses demonstrate that the barrier thickness, the cavity size, the source type, and the eccentricity of the source have a profound effect on the sound reduction. It is found that increasing the thickness of the metamaterial barrier is an efficient approach to achieve large sound reduction over the negative-mass frequencies. These results are helpful in designing highly efficient acoustic enclosures for blockage of sound in low frequencies.
Freeman, Simon E; Buckingham, Michael J; Freeman, Lauren A; Lammers, Marc O; D'Spain, Gerald L
2015-01-01
A seven element, bi-linear hydrophone array was deployed over a coral reef in the Papahānaumokuākea Marine National Monument, Northwest Hawaiian Islands, in order to investigate the spatial, temporal, and spectral properties of biological sound in an environment free of anthropogenic influences. Local biological sound sources, including snapping shrimp and other organisms, produced curved-wavefront acoustic arrivals at the array, allowing source location via focusing to be performed over an area of 1600 m². Initially, however, a rough estimate of source location was obtained from triangulation of pair-wise cross-correlations of the sound. Refinements to these initial source locations, and source frequency information, were then obtained using two techniques, conventional and adaptive focusing. It was found that most of the sources were situated on or inside the reef structure itself, rather than over adjacent sandy areas. Snapping-shrimp-like sounds, all with similar spectral characteristics, originated from individual sources predominantly in one area to the east of the array. To the west, the spectral and spatial distributions of the sources were more varied, suggesting the presence of a multitude of heterogeneous biological processes. In addition to the biological sounds, some low-frequency noise due to distant breaking waves was received from end-fire north of the array.
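The pair-wise cross-correlation step used for the initial source estimates can be illustrated with a short sketch; the sample rate and the two hydrophone signals below are placeholders, not data from the deployment:

```python
import numpy as np

def time_delay(sig_a, sig_b, fs):
    """Arrival-time difference of the same transient at two hydrophones,
    from the peak of their cross-correlation (seconds, negative when the
    event arrives later at sig_b)."""
    xcorr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(np.abs(xcorr)) - (len(sig_b) - 1)
    return lag / fs

# Toy example: a snap arriving 2 ms later at hydrophone B
fs = 96000
snap = np.random.randn(200)
a = np.concatenate([snap, np.zeros(1000)])
b = np.concatenate([np.zeros(192), snap, np.zeros(808)])  # 192 samples = 2 ms
print(time_delay(a, b, fs))   # approximately -0.002 s
```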
Challenges Facing 3-D Audio Display Design for Multimedia
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Null, Cynthia H. (Technical Monitor)
1998-01-01
The challenges facing successful multimedia presentation depend largely on the expectations of the designer and end user for a given application. Perceptual limitations in distance, elevation and azimuth sound source simulation differ significantly between headphone and cross-talk cancellation loudspeaker listening and therefore must be considered. Simulation of an environmental context is desirable but the quality depends on processing resources and lack of interaction with the host acoustical environment. While techniques such as data reduction of head-related transfer functions have been used widely to improve simulation fidelity, another approach involves determining thresholds for environmental acoustic events. Psychoacoustic studies relevant to this approach are reviewed in consideration of multimedia applications.
Underwater Acoustic Source Localisation Among Blind and Sighted Scuba Divers: Comparative study.
Cambi, Jacopo; Livi, Ludovica; Livi, Walter
2017-05-01
Many blind individuals demonstrate enhanced auditory spatial discrimination or localisation of sound sources in comparison to sighted subjects. However, this hypothesis has not yet been confirmed with regards to underwater spatial localisation. This study therefore aimed to investigate underwater acoustic source localisation among blind and sighted scuba divers. This study took place between February and June 2015 in Elba, Italy, and involved two experimental groups of divers with either acquired (n = 20) or congenital (n = 10) blindness and a control group of 30 sighted divers. Each subject took part in five attempts at an underwater acoustic source localisation task, in which the divers were requested to swim to the source of a sound originating from one of 24 potential locations. The control group had their sight obscured during the task. The congenitally blind divers demonstrated significantly better underwater sound localisation compared to the control group or those with acquired blindness (P = 0.0007). In addition, there was a significant correlation between years of blindness and underwater sound localisation (P < 0.0001). Congenital blindness was found to positively affect the ability of a diver to recognise the source of a sound in an underwater environment. As the correct localisation of sounds underwater may help individuals to avoid imminent danger, divers should perform sound localisation tests during training sessions.
Understanding environmental sounds in sentence context.
Uddin, Sophia; Heald, Shannon L M; Van Hedger, Stephen C; Klos, Serena; Nusbaum, Howard C
2018-03-01
There is debate about how individuals use context to successfully predict and recognize words. One view argues that context supports neural predictions that make use of the speech motor system, whereas other views argue for a sensory or conceptual level of prediction. While environmental sounds can convey clear referential meaning, they are not linguistic signals, and are thus neither produced with the vocal tract nor typically encountered in sentence context. We compared the effect of spoken sentence context on recognition and comprehension of spoken words versus nonspeech, environmental sounds. In Experiment 1, sentence context decreased the amount of signal needed for recognition of spoken words and environmental sounds in similar fashion. In Experiment 2, listeners judged sentence meaning in both high and low contextually constraining sentence frames, when the final word was present or replaced with a matching environmental sound. Results showed that sentence constraint affected decision time similarly for speech and nonspeech, such that high constraint sentences (i.e., frame plus completion) were processed faster than low constraint sentences for speech and nonspeech. Linguistic context facilitates the recognition and understanding of nonspeech sounds in much the same way as for spoken words. This argues against a simple form of a speech-motor explanation of predictive coding in spoken language understanding, and suggests support for conceptual-level predictions. Copyright © 2017 Elsevier B.V. All rights reserved.
Mercadíe, Lolita; Mick, Gérard; Guétin, Stéphane; Bigand, Emmanuel
2015-10-01
In fibromyalgia, pain symptoms such as hyperalgesia and allodynia are associated with fatigue. Mechanisms underlying such symptoms can be modulated by listening to pleasant music. We expected that listening to music, because of its emotional impact, would have a greater modulating effect on the perception of pain and fatigue in patients with fibromyalgia than listening to nonmusical sounds. To investigate this hypothesis, we carried out a 4-week study in which patients with fibromyalgia listened to either preselected musical pieces or environmental sounds when they experienced pain in active (while carrying out a physical activity) or passive (at rest) situations. Concomitant changes of pain and fatigue levels were evaluated. When patients listened to music or environmental sounds at rest, pain and fatigue levels were significantly reduced after 20 minutes of listening, with no difference of effect magnitude between the two stimuli. This improvement persisted 10 minutes after the end of the listening session. In active situations, pain did not increase in presence of the two stimuli. Contrary to our expectations, music and environmental sounds produced a similar relieving effect on pain and fatigue, with no benefit gained by listening to pleasant music over environmental sounds. Copyright © 2015 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.
Bakker, R H; Pedersen, E; van den Berg, G P; Stewart, R E; Lok, W; Bouma, J
2012-05-15
The present government in the Netherlands intends to realize a substantial growth of wind energy before 2020, both onshore and offshore. Wind turbines, when positioned in the neighborhood of residents, may cause visual annoyance and noise annoyance. Studies on other environmental sound sources, such as railway, road traffic, industry and aircraft noise, show that (long-term) exposure to sound can have negative effects other than annoyance from noise. This study aims to elucidate the relation between exposure to the sound of wind turbines and annoyance, self-reported sleep disturbance and psychological distress of people that live in their vicinity. Data were gathered by a questionnaire that was sent by mail to a representative sample of residents of the Netherlands living in the vicinity of wind turbines. A dose-response relationship was found between immission levels of wind turbine sound and self-reported noise annoyance. Sound exposure was also related to sleep disturbance and psychological distress among those who reported that they could hear the sound, however not directly but with noise annoyance acting as a mediator. Respondents living in areas with other background sounds were less affected than respondents in quiet areas. People living in the vicinity of wind turbines are at risk of being annoyed by the noise, an adverse effect in itself. Noise annoyance in turn could lead to sleep disturbance and psychological distress. No direct effects of wind turbine noise on sleep disturbance or psychological distress have been demonstrated, which means that residents who do not hear the sound, or do not feel disturbed, are not adversely affected. Copyright © 2012 Elsevier B.V. All rights reserved.
The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl.
Baxter, Caitlin S; Nelson, Brian S; Takahashi, Terry T
2013-02-01
Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643-655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map.
Environmentally Sound Small-Scale Energy Projects. Guidelines for Planning.
ERIC Educational Resources Information Center
Bassan, Elizabeth Ann; Wood, Timothy S., Ed.
This manual is the fourth volume in a series of publications that provide information for the planning of environmentally sound small-scale projects. Programs that aim to protect the renewable natural resources that supply most of the energy used in developing nations are suggested. Considerations are made for physical environmental factors as…
Harris, Debra D
2015-01-01
Three flooring materials, terrazzo, rubber, and carpet tile, in patient unit corridors were compared for absorption of sound, comfort, light reflectance, employee perceptions and preferences, and patient satisfaction. Environmental stressors, such as noise and ergonomic factors, affect healthcare workers and patients, contributing to increased fatigue, anxiety and stress, decreased productivity, and reduced patient safety and satisfaction. A longitudinal comparative cohort study comparing three types of flooring assessed sound levels, healthcare worker responses, and patient Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) ratings over 42 weeks. A linear mixed model analysis was conducted to determine significant differences between the means for participant responses and objective sound meter data during all three phases of the study. A significant difference in equivalent continuous sound levels was found between flooring types. Carpet tile performed better for sound attenuation by absorption, reducing sound levels by 3.14 dBA. Preferences for flooring materials changed over the course of the study. The HCAHPS ratings aligned with the sound meter data, showing that patients perceived the noise levels to be lower with carpet tiles, improving patient satisfaction ratings. Perceptions of healthcare staff and patients were aligned with the sound meter data. Carpet tile provides sound absorption that affects sound levels and influences occupants' perceptions of environmental factors that contribute to the quality of the indoor environment. Flooring that provides comfort underfoot, easy cleanability, and sound absorption influences healthcare worker job satisfaction and patient satisfaction with their patient experience. © The Author(s) 2015.
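The "equivalent continuous sound level" compared here is an energy average of the time-varying A-weighted level; a minimal sketch of that calculation, with made-up sample values:

```python
import numpy as np

def leq(levels_dba):
    """Equivalent continuous sound level: energy average of dBA samples."""
    levels = np.asarray(levels_dba, dtype=float)
    return 10.0 * np.log10(np.mean(10.0 ** (levels / 10.0)))

# Hypothetical one-minute samples from a corridor sound meter
samples = [52.1, 55.4, 61.0, 49.8, 57.3]
print(round(leq(samples), 1))   # a single Leq value in dBA
```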
Change deafness for real spatialized environmental scenes.
Gaston, Jeremy; Dickerson, Kelly; Hipp, Daniel; Gerhardstein, Peter
2017-01-01
The everyday auditory environment is complex and dynamic; often, multiple sounds co-occur and compete for a listener's cognitive resources. 'Change deafness', framed as the auditory analog to the well-documented phenomenon of 'change blindness', describes the finding that changes presented within complex environments are often missed. The present study examines a number of stimulus factors that may influence change deafness under real-world listening conditions. Specifically, an AX (same-different) discrimination task was used to examine the effects of both spatial separation over a loudspeaker array and the type of change (sound source additions and removals) on discrimination of changes embedded in complex backgrounds. Results using signal detection theory and accuracy analyses indicated that, under most conditions, errors were significantly reduced for spatially distributed relative to non-spatial scenes. A second goal of the present study was to evaluate a possible link between memory for scene contents and change discrimination. Memory was evaluated by presenting a cued recall test following each trial of the discrimination task. Results using signal detection theory and accuracy analyses indicated that recall ability was similar in terms of accuracy, but there were reductions in sensitivity compared to previous reports. Finally, the present study used a large and representative sample of outdoor, urban, and environmental sounds, presented in unique combinations of nearly 1000 trials per participant. This enabled the exploration of the relationship between change perception and the perceptual similarity between change targets and background scene sounds. These (post hoc) analyses suggest both a categorical and a stimulus-level relationship between scene similarity and the magnitude of change errors.
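The sensitivity measure typically used in such signal-detection analyses of an AX (same-different) task is d', computed from hit and false-alarm rates; a minimal sketch with illustrative counts (the correction and the numbers are assumptions, not the study's values):

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
    A small correction keeps the rates away from 0 and 1."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hit_rate = (hits + 0.5) / (n_signal + 1.0)
    fa_rate = (false_alarms + 0.5) / (n_noise + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(hits=80, misses=20, false_alarms=30, correct_rejections=70))
```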
Understanding Emerging Impacts and Requirements Related to Utility-Scale Solar Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartmann, Heidi M.; Grippo, Mark A.; Heath, Garvin A.
2016-09-01
Utility-scale solar energy plays an important role in the nation’s strategy to address climate change threats through increased deployment of renewable energy technologies, and both the federal government and individual states have established specific goals for increased solar energy development. In order to achieve these goals, much attention is paid to making utility-scale solar energy cost-competitive with other conventional energy sources, while concurrently conducting solar development in an environmentally sound manner.
Laboratory and Field Studies of the Acoustics of Multiphase Ocean Bottom Materials
2011-09-30
Data from this measurement campaign are shown in Figs. 1, 2, 3 and 4. The statistical nature of these data will be assessed and comparison to models ... environmental regulations limiting sound levels in the water, even for this source, which is intended to replace SUS. We conducted an engineering ... Finally, a full listing of all grant-related activities is shown in the Fiscal Year Publications section below. IMPACT/APPLICATIONS: The Biot-based ...
A geospatial model of ambient sound pressure levels in the contiguous United States.
Mennitt, Daniel; Sherrill, Kirk; Fristrup, Kurt
2014-05-01
This paper presents a model that predicts measured sound pressure levels using geospatial features such as topography, climate, hydrology, and anthropogenic activity. The model utilizes random forest, a tree-based machine learning algorithm, which does not incorporate a priori knowledge of source characteristics or propagation mechanics. The response data encompasses 270 000 h of acoustical measurements from 190 sites located in National Parks across the contiguous United States. The explanatory variables were derived from national geospatial data layers and cross validation procedures were used to evaluate model performance and identify variables with predictive power. Using the model, the effects of individual explanatory variables on sound pressure level were isolated and quantified to reveal systematic trends across environmental gradients. Model performance varies by the acoustical metric of interest; the seasonal L50 can be predicted with a median absolute deviation of approximately 3 dB. The primary application for this model is to generalize point measurements to maps expressing spatial variation in ambient sound levels. An example of this mapping capability is presented for Zion National Park and Cedar Breaks National Monument in southwestern Utah.
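A hedged sketch of the modelling approach, using scikit-learn's random forest in place of the authors' implementation, with entirely fabricated feature columns and response values:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Toy explanatory variables standing in for geospatial layers
# (columns might be elevation, summer precipitation, distance to road, ...)
rng = np.random.default_rng(1)
X = rng.standard_normal((190, 4))
y = 35 + 3 * X[:, 2] + rng.standard_normal(190)      # synthetic L50 in dB

model = RandomForestRegressor(n_estimators=500, random_state=0)
scores = cross_val_score(model, X, y,
                         scoring="neg_median_absolute_error", cv=5)
print(-scores.mean())   # median absolute deviation of the predictions, dB
```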
Underwater auditory localization by a swimming harbor seal (Phoca vitulina).
Bodson, Anais; Miersch, Lars; Mauck, Bjoern; Dehnhardt, Guido
2006-09-01
The underwater sound localization acuity of a swimming harbor seal (Phoca vitulina) was measured in the horizontal plane at 13 different positions. The stimulus was either a double sound (two 6-kHz pure tones lasting 0.5 s separated by an interval of 0.2 s) or a single continuous sound of 1.2 s. Testing was conducted in a 10-m-diam underwater half circle arena with hidden loudspeakers installed at the exterior perimeter. The animal was trained to swim along the diameter of the half circle and to change its course towards the sound source as soon as the signal was given. The seal indicated the sound source by touching its assumed position at the board of the half circle. The deviation of the seal's choice from the actual sound source was measured by means of video analysis. In trials with the double sound the seal localized the sound sources with a mean deviation of 2.8 degrees and in trials with the single sound with a mean deviation of 4.5 degrees. In a second experiment minimum audible angles of the stationary animal were found to be 9.8 degrees in front and 9.7 degrees in the back of the seal's head.
Personal sound zone reproduction with room reflections
NASA Astrophysics Data System (ADS)
Olik, Marek
Loudspeaker-based sound systems, capable of a convincing reproduction of different audio streams to listeners in the same acoustic enclosure, are a convenient alternative to headphones. Such systems aim to generate "sound zones" in which target sound programmes are to be reproduced with minimum interference from any alternative programmes. This can be achieved with appropriate filtering of the source (loudspeaker) signals, so that the target sound's energy is directed to the chosen zone while being attenuated elsewhere. The existing methods are unable to produce the required sound energy ratio (acoustic contrast) between the zones with a small number of sources when strong room reflections are present. Optimization of parameters is therefore required for systems with practical limitations to improve their performance in reflective acoustic environments. One important parameter is positioning of sources with respect to the zones and room boundaries. The first contribution of this thesis is a comparison of the key sound zoning methods implemented on compact and distributed geometrical source arrangements. The study presents previously unpublished detailed evaluation and ranking of such arrangements for systems with a limited number of sources in a reflective acoustic environment similar to a domestic room. Motivated by the requirement to investigate the relationship between source positioning and performance in detail, the central contribution of this thesis is a study on optimizing source arrangements when strong individual room reflections occur. Small sound zone systems are studied analytically and numerically to reveal relationships between the geometry of source arrays and performance in terms of acoustic contrast and array effort (related to system efficiency). Three novel source position optimization techniques are proposed to increase the contrast, and geometrical means of reducing the effort are determined. Contrary to previously published case studies, this work presents a systematic examination of the key problem of first order reflections and proposes general optimization techniques, thus forming an important contribution. The remaining contribution considers evaluation and comparison of the proposed techniques with two alternative approaches to sound zone generation under reflective conditions: acoustic contrast control (ACC) combined with anechoic source optimization and sound power minimization (SPM). The study provides a ranking of the examined approaches which could serve as a guideline for method selection for rooms with strong individual reflections.
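Acoustic contrast, the figure of merit referred to throughout, is the ratio of spatially averaged squared pressure between the bright (target) and dark zones; a minimal sketch with placeholder pressure samples:

```python
import numpy as np

def acoustic_contrast_db(p_bright, p_dark):
    """Ratio of mean squared pressure between the two zones, in dB."""
    e_bright = np.mean(np.abs(p_bright) ** 2)
    e_dark = np.mean(np.abs(p_dark) ** 2)
    return 10.0 * np.log10(e_bright / e_dark)

# Hypothetical complex pressures at control points in each zone
p_bright = np.array([1.0 + 0.2j, 0.9 - 0.1j, 1.1 + 0.0j])
p_dark = np.array([0.05 + 0.02j, 0.04 - 0.03j, 0.06 + 0.01j])
print(round(acoustic_contrast_db(p_bright, p_dark), 1))   # ~25 dB
```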
Marine mammal audibility of selected shallow-water survey sources.
MacGillivray, Alexander O; Racca, Roberto; Li, Zizheng
2014-01-01
Most attention about the acoustic effects of marine survey sound sources on marine mammals has focused on airgun arrays, with other common sources receiving less scrutiny. Sound levels above hearing threshold (sensation levels) were modeled for six marine mammal species and seven different survey sources in shallow water. The model indicated that odontocetes were most likely to hear sounds from mid-frequency sources (fishery, communication, and hydrographic systems), mysticetes from low-frequency sources (sub-bottom profiler and airguns), and pinnipeds from both mid- and low-frequency sources. High-frequency sources (side-scan and multibeam) generated the lowest estimated sensation levels for all marine mammal species groups.
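Sensation level in such models is simply the modeled received level minus the species' hearing threshold in each frequency band; a toy sketch with invented band levels and audiogram values:

```python
import numpy as np

def sensation_level(received_db, threshold_db):
    """Band-wise sensation level (dB above hearing threshold), floored at 0."""
    return np.maximum(np.asarray(received_db) - np.asarray(threshold_db), 0.0)

# Hypothetical third-octave received levels and an audiogram (dB re 1 uPa)
received = [112.0, 105.0, 96.0]
audiogram = [100.0, 90.0, 110.0]
print(sensation_level(received, audiogram))   # [12. 15.  0.]
```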
Performance of active feedforward control systems in non-ideal, synthesized diffuse sound fields.
Misol, Malte; Bloch, Christian; Monner, Hans Peter; Sinapius, Michael
2014-04-01
The acoustic performance of passive or active panel structures is usually tested in sound transmission loss facilities. A reverberant sending room, equipped with one or a number of independent sound sources, is used to generate a diffuse sound field excitation which acts as a disturbance source on the structure under investigation. The spatial correlation and coherence of such a synthesized non-ideal diffuse-sound-field excitation, however, might deviate significantly from the ideal case. This has consequences for the operation of an active feedforward control system which heavily relies on the acquisition of coherent disturbance source information. This work, therefore, evaluates the spatial correlation and coherence of ideal and non-ideal diffuse sound fields and considers the implications on the performance of a feedforward control system. The system under consideration is an aircraft-typical double panel system, equipped with an active sidewall panel (lining), which is realized in a transmission loss facility. Experimental results for different numbers of sound sources in the reverberation room are compared to simulation results of a comparable generic double panel system excited by an ideal diffuse sound field. It is shown that the number of statistically independent noise sources acting on the primary structure of the double panel system depends not only on the type of diffuse sound field but also on the sample lengths of the processed signals. The experimental results show that the number of reference sensors required for a defined control performance exhibits an inverse relationship to control filter length.
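For an ideal diffuse field, the spatial correlation between two microphones separated by a distance d is the standard sinc result sin(kd)/(kd); departures from this curve are one way a non-ideal synthesized field reveals itself. A short sketch (frequency range and spacing chosen arbitrarily):

```python
import numpy as np

def diffuse_field_coherence(freq_hz, spacing_m, c=343.0):
    """Spatial correlation of an ideal diffuse sound field: sin(kd)/(kd)."""
    kd = 2.0 * np.pi * np.asarray(freq_hz) * spacing_m / c
    return np.sinc(kd / np.pi)        # np.sinc(x) = sin(pi*x)/(pi*x)

freqs = np.linspace(50, 2000, 5)
print(diffuse_field_coherence(freqs, spacing_m=0.2))
```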
Environmental Awareness and Public Support for Protecting and Restoring Puget Sound
NASA Astrophysics Data System (ADS)
Safford, Thomas G.; Norman, Karma C.; Henly, Megan; Mills, Katherine E.; Levin, Phillip S.
2014-04-01
In an effort to garner consensus around environmental programs, practitioners have attempted to increase awareness about environmental threats and demonstrate the need for action. Nonetheless, how beliefs about the scope and severity of different types of environmental concerns shape support for management interventions are less clear. Using data from a telephone survey of residents of the Puget Sound region of Washington, we investigate how perceptions of the severity of different coastal environmental problems, along with other social factors, affect attitudes about policy options. We find that self-assessed environmental understanding and views about the seriousness of pollution, habitat loss, and salmon declines are only weakly related. Among survey respondents, women, young people, and those who believe pollution threatens Puget Sound are more likely to support policy measures such as increased enforcement and spending on restoration. Conversely, self-identified Republicans and individuals who view current regulations as ineffective tend to oppose governmental actions aimed at protecting and restoring Puget Sound. Support for one policy measure—tax credits for environmentally-friendly business practices—is not significantly affected by political party affiliation. These findings demonstrate that environmental awareness can influence public support for environmental policy tools. However, the nature of particular management interventions and other social forces can have important mitigating effects and need to be considered by practitioners attempting to develop environment-related social indicators and generate consensus around the need for action to address environmental problems.
NASA Technical Reports Server (NTRS)
1971-01-01
This document is a draft of an environmental impact statement, evaluating the effect on the environment of the use of sounding rockets, balloons, and airborne research programs in studying the atmosphere.
NESSTI: Norms for Environmental Sound Stimuli
Hocking, Julia; Dzafic, Ilvana; Kazovsky, Maria; Copland, David A.
2013-01-01
In this paper we provide normative data along multiple cognitive and affective variable dimensions for a set of 110 sounds, including living and manmade stimuli. Environmental sounds are being increasingly utilized as stimuli in the cognitive, neuropsychological and neuroimaging fields, yet there is no comprehensive set of normative information for these types of stimuli available for use across these experimental domains. Experiment 1 collected data from 162 participants in an on-line questionnaire, which included measures of identification and categorization as well as cognitive and affective variables. A subsequent experiment collected response times to these sounds. Sounds were normalized to the same length (1 second) in order to maximize usage across multiple paradigms and experimental fields. These sounds can be freely downloaded for use, and all response data have also been made available in order that researchers can choose one or many of the cognitive and affective dimensions along which they would like to control their stimuli. Our hope is that the availability of such information will assist researchers in the fields of cognitive and clinical psychology and the neuroimaging community in choosing well-controlled environmental sound stimuli, and allow comparison across multiple studies. PMID:24023866
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-17
... maintenance and storage needs of an expanded fleet of light rail vehicles identified in the Sound Transit 2... Environmental Impact Statement for a Light Rail Operations and Maintenance Satellite Facility, King and... planning to prepare an Environmental Impact Statement (EIS) for Sound Transit's proposed new Light Rail...
Harris, Peter; Philip, Rachel; Robinson, Stephen; Wang, Lian
2016-03-22
Monitoring ocean acoustic noise has been the subject of considerable recent study, motivated by the desire to assess the impact of anthropogenic noise on marine life. A combination of measuring ocean sound using an acoustic sensor network and modelling sources of sound and sound propagation has been proposed as an approach to estimating the acoustic noise map within a region of interest. However, strategies for developing a monitoring network are not well established. In this paper, considerations for designing a network are investigated using a simulated scenario based on the measurement of sound from ships in a shipping lane. Using models for the sources of the sound and for sound propagation, a noise map is calculated and measurements of the noise map by a sensor network within the region of interest are simulated. A compressive sensing algorithm, which exploits the sparsity of the representation of the noise map in terms of the sources, is used to estimate the locations and levels of the sources and thence the entire noise map within the region of interest. It is shown that although the spatial resolution to which the sound sources can be identified is generally limited, estimates of aggregated measures of the noise map can be obtained that are more reliable compared with those provided by other approaches.
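A minimal sketch of the sparse-recovery idea, using an L1-regularized least-squares (Lasso) solver as a stand-in for the authors' compressive sensing algorithm; the propagation model (spherical spreading on a coarse grid), the geometry, and all values are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n_grid, n_sensors = 100, 12            # candidate source cells, hydrophones
grid = rng.uniform(0, 10_000, (n_grid, 2))
sensors = rng.uniform(0, 10_000, (n_sensors, 2))

# Propagation matrix: simple spherical-spreading amplitude loss 1/r
r = np.linalg.norm(sensors[:, None, :] - grid[None, :, :], axis=2) + 1.0
A = 1.0 / r

# Two active "ships" on the grid; simulate noisy sensor measurements
x_true = np.zeros(n_grid)
x_true[[17, 63]] = [50.0, 80.0]
y = A @ x_true + 0.001 * rng.standard_normal(n_sensors)

# Sparse estimate of source levels; the noise map then follows from A @ x_hat
x_hat = Lasso(alpha=1e-4, positive=True, max_iter=50_000).fit(A, y).coef_
print(np.flatnonzero(x_hat > 1.0))   # recovered source cells (ideally 17, 63)
```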
Salomons, Erik M; Janssen, Sabine A
2011-06-01
In environmental noise control one commonly employs the A-weighted sound level as an approximate measure of the effect of noise on people. A measure that is more closely related to direct human perception of noise is the loudness level. At constant A-weighted sound level, the loudness level of a noise signal varies considerably with the shape of the frequency spectrum of the noise signal. In particular the bandwidth of the spectrum has a large effect on the loudness level, due to the effect of critical bands in the human hearing system. The low-frequency content of the spectrum also has an effect on the loudness level. In this note the relation between loudness level and A-weighted sound level is analyzed for various environmental noise spectra, including spectra of traffic noise, aircraft noise, and industrial noise. From loudness levels calculated for these environmental noise spectra, diagrams are constructed that show the relation between loudness level, A-weighted sound level, and shape of the spectrum. The diagrams show that the upper limits of the loudness level for broadband environmental noise spectra are about 20 to 40 phon higher than the lower limits for narrowband spectra, which correspond to the loudness levels of pure tones. The diagrams are useful for assessing limitations and potential improvements of environmental noise control methods and policy based on A-weighted sound levels.
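The A-weighted level that the note takes as its reference quantity is obtained by adding standard octave-band corrections to the unweighted spectrum and energy-summing the result; a short sketch with an invented traffic-like spectrum (the corrections are the standard A-weighting figures, everything else is a placeholder):

```python
import numpy as np

# Standard A-weighting corrections (dB) for octave bands 63 Hz .. 8 kHz
A_WEIGHT = {63: -26.2, 125: -16.1, 250: -8.6, 500: -3.2,
            1000: 0.0, 2000: 1.2, 4000: 1.0, 8000: -1.1}

def a_weighted_level(band_levels):
    """Overall A-weighted level from unweighted octave-band levels (dB)."""
    weighted = [lvl + A_WEIGHT[f] for f, lvl in band_levels.items()]
    return 10.0 * np.log10(np.sum(10.0 ** (np.array(weighted) / 10.0)))

# Hypothetical road-traffic-like octave spectrum (unweighted dB)
spectrum = {63: 72, 125: 70, 250: 66, 500: 64, 1000: 62, 2000: 58,
            4000: 52, 8000: 44}
print(round(a_weighted_level(spectrum), 1))   # single dBA value
```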
Underwater Acoustic Source Localisation Among Blind and Sighted Scuba Divers
Cambi, Jacopo; Livi, Ludovica; Livi, Walter
2017-01-01
Objectives Many blind individuals demonstrate enhanced auditory spatial discrimination or localisation of sound sources in comparison to sighted subjects. However, this hypothesis has not yet been confirmed with regards to underwater spatial localisation. This study therefore aimed to investigate underwater acoustic source localisation among blind and sighted scuba divers. Methods This study took place between February and June 2015 in Elba, Italy, and involved two experimental groups of divers with either acquired (n = 20) or congenital (n = 10) blindness and a control group of 30 sighted divers. Each subject took part in five attempts at an under-water acoustic source localisation task, in which the divers were requested to swim to the source of a sound originating from one of 24 potential locations. The control group had their sight obscured during the task. Results The congenitally blind divers demonstrated significantly better underwater sound localisation compared to the control group or those with acquired blindness (P = 0.0007). In addition, there was a significant correlation between years of blindness and underwater sound localisation (P <0.0001). Conclusion Congenital blindness was found to positively affect the ability of a diver to recognise the source of a sound in an underwater environment. As the correct localisation of sounds underwater may help individuals to avoid imminent danger, divers should perform sound localisation tests during training sessions. PMID:28690888
The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl
Baxter, Caitlin S.; Takahashi, Terry T.
2013-01-01
Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643–655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map. PMID:23175801
Poletti, Mark A; Betlehem, Terence; Abhayapala, Thushara D
2014-07-01
Higher order sound sources of Nth order can radiate sound with 2N + 1 orthogonal radiation patterns, which can be represented as phase modes or, equivalently, amplitude modes. This paper shows that each phase mode response produces a spiral wave front with a different spiral rate, and therefore a different direction of arrival of sound. Hence, for a given receiver position a higher order source is equivalent to a linear array of 2N + 1 monopole sources. This interpretation suggests performance similar to a circular array of higher order sources can be produced by an array of sources, each of which consists of a line array having monopoles at the apparent source locations of the corresponding phase modes. Simulations of higher order arrays and arrays of equivalent line sources are presented. It is shown that the interior fields produced by the two arrays are essentially the same, but that the exterior fields differ because the higher order sources produce different equivalent source locations for field positions outside the array. This work provides an explanation of the fact that an array of L Nth order sources can reproduce sound fields whose accuracy approaches the performance of (2N + 1)L monopoles.
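The 2N + 1 phase modes referred to here are the azimuthal harmonics exp(imφ) for m = -N..N, which are mutually orthogonal around the source; a quick numerical check of that orthogonality (the order is chosen arbitrarily):

```python
import numpy as np

N = 2                                   # source order
phi = np.linspace(0, 2 * np.pi, 360, endpoint=False)
modes = np.array([np.exp(1j * m * phi) for m in range(-N, N + 1)])  # 2N+1 rows

# Orthogonality over the circle: the normalized Gram matrix is the identity
gram = modes @ modes.conj().T / len(phi)
print(np.allclose(gram, np.eye(2 * N + 1)))   # True
```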
Structure of supersonic jet flow and its radiated sound
NASA Technical Reports Server (NTRS)
Mankbadi, Reda R.; Hayer, M. Ehtesham; Povinelli, Louis A.
1994-01-01
The present paper explores the use of large-eddy simulations as a tool for predicting noise from first principles. A high-order numerical scheme is used to perform large-eddy simulations of a supersonic jet flow with emphasis on capturing the time-dependent flow structure representing the sound source. The wavelike nature of this structure under random inflow disturbances is demonstrated. This wavelike structure is then enhanced by taking the inflow disturbances to be purely harmonic. Application of Lighthill's theory to calculate the far-field noise, with the sound source obtained from the calculated time-dependent near field, is demonstrated. Alternative approaches to coupling the near-field sound source to the far-field sound are discussed.
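Lighthill's acoustic analogy, invoked here for the far-field calculation, rearranges the exact flow equations into an inhomogeneous wave equation whose source term is built from the computed near field. In standard notation (a reminder of the general result, not an equation quoted from the paper):

\[
\frac{\partial^2 \rho'}{\partial t^2} - c_0^2 \nabla^2 \rho' = \frac{\partial^2 T_{ij}}{\partial x_i \, \partial x_j},
\qquad
T_{ij} = \rho u_i u_j + \bigl(p' - c_0^2 \rho'\bigr)\delta_{ij} - \tau_{ij},
\]

where \(\rho'\) and \(p'\) are the density and pressure fluctuations, \(c_0\) the ambient sound speed, and \(\tau_{ij}\) the viscous stress; the far-field sound follows from a volume integral of \(T_{ij}\) over the simulated jet.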
Spacecraft Internal Acoustic Environment Modeling
NASA Technical Reports Server (NTRS)
Chu, Shao-sheng R.; Allen, Christopher S.
2009-01-01
Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. In FY09, the physical mockup developed in FY08, with interior geometric shape similar to the Orion CM (Crew Module) IML (Interior Mold Line), was used to validate SEA (Statistical Energy Analysis) acoustic model development with realistic ventilation fan sources. The sound power levels of these sources were unknown a priori, as opposed to previous studies in which an RSS (Reference Sound Source) with a known sound power level was used. The modeling results were evaluated based on comparisons to measurements of sound pressure levels over a wide frequency range, including the frequency range where SEA gives good results. Sound intensity measurement was performed over a rectangular-shaped grid system enclosing the ventilation fan source. Sound intensities were measured at the top, front, back, right, and left surfaces of the grid system. Sound intensity at the bottom surface was not measured, but sound blocking material was placed under the bottom surface to reflect most of the incident sound energy back to the remaining measured surfaces. Integrating the measured sound intensities over the measured surfaces yields an estimate of the sound power of the source. The reverberation time T60 of the mockup interior had been modified to match reverberation levels of the ISS US Lab interior for the speech frequency bands, i.e., 0.5, 1, 2, and 4 kHz, by attaching appropriately sized Thinsulate sound absorption material to the interior wall of the mockup. Sound absorption of Thinsulate was modeled in three ways: the Sabine equation with the measured mockup interior reverberation time T60, a layup model based on past impedance tube testing, and the layup model plus an air absorption correction. The evaluation/validation was carried out by acquiring octave band microphone data simultaneously at ten fixed locations throughout the mockup. SPLs (Sound Pressure Levels) predicted by our SEA model match well with measurements for our CM mockup, which has a more complicated shape. Additionally in FY09, a background NC (Noise Criterion) noise simulation and an MRT (Modified Rhyme Test) were developed and performed in the mockup to determine the maximum noise level in the CM habitable volume for fair crew voice communications. Numerous demonstrations of the simulated noise environment in the mockup and the associated SIL (Speech Interference Level) via MRT were performed for various communities, including members from NASA and Orion prime-/sub-contractors. Also, a new HSIR (Human-Systems Integration Requirement) for limiting pre- and post-landing SIL was proposed.
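Two of the quantities described above, the source sound power obtained by integrating intensity over the measurement surfaces and the Sabine estimate of reverberation time, can be sketched as follows; all areas, intensities, and absorption values are placeholders, not Orion mockup data:

```python
import numpy as np

def sound_power(intensities_w_m2, areas_m2):
    """Sound power (W) from normal intensity averaged on each surface patch."""
    return float(np.dot(intensities_w_m2, areas_m2))

def sabine_t60(volume_m3, surface_m2, alpha):
    """Sabine reverberation time: T60 = 0.161 * V / (S * alpha)."""
    return 0.161 * volume_m3 / (surface_m2 * alpha)

# Five measured faces of a box enclosing the fan (bottom face blocked)
intensity = [2.0e-6, 1.5e-6, 1.8e-6, 1.2e-6, 1.4e-6]   # W/m^2, assumed
area = [0.25, 0.20, 0.20, 0.15, 0.15]                   # m^2, assumed
W = sound_power(intensity, area)
print(10 * np.log10(W / 1e-12))          # sound power level, dB re 1 pW

print(sabine_t60(volume_m3=15.0, surface_m2=40.0, alpha=0.3))   # seconds
```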
Chang, Son-A; Won, Jong Ho; Kim, HyangHee; Oh, Seung-Ha; Tyler, Richard S.; Cho, Chang Hyun
2018-01-01
Background and Objectives It is important to understand the frequency region of cues used, and not used, by cochlear implant (CI) recipients. Speech and environmental sound recognition by individuals with CI and normal-hearing (NH) was measured. Gradients were also computed to evaluate the pattern of change in identification performance with respect to the low-pass filtering or high-pass filtering cutoff frequencies. Subjects and Methods Frequency-limiting effects were implemented in the acoustic waveforms by passing the signals through low-pass filters (LPFs) or high-pass filters (HPFs) with seven different cutoff frequencies. Identification of Korean vowels and consonants produced by a male and female speaker and environmental sounds was measured. Crossover frequencies were determined for each identification test, where the LPF and HPF conditions show the identical identification scores. Results CI and NH subjects showed changes in identification performance in a similar manner as a function of cutoff frequency for the LPF and HPF conditions, suggesting that the degraded spectral information in the acoustic signals may similarly constrain the identification performance for both subject groups. However, CI subjects were generally less efficient than NH subjects in using the limited spectral information for speech and environmental sound identification due to the inefficient coding of acoustic cues through the CI sound processors. Conclusions This finding will provide vital information, for Korean, on how the frequency information received through a CI processor for speech and environmental sounds differs from that received with normal hearing. PMID:29325391
Chang, Son-A; Won, Jong Ho; Kim, HyangHee; Oh, Seung-Ha; Tyler, Richard S; Cho, Chang Hyun
2017-12-01
It is important to understand the frequency region of cues used, and not used, by cochlear implant (CI) recipients. Speech and environmental sound recognition by individuals with CI and normal-hearing (NH) was measured. Gradients were also computed to evaluate the pattern of change in identification performance with respect to the low-pass filtering or high-pass filtering cutoff frequencies. Frequency-limiting effects were implemented in the acoustic waveforms by passing the signals through low-pass filters (LPFs) or high-pass filters (HPFs) with seven different cutoff frequencies. Identification of Korean vowels and consonants produced by a male and female speaker and environmental sounds was measured. Crossover frequencies were determined for each identification test, where the LPF and HPF conditions show the identical identification scores. CI and NH subjects showed changes in identification performance in a similar manner as a function of cutoff frequency for the LPF and HPF conditions, suggesting that the degraded spectral information in the acoustic signals may similarly constrain the identification performance for both subject groups. However, CI subjects were generally less efficient than NH subjects in using the limited spectral information for speech and environmental sound identification due to the inefficient coding of acoustic cues through the CI sound processors. This finding will provide vital information, for Korean, on how the frequency information received through a CI processor for speech and environmental sounds differs from that received with normal hearing.
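The low-pass/high-pass manipulation described in both versions of this abstract can be sketched with ordinary Butterworth filters; the cutoff list, filter order, and sample rate below are illustrative, not the study's exact processing chain:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def frequency_limit(signal, fs, cutoff_hz, kind="lowpass", order=6):
    """Return a low-pass or high-pass filtered copy of the signal."""
    sos = butter(order, cutoff_hz, btype=kind, fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

fs = 16000
x = np.random.randn(fs)                               # 1 s of placeholder audio
cutoffs = [250, 500, 1000, 1500, 2000, 3000, 4000]    # assumed values
lp_versions = [frequency_limit(x, fs, fc, "lowpass") for fc in cutoffs]
hp_versions = [frequency_limit(x, fs, fc, "highpass") for fc in cutoffs]
```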
2007-01-01
... deposition directly to Puget Sound was an important source of PAHs, polybrominated diphenyl ethers (PBDEs), and heavy metals. In most cases, atmospheric ... versus Atmospheric Fluxes ... PAH Source Apportionment ... temperature inversions) on air quality during the wet season. A semi-quantitative apportionment study permitted a first-order characterization of source ...
Binaural Processing of Multiple Sound Sources
2016-08-18
Sound Source Localization Identification, and Sound Source Localization When Listeners Move. The CI research was also supported by an NIH grant...8217Cochlear Implant Performance in Realistic Listening Environments,’ Dr. Michael Dorman, Principal Investigator, Dr. William Yost unpaid advisor. The other... Listeners Move. The CI research was also supported by an NIH grant (“Cochlear Implant Performance in Realistic Listening Environments,” Dr. Michael Dorman
Loebach, Jeremy L; Pisoni, David B; Svirsky, Mario A
2009-12-01
The objective of this study was to assess whether training on speech processed with an eight-channel noise vocoder to simulate the output of a cochlear implant would produce transfer of auditory perceptual learning to the recognition of nonspeech environmental sounds, the identification of speaker gender, and the discrimination of talkers by voice. Twenty-four normal-hearing subjects were trained to transcribe meaningful English sentences processed with a noise vocoder simulation of a cochlear implant. An additional 24 subjects served as an untrained control group and transcribed the same sentences in their unprocessed form. All subjects completed pre- and post-test sessions in which they transcribed vocoded sentences to provide an assessment of training efficacy. Transfer of perceptual learning was assessed using a series of closed set, nonlinguistic tasks: subjects identified talker gender, discriminated the identity of pairs of talkers, and identified ecologically significant environmental sounds from a closed set of alternatives. Although both groups of subjects showed significant pre- to post-test improvements, subjects who transcribed vocoded sentences during training performed significantly better at post-test than those in the control group. Both groups performed equally well on gender identification and talker discrimination. Subjects who received explicit training on the vocoded sentences, however, performed significantly better on environmental sound identification than the untrained subjects. Moreover, across both groups, pre-test speech performance and, to a higher degree, post-test speech performance, were significantly correlated with environmental sound identification. For both groups, environmental sounds that were characterized as having more salient temporal information were identified more often than environmental sounds that were characterized as having more salient spectral information. Listeners trained to identify noise-vocoded sentences showed evidence of transfer of perceptual learning to the identification of environmental sounds. In addition, the correlation between environmental sound identification and sentence transcription indicates that subjects who were better able to use the degraded acoustic information to identify the environmental sounds were also better able to transcribe the linguistic content of novel sentences. Both trained and untrained groups performed equally well (approximately 75% correct) on the gender-identification task, indicating that training did not have an effect on the ability to identify the gender of talkers. Although better than chance, performance on the talker discrimination task was poor overall (approximately 55%), suggesting that either explicit training is required to discriminate talkers' voices reliably or that additional information (perhaps spectral in nature) not present in the vocoded speech is required to excel in such tasks. Taken together, the results suggest that although transfer of auditory perceptual learning with spectrally degraded speech does occur, explicit task-specific training may be necessary for tasks that cannot rely on temporal information alone.
Loebach, Jeremy L.; Pisoni, David B.; Svirsky, Mario A.
2009-01-01
Objective The objective of this study was to assess whether training on speech processed with an 8-channel noise vocoder to simulate the output of a cochlear implant would produce transfer of auditory perceptual learning to the recognition of non-speech environmental sounds, the identification of speaker gender, and the discrimination of talkers by voice. Design Twenty-four normal hearing subjects were trained to transcribe meaningful English sentences processed with a noise vocoder simulation of a cochlear implant. An additional twenty-four subjects served as an untrained control group and transcribed the same sentences in their unprocessed form. All subjects completed pre- and posttest sessions in which they transcribed vocoded sentences to provide an assessment of training efficacy. Transfer of perceptual learning was assessed using a series of closed-set, nonlinguistic tasks: subjects identified talker gender, discriminated the identity of pairs of talkers, and identified ecologically significant environmental sounds from a closed set of alternatives. Results Although both groups of subjects showed significant pre- to posttest improvements, subjects who transcribed vocoded sentences during training performed significantly better at posttest than subjects in the control group. Both groups performed equally well on gender identification and talker discrimination. Subjects who received explicit training on the vocoded sentences, however, performed significantly better on environmental sound identification than the untrained subjects. Moreover, across both groups, pretest speech performance, and to a higher degree posttest speech performance, were significantly correlated with environmental sound identification. For both groups, environmental sounds that were characterized as having more salient temporal information were identified more often than environmental sounds that were characterized as having more salient spectral information. Conclusions Listeners trained to identify noise-vocoded sentences showed evidence of transfer of perceptual learning to the identification of environmental sounds. In addition, the correlation between environmental sound identification and sentence transcription indicates that subjects who were better able to utilize the degraded acoustic information to identify the environmental sounds were also better able to transcribe the linguistic content of novel sentences. Both trained and untrained groups performed equally well (~75% correct) on the gender identification task, indicating that training did not have an effect on the ability to identify the gender of talkers. Although better than chance, performance on the talker discrimination task was poor overall (~55%), suggesting that either explicit training is required to reliably discriminate talkers’ voices, or that additional information (perhaps spectral in nature) not present in the vocoded speech is required to excel in such tasks. Taken together, the results suggest that while transfer of auditory perceptual learning with spectrally degraded speech does occur, explicit task-specific training may be necessary for tasks that cannot rely on temporal information alone. PMID:19773659
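The noise-vocoder processing used in both versions of this study divides speech into bands, extracts each band's envelope, and uses it to modulate band-limited noise; a compact sketch under assumed parameters (band edges, envelope cutoff, sample rate are placeholders, not the study's settings):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def noise_vocode(speech, fs, edges, env_cutoff=160.0):
    """Noise vocoder: each band's envelope modulates band-limited noise."""
    env_sos = butter(2, env_cutoff, btype="lowpass", fs=fs, output="sos")
    noise = np.random.randn(len(speech))
    out = np.zeros_like(speech, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = bandpass(speech, lo, hi, fs)
        envelope = sosfiltfilt(env_sos, np.abs(band))   # rectify and smooth
        out += np.clip(envelope, 0, None) * bandpass(noise, lo, hi, fs)
    return out / (np.max(np.abs(out)) + 1e-12)

fs = 16000
edges = [100, 220, 400, 700, 1100, 1800, 2900, 4600, 7000]  # 8 assumed bands
speech = np.random.randn(fs)          # stand-in for a recorded sentence
vocoded = noise_vocode(speech, fs, edges)
```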
Psychophysical investigation of an auditory spatial illusion in cats: the precedence effect.
Tollin, Daniel J; Yin, Tom C T
2003-10-01
The precedence effect (PE) describes several spatial perceptual phenomena that occur when similar sounds are presented from two different locations and separated by a delay. The mechanisms that produce the effect are thought to be responsible for the ability to localize sounds in reverberant environments. Although the physiological bases for the PE have been studied, little is known about how these sounds are localized by species other than humans. Here we used the search coil technique to measure the eye positions of cats trained to saccade to the apparent locations of sounds. To study the PE, brief broadband stimuli were presented from two locations, with a delay between their onsets; the delayed sound meant to simulate a single reflection. Although the cats accurately localized single sources, the apparent locations of the paired sources depended on the delay. First, the cats exhibited summing localization, the perception of a "phantom" sound located between the sources, for delays < +/-400 micros for sources positioned in azimuth along the horizontal plane, but not for sources positioned in elevation along the sagittal plane. Second, consistent with localization dominance, for delays from 400 micros to about 10 ms, the cats oriented toward the leading source location only, with little influence of the lagging source, both for horizontally and vertically placed sources. Finally, the echo threshold was reached for delays >10 ms, where the cats first began to orient to the lagging source on some trials. These data reveal that cats experience the PE phenomena similarly to humans.
Acoustic signatures of sound source-tract coupling.
Arneodo, Ezequiel M; Perl, Yonatan Sanz; Mindlin, Gabriel B
2011-04-01
Birdsong is a complex behavior, which results from the interaction between a nervous system and a biomechanical peripheral device. While much has been learned about how complex sounds are generated in the vocal organ, little has been learned about the signature on the vocalizations of the nonlinear effects introduced by the acoustic interactions between a sound source and the vocal tract. The variety of morphologies among bird species makes birdsong a most suitable model to study phenomena associated to the production of complex vocalizations. Inspired by the sound production mechanisms of songbirds, in this work we study a mathematical model of a vocal organ, in which a simple sound source interacts with a tract, leading to a delay differential equation. We explore the system numerically, and by taking it to the weakly nonlinear limit, we are able to examine its periodic solutions analytically. By these means we are able to explore the dynamics of oscillatory solutions of a sound source-tract coupled system, which are qualitatively different from those of a sound source-filter model of a vocal organ. Nonlinear features of the solutions are proposed as the underlying mechanisms of observed phenomena in birdsong, such as unilaterally produced "frequency jumps," enhancement of resonances, and the shift of the fundamental frequency observed in heliox experiments. ©2011 American Physical Society
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khangaonkar, Tarang; Sackmann, Brandon S.; Long, Wen
2012-10-01
The Salish Sea, including Puget Sound, is a large estuarine system bounded by over seven thousand miles of complex shorelines; it consists of several subbasins and many large inlets with distinct properties of their own. Pacific Ocean water enters Puget Sound through the Strait of Juan de Fuca at depth over the Admiralty Inlet sill. Ocean water mixed with freshwater discharges from runoff, rivers, and wastewater outfalls exits Puget Sound through the brackish surface outflow layer. Nutrient pollution is considered one of the largest threats to Puget Sound. There is considerable interest in understanding the effect of nutrient loads on the water quality and ecological health of Puget Sound in particular and the Salish Sea as a whole. The Washington State Department of Ecology (Ecology) contracted with Pacific Northwest National Laboratory (PNNL) to develop a coupled hydrodynamic and water quality model. The water quality model simulates algae growth, dissolved oxygen (DO), and nutrient dynamics in Puget Sound to inform potential Puget Sound-wide nutrient management strategies. Specifically, the project is expected to help determine 1) whether current and potential future nitrogen loadings from point and non-point sources are significantly impairing water quality at a large scale and 2) what level of nutrient reduction is necessary to reduce or control human impacts to DO levels in the sensitive areas. The project did not include any additional data collection but instead relied on currently available information. This report describes the model development effort conducted during the period 2009 to 2012 under a U.S. Environmental Protection Agency (EPA) cooperative agreement with PNNL, Ecology, and the University of Washington awarded under the National Estuary Program.
Khajenasiri, Farahnaz; Zamanian, Alireza; Zamanian, Zahra
2016-03-01
Sound is among the significant environmental factors for people's health; it plays an important role in both physical and psychological injury and also affects individuals' performance and productivity. The aim of this study was to determine the effect of exposure to high noise levels on performance and the rate of error in manual activities. This was an interventional study conducted on 50 students at Shiraz University of Medical Sciences (25 males and 25 females) in which each person served as his or her own control to assess the effect of noise on performance at sound levels of 70, 90, and 110 dB, using two factors (physical features and different conditions of the sound source) and the Two-Arm Coordination Test. The data were analyzed using SPSS version 16. Repeated-measures analyses were used to compare the length of performance as well as the errors measured in the test. Based on the results, we found a direct and significant association between the level of sound and the length of performance. Moreover, the participants' performance differed significantly across sound levels (at 110 dB as opposed to 70 and 90 dB, p < 0.05 and p < 0.001, respectively). This study found that a sound level of 110 dB had an important effect on the individuals' performance, i.e., performance decreased.
Effects of Soundscape on the Environmental Restoration in Urban Natural Environments.
Zhang, Yuan; Kang, Jian; Kang, Joe
2017-01-01
According to the attention restoration theory, directed attention is a limited physiological resource and is susceptible to fatigue by overuse. Natural environments are a healthy resource that allows and promotes the restoration of individuals from their state of directed attention fatigue. This process is called environmental restoration, and it is affected both positively and negatively by environmental factors. By considering the relationship among the three components of soundscape, that is, people, sound and the environment, this study aims to explore the effects of soundscape on environmental restoration in urban natural environments. A field experiment was conducted with 70 participants (four groups) in an urban natural environment (Shenyang, China). Directed attention was first depleted with a 50-min 'consumption' phase, followed by a baseline measurement of attention level. Three groups then engaged in 40 min of restoration in the respective environments with similar visual surroundings but with different sounds present, after which attention levels were re-tested. The fourth group did not undergo restoration and was immediately re-tested. The difference between the two test scores, corrected for the practice effect, represents the attention restoration of individuals exposed to the respective environments. An analysis of variance was performed, demonstrating that the differences between the mean values for each group were statistically significant [sig. = 0.027 (<0.050)]. The results showed that the mean values (confidence interval of 95%) of each group are as follows: 'natural sounds group' (8.4), 'traffic sounds group' (2.4) and 'machine sounds group' (-1.8). It can be concluded that (1) urban natural environments, with natural sounds, have a positive effect on the restoration of an individual's attention and (2) the presence of different types of sounds has significantly divergent effects on environmental restoration.
Anatomical Correlates of Non-Verbal Perception in Dementia Patients
Lin, Pin-Hsuan; Chen, Hsiu-Hui; Chen, Nai-Ching; Chang, Wen-Neng; Huang, Chi-Wei; Chang, Ya-Ting; Hsu, Shih-Wei; Hsu, Che-Wei; Chang, Chiung-Chih
2016-01-01
Purpose: Patients with dementia who have dissociations in verbal and non-verbal sound processing may offer insights into the anatomic basis for highly related auditory modes. Methods: To determine the neuronal networks underlying non-verbal perception, 16 patients with Alzheimer’s dementia (AD), 15 with behavior variant fronto-temporal dementia (bv-FTD), and 14 with semantic dementia (SD) were evaluated and compared with 15 age-matched controls. Neuropsychological and auditory perceptive tasks were included to test the ability to compare pitch changes and scale-violated melodies, and to name environmental sounds and associate them with pictures. Brain 3D T1 images were acquired, and voxel-based morphometry (VBM) was used to compare and correlate the volumetric measures with task scores. Results: The SD group scored the lowest among the three groups in the pitch and scale-violated melody tasks. In the environmental sound test, the SD group also showed impairment both in naming and in associating sounds with pictures. The AD and bv-FTD groups showed no differences from the controls in any test. VBM with task score correlation showed that atrophy in the right supra-marginal and superior temporal gyri was strongly related to deficits in detecting violated scales, while atrophy in the bilateral anterior temporal poles and left medial temporal structures was related to deficits in environmental sound recognition. Conclusions: Auditory perception of pitch, scale-violated melody or environmental sound reflects anatomical degeneration in dementia patients, and the processing of non-verbal sounds is mediated by distinct neural circuits. PMID:27630558
2010-09-30
environmental impact than do 5 historic approaches used in Navy environmental assessments (EA) and impact statements (EIS). Many previous methods...of Sound on the Marine Environment (ESME) program contributes to the ultimate goal of creating an environmental assessment tool for activities that...expand the species library available for use in 3MB, 2) continue incorporating the ability to project environmental influences on simulated animal
Environmental Exposure and Design Criteria for Offshore Oil and Gas Structures
1980-05-01
reliability analysis. Because there are no clear lines of demarcation between them, these methods are often used in varying combinations. Sound ...cludes that OCSEAP not now effectively contribute...to the accrual of sound scientific information adequate for OCS management." One reason for such a...procedures for resolving differences need to be developed. Sound and timely assessments of environmental exposure risks will require: 1) adequate levels of
NASA Astrophysics Data System (ADS)
Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng
2016-05-01
In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.
Intensity-invariant coding in the auditory system.
Barbour, Dennis L
2011-11-01
The auditory system faithfully represents sufficient details from sound sources such that downstream cognitive processes are capable of acting upon this information effectively even in the face of signal uncertainty, degradation or interference. This robust sound source representation leads to an invariance in perception vital for animals to interact effectively with their environment. Due to unique nonlinearities in the cochlea, sound representations early in the auditory system exhibit a large amount of variability as a function of stimulus intensity. In other words, changes in stimulus intensity, such as for sound sources at differing distances, create a unique challenge for the auditory system to encode sounds invariantly across the intensity dimension. This challenge and some strategies available to sensory systems to eliminate intensity as an encoding variable are discussed, with a special emphasis upon sound encoding. Copyright © 2011 Elsevier Ltd. All rights reserved.
2015-09-30
experiment was conducted in Broad Sound of Massachusetts Bay using the AUV Unicorn, a 147 dB omnidirectional Lubell source, and an open-ended steel pipe... steel pipe target (Figure C) was dropped at an approximate local coordinate position of (x,y)=(170,155). The location was estimated using ship...position when the target was dropped, but was only accurate within 10-15 m. The orientation of the target was unknown. [Figure C: open-ended steel pipe target]
1982-12-01
Coppens showed great kindness by accepting supervision of this research when time was short. His concern, understanding and direction led to an...related to computer processing time and storage requirements. These factors will not be addressed directly in this research because the processing...computational efficiency. Disadvantages are a uniform mesh and periodic boundary conditions to satisfy the FFT, and filtering of the sound speed profile by
1995-05-01
at zero source level to allow time for any mobile marine animal who was annoyed by the sound to depart the affected area; and project facilities would...using conventional thermometers); autonomous polar hydrophones; and a dual site experiment using mobile playback experiments. Of the twelve alternatives...
A social survey on the noise impact in open-plan working environments in China.
Zhang, Mei; Kang, Jian; Jiao, Fenglei
2012-11-01
The aim of this study is to reveal noise impact in open-plan working environments in China, through a series of questionnaire surveys and acoustic measurements in typical open-plan working environments. It has been found that compared to other physical environmental factors in open-plan working environments, people are much less satisfied with the acoustic environment. The noise impact in the surveyed working environments is rather significant, in terms of sound level inside the office, understanding of colleagues' conversation, and the use of background music such as music players. About 30-50% of the interviewees think that various noise sources inside and outside offices are 'very disturbing' and 'disturbing', and the most annoying sounds include noises from outside, ventilation systems, office equipment, and keyboard typing. Using higher panels to separate work space, or working in enclosed offices, are regarded as effective improvement measures, whereas introducing natural sounds to mask unwanted sounds seems to be not preferable. There are significant correlations between the evaluation of acoustic environment and office symptoms, including hypersensitivity to loud sounds, easily getting tired and depression. There are also significant correlations between evaluation of various acoustics-related factors and certain statements relating to job satisfaction, including sensitivity to noise, as well as whether conversations could be heard by colleagues. Copyright © 2012 Elsevier B.V. All rights reserved.
Orban, David A; Soltis, Joseph; Perkins, Lori; Mellen, Jill D
2017-05-01
A clear need for evidence-based animal management in zoos and aquariums has been expressed by industry leaders. Here, we show how individual animal welfare monitoring can be combined with measurement of environmental conditions to inform science-based animal management decisions. Over the last several years, Disney's Animal Kingdom® has been undergoing significant construction and exhibit renovation, warranting institution-wide animal welfare monitoring. Animal care and science staff developed a model that tracked animal keepers' daily assessments of an animal's physical health, behavior, and responses to husbandry activity; these data were matched to different external stimuli and environmental conditions, including sound levels. A case study of a female giant anteater and her environment is presented to illustrate how this process worked. Associated with this case, several sound-reducing barriers were tested for efficacy in mitigating sound. Integrating daily animal welfare assessment with environmental monitoring can lead to a better understanding of animals and their sensory environment and positively impact animal welfare. © 2017 Wiley Periodicals, Inc.
Numerical Models for Sound Propagation in Long Spaces
NASA Astrophysics Data System (ADS)
Lai, Chenly Yuen Cheung
Both reverberation time and steady-state sound field are the key elements for assessing the acoustic condition in an enclosed space. They affect noise propagation, speech intelligibility, clarity index, and definition. Since the sound field in a long space is non-diffuse, classical room acoustics theory does not apply in this situation. The ray tracing technique and the image source method are the two common models used today to determine both reverberation time and steady-state sound field in long enclosures. Although both models can give an accurate estimate of reverberation times and steady-state sound fields, directly or indirectly, they often involve time-consuming calculations. To simplify the acoustic consideration, a theoretical formulation has been developed for predicting both steady-state sound fields and reverberation times in street canyons. The prediction model is further developed to predict the steady-state sound field in a long enclosure. Apart from the straight long enclosure, there are other variations such as a cross junction, a long enclosure with a T-intersection, and a U-turn long enclosure. In the present study, theoretical and experimental investigations were conducted to develop formulae for predicting reverberation times and steady-state sound fields in a junction of a street canyon and in a long enclosure with a T-intersection. The theoretical models are validated by comparing the numerical predictions with published experimental results. The theoretical results are also compared with precise indoor measurements and large-scale outdoor experimental results. Most previous acoustical studies of long enclosures have focused on the monopole sound source. Besides non-directional sources, however, many noise sources in long enclosures are dipole-like, such as train noise and fan noise. In order to study the characteristics of directional noise sources, a review of available dipole sources was conducted, and a dipole source was constructed and subsequently used for experimental studies. In addition, a theoretical model was developed for predicting dipole sound fields. The theoretical model can be used to study the effect of a dipole source on speech intelligibility in long enclosures.
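To make the image source idea concrete, the following toy calculation, a rough sketch rather than the dissertation's formulation, sums incoherent contributions from the mirror images of a monopole between two parallel reflecting facades, as in an idealized street canyon; the reflection coefficient, image order, and geometry are assumed values for illustration only.

```python
import numpy as np

def canyon_level(src, rec, width, refl_coef=0.8, max_order=50):
    """Incoherent image-source sum between two parallel walls at x=0 and x=width.

    src, rec  : (x, y, z) positions in meters
    refl_coef : energy reflection coefficient per wall bounce (assumed)
    Returns a relative level in dB for a unit-power monopole.
    """
    xs, ys, zs = src
    xr, yr, zr = rec
    energy = 0.0
    for n in range(-max_order, max_order + 1):
        # each order n contributes one even- and one odd-reflection image
        for sign, n_refl in ((+1, abs(2 * n)), (-1, abs(2 * n - 1))):
            xi = 2 * n * width + sign * xs          # image x-coordinate
            r2 = (xi - xr) ** 2 + (ys - yr) ** 2 + (zs - zr) ** 2
            energy += refl_coef ** n_refl / (4 * np.pi * r2)
    return 10 * np.log10(energy)

# Example: source and receiver 10 m apart along a hypothetical 20 m wide canyon
print(canyon_level((5.0, 0.0, 1.5), (5.0, 10.0, 1.5), width=20.0))
```

Summing image energies rather than complex pressures corresponds to the incoherent (energetic) form of the image source method, which is the simplification adopted here for brevity.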
Sound source tracking device for telematic spatial sound field reproduction
NASA Astrophysics Data System (ADS)
Cardenas, Bruno
This research describes an algorithm that localizes sound sources for use in telematic applications. The localization algorithm is based on amplitude differences between the channels of a microphone array of directional shotgun microphones. The amplitude differences are used to locate multiple performers and to reproduce their voices, recorded at close distance with lavalier microphones, in a spatially corrected manner using a loudspeaker rendering system. In order to track multiple sound sources in parallel, the information gained from the lavalier microphones is used to estimate the signal-to-noise ratio between each performer and the concurrent performers.
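A minimal sketch of the amplitude-difference idea, under simplifying assumptions not stated in the abstract: each shotgun microphone is modeled with a cardioid power pattern, and the source bearing is taken as the grid angle whose predicted inter-channel level differences best match the measured ones. The pattern model, search grid, and channel count are illustrative.

```python
import numpy as np

def estimate_bearing(levels_db, mic_azimuths_deg, grid_deg=np.arange(0.0, 360.0, 1.0)):
    """Estimate source azimuth from per-channel levels of directional microphones.

    Assumes a cardioid amplitude pattern per microphone and fits the observed
    level differences (in dB) over a grid of candidate bearings.
    """
    mic_az = np.radians(np.asarray(mic_azimuths_deg))
    obs = np.asarray(levels_db, dtype=float)
    obs = obs - np.mean(obs)                            # remove absolute level
    best, best_err = None, np.inf
    for theta in np.radians(grid_deg):
        gains = 0.5 * (1.0 + np.cos(theta - mic_az))    # cardioid gain per mic
        pred = 20.0 * np.log10(gains + 1e-6)
        pred -= np.mean(pred)
        err = np.sum((obs - pred) ** 2)
        if err < best_err:
            best, best_err = theta, err
    return np.degrees(best)

# Example: four hypothetical shotgun mics facing 0, 90, 180, 270 degrees
print(estimate_bearing([-3.0, -1.0, -20.0, -25.0], [0, 90, 180, 270]))
```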
A home automation based environmental sound alert for people experiencing hearing loss.
Mielke, Matthias; Bruck, Rainer
2016-08-01
Different assistive technologies are available for deaf people (i.e., deaf, deafened, and hard of hearing). Besides the well-known hearing aid, devices for detection of sound events that occur at home or at work (e.g., doorbell, telephone) are available. Despite the technological progress of recent years and the resulting new possibilities, the basic functions and concepts of such devices have not changed. The user still needs special assistive technology that is bound to the home or work environment. In this contribution, a new concept for awareness of events in buildings is presented. In contrast to state-of-the-art assistive devices, it makes use of modern information and communication technology and home automation technology, and thus offers the prospect of cheap implementation and higher comfort for the user. In this concept, events are indicated by notifications that are sent over a Bluetooth Low Energy mesh network from a source to the user. The notifications are received by the user's smartwatch, and the event is indicated by vibration and an icon representing its source.
A Corticothalamic Circuit Model for Sound Identification in Complex Scenes
Otazu, Gonzalo H.; Leibold, Christian
2011-01-01
The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668
Quantitative measurement of pass-by noise radiated by vehicles running at high speeds
NASA Astrophysics Data System (ADS)
Yang, Diange; Wang, Ziteng; Li, Bing; Luo, Yugong; Lian, Xiaomin
2011-03-01
It has been a challenge in the past to accurately locate and quantify the pass-by noise radiated by running vehicles. A system composed of a microphone array is developed in the current work for this purpose. An acoustic-holography method for moving sound sources is designed to handle the Doppler effect effectively in the time domain. The effective sound pressure distribution is reconstructed on the surface of a running vehicle. The method achieves high calculation efficiency and is able to quantitatively measure the sound pressure at the sound source and identify the location of the main sound source. The method is also validated by simulation experiments and by measurement tests with known moving speakers. Finally, the engine noise, tire noise, exhaust noise and wind noise of the vehicle running at different speeds are successfully identified by this method.
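The Doppler-handling step that such a method builds on can be sketched as a time-domain de-Dopplerization: for a source passing a fixed microphone at constant speed, emission times are recovered from reception times and the recording is resampled onto a uniform emission-time axis. The sketch below is an assumption-laden illustration (straight-line trajectory, known speed and closest-approach geometry), not the authors' holography algorithm.

```python
import numpy as np

def dedopplerize(p_rec, fs, v, d, t0, c=343.0):
    """Resample a microphone signal onto the source's emission-time axis.

    Assumes a source moving along a straight line at constant speed v (m/s),
    closest-approach distance d (m) at time t0 (s), a fixed microphone, and
    sound speed c. Illustrative only.
    """
    t_rec = np.arange(len(p_rec)) / fs                     # reception times
    # Solve t_rec = t_e + r(t_e)/c for the emission time by fixed-point iteration
    t_e = t_rec.copy()
    for _ in range(20):
        r = np.sqrt(d**2 + (v * (t_e - t0))**2)            # source-mic distance
        t_e = t_rec - r / c
    # Interpolate the received pressure onto a uniform emission-time grid
    t_uniform = np.linspace(t_e[0], t_e[-1], len(p_rec))
    p_emit = np.interp(t_uniform, t_e, p_rec)
    r_uniform = np.sqrt(d**2 + (v * (t_uniform - t0))**2)
    return t_uniform, p_emit * r_uniform                   # undo 1/r spreading
```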
SoundProof: A Smartphone Platform for Wireless Monitoring of Wildlife and Environment
NASA Astrophysics Data System (ADS)
Lukac, M.; Monibi, M.; Lane, M. L.; Howell, L.; Ramanathan, N.; Borker, A.; McKown, M.; Croll, D.; Terschy, B.
2011-12-01
We are developing an open-source, low-cost wildlife and environmental monitoring solution based on Android smartphones. Using a smartphone instead of a traditional microcontroller or single-board computer has several advantages: smartphones are single integrated devices with multiple radios and a battery; they have a robust software interface that enables customization; and they are field-tested by millions of users daily. Consequently, smartphones can improve the cost, configurability, and real-time access to data for environmental monitoring, ultimately replacing existing monitoring solutions, which are proprietary, difficult to customize, expensive, and require labor-intensive maintenance. While smartphones can radically change environmental and wildlife monitoring, there are a number of technical challenges to address. We present our smartphone-based platform, SoundProof, discuss the challenges of building an autonomous system based on Android phones, and describe our ongoing efforts to enable environmental monitoring. Our system is built using robust off-the-shelf hardware and mature open-source software where available, to increase scalability and ease of installation. Key features include: * High-quality acoustic signal collection from external microphones to monitor wildlife populations. * Real-time data access, remote programming, and configuration of the field sensor via wireless cellular or WiFi channels, accessible from a website. * Waterproof packaging and solar charger setup for long-term field deployments. * Rich instrumentation of the end-to-end system to quickly identify and debug problems. * Supplementary mesh networking system with long-range wireless antennae to provide coverage when no cell network is available. We have deployed this system to monitor Rufous-crowned Sparrows on Anacapa Island, Chinese Crested Terns on the Matsu Islands in Taiwan, and Ashy Storm Petrels on Southeast Farallon Island. We have testbeds at two UC Natural Reserves to field-test new or exploratory features before deployment. Side-by-side validation data collected in the field using SoundProof and state-of-the-art wildlife monitoring solutions, including the Cornell ARU and Wildlife Acoustics' Songmeter, demonstrate that acoustic signals collected with cellphones provide sufficient data integrity for measuring the success of bird conservation efforts, measuring bird relative abundance, and detecting elusive species. We are extending this platform to numerous other areas of environmental monitoring. Recent developments such as the Android Open Accessory, the IOIO Board, MicroBridge, Amarino, and Cellbots enable microcontrollers to talk with Android applications, making it affordable and feasible to extend our platform to operate with the most common sensors.
Evolutionary trends in directional hearing
Carr, Catherine E.; Christensen-Dalsgaard, Jakob
2016-01-01
Tympanic hearing is a true evolutionary novelty that arose in parallel within early tetrapods. We propose that in these tetrapods, selection for sound localization in air acted upon pre-existing directionally sensitive brainstem circuits, similar to those in fishes. Auditory circuits in birds and lizards resemble this ancestral, directionally sensitive framework. Despite this anatomical similarity, coding of sound source location differs between birds and lizards. In birds, brainstem circuits compute sound location from interaural cues. Lizards, however, have coupled ears, and do not need to compute source location in the brain. Thus their neural processing of sound direction differs, although all show mechanisms for enhancing sound source directionality. Comparisons with mammals reveal similarly complex interactions between coding strategies and evolutionary history. PMID:27448850
Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid.
Kidd, Gerald
2017-10-17
Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired "target" talker while ignoring the speech from unwanted "masker" talkers and other sources of sound. This listening situation forms the classic "cocktail party problem" described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed in this article. This approach, embodied in a prototype "visually guided hearing aid" (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources. The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based "spatial filter" operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing especially in conditions high in "informational masking." The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in "energetic masking." Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations. Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic beamforming implemented by the VGHA, especially for nearby sources in less reverberant sound fields. Moreover, guiding the beam using eye gaze can be an effective means of sound source enhancement for listening conditions where the target source changes frequently over time as often occurs during turn-taking in a conversation. http://cred.pubs.asha.org/article.aspx?articleid=2601621.
Relation of sound intensity and accuracy of localization.
Farrimond, T
1989-08-01
Tests were carried out on 17 subjects to determine the accuracy of monaural sound localization when the head is not free to turn toward the sound source. Maximum accuracy of localization for a constant-volume sound source coincided with the position for maximum perceived intensity of the sound in the front quadrant. There was a tendency for sounds to be perceived more often as coming from a position directly toward the ear. That is, for sounds in the front quadrant, errors of localization tended to be predominantly clockwise (i.e., biased toward a line directly facing the ear). Errors for sounds occurring in the rear quadrant tended to be anticlockwise. The pinna's differential effect on sound intensity between front and rear quadrants would assist in identifying the direction of movement of objects, for example an insect, passing the ear.
Source sparsity control of sound field reproduction using the elastic-net and the lasso minimizers.
Gauthier, P-A; Lecomte, P; Berry, A
2017-04-01
Sound field reproduction is aimed at the reconstruction of a sound pressure field in an extended area using dense loudspeaker arrays. In some circumstances, sound field reproduction is targeted at the reproduction of a sound field captured using microphone arrays. Although methods and algorithms already exist to convert microphone array recordings to loudspeaker array signals, one remaining research question is how to control the spatial sparsity in the resulting loudspeaker array signals and what would be the resulting practical advantages. Sparsity is an interesting feature for spatial audio since it can drastically reduce the number of concurrently active reproduction sources and, therefore, increase the spatial contrast of the solution at the expense of a difference between the target and reproduced sound fields. In this paper, the application of the elastic-net cost function to sound field reproduction is compared to the lasso cost function. It is shown that the elastic-net can induce solution sparsity and overcomes limitations of the lasso: The elastic-net solves the non-uniqueness of the lasso solution, induces source clustering in the sparse solution, and provides a smoother solution within the activated source clusters.
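For reference, the two minimization problems being compared can be written in a generic form (the notation here is an assumption, not necessarily the paper's): with loudspeaker source strengths q, a propagation matrix G, and target pressures p at the control points,

```latex
\min_{\mathbf{q}} \ \tfrac{1}{2}\,\lVert \mathbf{G}\mathbf{q}-\mathbf{p}\rVert_2^2 \;+\; \lambda\,\lVert \mathbf{q}\rVert_1 \qquad \text{(lasso)}
\min_{\mathbf{q}} \ \tfrac{1}{2}\,\lVert \mathbf{G}\mathbf{q}-\mathbf{p}\rVert_2^2 \;+\; \lambda_1\,\lVert \mathbf{q}\rVert_1 \;+\; \lambda_2\,\lVert \mathbf{q}\rVert_2^2 \qquad \text{(elastic net)}
```

The additional squared ℓ2 term makes the elastic-net problem strictly convex, which is why it yields a unique solution and tends to activate correlated sources together, the clustering behavior noted in the abstract.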
Worldwide Emerging Environmental Issues Affecting the U.S. Military. January 2008 Report
2008-01-01
around the San Juan Islands, the Strait of Juan de Fuca and all of Puget Sound . One of its aspects includes assessing and improving vessel traffic... Sound Orca Recovery Plan Released http://www.physorg.com/news120453628.html Salty shepherds. The Economist, Jan 24th 2008 http://www.economist.com...not want or cannot process in an environmentally sound way. The Revised Green List Regulation 1418/2007 AC/UNU Millennium Project www.millennium
How Lexical is the Lexicon? Evidence for Integrated Auditory Memory Representations
Pufahl, April; Samuel, Arthur G.
2014-01-01
Previous research has shown that lexical representations must include not only linguistic information (what word was said), but also indexical information (how it was said, and by whom). The present work demonstrates that even this expansion is not sufficient. Seemingly irrelevant information, such as an unattended background sound, is retained in memory and can facilitate subsequent speech perception. We presented participants with spoken words paired with environmental sounds (e.g., a phone ringing), and had them make an “animate/inanimate” decision for each word. Later performance identifying filtered versions of the words was impaired to a similar degree if the voice changed or if the environmental sound changed. Moreover, when quite dissimilar words were used at exposure and test, we observed the same result when we reversed the roles of the words and the environmental sounds. The experiments also demonstrated limits to these effects, with no benefit from repetition. Theoretically, our results support two alternative possibilities: 1) Lexical representations are memory representations, and are not walled off from those for other sounds. Indexical effects reflect simply one type of co-occurrence that is incorporated into such representations. 2) The existing literature on indexical effects does not actually bear on lexical representations – voice changes, like environmental sounds heard with a word, produce implicit memory effects that are not tied to the lexicon. We discuss the evidence and implications of these two theoretical alternatives. PMID:24480453
Sound quality indicators for urban places in Paris cross-validated by Milan data.
Ricciardi, Paola; Delaitre, Pauline; Lavandier, Catherine; Torchia, Francesca; Aumond, Pierre
2015-10-01
A specific smartphone application was developed to collect perceptive and acoustic data in Paris. About 3400 questionnaires were analyzed, regarding the global sound environment characterization, the perceived loudness of some emergent sources and the presence time ratio of sources that do not emerge from the background. Sound pressure level was recorded each second from the mobile phone's microphone during a 10-min period. The aim of this study is to propose indicators of urban sound quality based on linear regressions with perceptive variables. A cross-validation of the quality models extracted from the Paris data was carried out by conducting the same survey in Milan. The proposed general sound quality model is correlated with the real perceived sound quality (72%). Another model without visual amenity and familiarity is 58% correlated with perceived sound quality. In order to improve the sound quality indicator, a site classification was performed with Kohonen's Artificial Neural Network algorithm, and seven specific class models were developed. These specific models attribute more importance to source events and are slightly closer to the individual data than the global model. In general, the Parisian models underestimate the sound quality of Milan environments assessed by Italian people.
Preface--Environmental issues related to oil and gas exploration and production
Kharaka, Yousif K.; Otton, James K.
2007-01-01
Energy is the essential commodity that powers the expanding global economy. Starting in the 1950s, oil and natural gas became the main sources of primary energy for the rapidly increasing world population (Edwards, 1997). In 2003, petroleum was the source for 62.1% of global energy, and projections by the Energy Information Administration (EIA) indicate that oil and gas will continue their dominance, supplying 59.5% of global energy in 2030 (EIA, 2007). Unfortunately, petroleum and coal consumption carry major detrimental environmental impacts that may be regional or global in scale, including air pollution, global climate change and oil spills. This special volume of Applied Geochemistry, devoted to “Environmental Issues Related to Oil and Gas Exploration and Production”, does not address these major impacts directly because air pollution and global climate change are issues related primarily to the burning of petroleum and coal, and major oil spills generally occur during ocean transport, such as the 1989 Exxon Valdez spill of 42,000 m3 (260,000 bbl) of oil into Prince William Sound, Alaska.
These reports provide summaries of the scoping meetings as part of the Supplemental Environmental Impact Statement (SEIS) process for the designation of dredged material disposal sites in Eastern Long Island Sound.
AN ASSESSMENT OF THE ECOLOGICAL CONDITION OF LONG ISLAND SOUND 1990-1993
Data from the Environmental Protection Agency's (EPA) Environmental Monitoring and Assessment Program (EMAP) from 1990 to 1993 were used to assess the condition of the Long Island Sound (LIS) estuary. Ambient water, sediment and biota were collected during the summer months from ...
Worthmann, Brian M; Song, H C; Dowling, David R
2015-12-01
Matched field processing (MFP) is an established technique for source localization in known multipath acoustic environments. Unfortunately, in many situations, particularly those involving high frequency signals, imperfect knowledge of the actual propagation environment prevents accurate propagation modeling and source localization via MFP fails. For beamforming applications, this actual-to-model mismatch problem was mitigated through a frequency downshift, made possible by a nonlinear array-signal-processing technique called frequency difference beamforming [Abadi, Song, and Dowling (2012). J. Acoust. Soc. Am. 132, 3018-3029]. Here, this technique is extended to conventional (Bartlett) MFP using simulations and measurements from the 2011 Kauai Acoustic Communications MURI experiment (KAM11) to produce ambiguity surfaces at frequencies well below the signal bandwidth where the detrimental effects of mismatch are reduced. Both the simulation and experimental results suggest that frequency difference MFP can be more robust against environmental mismatch than conventional MFP. In particular, signals of frequency 11.2 kHz-32.8 kHz were broadcast 3 km through a 106-m-deep shallow ocean sound channel to a sparse 16-element vertical receiving array. Frequency difference MFP unambiguously localized the source in several experimental data sets with average peak-to-side-lobe ratio of 0.9 dB, average absolute-value range error of 170 m, and average absolute-value depth error of 10 m.
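A conventional Bartlett ambiguity surface, and the frequency-difference data vector it can be applied to, can be sketched as follows. The replica fields are assumed to come from a propagation model supplied by the user, and this is an illustrative reading of the cited approach rather than the authors' code.

```python
import numpy as np

def bartlett_surface(d, replicas):
    """Bartlett (conventional) MFP ambiguity surface.

    d        : complex data vector at the array, shape (n_elements,)
    replicas : modeled replica fields, shape (n_grid, n_elements)
    Returns the normalized Bartlett power at each candidate source location.
    """
    d = d / np.linalg.norm(d)
    w = replicas / np.linalg.norm(replicas, axis=1, keepdims=True)
    return np.abs(w.conj() @ d) ** 2

def freq_difference_data(d_f1, d_f2):
    """Form a frequency-difference 'data' vector at f2 - f1.

    Element-wise product of the field at f2 with the conjugate field at f1;
    the replicas passed to bartlett_surface would then be modeled at the
    difference frequency (an assumption consistent with the cited approach).
    """
    return d_f2 * np.conj(d_f1)
```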
Quiet as an Environmental Value: A Contrast between Two Legislative Approaches
Thorne, Robert; Shepherd, Daniel
2013-01-01
This paper examines the concept of “quiet” as an “environmental value” in terms of amenity and wellbeing from a legislative context. Critical review of two pieces of environmental legislation from Australia and New Zealand forms the basis of the paper. The Australian legislation is Queensland’s Environmental Protection Act, and the New Zealand legislation is that nation’s Resource Management Act. Quiet is part of the psychoacoustic continuum between a tranquil and an intrusively noisy sound environment. As such, quiet possesses intrinsic value in terms of overall sound within the environment (soundscape) and to individuals and communities. In both pieces of legislation, guidance, either directly or indirectly, is given to “maximum” sound levels to describe the acoustic environment. Only in Queensland are wellbeing and amenity described as environmental values, while in the New Zealand approach, amenity is identified as the core value to defend, but guidance is not well established. Wellbeing can be related to degrees of quietness and the absence of intrusive noise, the character of sound within an environment (“soundscape”), as well as the overall level of sound. The quality of life experienced by individuals is related to their physical and mental health, sense of amenity and wellbeing. These characteristics can be described in terms of subjective and objective measures, though legislation does not always acknowledge the subjective. PMID:23823712
Development of an ICT-Based Air Column Resonance Learning Media
NASA Astrophysics Data System (ADS)
Purjiyanta, Eka; Handayani, Langlang; Marwoto, Putut
2016-08-01
Commonly, the sound source used in the air column resonance experiment is a tuning fork, which has the disadvantage of suboptimal resonance results because the sound it produces fades quickly. In this study, we generated tones of varying frequency using the Audacity software and stored them on a mobile phone, which served as the sound source. One advantage of this sound source is the stability of the resulting sound, which retains the same strength over time. The movement of water in the glass tube of the resonance apparatus and the tone emitted by the mobile phone were recorded using a video camera. The first, second, and third resonances were recorded for each tone frequency. Because the sound persists, it can be used for the first, second, third, and subsequent resonance experiments. This study aimed to (1) explain how to create tones that can substitute for the tuning fork sound used in air column resonance experiments, (2) illustrate the sound waves that occur at the first, second, and third resonances in the experiment, and (3) determine the speed of sound in air. This study used an experimental method. It was concluded that (1) substitute tones for a tuning fork can be made using the Audacity software; (2) the form of the sound waves that occur at the first, second, and third resonances in the air column can be drawn from the video recordings of the air column resonance; and (3) based on the experimental results, the speed of sound in air is 346.5 m/s, while based on chart analysis with the Logger Pro software, the speed of sound in air is 343.9 ± 0.3171 m/s.
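The speed-of-sound estimate follows from the usual closed-tube relation: successive resonance lengths are separated by half a wavelength, so v = 2 f ΔL. A small worked sketch with made-up resonance lengths (purely illustrative, not the paper's data):

```python
def sound_speed(freq_hz, lengths_m):
    """Speed of sound from successive resonance lengths of a closed tube.

    Adjacent resonances are half a wavelength apart, so
    v = 2 * f * (mean spacing between successive resonance lengths).
    """
    spacings = [b - a for a, b in zip(lengths_m, lengths_m[1:])]
    return 2 * freq_hz * sum(spacings) / len(spacings)

# Hypothetical data: 500 Hz tone, resonances at 17.0 cm, 51.5 cm, 86.1 cm
print(sound_speed(500.0, [0.170, 0.515, 0.861]))   # about 345 m/s
```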
Egocentric and allocentric representations in auditory cortex
Brimijoin, W. Owen; Bizley, Jennifer K.
2017-01-01
A key function of the brain is to provide a stable representation of an object’s location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. In addition, we also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position. PMID:28617796
40 CFR 211.206 - Methods for measurement of sound attenuation.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Methods for measurement of sound attenuation. 211.206 Section 211.206 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... measurement of sound attenuation. ...
40 CFR 211.206 - Methods for measurement of sound attenuation.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Methods for measurement of sound attenuation. 211.206 Section 211.206 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... measurement of sound attenuation. ...
40 CFR 211.206 - Methods for measurement of sound attenuation.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Methods for measurement of sound attenuation. 211.206 Section 211.206 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... measurement of sound attenuation. ...
40 CFR 211.206 - Methods for measurement of sound attenuation.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Methods for measurement of sound attenuation. 211.206 Section 211.206 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... measurement of sound attenuation. ...
40 CFR 211.206 - Methods for measurement of sound attenuation.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Methods for measurement of sound attenuation. 211.206 Section 211.206 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... measurement of sound attenuation. ...
How the owl tracks its prey – II
Takahashi, Terry T.
2010-01-01
Barn owls can capture prey in pitch darkness or by diving into snow, while homing in on the sounds made by their prey. First, the neural mechanisms by which the barn owl localizes a single sound source in an otherwise quiet environment will be explained. The ideas developed for the single source case will then be expanded to environments in which there are multiple sound sources and echoes – environments that are challenging for humans with impaired hearing. Recent controversies regarding the mechanisms of sound localization will be discussed. Finally, the case in which both visual and auditory information are available to the owl will be considered. PMID:20889819
Design of laser monitoring and sound localization system
NASA Astrophysics Data System (ADS)
Liu, Yu-long; Xu, Xi-ping; Dai, Yu-ming; Qiao, Yang
2013-08-01
In this paper, a novel design of a laser monitoring and sound localization system is proposed. It utilizes laser light to monitor and locate the position of indoor conversation. At present, most laser monitors in China, whether used in the laboratory or in instruments, use a photodiode or phototransistor as the detector. At the laser receivers of those devices, the light beams are adjusted to ensure that only part of the window of the photodiode or phototransistor receives the beam. The reflection deviates from its original path because of the vibration of the monitored window, which changes the position of the imaging spot on the photodiode or phototransistor. However, such a method is limited: it admits considerable stray light into the receiver, and only a single photocurrent output can be obtained. Therefore, a new method based on a quadrant detector is proposed. It utilizes the relation of the optical integrals among the quadrants to locate the position of the imaging spot. This method can eliminate background disturbance and acquire two-dimensional spot-vibration data. The principle of the whole system can be described as follows. Collimated laser beams are reflected from a window that vibrates in response to the sound source, so the reflected beams are modulated by the vibration source. These optical signals are collected by quadrant detectors and then processed by photoelectric converters and the corresponding circuits. Speech signals are eventually reconstructed. In addition, sound source localization is implemented by detecting three different reflected light beams simultaneously. Indoor mathematical models based on the principle of Time Difference Of Arrival (TDOA) are established to calculate the two-dimensional coordinates of the sound source. Experiments showed that this system is able to monitor an indoor sound source beyond 15 meters with high-quality speech reconstruction and to locate the sound source position accurately.
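The TDOA step can be illustrated with a small grid-search sketch: given three sensing points and the measured arrival-time differences relative to the first, the candidate position whose predicted differences best match the measurements is selected. The geometry, grid, and sound speed below are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def locate_tdoa(sensors, tdoas, c=343.0, half_width=10.0, n=201):
    """2-D source localization from time differences of arrival (grid search).

    sensors : (m, 2) sensor positions in meters
    tdoas   : arrival-time differences of sensors 1..m-1 relative to sensor 0 (s)
    """
    sensors = np.asarray(sensors, dtype=float)
    g = np.linspace(-half_width, half_width, n)
    xx, yy = np.meshgrid(g, g)
    pts = np.stack([xx.ravel(), yy.ravel()], axis=1)            # candidate positions
    dists = np.linalg.norm(pts[:, None, :] - sensors[None, :, :], axis=2)
    pred = (dists[:, 1:] - dists[:, :1]) / c                    # predicted TDOAs
    err = np.sum((pred - np.asarray(tdoas)) ** 2, axis=1)
    return pts[np.argmin(err)]

# Hypothetical check: three sensing points, source at (1.5, 1.0)
sensors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
src = np.array([1.5, 1.0])
d = np.linalg.norm(src - np.asarray(sensors), axis=1)
print(locate_tdoa(sensors, (d[1:] - d[0]) / 343.0))             # roughly [1.5, 1.0]
```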
During 2001, phytoplankton dynamics, physiology, and related environmental conditions were studied in Santa Rosa Sound, Florida, USA, at near-weekly intervals. Santa Rosa Sound is a component of the Pensacola Bay system located in the northern Gulf of Mexico. Environmental parame...
[Influence of environmental noise on sleep quality and sleeping disorders-implications for health].
Kohlhuber, M; Bolte, G
2011-12-01
Environmental noise is a well-known risk factor influencing sleep-wake behavior and sleep quality. Epidemiologic studies have shown that environmental noise is regarded as the most annoying environmental factor. Noise causes modifications in physiologic and mental functions and may result in health outcomes like elevated blood pressure and ischemic heart disease. Reactions to high sound levels during sleep are decreased sleep intensity, arousals, and increased stress hormone secretion. Effects of poor sleep quality are reduced cognitive performance, tiredness, and psychosomatic symptoms. Long-term consequences of recurrent sleep loss due to environmental noise may be heart disease and increased medication intake. Arousals occur especially due to single noise events and intermittent noise. Laboratory and field studies showed no habituation of physiologic parameters to high sound levels. Sleep is especially sensitive to noise; therefore, sound levels during nighttime should be much lower than during daytime.
Paracousti-UQ: A Stochastic 3-D Acoustic Wave Propagation Algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Leiph
Acoustic full waveform algorithms, such as Paracousti, provide deterministic solutions in complex, 3-D variable environments. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected sound levels within an environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. Performing Monte Carlo (MC) simulations is one method of assessing this uncertainty, but it can quickly become computationally intractable for realistic problems. An alternative method, using the technique of stochastic partial differential equations (SPDE), allows computation of the statistical properties of output signals at a fraction of the computational cost of MC. Paracousti-UQ solves the SPDE system of 3-D acoustic wave propagation equations and provides estimates of the uncertainty of the output simulated wave field (e.g., amplitudes, waveforms) based on estimated probability distributions of the input medium and source parameters. This report describes the derivation of the stochastic partial differential equations, their implementation, and comparison of Paracousti-UQ results with MC simulations using simple models.
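The Monte Carlo baseline that such an approach is compared against can be sketched generically: draw realizations of an uncertain input, run a deterministic solver for each draw, and summarize the output statistics. The run_solver function below is a hypothetical stand-in, not a Paracousti interface, and the toy model inside it is an assumption made purely for illustration.

```python
import numpy as np

def run_solver(sound_speed):
    """Hypothetical stand-in for a deterministic propagation solver."""
    # Toy model: received level at 1 km falls with range and varies with sound speed.
    return 100.0 - 20.0 * np.log10(1000.0) - 0.01 * (sound_speed - 1500.0) ** 2

def monte_carlo_level(n_draws=1000, c_mean=1500.0, c_std=5.0, seed=0):
    """Mean and standard deviation of the received level under input uncertainty."""
    rng = np.random.default_rng(seed)
    levels = np.array([run_solver(rng.normal(c_mean, c_std)) for _ in range(n_draws)])
    return levels.mean(), levels.std()

print(monte_carlo_level())
```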
Drive-by large-region acoustic noise-source mapping via sparse beamforming tomography.
Tuna, Cagdas; Zhao, Shengkui; Nguyen, Thi Ngoc Tho; Jones, Douglas L
2016-10-01
Environmental noise is a risk factor for human physical and mental health, demanding an efficient large-scale noise-monitoring scheme. The current technology, however, involves extensive sound pressure level (SPL) measurements at a dense grid of locations, making it impractical on a city-wide scale. This paper presents an alternative approach using a microphone array mounted on a moving vehicle to generate two-dimensional acoustic tomographic maps that yield the locations and SPLs of the noise-sources sparsely distributed in the neighborhood traveled by the vehicle. The far-field frequency-domain delay-and-sum beamforming output power values computed at multiple locations as the vehicle drives by are used as tomographic measurements. The proposed method is tested with acoustic data collected by driving an electric vehicle with a rooftop-mounted microphone array along a straight road next to a large open field, on which various pre-recorded noise-sources were produced by a loudspeaker at different locations. The accuracy of the tomographic imaging results demonstrates the promise of this approach for rapid, low-cost environmental noise-monitoring.
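The tomographic measurements described above are conventional far-field frequency-domain delay-and-sum beamformer powers; a minimal sketch of that beamforming step, with an assumed linear array geometry and analysis frequency, is given below.

```python
# Minimal frequency-domain delay-and-sum beamformer power estimate for one
# narrowband bin, assuming a far-field plane wave and a linear array. Array
# geometry, frequency and steering grid are illustrative assumptions.
import numpy as np

c = 343.0                        # speed of sound, m/s
f = 1000.0                       # analysis frequency, Hz (assumed)
mics = np.linspace(0.0, 0.5, 8)  # 8-element linear array along x, metres

def dsb_power(X, angles_deg):
    """Delay-and-sum output power versus steering angle for one FFT-bin snapshot X."""
    k = 2.0 * np.pi * f / c
    powers = []
    for ang in np.deg2rad(angles_deg):
        steering = np.exp(-1j * k * mics * np.sin(ang))  # far-field phase delays
        y = np.vdot(steering, X) / len(mics)             # align and sum
        powers.append(np.abs(y) ** 2)
    return np.array(powers)

# Simulated snapshot from a source at 20 degrees plus a little noise.
rng = np.random.default_rng(0)
true_ang = np.deg2rad(20.0)
X = np.exp(-1j * 2 * np.pi * f / c * mics * np.sin(true_ang))
X = X + 0.05 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))

angles = np.arange(-90, 91, 1)
print("peak at", angles[np.argmax(dsb_power(X, angles))], "degrees")
```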
Assessment of Hydroacoustic Propagation Using Autonomous Hydrophones in the Scotia Sea
2010-09-01
Award No. DE-AI52-08NA28654, Proposal No. BAA08-36. ABSTRACT: The remote area of the Atlantic Ocean near the Antarctic Peninsula and the South...hydroacoustic blind spot. To investigate the sound propagation and interferences affected by these landmasses in the vicinity of the Antarctic polar...from large icebergs (near-surface sources) were utilized as natural sound sources. Surface sound sources, e.g., ice-related events, tend to suffer less
Active control of noise on the source side of a partition to increase its sound isolation
NASA Astrophysics Data System (ADS)
Tarabini, Marco; Roure, Alain; Pinhede, Cedric
2009-03-01
This paper describes a local active noise control system that virtually increases the sound isolation of a dividing wall by means of a secondary source array. With the proposed method, sound pressure on the source side of the partition is reduced using an array of loudspeakers that generates destructive interference on the wall surface, where an array of error microphones is placed. The reduction of sound pressure on the incident side of the wall is expected to decrease the sound radiated into the contiguous room. The method efficiency was experimentally verified by checking the insertion loss of the active noise control system; in order to investigate the possibility of using a large number of actuators, a decentralized FXLMS control algorithm was used. Active control performances and stability were tested with different array configurations, loudspeaker directivities and enclosure characteristics (sound source position and absorption coefficient). The influence of all these parameters was investigated with the factorial design of experiments. The main outcome of the experimental campaign was that the insertion loss produced by the secondary source array, in the 50-300 Hz frequency range, was close to 10 dB. In addition, the analysis of variance showed that the active noise control performance can be optimized with a proper choice of the directional characteristics of the secondary source and the distance between loudspeakers and error microphones.
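A single-channel filtered-x LMS loop, the building block of the decentralized FXLMS controller mentioned above, can be sketched as follows; the primary and secondary path models, filter length, and step size are illustrative assumptions, not the experimental setup of the paper.

```python
# Single-channel filtered-x LMS (FXLMS) sketch. The paper uses a decentralized
# multichannel version; this toy loop only illustrates the basic update.
import numpy as np

rng = np.random.default_rng(1)
n = 20000
x = rng.standard_normal(n)                # reference (primary noise) signal
p = np.array([0.0, 0.6, 0.3, 0.1])        # primary acoustic path (assumed)
s = np.array([0.0, 0.8, 0.2])             # secondary path model, assumed known exactly

d = np.convolve(x, p)[:n]                 # disturbance at the error microphone
xf = np.convolve(x, s)[:n]                # reference filtered by the secondary-path model

L, mu = 16, 0.005                         # adaptive filter length and step size (assumed)
w = np.zeros(L)                           # adaptive control filter
xbuf = np.zeros(L)                        # recent reference samples
xfbuf = np.zeros(L)                       # recent filtered-reference samples
ybuf = np.zeros(len(s))                   # recent control outputs (feed the secondary path)
err = np.zeros(n)

for i in range(n):
    xbuf = np.roll(xbuf, 1);  xbuf[0] = x[i]
    xfbuf = np.roll(xfbuf, 1); xfbuf[0] = xf[i]
    y = w @ xbuf                          # anti-noise sample driven to the loudspeaker
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    err[i] = d[i] - s @ ybuf              # residual measured at the error microphone
    w += mu * err[i] * xfbuf              # filtered-x LMS update

print("error power, first/last 1000 samples:",
      np.mean(err[:1000] ** 2), np.mean(err[-1000:] ** 2))
```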
The low-frequency sound power measuring technique for an underwater source in a non-anechoic tank
NASA Astrophysics Data System (ADS)
Zhang, Yi-Ming; Tang, Rui; Li, Qi; Shang, Da-Jing
2018-03-01
In order to determine the radiated sound power of an underwater source below the Schroeder cut-off frequency in a non-anechoic tank, a low-frequency extension measuring technique is proposed. This technique is based on a unique relationship between the transmission characteristics of the enclosed field and those of the free field, which can be obtained as a correction term based on previous measurements of a known simple source. The radiated sound power of an unknown underwater source in the free field can thereby be obtained accurately from measurements in a non-anechoic tank. To verify the validity of the proposed technique, a mathematical model of the enclosed field is established using normal-mode theory, and the relationship between the transmission characteristics of the enclosed and free fields is obtained. The radiated sound power of an underwater transducer source is tested in a glass tank using the proposed low-frequency extension measuring technique. Compared with the free field, the radiated sound power level of the narrowband spectrum deviation is found to be less than 3 dB, and the 1/3 octave spectrum deviation is found to be less than 1 dB. The proposed testing technique can be used not only to extend the low-frequency applications of non-anechoic tanks, but also for measurement of radiated sound power from complicated sources in non-anechoic tanks.
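The correction-term idea can be summarized schematically: the enclosed-field bias measured with a known reference source is subtracted from tank measurements of the unknown source. The band levels in the sketch below are invented numbers, not data from the paper.

```python
# Schematic of the correction-term approach: calibrate the tank with a known
# simple source, then correct measurements of the unknown source. All band
# levels are made-up illustrative values.
import numpy as np

bands_hz = np.array([125, 250, 500, 1000])            # analysis bands (illustrative)
ref_free_dB = np.array([100.0, 102.0, 101.0, 99.0])   # known source: free-field sound power
ref_tank_dB = np.array([112.0, 109.0, 106.0, 103.0])  # known source: measured in the tank

correction_dB = ref_tank_dB - ref_free_dB             # enclosed-field correction term

unknown_tank_dB = np.array([120.0, 118.0, 117.0, 114.0])  # unknown source in the tank
unknown_free_dB = unknown_tank_dB - correction_dB           # estimated free-field power

for f, lw in zip(bands_hz, unknown_free_dB):
    print(f"{f:5d} Hz : estimated free-field L_W = {lw:.1f} dB")
```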
NASA Astrophysics Data System (ADS)
Montazeri, Allahyar; Taylor, C. James
2017-10-01
This article addresses the coupling of acoustic secondary sources in a confined space in a sound field reduction framework. By considering the coupling of sources in a rectangular enclosure, the set of coupled equations governing its acoustical behavior are solved. The model obtained in this way is used to analyze the behavior of multi-input multi-output (MIMO) active sound field control (ASC) systems, where the coupling of sources cannot be neglected. In particular, the article develops the analytical results to analyze the effect of coupling of an array of secondary sources on the sound pressure levels inside an enclosure, when an array of microphones is used to capture the acoustic characteristics of the enclosure. The results are supported by extensive numerical simulations showing how coupling of loudspeakers through acoustic modes of the enclosure will change the strength and hence the driving voltage signal applied to the secondary loudspeakers. The practical significance of this model is to provide a better insight on the performance of the sound reproduction/reduction systems in confined spaces when an array of loudspeakers and microphones are placed in a fraction of wavelength of the excitation signal to reduce/reproduce the sound field. This is of particular importance because the interaction of different sources affects their radiation impedance depending on the electromechanical properties of the loudspeakers.
Over the past 3 years the Long Island Sound Study (LISS) has been developing a revised Comprehensive Conservation and Management Plan (CCMP), the blueprint for the protection and restoration of the Sound for the next generation. Long Island Sound is located within the most densel...
Consistent modelling of wind turbine noise propagation from source to receiver.
Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; Dag, Kaya O; Moriarty, Patrick
2017-11-01
The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.
Broad band sound from wind turbine generators
NASA Technical Reports Server (NTRS)
Hubbard, H. H.; Shepherd, K. P.; Grosveld, F. W.
1981-01-01
Brief descriptions are given of the various types of large wind turbines and their sound characteristics. Candidate sources of broadband sound are identified and are rank ordered for a large upwind configuration wind turbine generator for which data are available. The rotor is noted to be the main source of broadband sound which arises from inflow turbulence and from the interactions of the turbulent boundary layer on the blade with its trailing edge. Sound is radiated about equally in all directions but the refraction effects of the wind produce an elongated contour pattern in the downwind direction.
Considering the influence of artificial environmental noise to study cough time-frequency features
NASA Astrophysics Data System (ADS)
Van Hirtum, A.; Berckmans, D.
2003-09-01
In general, the cough mechanism and cough sound in both animals and humans are studied by eliciting coughing in a reproducible way through nebulization of an irritating substance. Because of the ventilation it requires, the controlled evaporation protocol introduces artificial noise of mechanical origin, and the resulting low-frequency environmental noise obscures cough time-frequency features. To optimize the study of the cough sound, the research described in this paper attempts, on the one hand, to characterize and model the environmental noise and, on the other hand, to evaluate its influence on the time-frequency representation of the intended cough sounds by comparing different de-noising approaches. Free-field sound was recorded continuously during 30-min citric acid cough challenges on individual Belgian Landrace piglets and during respiratory infection experiments lasting about 10 days, in which room ventilation was present.
Effects of sound source directivity on auralizations
NASA Astrophysics Data System (ADS)
Sheets, Nathan W.; Wang, Lily M.
2002-05-01
Auralization, the process of rendering audible the sound field in a simulated space, is a useful tool in the design of acoustically sensitive spaces. The auralization depends on the calculation of an impulse response between a source and a receiver which have certain directional behavior. Many auralizations created to date have used omnidirectional sources; the effects of source directivity on auralizations are a relatively unexplored area. To examine if and how the directivity of a sound source affects the acoustical results obtained from a room, we used directivity data for three sources in a room acoustic modeling program called Odeon. The three sources are violin, piano, and human voice. The results from using directional data are compared to those obtained using omnidirectional source behavior, both through objective measure calculations and subjective listening tests.
Development of a directivity-controlled piezoelectric transducer for sound reproduction
NASA Astrophysics Data System (ADS)
Bédard, Magella; Berry, Alain
2008-04-01
Present sound reproduction systems do not attempt to simulate the spatial radiation of musical instruments, or sound sources in general, even though the spatial directivity has a strong impact on the psychoacoustic experience. A transducer consisting of 4 piezoelectric elemental sources made from curved PVDF films is used to generate a target directivity pattern in the horizontal plane, in the frequency range of 5-20 kHz. The vibratory and acoustical response of an elemental source is addressed, both theoretically and experimentally. Two approaches to synthesize the input signals to apply to each elemental source are developed in order to create a prescribed, frequency-dependent acoustic directivity. The circumferential Fourier decomposition of the target directivity provides a compromise between the magnitude and the phase reconstruction, whereas the minimization of a quadratic error criterion provides a best magnitude reconstruction. This transducer can improve sound reproduction by introducing the spatial radiation aspect of the original source at high frequency.
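The quadratic-error synthesis approach can be illustrated with a least-squares fit of element drive weights to a target azimuthal pattern; the element directivities and target pattern below are invented for illustration, not the measured PVDF element responses.

```python
# Hedged sketch of quadratic-error directivity synthesis: find drive weights for
# four elemental sources so their summed pattern best matches a target pattern.
import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 181, endpoint=False)

# Invented element patterns: four cardioid-like sources facing 0, 90, 180, 270 deg.
G = np.stack([0.5 + 0.5 * np.cos(phi - m * np.pi / 2) for m in range(4)], axis=1)

target = 0.6 + 0.4 * np.cos(phi)            # target directivity magnitude (assumed)

w, *_ = np.linalg.lstsq(G, target, rcond=None)   # quadratic-error drive weights
print("drive weights:", np.round(w, 3))
print("max pattern error:", np.abs(G @ w - target).max())
```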
Callback response of dugongs to conspecific chirp playbacks.
Ichikawa, Kotaro; Akamatsu, Tomonari; Shinke, Tomio; Adulyanukosol, Kanjana; Arai, Nobuaki
2011-06-01
Dugongs (Dugong dugon) produce bird-like calls such as chirps and trills. The vocal responses of dugongs to playbacks of several acoustic stimuli were investigated. Animals were exposed to four different playback stimuli: a recorded chirp from a wild dugong, a synthesized down-sweep sound, a synthesized constant-frequency sound, and silence. Wild dugongs vocalized more frequently after playback of broadcast chirps than that after constant-frequency sounds or silence. The down-sweep sound also elicited more vocal responses than did silence. No significant difference was found between the broadcast chirps and the down-sweep sound. The ratio of wild dugong chirps to all calls and the dominant frequencies of the wild dugong calls were significantly higher during playbacks of broadcast chirps, down-sweep sounds, and constant-frequency sounds than during those of silence. The source level and duration of dugong chirps increased significantly as signaling distance increased. No significant correlation was found between signaling distance and the source level of trills. These results show that dugongs vocalize to playbacks of frequency-modulated signals and suggest that the source level of dugong chirps may be manipulated to compensate for transmission loss between the source and receiver. This study provides the first behavioral observations revealing the function of dugong chirps. © 2011 Acoustical Society of America
NASA Technical Reports Server (NTRS)
Embleton, Tony F. W.; Daigle, Gilles A.
1991-01-01
Reviewed here is the current state of knowledge with respect to each basic mechanism of sound propagation in the atmosphere and how each mechanism changes the spectral or temporal characteristics of the sound received at a distance from the source. Some of the basic processes affecting sound wave propagation which are present in any situation are discussed. They are geometrical spreading, molecular absorption, and turbulent scattering. In geometrical spreading, sound levels decrease with increasing distance from the source; there is no frequency dependence. In molecular absorption, sound energy is converted into heat as the sound wave propagates through the air; there is a strong dependence on frequency. In turbulent scattering, local variations in wind velocity and temperature induce fluctuations in phase and amplitude of the sound waves as they propagate through an inhomogeneous medium; there is a moderate dependence on frequency.
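A rough attenuation budget combining the first two mechanisms (turbulent scattering, being stochastic, is omitted) might look like the following sketch; the air-absorption coefficients are assumed round numbers, not a standards-based calculation.

```python
# Illustrative attenuation budget: frequency-independent spherical spreading plus
# frequency-dependent molecular absorption. Absorption values are rough assumptions.
import numpy as np

freqs_hz = np.array([125, 500, 2000, 8000])
alpha_db_per_km = np.array([0.4, 2.0, 10.0, 100.0])   # assumed air-absorption values

def level_drop(r_m, r_ref=1.0):
    spreading = 20.0 * np.log10(r_m / r_ref)                # no frequency dependence
    absorption = alpha_db_per_km * (r_m - r_ref) / 1000.0   # strong frequency dependence
    return spreading + absorption

for f, drop in zip(freqs_hz, level_drop(500.0)):
    print(f"{f:5d} Hz : total attenuation over 500 m = {drop:.1f} dB")
```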
Wensveen, Paul J; von Benda-Beckmann, Alexander M; Ainslie, Michael A; Lam, Frans-Peter A; Kvadsheim, Petter H; Tyack, Peter L; Miller, Patrick J O
2015-05-01
The behaviour of a marine mammal near a noise source can modulate the sound exposure it receives. We demonstrate that two long-finned pilot whales both surfaced in synchrony with consecutive arrivals of multiple sonar pulses. We then assess the effect of surfacing and other behavioural response strategies on the received cumulative sound exposure levels and maximum sound pressure levels (SPLs) by modelling realistic spatiotemporal interactions of a pilot whale with an approaching source. Under the propagation conditions of our model, some response strategies observed in the wild were effective in reducing received levels (e.g. movement perpendicular to the source's line of approach), but others were not (e.g. switching from deep to shallow diving; synchronous surfacing after maximum SPLs). Our study exemplifies how simulations of source-whale interactions guided by detailed observational data can improve our understanding about motivations behind behaviour responses observed in the wild (e.g., reducing sound exposure, prey movement). Copyright © 2015 Elsevier Ltd. All rights reserved.
Litovsky, Ruth Y.; Godar, Shelly P.
2010-01-01
The precedence effect refers to the fact that humans are able to localize sound in reverberant environments, because the auditory system assigns greater weight to the direct sound (lead) than the later-arriving sound (lag). In this study, absolute sound localization was studied for single source stimuli and for dual source lead-lag stimuli in 4–5 year old children and adults. Lead-lag delays ranged from 5–100 ms. Testing was conducted in free field, with pink noise bursts emitted from loudspeakers positioned on a horizontal arc in the frontal field. Listeners indicated how many sounds were heard and the perceived location of the first- and second-heard sounds. Results suggest that at short delays (up to 10 ms), the lead dominates sound localization strongly at both ages, and localization errors are similar to those with single-source stimuli. At longer delays errors can be large, stemming from over-integration of the lead and lag, interchanging of perceived locations of the first-heard and second-heard sounds due to temporal order confusion, and dominance of the lead over the lag. The errors are greater for children than adults. Results are discussed in the context of maturation of auditory and non-auditory factors. PMID:20968369
Hermannsen, Line; Beedholm, Kristian
2017-01-01
Acoustic harassment devices (AHD) or ‘seal scarers’ are used extensively, not only to deter seals from fisheries, but also as mitigation tools to deter marine mammals from potentially harmful sound sources, such as offshore pile driving. To test the effectiveness of AHDs, we conducted two studies with similar experimental set-ups on two key species: harbour porpoises and harbour seals. We exposed animals to 500 ms tone bursts at 12 kHz simulating that of an AHD (Lofitech), but with reduced output levels (source peak-to-peak level of 165 dB re 1 µPa). Animals were localized with a theodolite before, during and after sound exposures. In total, 12 sound exposures were conducted to porpoises and 13 exposures to seals. Porpoises were found to exhibit avoidance reactions out to ranges of 525 m from the sound source. Contrary to this, seal observations increased during sound exposure within 100 m of the loudspeaker. We thereby demonstrate that porpoises and seals respond very differently to AHD sounds. This has important implications for application of AHDs in multi-species habitats, as sound levels required to deter less sensitive species (seals) can lead to excessive and unwanted large deterrence ranges on more sensitive species (porpoises). PMID:28791155
Feasibility of making sound power measurements in the NASA Langley V/STOL tunnel test section
NASA Technical Reports Server (NTRS)
Brooks, T. F.; Scheiman, J.; Silcox, R. J.
1976-01-01
Based on exploratory acoustic measurements in Langley's V/STOL wind tunnel, recommendations are made on the methodology for making sound power measurements of aircraft components in the closed tunnel test section. During airflow, tunnel self-noise and microphone flow-induced noise place restrictions on the amplitude and spectrum of the sound source to be measured. Models of aircraft components with high sound level sources, such as thrust engines and powered lift systems, seem likely candidates for acoustic testing.
Swanepoel, De Wet; Matthysen, Cornelia; Eikelboom, Robert H; Clark, Jackie L; Hall, James W
2015-01-01
Accessibility of audiometry is hindered by the cost of sound booths and the shortage of hearing health personnel. This study investigated the validity of an automated mobile diagnostic audiometer with increased attenuation and real-time noise monitoring for clinical testing outside a sound booth. Attenuation characteristics and reference ambient noise levels for the computer-based audiometer (KUDUwave) were evaluated alongside the validity of environmental noise monitoring. Clinical validity was determined by comparing air- and bone-conduction thresholds obtained inside and outside the sound booth in 23 normal-hearing subjects (age range 20-75 years; average age 35.5), with a subgroup of 11 subjects retested to establish test-retest reliability. Improved passive attenuation and valid environmental noise monitoring were demonstrated. Clinically, air-conduction thresholds inside and outside the sound booth corresponded within 5 dB or less in more than 90% of instances (mean absolute difference 3.3 ± 3.2 SD). Bone-conduction thresholds corresponded within 5 dB or less in 80% of comparisons between test environments, with a mean absolute difference of 4.6 dB (3.7 SD). Threshold differences were not statistically significant. Mean absolute test-retest differences outside the sound booth were similar to those in the booth. Diagnostic pure-tone audiometry outside a sound booth, using automated testing, improved passive attenuation, and real-time environmental noise monitoring, demonstrated reliable hearing assessments.
Andrews, John T.; Barber, D.C.; Jennings, A.E.; Eberl, D.D.; Maclean, B.; Kirby, M.E.; Stoner, J.S.
2012-01-01
Core HU97048-007PC was recovered from the continental Labrador Sea slope at a water depth of 945 m, 250 km seaward from the mouth of Cumberland Sound, and 400 km north of Hudson Strait. Cumberland Sound is a structural trough partly floored by Cretaceous mudstones and Paleozoic carbonates. The record extends from ∼10 to 58 ka. On-board logging revealed a complex series of lithofacies, including buff-colored detrital carbonate-rich sediments [Heinrich (H)-events] frequently bracketed by black facies. We investigate the provenance of these facies using quantitative X-ray diffraction on drill-core samples from Paleozoic and Cretaceous bedrock from the SE Baffin Island Shelf, and on the < 2-mm sediment fraction in a transect of five cores from Cumberland Sound to the NW Labrador Sea. A sediment unmixing program was used to discriminate between sediment sources, which included dolomite-rich sediments from Baffin Bay, calcite-rich sediments from Hudson Strait and discrete sources from Cumberland Sound. Results indicated that the bulk of the sediment was derived from Cumberland Sound, but Baffin Bay contributed to sediments coeval with H-0 (Younger Dryas), whereas Hudson Strait was the source during H-events 1–4. Contributions from the Cretaceous outcrops within Cumberland Sound bracket H-events, thus both leading and lagging Hudson Strait-sourced H-events.
Riede, Tobias; Goller, Franz
2010-10-01
Song production in songbirds is a model system for studying learned vocal behavior. As in humans, bird phonation involves three main motor systems (respiration, vocal organ and vocal tract). The avian respiratory mechanism uses pressure regulation in air sacs to ventilate a rigid lung. In songbirds sound is generated with two independently controlled sound sources, which reside in a uniquely avian vocal organ, the syrinx. However, the physical sound generation mechanism in the syrinx shows strong analogies to that in the human larynx, such that both can be characterized as myoelastic-aerodynamic sound sources. Similarities include active adduction and abduction, oscillating tissue masses which modulate flow rate through the organ and a layered structure of the oscillating tissue masses giving rise to complex viscoelastic properties. Differences in the functional morphology of the sound producing system between birds and humans require specific motor control patterns. The songbird vocal apparatus is adapted for high speed, suggesting that temporal patterns and fast modulation of sound features are important in acoustic communication. Rapid respiratory patterns determine the coarse temporal structure of song and maintain gas exchange even during very long songs. The respiratory system also contributes to the fine control of airflow. Muscular control of the vocal organ regulates airflow and acoustic features. The upper vocal tract of birds filters the sounds generated in the syrinx, and filter properties are actively adjusted. Nonlinear source-filter interactions may also play a role. The unique morphology and biomechanical system for sound production in birds presents an interesting model for exploring parallels in control mechanisms that give rise to highly convergent physical patterns of sound generation. More comparative work should provide a rich source for our understanding of the evolution of complex sound producing systems. Copyright © 2009 Elsevier Inc. All rights reserved.
The auditory P50 component to onset and offset of sound
Pratt, Hillel; Starr, Arnold; Michalewski, Henry J.; Bleich, Naomi; Mittelman, Nomi
2008-01-01
Objective: The auditory event-related potential (ERP) component P50 has been reported to be similar for sound onset and offset, but its magnetic homologue has been reported to be absent for sound offset. We compared the spatio-temporal distribution of cortical activity during P50 to sound onset and offset, without confounds of spectral change. Methods: ERPs were recorded in response to onsets and offsets of silent intervals of 0.5 s (gaps) appearing randomly in otherwise continuous white noise and compared to ERPs to randomly distributed click pairs with half-second separation presented in silence. Subjects were awake and distracted from the stimuli by reading a complicated text. Measures of P50 included peak latency and amplitude, as well as source current density estimates for the clicks and the sound onsets and offsets. Results: P50 occurred in response to noise onsets and to clicks, while to noise offsets it was absent. The latency of P50 was similar for noise onset (56 msec) and clicks (53 msec). Sources of P50 to noise onsets and clicks included bilateral superior parietal areas. In contrast, noise offsets activated left inferior temporal and occipital areas at the time of P50. Source current density was significantly higher for noise onset than offset in the vicinity of the temporo-parietal junction. Conclusions: P50 to sound offset is absent compared to the distinct P50 to sound onset and to clicks, with different intracranial sources involved. P50 to stimulus onset and to clicks appears to reflect preattentive arousal by a new sound in the scene. Sound offset does not involve a new sound, hence the absent P50. Significance: Stimulus onset activates distinct early cortical processes that are absent at offset. PMID:18055255
Blind separation of incoherent and spatially disjoint sound sources
NASA Astrophysics Data System (ADS)
Dong, Bin; Antoni, Jérôme; Pereira, Antonio; Kellermann, Walter
2016-11-01
Blind separation of sound sources aims at reconstructing the individual sources which contribute to the overall radiation of an acoustical field. The challenge is to reach this goal using distant measurements when all sources are operating concurrently. The working assumption is usually that the sources of interest are incoherent - i.e. statistically orthogonal - so that their separation can be approached by decorrelating a set of simultaneous measurements, which amounts to diagonalizing the cross-spectral matrix. Principal Component Analysis (PCA) is traditionally used to this end. This paper reports two new findings in this context. First, a sufficient condition is established under which "virtual" sources returned by PCA coincide with true sources; it stipulates that the sources of interest should be not only incoherent but also spatially orthogonal. A particular case of this instance is met by spatially disjoint sources - i.e. with non-overlapping support sets. Second, based on this finding, a criterion that enforces both statistical and spatial orthogonality is proposed to blindly separate incoherent sound sources which radiate from disjoint domains. This criterion can be easily incorporated into acoustic imaging algorithms such as beamforming or acoustical holography to identify sound sources of different origins. The proposed methodology is validated on laboratory experiments. In particular, the separation of aeroacoustic sources is demonstrated in a wind tunnel.
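The decorrelation step that PCA performs, diagonalizing the cross-spectral matrix, can be sketched at a single frequency as below; the simulated mixing vectors and source statistics are arbitrary assumptions.

```python
# Minimal sketch of the decorrelation step the paper builds on: eigendecomposition
# (PCA) of the measured cross-spectral matrix at one frequency, with two simulated
# incoherent sources mixed onto an 8-microphone array.
import numpy as np

rng = np.random.default_rng(2)
n_mics, n_snapshots = 8, 2000

a1 = rng.standard_normal(n_mics) + 1j * rng.standard_normal(n_mics)  # source 1 mixing vector
a2 = rng.standard_normal(n_mics) + 1j * rng.standard_normal(n_mics)  # source 2 mixing vector
s1 = rng.standard_normal(n_snapshots) + 1j * rng.standard_normal(n_snapshots)
s2 = 0.5 * (rng.standard_normal(n_snapshots) + 1j * rng.standard_normal(n_snapshots))

X = np.outer(a1, s1) + np.outer(a2, s2)          # incoherent mixture at the array
Sxx = X @ X.conj().T / n_snapshots               # cross-spectral matrix estimate

eigvals, eigvecs = np.linalg.eigh(Sxx)           # PCA: diagonalize Sxx
print("largest eigenvalues:", np.round(eigvals[::-1][:4], 1))
# The two dominant eigenpairs carry the two incoherent "virtual" sources; they
# coincide with the true sources only under the spatial-orthogonality condition
# discussed in the paper.
```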
Urban tree-planting programs — A model for encouraging environmentally protective behavior
NASA Astrophysics Data System (ADS)
Summit, Joshua; Sommer, Robert
Efforts to increase environmentally sound behaviors and practices have in the past often focussed on consciousness-raising and attitude change. Research indicates that such efforts are less effective than interventions designed to make environmentally sound behaviors easier to engage in, or to make personal advantages resulting from such behaviors more clear to individuals. Four nonprofit tree planting organizations were studied as examples of successful environmental interventions. From these studies, as well as a review of the literature, several principles underlying successful behavioral interventions are identified. Implications of these principles for future environmental programs are discussed.
2011-09-30
capability to emulate the dive and movement behavior of marine mammals provides a significant advantage in modeling environmental impact over the historic...approaches used in Navy environmental assessments (EA) and impact statements (EIS). Many previous methods have been statistical or pseudo-statistical...Siderius. 2011. Comparison of methods used for computing the impact of sound on the marine environment, Marine Environmental Research, 71:342-350. [published
A New Mechanism of Sound Generation in Songbirds
NASA Astrophysics Data System (ADS)
Goller, Franz; Larsen, Ole N.
1997-12-01
Our current understanding of the sound-generating mechanism in the songbird vocal organ, the syrinx, is based on indirect evidence and theoretical treatments. The classical avian model of sound production postulates that the medial tympaniform membranes (MTM) are the principal sound generators. We tested the role of the MTM in sound generation and studied the songbird syrinx more directly by filming it endoscopically. After we surgically incapacitated the MTM as a vibratory source, zebra finches and cardinals were not only able to vocalize, but sang nearly normal song. This result shows clearly that the MTM are not the principal sound source. The endoscopic images of the intact songbird syrinx during spontaneous and brain stimulation-induced vocalizations illustrate the dynamics of syringeal reconfiguration before phonation and suggest a different model for sound production. Phonation is initiated by rostrad movement and stretching of the syrinx. At the same time, the syrinx is closed through movement of two soft tissue masses, the medial and lateral labia, into the bronchial lumen. Sound production always is accompanied by vibratory motions of both labia, indicating that these vibrations may be the sound source. However, because of the low temporal resolution of the imaging system, the frequency and phase of labial vibrations could not be assessed in relation to that of the generated sound. Nevertheless, in contrast to the previous model, these observations show that both labia contribute to aperture control and strongly suggest that they play an important role as principal sound generators.
On the role of glottis-interior sources in the production of voiced sound.
Howe, M S; McGowan, R S
2012-02-01
The voice source is dominated by aeroacoustic sources downstream of the glottis. In this paper an investigation is made of the contribution to voiced speech of secondary sources within the glottis. The acoustic waveform is ultimately determined by the volume velocity of air at the glottis, which is controlled by vocal fold vibration, pressure forcing from the lungs, and unsteady backreactions from the sound and from the supraglottal air jet. The theory of aerodynamic sound is applied to study the influence on the fine details of the acoustic waveform of "potential flow" added-mass-type glottal sources, glottis friction, and vorticity either in the glottis-wall boundary layer or in the portion of the free jet shear layer within the glottis. These sources govern predominantly the high frequency content of the sound when the glottis is near closure. A detailed analysis performed for a canonical, cylindrical glottis of rectangular cross section indicates that glottis-interior boundary/shear layer vortex sources and the surface frictional source are of comparable importance; the influence of the potential flow source is about an order of magnitude smaller. © 2012 Acoustical Society of America
Calibration of International Space Station (ISS) Node 1 Vibro-Acoustic Model-Report 2
NASA Technical Reports Server (NTRS)
Zhang, Weiguo; Raveendra, Ravi
2014-01-01
Reported here is the capability of the Energy Finite Element Method (E-FEM) to predict the vibro-acoustic sound fields within the International Space Station (ISS) Node 1 and to compare the results with simulated leak sounds. A series of electronically generated structural ultrasonic noise sources were created in the pressure wall to emulate leak signals at different locations of the Node 1 STA module during its period of storage at Stennis Space Center (SSC). The exact sound source profiles created within the pressure wall at the source were unknown, but were estimated from the closest sensor measurement. The E-FEM method represents a reverberant sound field calculation, and of importance to this application is the requirement to correctly handle the direct field effect of the sound generation. It was also important to be able to compute the sound energy fields in the ultrasonic frequency range. This report demonstrates the capability of this technology as applied to this type of application.
Andreeva, I G; Vartanian, I A
2012-01-01
The ability to judge the direction of amplitude change in sound stimuli was studied in adults and in 11-12- and 15-16-year-old adolescents. The stimuli, sequences of 1-kHz tone fragments whose amplitude changed over time, served as models of approaching and receding sound sources. When judging the direction of amplitude change, the 11-12-year-olds made significantly more errors than the other two groups, including in repeated experiments. The structure of the errors, that is, the ratio of errors made on rising-amplitude versus falling-amplitude stimuli, also differed between adolescents and adults. The possible effect of nonspecific activation of the cerebral cortex in adolescents on decision-making about complex sound stimuli, including judgments of source approach and withdrawal, is discussed.
Interior and exterior sound field control using general two-dimensional first-order sources.
Poletti, M A; Abhayapala, T D
2011-01-01
Reproduction of a given sound field interior to a circular loudspeaker array without producing an undesirable exterior sound field is an unsolved problem over a broad band of frequencies. At low frequencies, by implementing the Kirchhoff-Helmholtz integral using a circular discrete array of line-source loudspeakers, a sound field can be recreated within the array and produce no exterior sound field, provided that the loudspeakers have variable first-order azimuthal polar responses which are a combination of a two-dimensional (2D) monopole and a radially oriented 2D dipole. This paper examines the performance of circular discrete arrays of line-source loudspeakers which also include a tangential dipole, providing general variable-directivity responses in azimuth. It is shown that at low frequencies, the tangential dipoles are not required, but that near and above the Nyquist frequency, the tangential dipoles can both improve the interior accuracy and reduce the exterior sound field. The additional dipoles extend the useful range of the array by around an octave.
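A compact way to write the variable azimuthal response described above is the first-order expansion below (notation assumed here, not taken from the paper), where a_0 weights the 2D monopole, a_r the radially oriented dipole, and a_t the tangential dipole; the low-frequency solution corresponds to a_t = 0.

```latex
S(\phi) \;=\; a_0 \;+\; a_r \cos\phi \;+\; a_t \sin\phi
```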
The silent base flow and the sound sources in a laminar jet.
Sinayoko, Samuel; Agarwal, Anurag
2012-03-01
An algorithm to compute the silent base flow sources of sound in a jet is introduced. The algorithm is based on spatiotemporal filtering of the flow field and is applicable to multifrequency sources. It is applied to an axisymmetric laminar jet and the resulting sources are validated successfully. The sources are compared to those obtained from two classical acoustic analogies, based on quiescent and time-averaged base flows. The comparison demonstrates how the silent base flow sources shed light on the sound generation process. It is shown that the dominant source mechanism in the axisymmetric laminar jet is "shear-noise," which is a linear mechanism. The algorithm presented here could be applied to fully turbulent flows to understand the aerodynamic noise-generation mechanism. © 2012 Acoustical Society of America
Aviation Noise Impacts: State of the Science
Basner, Mathias; Clark, Charlotte; Hansell, Anna; Hileman, James I.; Janssen, Sabine; Shepherd, Kevin; Sparrow, Victor
2017-01-01
Noise is defined as “unwanted sound.” Aircraft noise is one, if not the most detrimental environmental effect of aviation. It can cause community annoyance, disrupt sleep, adversely affect academic performance of children, and could increase the risk for cardiovascular disease of people living in the vicinity of airports. In some airports, noise constrains air traffic growth. This consensus paper was prepared by the Impacts of Science Group of the Committee for Aviation Environmental Protection of the International Civil Aviation Organization and summarizes the state of the science of noise effects research in the areas of noise measurement and prediction, community annoyance, children’s learning, sleep disturbance, and health. It also briefly discusses civilian supersonic aircraft as a future source of aviation noise. PMID:29192612
Sound Radiated by a Wave-Like Structure in a Compressible Jet
NASA Technical Reports Server (NTRS)
Golubev, V. V.; Prieto, A. F.; Mankbadi, R. R.; Dahl, M. D.; Hixon, R.
2003-01-01
This paper extends the analysis of acoustic radiation from the source model representing spatially-growing instability waves in a round jet at high speeds. Compared to previous work, a modified approach to the sound source modeling is examined that employs a set of solutions to linearized Euler equations. The sound radiation is then calculated using an integral surface method.
Nansai, Keisuke; Kagawa, Shigemi; Suh, Sangwon; Inaba, Rokuta; Moriguchi, Yuichi
2007-02-15
Today's material welfare has been achieved at the expense of consumption of finite resources and generation of environmental burdens. Over the past few decades the volume of global consumption has grown dramatically, while at the same time technological advances have enabled products with greater efficiencies. These two directions of change, consumption growth and technological advance, are the foci of the present paper. Using quantitative measures for these two factors, we define a new indicator, "eco-velocity of consumption", analogous to velocity in physics. The indicator not only identifies the environmental soundness of consumption growth and technological advance but also indicates whether and to what extent our society is shifting toward sustainable consumption. This study demonstrates the practicability of the indicator through a case study in which we calculate the eco-velocities of Japanese household consumption in 2 years: 1995 and 2000. The rate of technological advance during the periods concerned is quantified in terms of the embodied carbon dioxide emission per yen of product. The results show that the current growth rate of Japanese household consumption is greater than the rate of technological advance to mitigate carbon dioxide emissions. The eco-velocities at the level of individual commodity groups are also examined, and the sources of changes in eco-velocity for each commodity are identified using structural decomposition analysis.
Photoacoustic Effect Generated from an Expanding Spherical Source
NASA Astrophysics Data System (ADS)
Bai, Wenyu; Diebold, Gerald J.
2018-02-01
Although the photoacoustic effect is typically generated by amplitude-modulated continuous or pulsed radiation, the form of the wave equation for pressure that governs the generation of sound indicates that optical sources moving in an absorbing fluid can produce sound as well. Here, the characteristics of the acoustic wave produced by a radially symmetric Gaussian source expanding outwardly from the origin are found. The unique feature of the photoacoustic effect from the spherical source is a trailing compressive wave that arises from reflection of an inwardly propagating component of the wave. Similar to the one-dimensional geometry, an unbounded amplification effect is found for the Gaussian source expanding at the sound speed.
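The governing relation alluded to is the standard photoacoustic wave equation for pressure, reproduced below in a commonly used form (notation assumed): β is the thermal expansion coefficient, C_p the specific heat at constant pressure, c the sound speed, and H the heating function describing absorbed optical power per unit volume.

```latex
\left( \nabla^2 - \frac{1}{c^2}\,\frac{\partial^2}{\partial t^2} \right) p(\mathbf{r},t)
  \;=\; -\,\frac{\beta}{C_p}\,\frac{\partial H(\mathbf{r},t)}{\partial t}
```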
An Interactive Neural Network System for Acoustic Signal Classification
1990-02-28
of environmental sounds. These include machinery noise (Talamo, 1982), the sounds of metallic (Howard, 1983) and non-metallic impacts (Warren...backscattering of sound by spherical and elongated objects. JASA, 86, 1499-1510. Talamo, J. D. C. (1982). The perception of machinery indicator
NASA Astrophysics Data System (ADS)
Kozuka, Teruyuki; Yasui, Kyuichi; Tuziuti, Toru; Towata, Atsuya; Lee, Judy; Iida, Yasuo
2009-07-01
Using a standing-wave field generated between a sound source and a reflector, it is possible to trap small objects at nodes of the sound pressure distribution in air. In this study, a sound field generated under a flat or concave reflector was studied by both experimental measurement and numerical calculation. The calculated result agrees well with the experimental data. The maximum force generated between a sound source of 25.0 mm diameter and a concave reflector is 0.8 mN in the experiment. A steel ball of 2.0 mm in diameter was levitated in the sound field in air.
Sound field reproduction as an equivalent acoustical scattering problem.
Fazi, Filippo Maria; Nelson, Philip A
2013-11-01
Given a continuous distribution of acoustic sources, the determination of the source strength that ensures the synthesis of a desired sound field is shown to be identical to the solution of an equivalent acoustic scattering problem. The paper begins with the presentation of the general theory that underpins sound field reproduction with secondary sources continuously arranged on the boundary of the reproduction region. The process of reproduction by a continuous source distribution is modeled by means of an integral operator (the single layer potential). It is then shown how the solution of the sound reproduction problem corresponds to that of an equivalent scattering problem. Analytical solutions are computed for two specific instances of this problem, involving, respectively, the use of a secondary source distribution in spherical and planar geometries. The results are shown to be the same as those obtained with analyses based on High Order Ambisonics and Wave Field Synthesis, respectively, thus bringing to light a fundamental analogy between these two methods of sound reproduction. Finally, it is shown how the physical optics (Kirchhoff) approximation enables the derivation of a high-frequency simplification for the problem under consideration, this in turn being related to the secondary source selection criterion reported in the literature on Wave Field Synthesis.
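The single layer potential referred to above can be written in the following standard form (sign and time conventions assumed), where μ is the secondary source strength density on the boundary ∂Ω of the reproduction region and G is the free-space Green's function:

```latex
p(\mathbf{x}) \;=\; \int_{\partial\Omega} \mu(\mathbf{y})\, G(\mathbf{x},\mathbf{y})\, \mathrm{d}S(\mathbf{y}),
\qquad
G(\mathbf{x},\mathbf{y}) \;=\; \frac{e^{-\mathrm{i}k\,|\mathbf{x}-\mathbf{y}|}}{4\pi\,|\mathbf{x}-\mathbf{y}|}
```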
Investigation of spherical loudspeaker arrays for local active control of sound.
Peleg, Tomer; Rafaely, Boaz
2011-10-01
Active control of sound can be employed globally to reduce noise levels in an entire enclosure, or locally around a listener's head. Recently, spherical loudspeaker arrays have been studied as multiple-channel sources for local active control of sound, presenting the fundamental theory and several active control configurations. In this paper, important aspects of using a spherical loudspeaker array for local active control of sound are further investigated. First, the feasibility of creating sphere-shaped quiet zones away from the source is studied both theoretically and numerically, showing that these quiet zones are associated with sound amplification and poor system robustness. To mitigate the latter, the design of shell-shaped quiet zones around the source is investigated. A combination of two spherical sources is then studied with the aim of enlarging the quiet zone. The two sources are employed to generate quiet zones that surround a rigid sphere, investigating the application of active control around a listener's head. A significant improvement in performance is demonstrated in this case over a conventional headrest-type system that uses two monopole secondary sources. Finally, several simulations are presented to support the theoretical work and to demonstrate the performance and limitations of the system. © 2011 Acoustical Society of America
Efficient techniques for wave-based sound propagation in interactive applications
NASA Astrophysics Data System (ADS)
Mehra, Ravish
Sound propagation techniques model the effect of the environment on sound waves and predict their behavior from point of emission at the source to the final point of arrival at the listener. Sound is a pressure wave produced by mechanical vibration of a surface that propagates through a medium such as air or water, and the problem of sound propagation can be formulated mathematically as a second-order partial differential equation called the wave equation. Accurate techniques based on solving the wave equation, also called the wave-based techniques, are too expensive computationally and memory-wise. Therefore, these techniques face many challenges in terms of their applicability in interactive applications including sound propagation in large environments, time-varying source and listener directivity, and high simulation cost for mid-frequencies. In this dissertation, we propose a set of efficient wave-based sound propagation techniques that solve these three challenges and enable the use of wave-based sound propagation in interactive applications. Firstly, we propose a novel equivalent source technique for interactive wave-based sound propagation in large scenes spanning hundreds of meters. It is based on the equivalent source theory used for solving radiation and scattering problems in acoustics and electromagnetics. Instead of using a volumetric or surface-based approach, this technique takes an object-centric approach to sound propagation. The proposed equivalent source technique generates realistic acoustic effects and takes orders of magnitude less runtime memory compared to prior wave-based techniques. Secondly, we present an efficient framework for handling time-varying source and listener directivity for interactive wave-based sound propagation. The source directivity is represented as a linear combination of elementary spherical harmonic sources. This spherical harmonic-based representation of source directivity can support analytical, data-driven, rotating or time-varying directivity function at runtime. Unlike previous approaches, the listener directivity approach can be used to compute spatial audio (3D audio) for a moving, rotating listener at interactive rates. Lastly, we propose an efficient GPU-based time-domain solver for the wave equation that enables wave simulation up to the mid-frequency range in tens of minutes on a desktop computer. It is demonstrated that by carefully mapping all the components of the wave simulator to match the parallel processing capabilities of the graphics processors, significant improvement in performance can be achieved compared to the CPU-based simulators, while maintaining numerical accuracy. We validate these techniques with offline numerical simulations and measured data recorded in an outdoor scene. We present results of preliminary user evaluations conducted to study the impact of these techniques on user's immersion in virtual environment. We have integrated these techniques with the Half-Life 2 game engine, Oculus Rift head-mounted display, and Xbox game controller to enable users to experience high-quality acoustics effects and spatial audio in the virtual environment.
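As a toy illustration of the wave-based (time-domain) approach the dissertation builds on, the sketch below steps a 2-D scalar wave equation on a small grid; the grid size, boundaries (periodic for brevity), and source pulse are assumptions, and a real solver would add absorbing boundaries and GPU parallelism.

```python
# Tiny 2-D finite-difference time-domain (FDTD) solver for the scalar wave
# equation, the kind of time-domain scheme accelerated on GPUs in the dissertation.
import numpy as np

c, dx = 343.0, 0.05                    # sound speed (m/s), grid spacing (m)
dt = 0.5 * dx / (c * np.sqrt(2.0))     # time step satisfying the 2-D CFL condition
nx, ny, nt = 200, 200, 400

p_prev = np.zeros((nx, ny))
p_curr = np.zeros((nx, ny))
coeff = (c * dt / dx) ** 2

for n in range(nt):
    # Five-point Laplacian; np.roll gives periodic boundaries (for brevity only).
    lap = (np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0) +
           np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1) - 4.0 * p_curr)
    p_next = 2.0 * p_curr - p_prev + coeff * lap
    # Gaussian-pulse point source at the grid centre (assumed excitation).
    p_next[nx // 2, ny // 2] += np.exp(-((n - 40) / 10.0) ** 2)
    p_prev, p_curr = p_curr, p_next

print("peak |p| on final step:", np.abs(p_curr).max())
```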
Echolocation versus echo suppression in humans
Wallmeier, Ludwig; Geßele, Nikodemus; Wiegrebe, Lutz
2013-01-01
Several studies have shown that blind humans can gather spatial information through echolocation. However, when localizing sound sources, the precedence effect suppresses spatial information of echoes, and thereby conflicts with effective echolocation. This study investigates the interaction of echolocation and echo suppression in terms of discrimination suppression in virtual acoustic space. In the ‘Listening’ experiment, sighted subjects discriminated between positions of a single sound source, the leading or the lagging of two sources, respectively. In the ‘Echolocation’ experiment, the sources were replaced by reflectors. Here, the same subjects evaluated echoes generated in real time from self-produced vocalizations and thereby discriminated between positions of a single reflector, the leading or the lagging of two reflectors, respectively. Two key results were observed. First, sighted subjects can learn to discriminate positions of reflective surfaces echo-acoustically with accuracy comparable to sound source discrimination. Second, in the Listening experiment, the presence of the leading source affected discrimination of lagging sources much more than vice versa. In the Echolocation experiment, however, the presence of both the lead and the lag strongly affected discrimination. These data show that the classically described asymmetry in the perception of leading and lagging sounds is strongly diminished in an echolocation task. Additional control experiments showed that the effect is owing to both the direct sound of the vocalization that precedes the echoes and owing to the fact that the subjects actively vocalize in the echolocation task. PMID:23986105
Two dimensional sound field reproduction using higher order sources to exploit room reflections.
Betlehem, Terence; Poletti, Mark A
2014-04-01
In this paper, sound field reproduction is performed in a reverberant room using higher order sources (HOSs) and a calibrating microphone array. Previously a sound field was reproduced with fixed directivity sources and the reverberation compensated for using digital filters. However by virtue of their directive properties, HOSs may be driven to not only avoid the creation of excess reverberation but also to use room reflection to contribute constructively to the desired sound field. The manner by which the loudspeakers steer the sound around the room is determined by measuring the acoustic transfer functions. The requirements on the number and order N of HOSs for accurate reproduction in a reverberant room are derived, showing a 2N + 1-fold decrease in the number of loudspeakers in comparison to using monopole sources. HOSs are shown applicable to rooms with a rich variety of wall reflections while in an anechoic room their advantages may be lost. Performance is investigated in a room using extensions of both the diffuse field model and a more rigorous image-source simulation method, which account for the properties of the HOSs. The robustness of the proposed method is validated by introducing measurement errors.
Framing sound: Using expectations to reduce environmental noise annoyance.
Crichton, Fiona; Dodd, George; Schmid, Gian; Petrie, Keith J
2015-10-01
Annoyance reactions to environmental noise, such as wind turbine sound, have public health implications given associations between annoyance and symptoms related to psychological distress. In the case of wind farms, factors contributing to noise annoyance have been theorised to include wind turbine sound characteristics, the noise sensitivity of residents, and contextual aspects, such as receiving information creating negative expectations about sound exposure. The experimental aim was to assess whether receiving positive or negative expectations about wind farm sound would differentially influence annoyance reactions during exposure to wind farm sound, and also influence associations between perceived noise sensitivity and noise annoyance. Sixty volunteers were randomly assigned to receive either negative or positive expectations about wind farm sound. Participants in the negative expectation group viewed a presentation which incorporated internet material indicating that exposure to wind turbine sound, particularly infrasound, might present a health risk. Positive expectation participants viewed a DVD which framed wind farm sound positively and included internet information about the health benefits of infrasound exposure. Participants were then simultaneously exposed to sub-audible infrasound and audible wind farm sound during two 7 min exposure sessions, during which they assessed their experience of annoyance. Positive expectation participants were significantly less annoyed than negative expectation participants, while noise sensitivity only predicted annoyance in the negative group. Findings suggest accessing negative information about sound is likely to trigger annoyance, particularly in noise sensitive people and, importantly, portraying sound positively may reduce annoyance reactions, even in noise sensitive individuals. Copyright © 2015 Elsevier Inc. All rights reserved.
77 FR 35852 - Safety Zones; Multiple Firework Displays in Captain of the Port, Puget Sound Zone
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-15
... 13045, Protection of Children from Environmental Health Risks and Safety Risks. This rule is not an economically significant rule and does not create an environmental risk to health or risk to safety that may... 1625-AA00 Safety Zones; Multiple Firework Displays in Captain of the Port, Puget Sound Zone AGENCY...
1990-06-01
of transceivers used and the characteristics of the sound channel. In the assessment we use the General Digital Environmental Model (GDEM), a climatological data base, to simulate an ocean area 550 x 550...
Environmentally Sound Small-Scale Livestock Projects. Guidelines for Planning Series Number 5.
ERIC Educational Resources Information Center
Jacobs, Linda
This document was developed in response to the need for simplified technical information for planning environmentally sound small-scale projects in third world countries. It is aimed specifically at those who are planning or managing small-scale livestock projects in less-developed areas of the tropics and sub-tropics. The guidelines included in…
Seismic and Biological Sources of Ambient Ocean Sound
NASA Astrophysics Data System (ADS)
Freeman, Simon Eric
Sound is the most efficient form of radiation in the ocean. Sounds of seismic and biological origin contain information regarding the underlying processes that created them. A single hydrophone records summary time-frequency information from the volume within acoustic range. Beamforming using a hydrophone array additionally produces azimuthal estimates of sound sources. A two-dimensional array and acoustic focusing produce an unambiguous two-dimensional 'image' of sources. This dissertation describes the application of these techniques in three cases. The first utilizes hydrophone arrays to investigate T-phases (water-borne seismic waves) in the Philippine Sea. Ninety T-phases were recorded over a 12-day period, implying that a greater number of seismic events occur than are detected by terrestrial seismic monitoring in the region. Observation of an azimuthally migrating T-phase suggests that reverberation of such sounds from bathymetric features can occur over megameter scales. In the second case, single hydrophone recordings from coral reefs in the Line Islands archipelago reveal that local ambient reef sound is spectrally similar to sounds produced by small, hard-shelled benthic invertebrates in captivity. Time-lapse photography of the reef reveals an increase in benthic invertebrate activity at sundown, consistent with an increase in sound level. The dominant acoustic phenomenon on these reefs may thus originate from the interaction between a large number of small invertebrates and the substrate. Such sounds could be used to take a census of hard-shelled benthic invertebrates that are otherwise extremely difficult to survey. In the third case, a two-dimensional 'map' of sound production over a coral reef in the Hawaiian Islands was obtained using a two-dimensional hydrophone array. Heterogeneously distributed bio-acoustic sources were generally co-located with rocky reef areas. Acoustically dominant snapping shrimp were largely restricted to one location within the area surveyed. This distribution of sources could reveal small-scale spatial ecological limitations, such as the availability of food and shelter. While array-based passive acoustic sensing is well established in seismoacoustics, the technique is little utilized in the study of ambient biological sound. With the continuance of Moore's law and advances in battery and memory technology, inferring biological processes from ambient sound may become a more accessible tool in underwater ecological evaluation and monitoring.
Calculating far-field radiated sound pressure levels from NASTRAN output
NASA Technical Reports Server (NTRS)
Lipman, R. R.
1986-01-01
FAFRAP is a computer program which calculates far field radiated sound pressure levels from quantities computed by a NASTRAN direct frequency response analysis of an arbitrarily shaped structure. Fluid loading on the structure can be computed directly by NASTRAN or an added-mass approximation to fluid loading on the structure can be used. Output from FAFRAP includes tables of radiated sound pressure levels and several types of graphic output. FAFRAP results for monopole and dipole sources compare closely with an explicit calculation of the radiated sound pressure level for those sources.
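As a point of reference for the monopole comparison mentioned above, the far-field pressure of an ideal point monopole can be computed directly. The sketch below is an illustrative textbook calculation, not FAFRAP's actual implementation; the water properties, volume-velocity amplitude, and reference pressure are assumed example values.

```python
import math

def monopole_farfield_spl(q_amp, freq_hz, r, rho=1026.0, c=1500.0, p_ref=1e-6):
    """Far-field SPL (dB re 1 uPa) of a point monopole of volume-velocity
    amplitude q_amp (m^3/s) at range r (m): |p| = rho*c*k*q / (4*pi*r)."""
    k = 2.0 * math.pi * freq_hz / c
    p_rms = rho * c * k * q_amp / (4.0 * math.pi * r) / math.sqrt(2.0)
    return 20.0 * math.log10(p_rms / p_ref)

# Illustrative numbers only: a 100 Hz monopole observed 10 m away in water.
print(round(monopole_farfield_spl(q_amp=1e-4, freq_hz=100.0, r=10.0), 1))
```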
Smith, Rosanna C G; Price, Stephen R
2014-01-01
Sound source localization is critical to animal survival and for identification of auditory objects. We investigated the acuity with which humans localize low frequency, pure tone sounds using timing differences between the ears. These small differences in time, known as interaural time differences or ITDs, are identified in a manner that allows localization acuity of around 1° at the midline. Acuity, a relative measure of localization ability, displays a non-linear variation as sound sources are positioned more laterally. All species studied localize sounds best at the midline and progressively worse as the sound is located out towards the side. To understand why sound localization displays this variation with azimuthal angle, we took a first-principles, systemic, analytical approach to model localization acuity. We calculated how ITDs vary with sound frequency, head size and sound source location for humans. This allowed us to model ITD variation for previously published experimental acuity data and determine the distribution of just-noticeable differences in ITD. Our results suggest that the best-fit model is one whereby just-noticeable differences in ITDs are identified with uniform or close to uniform sensitivity across the physiological range. We discuss how our results have several implications for neural ITD processing in different species as well as development of the auditory system.
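One common way to see why ITD cues change most rapidly near the midline is the spherical-head (Woodworth-style) approximation. The sketch below is a simplified illustration under that assumption, not the authors' model; the head radius and azimuths are nominal example values.

```python
import math

def itd_spherical_head(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth-style spherical-head approximation of the interaural time
    difference for a distant source: ITD = (a / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# The ITD change per degree of azimuth is largest at the midline and shrinks
# toward the side, mirroring the decline in localization acuity with laterality.
for az in (0, 30, 60, 89):
    step_us = (itd_spherical_head(az + 1) - itd_spherical_head(az)) * 1e6
    print(az, round(itd_spherical_head(az) * 1e6, 1), round(step_us, 2))
```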
The Incongruency Advantage for Environmental Sounds Presented in Natural Auditory Scenes
Gygi, Brian; Shafiro, Valeriy
2011-01-01
The effect of context on the identification of common environmental sounds (e.g., dogs barking or cars honking) was tested by embedding them in familiar auditory background scenes (street ambience, restaurants). Initial results with subjects trained on both the scenes and the sounds to be identified showed a significant advantage of about 5 percentage points better accuracy for sounds that were contextually incongruous with the background scene (e.g., a rooster crowing in a hospital). Further studies with naïve (untrained) listeners showed that this Incongruency Advantage (IA) is level-dependent: there is no advantage for incongruent sounds lower than a Sound/Scene ratio (So/Sc) of −7.5 dB, but there is about 5 percentage points better accuracy for sounds with greater So/Sc. Testing a new group of trained listeners on a larger corpus of sounds and scenes showed that the effect is robust and not confined to a specific stimulus set. Modeling using spectral-temporal measures showed that neither analyses based on acoustic features nor semantic assessments of sound-scene congruency can account for this difference, indicating that the Incongruency Advantage is a complex effect, possibly arising from the sensitivity of the auditory system to new and unexpected events under particular listening conditions. PMID:21355664
Graphene-on-paper sound source devices.
Tian, He; Ren, Tian-Ling; Xie, Dan; Wang, Yu-Feng; Zhou, Chang-Jian; Feng, Ting-Ting; Fu, Di; Yang, Yi; Peng, Ping-Gang; Wang, Li-Gang; Liu, Li-Tian
2011-06-28
We demonstrate an interesting phenomenon: graphene can emit sound. This expands the potential applications of graphene into the acoustic field. Graphene-on-paper sound source devices are made by patterning graphene on paper substrates. Three graphene sheet samples with thicknesses of 100, 60, and 20 nm were fabricated. Sound emission from graphene is measured as a function of power, distance, angle, and frequency in the far field. A theoretical model of the air/graphene/paper/PCB-board multilayer structure is established to analyze the sound directivity, frequency response, and efficiency. Measured sound pressure level (SPL) and efficiency are in good agreement with theoretical results. It is found that graphene has a significantly flat frequency response over the wide ultrasound range of 20-50 kHz. In addition, the thinner graphene sheets can produce higher SPL due to their lower heat capacity per unit area (HCPUA). The infrared thermal images reveal that a thermoacoustic effect is the working principle. We find that the sound performance mainly depends on the HCPUA of the conductor and the thermal properties of the substrate. The paper-based graphene sound source devices are highly reliable and flexible, involve no mechanical vibration, and offer a simple structure and high performance. They could open up wide applications in multimedia, consumer electronics, biological, medical, and many other areas.
Wave Field Synthesis of moving sources with arbitrary trajectory and velocity profile.
Firtha, Gergely; Fiala, Péter
2017-08-01
The sound field synthesis of moving sound sources is of great importance when dynamic virtual sound scenes are to be reconstructed. Previous solutions considered only virtual sources moving uniformly along a straight trajectory, synthesized employing a linear loudspeaker array. This article presents the synthesis of point sources following an arbitrary trajectory. Under high-frequency assumptions 2.5D Wave Field Synthesis driving functions are derived for arbitrary shaped secondary source contours by adapting the stationary phase approximation to the dynamic description of sources in motion. It is explained how a referencing function should be chosen in order to optimize the amplitude of synthesis on an arbitrary receiver curve. Finally, a finite difference implementation scheme is considered, making the presented approach suitable for real-time applications.
Grieco-Calub, Tina M.; Litovsky, Ruth Y.
2010-01-01
Objectives To measure sound source localization in children who have sequential bilateral cochlear implants (BICIs); to determine if localization accuracy correlates with performance on a right-left discrimination task (i.e., spatial acuity); to determine if there is a measurable bilateral benefit on a sound source identification task (i.e., localization accuracy) by comparing performance under bilateral and unilateral listening conditions; to determine if sound source localization continues to improve with longer durations of bilateral experience. Design Two groups of children participated in this study: a group of 21 children who received BICIs in sequential procedures (5–14 years old) and a group of 7 typically-developing children with normal acoustic hearing (5 years old). Testing was conducted in a large sound-treated booth with loudspeakers positioned on a horizontal arc with a radius of 1.2 m. Children participated in two experiments that assessed spatial hearing skills. Spatial hearing acuity was assessed with a discrimination task in which listeners determined if a sound source was presented on the right or left side of center; the smallest angle at which performance on this task was reliably above chance is the minimum audible angle. Sound localization accuracy was assessed with a sound source identification task in which children identified the perceived position of the sound source from a multi-loudspeaker array (7 or 15); errors are quantified using the root-mean-square (RMS) error. Results Sound localization accuracy was highly variable among the children with BICIs, with RMS errors ranging from 19°–56°. Performance of the NH group, with RMS errors ranging from 9°–29° was significantly better. Within the BICI group, in 11/21 children RMS errors were smaller in the bilateral vs. unilateral listening condition, indicating bilateral benefit. There was a significant correlation between spatial acuity and sound localization accuracy (R2=0.68, p<0.01), suggesting that children who achieve small RMS errors tend to have the smallest MAAs. Although there was large intersubject variability, testing of 11 children in the BICI group at two sequential visits revealed a subset of children who show improvement in spatial hearing skills over time. Conclusions A subset of children who use sequential BICIs can acquire sound localization abilities, even after long intervals between activation of hearing in the first- and second-implanted ears. This suggests that children with activation of the second implant later in life may be capable of developing spatial hearing abilities. The large variability in performance among the children with BICIs suggests that maturation of sound localization abilities in children with BICIs may be dependent on various individual subject factors such as age of implantation and chronological age. PMID:20592615
NASA Technical Reports Server (NTRS)
Rentz, P. E.
1976-01-01
Experimental evaluations of the acoustical characteristics and source sound power and directionality measurement capabilities of the NASA Lewis 9 x 15 foot low speed wind tunnel in the untreated or hardwall configuration were performed. The results indicate that source sound power estimates can be made using only settling chamber sound pressure measurements. The accuracy of these estimates, expressed as one standard deviation, can be improved from ±4 dB to ±1 dB if sound pressure measurements in the preparation room and diffuser are also used and source directivity information is utilized. A simple procedure is presented. Acceptably accurate measurements of source direct field acoustic radiation were found to be limited by the test section reverberant characteristics to 3.0 feet for omni-directional and highly directional sources. Wind-on noise measurements in the test section, settling chamber and preparation room were found to depend on the sixth power of tunnel velocity. The levels were compared with various analytic models. Results are presented and discussed.
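The sixth-power velocity dependence reported above implies a simple scaling: a change from velocity V1 to V2 shifts the wind-on noise level by roughly 60 log10(V2/V1) dB. The snippet below is just a quick check of that rule of thumb; the velocities are arbitrary example values.

```python
import math

def wind_noise_delta_db(v1, v2, exponent=6):
    """Level change (dB) for a tunnel-velocity change v1 -> v2, assuming the
    reported power-law dependence: level ~ 10 * exponent * log10(V)."""
    return 10.0 * exponent * math.log10(v2 / v1)

# Doubling the tunnel speed under a sixth-power law raises the level by ~18 dB.
print(round(wind_noise_delta_db(30.0, 60.0), 1))
```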
Reduced order modeling of head related transfer functions for virtual acoustic displays
NASA Astrophysics Data System (ADS)
Willhite, Joel A.; Frampton, Kenneth D.; Grantham, D. Wesley
2003-04-01
The purpose of this work is to improve the computational efficiency of virtual acoustic applications by creating and testing reduced order models of the head related transfer functions used in localizing sound sources. State space models of varying order were generated from zero-elevation Head Related Impulse Responses (HRIRs) using Kung's singular value decomposition (SVD) technique. The inputs to the models are the desired azimuths of the virtual sound sources (from -90 deg to +90 deg, in 10 deg increments) and the outputs are the left and right ear impulse responses. Trials were conducted in an anechoic chamber in which subjects were exposed to real sounds that were emitted by individual speakers across a numbered speaker array, phantom sources generated from the original HRIRs, and phantom sound sources generated with the different reduced order state space models. The error in the perceived direction of the phantom sources generated from the reduced order models was compared to errors in localization using the original HRIRs.
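For readers unfamiliar with the SVD-based realization step, the following is a minimal single-input, single-output sketch of Kung's method applied to one impulse response: build a Hankel matrix from the impulse-response samples, take an SVD, truncate to the desired order, and recover the state-space matrices. It is only an illustration of the general technique under the stated assumptions (one ear, one azimuth, no multi-azimuth input structure), not the authors' implementation.

```python
import numpy as np
from scipy.linalg import hankel, svd

def kung_realization(h, order):
    """Kung's SVD realization of a SISO impulse response h[0..N].
    Returns (A, B, C, D) with y[0] = D and y[k] ~ C @ A^(k-1) @ B for k >= 1."""
    D = float(h[0])
    markov = np.asarray(h[1:], dtype=float)          # Markov parameters h[1:]
    n = len(markov) // 2
    H = hankel(markov[:n], markov[n - 1:2 * n - 1])  # n x n Hankel matrix
    U, s, Vt = svd(H)
    sqrt_s = np.sqrt(s[:order])
    obs = U[:, :order] * sqrt_s                      # observability factor
    ctr = (Vt[:order, :].T * sqrt_s).T               # controllability factor
    A = np.linalg.pinv(obs[:-1, :]) @ obs[1:, :]     # shift-invariance of obs
    B = ctr[:, :1]
    C = obs[:1, :]
    return A, B, C, D

def impulse_response(A, B, C, D, n_samples):
    """Impulse response of the reduced model, for comparison with the HRIR."""
    y, x = [D], B
    for _ in range(n_samples - 1):
        y.append(float(C @ x))
        x = A @ x
    return np.array(y)
```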
NASA Astrophysics Data System (ADS)
Wang, Xun; Quost, Benjamin; Chazot, Jean-Daniel; Antoni, Jérôme
2016-01-01
This paper considers the problem of identifying multiple sound sources from acoustical measurements obtained by an array of microphones. The problem is solved via maximum likelihood. In particular, an expectation-maximization (EM) approach is used to estimate the sound source locations and strengths, the pressure measured by a microphone being interpreted as a mixture of latent signals emitted by the sources. This work also considers two kinds of uncertainties pervading the sound propagation and measurement process: uncertain microphone locations and uncertain wavenumber. These uncertainties are transposed to the data in the belief functions framework. Then, the source locations and strengths can be estimated using a variant of the EM algorithm, known as the Evidential EM (E2M) algorithm. Finally, both simulated and real experiments are presented to illustrate the advantage of using EM in the case without uncertainty and E2M in the case of uncertain measurements.
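To make the forward model concrete, the sketch below illustrates the simplest uncertainty-free special case: a single monochromatic source whose complex strength and location are found by least squares over a grid of candidate positions, using a free-field Green's function. This only illustrates the likelihood model that the EM and E2M algorithms build on, not the algorithms themselves; all names and values are assumptions.

```python
import numpy as np

def greens_freefield(src, mics, k):
    """Free-field Green's function exp(-j*k*r) / (4*pi*r) from one candidate
    source position to every microphone position (mics: shape (M, 3))."""
    r = np.linalg.norm(mics - src, axis=1)
    return np.exp(-1j * k * r) / (4.0 * np.pi * r)

def grid_localize(pressures, mics, grid, k):
    """Single-source maximum likelihood under white Gaussian noise: pick the
    grid point whose best-fitting strength q minimizes ||p - q * g||."""
    best = None
    for src in grid:
        g = greens_freefield(src, mics, k)
        q = np.vdot(g, pressures) / np.vdot(g, g)    # least-squares strength
        resid = np.linalg.norm(pressures - q * g)
        if best is None or resid < best[0]:
            best = (resid, src, q)
    return best[1], best[2]
```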
WODA Technical Guidance on Underwater Sound from Dredging.
Thomsen, Frank; Borsani, Fabrizio; Clarke, Douglas; de Jong, Christ; de Wit, Pim; Goethals, Fredrik; Holtkamp, Martine; Martin, Elena San; Spadaro, Philip; van Raalte, Gerard; Victor, George Yesu Vedha; Jensen, Anders
2016-01-01
The World Organization of Dredging Associations (WODA) has identified underwater sound as an environmental issue that needs further consideration. A WODA Expert Group on Underwater Sound (WEGUS) prepared a guidance paper in 2013 on dredging sound, including a summary of potential impacts on aquatic biota and advice on underwater sound monitoring procedures. The paper follows a risk-based approach and provides guidance for standardization of acoustic terminology and methods for data collection and analysis. Furthermore, the literature on dredging-related sounds and the effects of dredging sounds on marine life is surveyed and guidance on the management of dredging-related sound risks is provided.
Auditory Localization: An Annotated Bibliography
1983-11-01
transverse plane, natural sound localization in both horizontal and vertical planes can be performed with nearly the same accuracy as real sound sources...important for unscrambling the competing sounds which so often occur in natural environments. A workable sound sensor has been constructed and empirical
78 FR 40196 - National Environmental Policy Act; Sounding Rockets Program; Poker Flat Research Range
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-03
...; Sounding Rockets Program; Poker Flat Research Range AGENCY: National Aeronautics and Space Administration... Sounding Rockets Program (SRP) at Poker Flat Research Range (PFRR), Alaska. SUMMARY: Pursuant to the... government agencies, and educational institutions have conducted suborbital rocket launches from the PFRR...
Detection of Sound Image Movement During Horizontal Head Rotation
Ohba, Kagesho; Iwaya, Yukio; Suzuki, Yôiti
2016-01-01
Movement detection for a virtual sound source was measured during the listener’s horizontal head rotation. Listeners were instructed to rotate their heads at a given speed. A trial consisted of two intervals. During an interval, a virtual sound source was presented 60° to the right or left of the listener, who was instructed to rotate the head to face the sound image position. Then, in one of the pair of intervals, the sound position was moved slightly in the middle of the rotation. Listeners were asked to judge which interval of a trial contained the moving sound stimulus. Results suggest that detection thresholds are higher when listeners rotate their heads. Moreover, this effect was found to be independent of the rotation velocity. PMID:27698993
Keil, Richard; Salemme, Keri; Forrest, Brittany; Neibauer, Jaqui; Logsdon, Miles
2011-11-01
Organic compounds were evaluated in March 2010 at 22 stations in Barkley Sound, Vancouver Island, Canada, and at 66 locations in Puget Sound. Of 37 compounds, 15 were xenobiotics, 8 were determined to have an anthropogenic imprint over natural sources, and 13 were presumed to be of natural or mixed origin. The three most frequently detected compounds were salicylic acid, vanillin and thymol. The three most abundant compounds were diethylhexyl phthalate (DEHP), ethyl vanillin and benzaldehyde (∼600 ng L(-1) on average). Concentrations of xenobiotics were 10-100 times higher in Puget Sound relative to Barkley Sound. Three compound couplets are used to illustrate the influence of human activity on marine waters: vanillin and ethyl vanillin, salicylic acid and acetylsalicylic acid, and cinnamaldehyde and cinnamic acid. Ratios indicate that anthropogenic activities are the predominant source of these chemicals in Puget Sound. Published by Elsevier Ltd.
A SOUND SOURCE LOCALIZATION TECHNIQUE TO SUPPORT SEARCH AND RESCUE IN LOUD NOISE ENVIRONMENTS
NASA Astrophysics Data System (ADS)
Yoshinaga, Hiroshi; Mizutani, Koichi; Wakatsuki, Naoto
At some sites of earthquakes and other disasters, rescuers search for people buried under rubble by listening for the sounds they make. Thus, developing a technique to localize sound sources amidst loud noise will support such search and rescue operations. In this paper, we discuss an experiment performed to test an array signal processing technique that searches for imperceptible sounds in loud noise environments. Two speakers simultaneously played generator noise and a voice attenuated by 20 dB (= 1/100 of the power) relative to the generator noise in an outdoor space where cicadas were making noise. The sound signal was received by a horizontally set linear microphone array 1.05 m in length and consisting of 15 microphones. The direction and distance of the voice were computed, and the voice was extracted and played back as an audible sound by array signal processing.
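The core of such array processing can be illustrated with a basic far-field delay-and-sum beamformer: delay each microphone channel according to a candidate arrival direction, sum, and scan for the direction that maximizes output power. The sketch below is a minimal illustration under far-field and integer-sample-delay assumptions; the paper's near-field processing additionally estimates source distance, and the array geometry here is an assumption.

```python
import numpy as np

def delay_and_sum(signals, mic_x, angle_deg, fs, c=343.0):
    """Far-field delay-and-sum beamformer for a linear array on the x axis.
    signals: (n_mics, n_samples); mic_x: microphone x positions in metres."""
    delays = np.asarray(mic_x) * np.sin(np.deg2rad(angle_deg)) / c
    delays -= delays.min()                        # make all shifts non-negative
    n = signals.shape[1]
    out = np.zeros(n)
    for sig, d in zip(signals, delays):
        shift = int(round(d * fs))                # crude integer-sample delay
        out[shift:] += sig[:n - shift]
    return out / len(signals)

def steer_scan(signals, mic_x, fs, angles=range(-90, 91, 2)):
    """Return the look direction (degrees) with the largest output power."""
    powers = [np.mean(delay_and_sum(signals, mic_x, a, fs) ** 2) for a in angles]
    return list(angles)[int(np.argmax(powers))]
```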
Mapping the sound field of an erupting submarine volcano using an acoustic glider.
Matsumoto, Haru; Haxel, Joseph H; Dziak, Robert P; Bohnenstiehl, Delwayne R; Embley, Robert W
2011-03-01
An underwater glider with an acoustic data logger flew toward a recently discovered erupting submarine volcano in the northern Lau basin. With the volcano providing a wide-band sound source, recordings from the two-day survey produced a two-dimensional sound level map spanning 1 km (depth) × 40 km (distance). The observed sound field shows depth- and range-dependence, with the first-order spatial pattern being consistent with the predictions of a range-dependent propagation model. The results make it possible to constrain the acoustic source level of the volcanic activity and suggest that the glider provides an effective platform for monitoring natural and anthropogenic ocean sounds. © 2011 Acoustical Society of America
NASA Astrophysics Data System (ADS)
Sridhara, Basavapatna Sitaramaiah
In an internal combustion engine, the engine is the noise source and the exhaust pipe is the main transmitter of noise. Mufflers are often used to reduce engine noise level in the exhaust pipe. To optimize a muffler design, a series of experiments could be conducted using various mufflers installed in the exhaust pipe. For each configuration, the radiated sound pressure could be measured. However, this is not a very efficient method. A second approach would be to develop a scheme involving only a few measurements which can predict the radiated sound pressure at a specified distance from the open end of the exhaust pipe. In this work, the engine exhaust system was modelled as a lumped source-muffler-termination system. An expression for the predicted sound pressure level was derived in terms of the source and termination impedances, and the muffler geometry. The pressure source and monopole radiation models were used for the source and the open end of the exhaust pipe. The four pole parameters were used to relate the acoustic properties at two different cross sections of the muffler and the pipe. The developed formulation was verified through a series of experiments. Two loudspeakers and a reciprocating type vacuum pump were used as sound sources during the tests. The source impedance was measured using the direct, two-load and four-load methods. A simple expansion chamber and a side-branch resonator were used as mufflers. Sound pressure level measurements for the prediction scheme were made for several source-muffler and source-straight pipe combinations. The predicted and measured sound pressure levels were compared for all cases considered. In all cases, correlation of the experimental results and those predicted by the developed expressions was good. Predicted and measured values of the insertion loss of the mufflers were compared. The agreement between the two was good. Also, an error analysis of the four-load method was done.
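As a concrete illustration of the four-pole formulation mentioned above, the sketch below computes the transmission loss of a simple expansion chamber from its transfer matrix, using the standard result TL = 20 log10(0.5 |A + B/Z0 + C*Z0 + D|) for identical inlet and outlet pipes. It is a generic textbook calculation under assumed dimensions, not the dissertation's full source-muffler-termination prediction, which also requires the measured source and termination impedances.

```python
import numpy as np

def pipe_fourpole(k, length, area, rho=1.2, c=343.0):
    """Four-pole (transfer) matrix of a uniform pipe relating pressure and
    volume velocity at its two cross sections."""
    z = rho * c / area
    kl = k * length
    return np.array([[np.cos(kl), 1j * z * np.sin(kl)],
                     [1j * np.sin(kl) / z, np.cos(kl)]])

def transmission_loss(T, inlet_area, rho=1.2, c=343.0):
    """Transmission loss of an element with identical inlet and outlet pipes."""
    (A, B), (C, D) = T
    z0 = rho * c / inlet_area
    return 20.0 * np.log10(0.5 * abs(A + B / z0 + C * z0 + D))

# Assumed geometry: 0.3 m long chamber, area ratio 9 relative to a 20 cm^2 pipe.
pipe_area, chamber_area, length = 0.002, 0.018, 0.3
for f in (125.0, 250.0, 500.0):
    k = 2.0 * np.pi * f / 343.0
    print(f, round(transmission_loss(pipe_fourpole(k, length, chamber_area),
                                     pipe_area), 1))
```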
Ambient Sound-Based Collaborative Localization of Indeterministic Devices
Kamminga, Jacob; Le, Duc; Havinga, Paul
2016-01-01
Localization is essential in wireless sensor networks. To our knowledge, no prior work has utilized low-cost devices for collaborative localization based on only ambient sound, without the support of local infrastructure. The reason may be the fact that most low-cost devices are indeterministic and suffer from uncertain input latencies. This uncertainty makes accurate localization challenging. Therefore, we present a collaborative localization algorithm (Cooperative Localization on Android with ambient Sound Sources (CLASS)) that simultaneously localizes the position of indeterministic devices and ambient sound sources without local infrastructure. The CLASS algorithm deals with the uncertainty by splitting the devices into subsets so that outliers can be removed from the time difference of arrival values and localization results. Since Android is indeterministic, we select Android devices to evaluate our approach. The algorithm is evaluated with an outdoor experiment and achieves a mean Root Mean Square Error (RMSE) of 2.18 m with a standard deviation of 0.22 m. Estimated directions towards the sound sources have a mean RMSE of 17.5° and a standard deviation of 2.3°. These results show that it is feasible to simultaneously achieve a relative positioning of both devices and sound sources with sufficient accuracy, even when using non-deterministic devices and platforms, such as Android. PMID:27649176
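The basic time-difference-of-arrival measurement behind such collaborative schemes can be sketched with a single pair of recordings: cross-correlate the two channels and convert the peak lag to a range difference. The example below is a minimal, idealized illustration (synchronized clocks, no input latency); handling the indeterministic latencies, device subsets, and outlier rejection described in the abstract is exactly what the CLASS algorithm adds on top.

```python
import numpy as np

def tdoa_crosscorr(sig_a, sig_b, fs):
    """Estimate the time difference of arrival (seconds) of an ambient sound
    between two recordings from the peak of their cross-correlation."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_b) - 1)
    return lag / fs

def range_difference(tdoa_s, c=343.0):
    """Convert a TDOA into a difference in distance to the sound source."""
    return tdoa_s * c
```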
Amplitude and Wavelength Measurement of Sound Waves in Free Space using a Sound Wave Phase Meter
NASA Astrophysics Data System (ADS)
Ham, Sounggil; Lee, Kiwon
2018-05-01
We developed a sound wave phase meter (SWPM) and measured the amplitude and wavelength of sound waves in free space. The SWPM consists of two parallel metal plates, where the front plate was operated as a diaphragm. An aluminum perforated plate was additionally installed in front of the diaphragm, and the same signal as that applied to the sound source was applied to the perforated plate. The SWPM measures both the sound wave signal due to the diaphragm vibration and the induction signal due to the electric field of the aluminum perforated plate. The two measured signals therefore interfere with each other, with a phase difference that depends on the distance between the sound source and the SWPM, so the amplitude of the resulting composite signal changes periodically with that distance. We obtained the wavelength of the sound wave from this periodic amplitude change measured in free space and compared it with the theoretically calculated values.
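Because the interference pattern repeats each time the source-to-meter distance changes by one wavelength (assuming the induction signal's phase is effectively independent of distance), the spacing of adjacent amplitude maxima gives the wavelength directly, which can then be compared with λ = c/f. The snippet below shows that comparison with made-up example positions.

```python
import numpy as np

def wavelength_from_maxima(maxima_positions_m):
    """Estimate the wavelength as the mean spacing between adjacent amplitude
    maxima of the composite signal along the traverse direction."""
    return float(np.mean(np.diff(np.sort(maxima_positions_m))))

def wavelength_theoretical(freq_hz, c=343.0):
    """Expected wavelength in air from lambda = c / f."""
    return c / freq_hz

# Hypothetical maxima observed every ~8.6 cm while driving a 4 kHz tone in air.
print(round(wavelength_from_maxima([0.000, 0.086, 0.171, 0.258]), 3),
      round(wavelength_theoretical(4000.0), 3))
```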
Environmental acoustic cues guide the biosonar attention of a highly specialised echolocator.
Lattenkamp, Ella Z; Kaiser, Samuel; Kaučič, Rožle; Großmann, Martina; Koselj, Klemen; Goerlitz, Holger R
2018-04-23
Sensory systems experience a trade-off between maximizing the detail and amount of sampled information. This trade-off is particularly pronounced in sensory systems that are highly specialised for a single task and thus experience limitations in other tasks. We hypothesised that combining sensory input from multiple streams of information may resolve this trade-off and improve detection and sensing reliability. Specifically, we predicted that perceptive limitations experienced by animals reliant on specialised active echolocation can be compensated for by the phylogenetically older and less specialised process of passive hearing. We tested this hypothesis in greater horseshoe bats, which possess morphological and neural specialisations allowing them to identify fluttering prey in dense vegetation using echolocation only. At the same time, their echolocation system is both spatially and temporally severely limited. Here, we show that greater horseshoe bats employ passive hearing to initially detect and localise prey-generated and other environmental sounds, and then raise vocalisation level and concentrate the scanning movements of their sonar beam on the sound source for further investigation with echolocation. These specialised echolocators thus supplement echo-acoustic information with environmental acoustic cues, enlarging perceived space beyond their biosonar range. Contrary to our predictions, we did not find consistent preferences for prey-related acoustic stimuli, indicating the use of passive acoustic cues also for detection of non-prey objects. Our findings suggest that even specialised echolocators exploit a wide range of environmental information, and that phylogenetically older sensory systems can support the evolution of sensory specialisations by compensating for their limitations. © 2018. Published by The Company of Biologists Ltd.
2012-03-12
column than sounds with lower frequencies (Urick, 1983). Additionally, these systems are generally operated in the vicinity of the sea floor, thus...Water,” TR-76-116, Naval Surface Weapons Center, White Oak, Silver Springs, MD. Urick, R. J. (1983), Principles of Underwater Sound, McGraw-Hill
NASA Technical Reports Server (NTRS)
Smith, Wayne Farrior
1973-01-01
The effect of finite source size on the power statistics in a reverberant room for pure tone excitation was investigated. Theoretical results indicate that the standard deviation of low frequency, pure tone finite sources is always less than that predicted by point source theory and considerably less when the source dimension approaches one-half an acoustic wavelength or greater. A supporting experimental study was conducted utilizing an eight inch loudspeaker and a 30 inch loudspeaker at eleven source positions. The resulting standard deviation of sound power output of the smaller speaker is in excellent agreement with both the derived finite source theory and existing point source theory, if the theoretical data is adjusted to account for experimental incomplete spatial averaging. However, the standard deviation of sound power output of the larger speaker is measurably lower than point source theory indicates, but is in good agreement with the finite source theory.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-14
... require hybrid and electric passenger cars, light trucks, medium and heavy duty trucks and buses, low... Sound Requirements for Hybrid and Electric Vehicles AGENCY: National Highway Traffic Safety... minimum sound requirements for hybrid and electric vehicles. DATES: Comments must be received on or before...
77 FR 61642 - National Environmental Policy Act; Sounding Rockets Program; Poker Flat Research Range
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-10
...; Sounding Rockets Program; Poker Flat Research Range AGENCY: National Aeronautics and Space Administration... Sounding Rockets Program (SRP) at Poker Flat Research Range (PFRR), Alaska. SUMMARY: Pursuant to the... educational institutions have conducted suborbital rocket launches from the PFRR. While the PFRR is owned and...
Localizing nearby sound sources in a classroom: binaural room impulse responses.
Shinn-Cunningham, Barbara G; Kopco, Norbert; Martin, Tara J
2005-05-01
Binaural room impulse responses (BRIRs) were measured in a classroom for sources at different azimuths and distances (up to 1 m) relative to a manikin located in four positions in a classroom. When the listener is far from all walls, reverberant energy distorts signal magnitude and phase independently at each frequency, altering monaural spectral cues, interaural phase differences, and interaural level differences. For the tested conditions, systematic distortion (comb-filtering) from an early intense reflection is only evident when a listener is very close to a wall, and then only in the ear facing the wall. Especially for a nearby source, interaural cues grow less reliable with increasing source laterality and monaural spectral cues are less reliable in the ear farther from the sound source. Reverberation reduces the magnitude of interaural level differences at all frequencies; however, the direct-sound interaural time difference can still be recovered from the BRIRs measured in these experiments. Results suggest that bias and variability in sound localization behavior may vary systematically with listener location in a room as well as source location relative to the listener, even for nearby sources where there is relatively little reverberant energy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandenberger, Jill M.; Suslick, Carolynn R.; Johnston, Robert K.
2008-10-09
Evaluating spatial and temporal trends in contaminant residues in Puget Sound fish and macroinvertebrates is one of the objectives of the Puget Sound Ambient Monitoring Program (PSAMP). In a cooperative effort between the ENVironmental inVESTment group (ENVVEST) and the Washington State Department of Fish and Wildlife, additional biota samples were collected during the 2007 PSAMP biota survey and analyzed for chemical residues and stable isotopes of carbon (δ13C) and nitrogen (δ15N). Approximately three specimens of each species collected from Sinclair Inlet, Georgia Basin, and reference locations in Puget Sound were selected for whole body chemical analysis. The muscle tissue of specimens selected for chemical analyses was also analyzed for δ13C and δ15N to provide information on relative trophic level and food sources. This data report summarizes the chemical residues for the 2007 PSAMP fish and macro-invertebrate samples. In addition, six Spiny Dogfish (Squalus acanthias) samples were necropsied to evaluate chemical residues in various parts of the fish (digestive tract, liver, embryo, muscle tissue), as well as a weight-proportional whole body composite (WBWC). Whole organisms were homogenized and analyzed for silver, arsenic, cadmium, chromium, copper, nickel, lead, zinc, mercury, 19 polychlorinated biphenyl (PCB) congeners, PCB homologues, percent moisture, percent lipids, δ13C, and δ15N.
Wind- and Rain-Induced Vibrations Impose Different Selection Pressures on Multimodal Signaling.
Halfwerk, Wouter; Ryan, Michael J; Wilson, Preston S
2016-09-01
The world is a noisy place, and animals have evolved a myriad of strategies to communicate in it. Animal communication signals are, however, often multimodal; their components can be processed by multiple sensory systems, and noise can thus affect signal components across different modalities. We studied the effect of environmental noise on multimodal communication in the túngara frog (Physalaemus pustulosus). Males communicate with rivals using airborne sounds combined with call-induced water ripples. We tested males under control as well as noisy conditions in which we mimicked rain- and wind-induced vibrations on the water surface. Males responded more strongly to a multimodal playback in which sound and ripples were combined, compared to a unimodal sound-only playback, but only in the absence of rain and wind. Under windy conditions, males decreased their response to the multimodal playback, suggesting that wind noise interferes with the detection of rival ripples. Under rainy conditions, males increased their response, irrespective of signal playback, suggesting that different noise sources can have different impacts on communication. Our findings show that noise in an additional sensory channel can affect multimodal signal perception and thereby drive signal evolution, but not always in the expected direction.
Monitoring CO2 sources and sinks from space : the Orbiting Carbon Observatory (OCO) Mission
NASA Technical Reports Server (NTRS)
Crisp, David
2006-01-01
NASA's Orbiting Carbon Observatory (OCO) will make the first space-based measurements of atmospheric carbon dioxide (CO2) with the precision, resolution, and coverage needed to characterize the geographic distribution of CO2 sources and sinks and quantify their variability over the seasonal cycle. OCO is currently scheduled for launch in 2008. The observatory will carry a single instrument that incorporates three high-resolution grating spectrometers designed to measure the near-infrared absorption by CO2 and molecular oxygen (O2) in reflected sunlight. OCO will fly 12 minutes ahead of the EOS Aqua platform in the Earth Observing System (EOS) Afternoon Constellation (A-Train). The instrument will collect 12 to 24 soundings per second as the Observatory moves along its orbit track on the day side of the Earth. A small sampling footprint (<3 km2 at nadir) was adopted to reduce biases in each sounding associated with clouds and aerosols and spatial variations in surface topography. A comprehensive ground-based validation program will be used to assess random errors and biases in the XCO2 product on regional to continental scales. Measurements collected by OCO will be assimilated with other environmental measurements to retrieve surface sources and sinks of CO2. This information could play an important role in monitoring the integrity of large scale CO2 sequestration projects.
Captive Bottlenose Dolphins Do Discriminate Human-Made Sounds Both Underwater and in the Air.
Lima, Alice; Sébilleau, Mélissa; Boye, Martin; Durand, Candice; Hausberger, Martine; Lemasson, Alban
2018-01-01
Bottlenose dolphins ( Tursiops truncatus ) spontaneously emit individual acoustic signals that identify them to group members. We tested whether these cetaceans could learn artificial individual sound cues played underwater and whether they would generalize this learning to airborne sounds. Dolphins are thought to perceive only underwater sounds and their training depends largely on visual signals. We investigated the behavioral responses of seven dolphins in a group to learned human-made individual sound cues, played underwater and in the air. Dolphins recognized their own sound cue after hearing it underwater as they immediately moved toward the source, whereas when it was airborne they gazed more at the source of their own sound cue but did not approach it. We hypothesize that they perhaps detected modifications of the sound induced by air or were confused by the novelty of the situation, but nevertheless recognized they were being "targeted." They did not respond when hearing another group member's cue in either situation. This study provides further evidence that dolphins respond to individual-specific sounds and that these marine mammals possess some capacity for processing airborne acoustic signals.
Exploring positive hospital ward soundscape interventions.
Mackrill, J; Jennings, P; Cain, R
2014-11-01
Sound is often considered as a negative aspect of an environment that needs mitigating, particularly in hospitals. It is worthwhile, however, to consider how subjective responses to hospital sounds can be made more positive. The authors identified natural sound, steady state sound and written sound source information as having the potential to do this. Listening evaluations were conducted with 24 participants who rated their emotional (Relaxation) and cognitive (Interest and Understanding) response to a variety of hospital ward soundscape clips across these three interventions. A repeated measures ANOVA revealed that the 'Relaxation' response was significantly affected (η² = 0.05, p = 0.001) by the interventions, with natural sound producing a 10.1% more positive response. Most interestingly, written sound source information produced a 4.7% positive change in response. The authors conclude that exploring different ways to improve the sounds of a hospital offers subjective benefits that move beyond sound level reduction. This is an area for future work to focus upon in an effort to achieve more positively experienced hospital soundscapes and environments. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Shock waves and the Ffowcs Williams-Hawkings equation
NASA Technical Reports Server (NTRS)
Isom, Morris P.; Yu, Yung H.
1991-01-01
The expansion of the double divergence of the generalized Lighthill stress tensor, which is the basis of the concept of the role played by shock and contact discontinuities as sources of dipole and monopole sound, is presently applied to the simplest transonic flows: (1) a fixed wing in steady motion, for which there is no sound field, and (2) a hovering helicopter blade that produces a sound field. Attention is given to the contribution of the shock to sound from the viewpoint of energy conservation; the shock emerges as the source of only the quantity of entropy.
NASA Astrophysics Data System (ADS)
Shinn-Cunningham, Barbara
2003-04-01
One of the key functions of hearing is to help us monitor and orient to events in our environment (including those outside the line of sight). The ability to compute the spatial location of a sound source is also important for detecting, identifying, and understanding the content of a sound source, especially in the presence of competing sources from other positions. Determining the spatial location of a sound source poses difficult computational challenges; however, we perform this complex task with proficiency, even in the presence of noise and reverberation. This tutorial will review the acoustic, psychoacoustic, and physiological processes underlying spatial auditory perception. First, the tutorial will examine how the many different features of the acoustic signals reaching a listener's ears provide cues for source direction and distance, both in anechoic and reverberant space. Then we will discuss psychophysical studies of three-dimensional sound localization in different environments and the basic neural mechanisms by which spatial auditory cues are extracted. Finally, ``virtual reality'' approaches for simulating sounds at different directions and distances under headphones will be reviewed. The tutorial will be structured to appeal to a diverse audience with interests in all fields of acoustics and will incorporate concepts from many areas, such as psychological and physiological acoustics, architectural acoustics, and signal processing.
NASA Technical Reports Server (NTRS)
Fuller, C. R.; Hansen, C. H.; Snyder, S. D.
1991-01-01
Active control of sound radiation from a rectangular panel by two different methods has been experimentally studied and compared. In the first method a single control force applied directly to the structure is used with a single error microphone located in the radiated acoustic field. Global attenuation of radiated sound was observed to occur by two main mechanisms. For 'on-resonance' excitation, the control force had the effect of increasing the total panel input impedance presented to the noise source, thus reducing all radiated sound. For 'off-resonance' excitation, the control force tends not to significantly modify the total panel response amplitude but rather to restructure the relative phases of the modes, leading to a more complex vibration pattern and a decrease in radiation efficiency. For acoustic control, the second method, the number of acoustic sources required for global reduction was seen to increase with panel modal order. The mechanism in this case was that the acoustic sources tended to create an inverse pressure distribution at the panel surface and thus 'unload' the panel by reducing the panel radiation impedance. In general, control by structural inputs appears more effective than control by acoustic sources for structurally radiated noise.
An Analysis of Microbial Pollution in the Sinclair-Dyes Inlet Watershed
DOE Office of Scientific and Technical Information (OSTI.GOV)
May, Christopher W.; Cullinan, Valerie I.
2005-09-21
This assessment of fecal coliform sources and pathways in Sinclair and Dyes Inlets is part of the Project ENVironmental InVESTment (ENVVEST) being conducted by the Navy's Puget Sound Naval Shipyard and Intermediate Maintenance Facility in cooperation with the US Environmental Protection Agency, Washington State Department of Ecology, the Suquamish Tribe, Kitsap County, the City of Bremerton, the City of Port Orchard, and other local stakeholders. The goal of this study was to identify microbial pollution problems within the Sinclair-Dyes Inlet watershed and to provide a comprehensive assessment of fecal coliform (FC) contamination from all identifiable sources in the watershed. This study quantifies levels of contamination and estimated loadings from known sources within the watersheds and describes pollutant transport mechanisms found in the study area. In addition, the effectiveness of pollution prevention and mitigation measures currently in place within the Sinclair-Dyes Inlet watershed is discussed. This comprehensive study relies on historical data collected by several cooperating agencies, in addition to data collected during the study period from spring 2001 through summer 2005. This report is intended to provide the technical information needed to continue current water quality cleanup efforts and to help implement future efforts.
NASA Astrophysics Data System (ADS)
Proskurov, S.; Darbyshire, O. R.; Karabasov, S. A.
2017-12-01
The present work discusses modifications to the stochastic Fast Random Particle Mesh (FRPM) method featuring both tonal and broadband noise sources. The technique relies on the combination of incorporated vortex-shedding resolved flow available from Unsteady Reynolds-Averaged Navier-Stokes (URANS) simulation with the fine-scale turbulence FRPM solution generated via the stochastic velocity fluctuations in the context of vortex sound theory. In contrast to the existing literature, our method encompasses a unified treatment for broadband and tonal acoustic noise sources at the source level, thus, accounting for linear source interference as well as possible non-linear source interaction effects. When sound sources are determined, for the sound propagation, Acoustic Perturbation Equations (APE-4) are solved in the time-domain. Results of the method's application for two aerofoil benchmark cases, with both sharp and blunt trailing edges are presented. In each case, the importance of individual linear and non-linear noise sources was investigated. Several new key features related to the unsteady implementation of the method were tested and brought into the equation. Encouraging results have been obtained for benchmark test cases using the new technique which is believed to be potentially applicable to other airframe noise problems where both tonal and broadband parts are important.
Padois, Thomas; Prax, Christian; Valeau, Vincent; Marx, David
2012-10-01
The possibility of using the time-reversal technique to localize acoustic sources in a wind-tunnel flow is investigated. While the technique is widespread, it has scarcely been used in aeroacoustics up to now. The proposed method consists of two steps: in a first experimental step, the acoustic pressure fluctuations are recorded over a linear array of microphones; in a second numerical step, the experimental data are time-reversed and used as input data for a numerical code solving the linearized Euler equations. The simulation achieves the back-propagation of the waves from the array to the source and takes into account the effect of the mean flow on sound propagation. The ability of the method to localize a sound source in a typical wind-tunnel flow is first demonstrated using simulated data. A generic experiment is then set up in an anechoic wind tunnel to validate the proposed method with a flow at Mach number 0.11. Monopolar sources, either monochromatic or with narrow-band or wide-band frequency content, are considered first. The source position is estimated with an error smaller than the wavelength. An application to a dipolar sound source shows that this type of source is also very satisfactorily characterized.
Using therapeutic sound with progressive audiologic tinnitus management.
Henry, James A; Zaugg, Tara L; Myers, Paula J; Schechter, Martin A
2008-09-01
Management of tinnitus generally involves educational counseling, stress reduction, and/or the use of therapeutic sound. This article focuses on therapeutic sound, which can involve three objectives: (a) producing a sense of relief from tinnitus-associated stress (using soothing sound); (b) passively diverting attention away from tinnitus by reducing contrast between tinnitus and the acoustic environment (using background sound); and (c) actively diverting attention away from tinnitus (using interesting sound). Each of these goals can be accomplished using three different types of sound (broadly categorized as environmental sound, music, and speech), resulting in nine combinations of uses of sound and types of sound to manage tinnitus. The authors explain the uses and types of sound, how they can be combined, and how the different combinations are used with Progressive Audiologic Tinnitus Management. They also describe how sound is used with other sound-based methods of tinnitus management (Tinnitus Masking, Tinnitus Retraining Therapy, and Neuromonics).
The directivity of the sound radiation from panels and openings.
Davy, John L
2009-06-01
This paper presents a method for calculating the directivity of the radiation of sound from a panel or opening, whose vibration is forced by the incidence of sound from the other side. The directivity of the radiation depends on the angular distribution of the incident sound energy in the room or duct in whose wall or end the panel or opening occurs. The angular distribution of the incident sound energy is predicted using a model which depends on the sound absorption coefficient of the room or duct surfaces. If the sound source is situated in the room or duct, the sound absorption coefficient model is used in conjunction with a model for the directivity of the sound source. For angles of radiation approaching 90 degrees to the normal to the panel or opening, the effect of the diffraction by the panel or opening, or by the finite baffle in which the panel or opening is mounted, is included. A simple empirical model is developed to predict the diffraction of sound into the shadow zone when the angle of radiation is greater than 90 degrees to the normal to the panel or opening. The method is compared with published experimental results.
Designing sound and visual components for enhancement of urban soundscapes.
Hong, Joo Young; Jeon, Jin Yong
2013-09-01
The aim of this study is to investigate the effect of audio-visual components on environmental quality to improve soundscape. Natural sounds with road traffic noise and visual components in urban streets were evaluated through laboratory experiments. Waterfall and stream water sounds, as well as bird sounds, were selected to enhance the soundscape. Sixteen photomontages of a streetscape were constructed in combination with two types of water features and three types of vegetation which were chosen as positive visual components. The experiments consisted of audio-only, visual-only, and audio-visual conditions. The preferences and environmental qualities of the stimuli were evaluated by a numerical scale and 12 pairs of adjectives, respectively. The results showed that bird sounds were the most preferred among the natural sounds, while the sound of falling water was found to degrade the soundscape quality when the road traffic noise level was high. The visual effects of vegetation on aesthetic preference were significant, but those of water features relatively small. It was revealed that the perceptual dimensions of the environment were different from the noise levels. Particularly, the acoustic comfort factor related to soundscape quality considerably influenced preference for the overall environment at a higher level of road traffic noise.
Environmental Predictors of Ice Seal Presence in the Bering Sea
Miksis-Olds, Jennifer L.
2014-01-01
Ice seals overwintering in the Bering Sea are challenged with foraging, finding mates, and maintaining breathing holes in a dark and ice covered environment. Due to the difficulty of studying these species in their natural environment, very little is known about how the seals navigate under ice. Here we identify specific environmental parameters, including components of the ambient background sound, that are predictive of ice seal presence in the Bering Sea. Multi-year mooring deployments provided synoptic time series of acoustic and oceanographic parameters from which environmental parameters predictive of species presence were identified through a series of mixed models. Ice cover and 10 kHz sound level were significant predictors of seal presence, with 40 kHz sound and prey presence (combined with ice cover) as potential predictors as well. Ice seal presence showed a strong positive correlation with ice cover and a negative association with 10 kHz environmental sound. On average, there was a 20–30 dB difference between sound levels during solid ice conditions compared to open water or melting conditions, providing a salient acoustic gradient between open water and solid ice conditions by which ice seals could orient. By constantly assessing the acoustic environment associated with the seasonal ice movement in the Bering Sea, it is possible that ice seals could utilize aspects of the soundscape to gauge their safe distance to open water or the ice edge by orienting in the direction of higher sound levels indicative of open water, especially in the frequency range above 1 kHz. In rapidly changing Arctic and sub-Arctic environments, the seasonal ice conditions and soundscapes are likely to change which may impact the ability of animals using ice presence and cues to successfully function during the winter breeding season. PMID:25229453
Caldwell, Michael S.; Bee, Mark A.
2014-01-01
The ability to reliably locate sound sources is critical to anurans, which navigate acoustically complex breeding choruses when choosing mates. Yet, the factors influencing sound localization performance in frogs remain largely unexplored. We applied two complementary methodologies, open and closed loop playback trials, to identify influences on localization abilities in Cope’s gray treefrog, Hyla chrysoscelis. We examined localization acuity and phonotaxis behavior of females in response to advertisement calls presented from 12 azimuthal angles, at two signal levels, in the presence and absence of noise, and at two noise levels. Orientation responses were consistent with precise localization of sound sources, rather than binary discrimination between sources on either side of the body (lateralization). Frogs were unable to discriminate between sounds arriving from forward and rearward directions, and accurate localization was limited to forward sound presentation angles. Within this region, sound presentation angle had little effect on localization acuity. The presence of noise and low signal-to-noise ratios also did not strongly impair localization ability in open loop trials, but females exhibited reduced phonotaxis performance consistent with impaired localization during closed loop trials. We discuss these results in light of previous work on spatial hearing in anurans. PMID:24504182
Interdecadal change in the deep Puget sound benthos
Nichols, F.H.
2003-01-01
Data from quantitative samples of the benthos at a 200-m site in central Puget Sound, collected twice yearly in most years between 1963 and 1992, were evaluated to determine the extent to which species composition in a continental-shelf depth community exhibits long-term persistence. Study results showed that the most abundant species were consistently present over the 30-year period. However, measures of species composition (e.g., similarity, diversity) reveal a subtle, gradual change in the community over time. Among the changes are (1) multi-year periods of greatly increased abundance of the common species; (2) an overall increase in the total abundance of the benthic community beginning in the mid-1970s; (3) periods of increased abundance, during the late 1970s and early 1980s, of two species that are tolerant of organic enrichment; and (4) the steady decline in abundance of the large burrowing echinoderm Brisaster latifrons, as a consequence of the lack of recruitment to the site since 1970. Despite the conspicuousness of these changes, there are no observed environmental factors that readily explain them. Circumstantial evidence suggests that climate-related change in Puget Sound circulation beginning in the mid-1970s, organic enrichment associated with a nearby large source of primary-treated sewage, and the influence of changes in the abundance of the large echinoderms on the smaller species are potential agents of change. The principal reasons for our inability to identify causes of long-term change in the Puget Sound benthos are (a) inconsistent long-term monitoring of environmental variables, (b) the lack of quantitative information about long-term changes in plankton and fish populations, (c) lack of knowledge of specific predator/prey and competitive interactions in soft-bottom benthos, (d) the unknown influence of moderate levels of contamination on biota, and (e) lack of understanding of possible linkages between climate regime shifts and fluctuations in local biological populations.
Essays on Environmental Economics and Policy
NASA Astrophysics Data System (ADS)
Walker, W. Reed
A central feature of modern government is its role in designing welfare-improving policies to address and correct market failures stemming from externalities and public goods. The rationale for most modern environmental regulations stems from the failure of markets to efficiently allocate goods and services. Yet, as with any policy, distributional effects are important: there exist clear winners and losers. Despite the clear theoretical justification for environmental and energy policy, empirical work credibly identifying both the source and consequences of these externalities, as well as the distributional effects of existing policies, remains in its infancy. My dissertation focuses on the development of empirical methods to investigate the role of environmental and energy policy in addressing market failures as well as exploring the distributional implications of these policies. These questions are important not only as a justification for government intervention into markets but also for understanding how distributional consequences may shape the design and implementation of these policies. My dissertation investigates these questions in the context of programs and policies that are important in their own right. Chapters 1 and 2 of my dissertation explore the economic costs and distributional implications associated with the largest environmental regulatory program in the United States, the Clean Air Act. Chapters 3 and 4 examine the social costs of air pollution in the context of transportation externalities, showing how effective transportation policy has additional co-benefits in the form of environmental policy. My dissertation remains unified in both its subject matter and methodological approach -- using unique sources of data and sound research designs to understand important issues in environmental policy.
Understanding auditory distance estimation by humpback whales: a computational approach.
Mercado, E; Green, S R; Schneider, J N
2008-02-01
Ranging, the ability to judge the distance to a sound source, depends on the presence of predictable patterns of attenuation. We measured long-range sound propagation in coastal waters to assess whether humpback whales might use frequency degradation cues to range singing whales. Two types of neural networks, a multi-layer and a single-layer perceptron, were trained to classify recorded sounds by distance traveled based on their frequency content. The multi-layer network successfully classified received sounds, demonstrating that the distorting effects of underwater propagation on frequency content provide sufficient cues to estimate source distance. Normalizing received sounds with respect to ambient noise levels increased the accuracy of distance estimates by single-layer perceptrons, indicating that familiarity with background noise can potentially improve a listening whale's ability to range. To assess whether frequency patterns predictive of source distance were likely to be perceived by whales, recordings were pre-processed using a computational model of the humpback whale's peripheral auditory system. Although signals processed with this model contained less information than the original recordings, neural networks trained with these physiologically based representations estimated source distance more accurately, suggesting that listening whales should be able to range singers using distance-dependent changes in frequency content.
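To illustrate the general classification approach described above, the following sketch trains a small multi-layer perceptron to label simulated spectra by propagation distance. It is a minimal, self-contained example using synthetic data and scikit-learn; the frequency band, distances, and attenuation model are assumptions chosen for illustration and do not reproduce the authors' recordings, preprocessing, or network architecture.

```python
# Minimal sketch: classify simulated sound spectra by propagation distance.
# Synthetic data stand in for the recorded song units used in the study.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
freqs = np.linspace(100, 4000, 64)          # Hz, band of interest (assumed)
distances_km = np.array([1, 2, 4, 8])       # hypothetical source distances

def synth_spectrum(d_km):
    """Toy model: high frequencies attenuate faster with range, plus noise."""
    atten = np.exp(-d_km * freqs / 4000.0)  # stronger loss at high frequency
    return atten + 0.05 * rng.standard_normal(freqs.size)

X = np.array([synth_spectrum(d) for d in np.repeat(distances_km, 200)])
y = np.repeat(distances_km, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("distance-classification accuracy:", clf.score(X_te, y_te))
```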
NASA Astrophysics Data System (ADS)
Mironov, M. A.
2011-11-01
A method of allowing for the spatial sound field structure in designing the sound-absorbing structures for turbojet aircraft engine ducts is proposed. The acoustic impedance of a duct should be chosen so as to prevent the reflection of the primary sound field, which is generated by the sound source in the absence of the duct, from the duct walls.
Quantifying the influence of flow asymmetries on glottal sound sources in speech
NASA Astrophysics Data System (ADS)
Erath, Byron; Plesniak, Michael
2008-11-01
Human speech is made possible by the air flow interaction with the vocal folds. During phonation, asymmetries in the glottal flow field may arise from flow phenomena (e.g. the Coanda effect) as well as from pathological vocal fold motion (e.g. unilateral paralysis). In this study, the effects of flow asymmetries on glottal sound sources were investigated. Dynamically-programmable 7.5 times life-size vocal fold models with 2 degrees-of-freedom (linear and rotational) were constructed to provide a first-order approximation of vocal fold motion. Important parameters (Reynolds, Strouhal, and Euler numbers) were scaled to physiological values. Normal and abnormal vocal fold motions were synthesized, and the velocity field and instantaneous transglottal pressure drop were measured. Variability in the glottal jet trajectory necessitated sorting of the data according to the resulting flow configuration. The dipole sound source is related to the transglottal pressure drop via acoustic analogies. Variations in the transglottal pressure drop (and subsequently the dipole sound source) arising from flow asymmetries are discussed.
Psychophysical evidence for auditory motion parallax.
Genzel, Daria; Schutte, Michael; Brimijoin, W Owen; MacNeilage, Paul R; Wiegrebe, Lutz
2018-04-17
Distance is important: From an ecological perspective, knowledge about the distance to either prey or predator is vital. However, the distance of an unknown sound source is particularly difficult to assess, especially in anechoic environments. In vision, changes in perspective resulting from observer motion produce a reliable, consistent, and unambiguous impression of depth known as motion parallax. Here we demonstrate with formal psychophysics that humans can exploit auditory motion parallax, i.e., the change in the dynamic binaural cues elicited by self-motion, to assess the relative depths of two sound sources. Our data show that sensitivity to relative depth is best when subjects move actively; performance deteriorates when subjects are moved by a motion platform or when the sound sources themselves move. This is true even though the dynamic binaural cues elicited by these three types of motion are identical. Our data demonstrate a perceptual strategy to segregate intermittent sound sources in depth and highlight the tight interaction between self-motion and binaural processing that allows assessment of the spatial layout of complex acoustic scenes.
Auditory event perception: the source-perception loop for posture in human gait.
Pastore, Richard E; Flint, Jesse D; Gaston, Jeremy R; Solomon, Matthew J
2008-01-01
There is a small but growing literature on the perception of natural acoustic events, but few attempts have been made to investigate complex sounds not systematically controlled within a laboratory setting. The present study investigates listeners' ability to make judgments about the posture (upright-stooped) of the walker who generated acoustic stimuli contrasted on each trial. We use a comprehensive three-stage approach to event perception, in which we develop a solid understanding of the source event and its sound properties, as well as the relationships between these two event stages. Developing this understanding helps both to identify the limitations of common statistical procedures and to develop effective new procedures for investigating not only the two information stages above, but also the decision strategies employed by listeners in making source judgments from sound. The result is a comprehensive, ultimately logical, but not necessarily expected picture of both the source-sound-perception loop and the utility of alternative research tools.
Nonlinear theory of shocked sound propagation in a nearly choked duct flow
NASA Technical Reports Server (NTRS)
Myers, M. K.; Callegari, A. J.
1982-01-01
The development of shocks in the sound field propagating through a nearly choked duct flow is analyzed by extending a quasi-one-dimensional theory. The theory is applied to the case in which sound is introduced into the flow by an acoustic source located in the vicinity of a near-sonic throat. Analytical solutions for the field are obtained which illustrate the essential features of the nonlinear interaction between sound and flow. Numerical results are presented covering ranges of variation of source strength, throat Mach number, and frequency. It is found that the development of shocks leads to appreciable attenuation of acoustic power transmitted upstream through the near-sonic flow. It is possible, for example, for the power loss in the fundamental harmonic to be as much as 90% of that introduced at the source.
Noise abatement in a pine plantation
R. E. Leonard; L. P. Herrington
1971-01-01
Observations on sound propagation were made in two red pine plantations. Measurements were taken of the attenuation of prerecorded frequencies at various distances from the sound source. Sound absorption was strongly dependent on frequency, with peak absorption at 500 Hz.
Hearing in three dimensions: Sound localization
NASA Technical Reports Server (NTRS)
Wightman, Frederic L.; Kistler, Doris J.
1990-01-01
The ability to localize a source of sound in space is a fundamental component of the three-dimensional character of auditory experience. For over a century scientists have been trying to understand the physical and psychological processes and physiological mechanisms that subserve sound localization. This research has shown that important information about sound source position is provided by interaural differences in time of arrival, interaural differences in intensity, and direction-dependent filtering provided by the pinnae. Progress has been slow, primarily because experiments on localization are technically demanding. Control of stimulus parameters and quantification of the subjective experience are quite difficult problems. Recent advances, such as the ability to simulate a three-dimensional sound field over headphones, seem to offer potential for rapid progress. Research using the new techniques has already produced new information. It now seems that interaural time differences are a much more salient and dominant localization cue than previously believed.
77 FR 59611 - Environmental Impacts Statements; Notice of Availability
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-28
...: Sandy Hurlocker 505-753-7331. EIS No. 20120308, Draft EIS (Tiering), NASA, AK, Sounding Rocket Program (SRP) at Poker Flat Research Range (PFRR), Continuing Sounding Rocket Launches, Alaska, Comment Period...
Jiang, Tinglei; Long, Zhenyu; Ran, Xin; Zhao, Xue; Xu, Fei; Qiu, Fuyuan; Kanwal, Jagmeet S.
2016-01-01
Bats vocalize extensively within different social contexts. The type and extent of information conveyed via their vocalizations and their perceptual significance, however, remains controversial and difficult to assess. Greater tube-nosed bats, Murina leucogaster, emit calls consisting of long rectangular broadband noise burst (rBNBl) syllables during aggression between males. To experimentally test the behavioral impact of these sounds for feeding, we deployed an approach and place-preference paradigm. Two food trays were placed on opposite sides and within different acoustic microenvironments, created by sound playback, within a specially constructed tent. Specifically, we tested whether the presence of rBNBl sounds at a food source effectively deters the approach of male bats in comparison to echolocation sounds and white noise. In each case, contrary to our expectation, males preferred to feed at a location where rBNBl sounds were present. We propose that the species-specific rBNBl provides contextual information, not present within non-communicative sounds, to facilitate approach towards a food source. PMID:27815241
What the Toadfish Ear Tells the Toadfish Brain About Sound.
Edds-Walton, Peggy L
2016-01-01
Of the three, paired otolithic endorgans in the ear of teleost fishes, the saccule is the one most often demonstrated to have a major role in encoding frequencies of biologically relevant sounds. The toadfish saccule also encodes sound level and sound source direction in the phase-locked activity conveyed via auditory afferents to nuclei of the ipsilateral octaval column in the medulla. Although paired auditory receptors are present in teleost fishes, binaural processes were believed to be unimportant due to the speed of sound in water and the acoustic transparency of the tissues in water. In contrast, there are behavioral and anatomical data that support binaural processing in fishes. Studies in the toadfish combined anatomical tract-tracing and physiological recordings from identified sites along the ascending auditory pathway to document response characteristics at each level. Binaural computations in the medulla and midbrain sharpen the directional information provided by the saccule. Furthermore, physiological studies in the central nervous system indicated that encoding frequency, sound level, temporal pattern, and sound source direction are important components of what the toadfish ear tells the toadfish brain about sound.
Replacing the Orchestra? – The Discernibility of Sample Library and Live Orchestra Sounds
Wolf, Anna; Platz, Friedrich; Mons, Jan
2016-01-01
Recently, musical sounds from pre-recorded orchestra sample libraries (OSL) have become indispensable in music production for the stage or popular charts. Surprisingly, it is unknown whether human listeners can identify sounds as stemming from real orchestras or OSLs. Thus, an internet-based experiment was conducted to investigate whether a classic orchestral work, produced with sounds from a state-of-the-art OSL, could be reliably discerned from a live orchestra recording of the piece. The entire sample of listeners (N = 602) identified the correct sound source at an average rate of 72.5%. This rate slightly exceeded Alan Turing's well-known upper threshold of 70% for a convincing, simulated performance. However, while sound experts tended to correctly identify the sound source, participants with lower listening expertise, who resembled the majority of music consumers, only achieved 68.6%. As non-expert listeners in the experiment were virtually unable to tell the real-life and OSL sounds apart, it is assumed that OSLs will become more common in music production for economic reasons. PMID:27382932
The Coast Artillery Journal. Volume 65, Number 4, October 1926
1926-10-01
sound. a. Sound location of airplanes by binaural observation in all antiaircraft regiments. b. Sound ranging on report of enemy guns, together with...Direction finding by binaural observation. [Subparagraphs 30 a and 30 c (1).] This applies to continuous sounds such as propellor noises. b. Point...impacts. 32. The so-called binaural sense is our means of sensing the direction of a sound source. When we hear a sound we judge the approximate
Object localization using a biosonar beam: how opening your mouth improves localization.
Arditi, G; Weiss, A J; Yovel, Y
2015-08-01
Determining the location of a sound source is crucial for survival. Both predators and prey usually produce sound while moving, revealing valuable information about their presence and location. Animals have thus evolved morphological and neural adaptations allowing precise sound localization. Mammals rely on the temporal and amplitude differences between the sound signals arriving at their two ears, as well as on the spectral cues available in the signal arriving at a single ear to localize a sound source. Most mammals rely on passive hearing and are thus limited by the acoustic characteristics of the emitted sound. Echolocating bats emit sound to perceive their environment. They can, therefore, affect the frequency spectrum of the echoes they must localize. The biosonar sound beam of a bat is directional, spreading different frequencies into different directions. Here, we analyse mathematically the spatial information that is provided by the beam and could be used to improve sound localization. We hypothesize how bats could improve sound localization by altering their echolocation signal design or by increasing their mouth gape (the size of the sound emitter) as they, indeed, do in nature. Finally, we also reveal a trade-off according to which increasing the echolocation signal's frequency improves the accuracy of sound localization but might result in undesired large localization errors under low signal-to-noise ratio conditions.
Hemispherical breathing mode speaker using a dielectric elastomer actuator.
Hosoya, Naoki; Baba, Shun; Maeda, Shingo
2015-10-01
Although indoor acoustic characteristics should ideally be assessed by measuring the reverberation time using a point sound source, a regular polyhedron loudspeaker, which has multiple loudspeakers on a chassis, is typically used. However, such a configuration is not a point sound source if the size of the loudspeaker is large relative to the target sound field. This study investigates a small lightweight loudspeaker using a dielectric elastomer actuator vibrating in the breathing mode (the pulsating mode such as the expansion and contraction of a balloon). Acoustic testing with regard to repeatability, sound pressure, vibration mode profiles, and acoustic radiation patterns indicate that dielectric elastomer loudspeakers may be feasible.
The role of reverberation-related binaural cues in the externalization of speech.
Catic, Jasmina; Santurette, Sébastien; Dau, Torsten
2015-08-01
The perception of externalization of speech sounds was investigated with respect to the monaural and binaural cues available at the listeners' ears in a reverberant environment. Individualized binaural room impulse responses (BRIRs) were used to simulate externalized sound sources via headphones. The measured BRIRs were subsequently modified such that the proportion of the response containing binaural vs monaural information was varied. Normal-hearing listeners were presented with speech sounds convolved with such modified BRIRs. Monaural reverberation cues were found to be sufficient for the externalization of a lateral sound source. In contrast, for a frontal source, an increased amount of binaural cues from reflections was required in order to obtain well externalized sound images. It was demonstrated that the interaction between the interaural cues of the direct sound and the reverberation strongly affects the perception of externalization. An analysis of the short-term binaural cues showed that the amount of fluctuations of the binaural cues corresponded well to the externalization ratings obtained in the listening tests. The results further suggested that the precedence effect is involved in the auditory processing of the dynamic binaural cues that are utilized for externalization perception.
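The headphone simulation described above amounts to convolving a dry signal with the left- and right-ear impulse responses of a BRIR. A minimal sketch of that step is shown below, using placeholder noise in place of measured responses and recorded speech; the signal lengths, sampling rate, and decay constants are illustrative assumptions.

```python
# Minimal sketch: simulate an externalized source by convolving dry speech
# with a (hypothetical) binaural room impulse response, one channel per ear.
import numpy as np
from scipy.signal import fftconvolve

fs = 48000
speech = np.random.randn(fs)                 # placeholder for a dry speech signal
decay = np.exp(-np.arange(4800) / 800.0)     # crude exponential reverberation decay
brir_left = np.random.randn(4800) * decay    # placeholder left-ear BRIR
brir_right = np.random.randn(4800) * decay   # placeholder right-ear BRIR

left = fftconvolve(speech, brir_left)        # headphone feed, left ear
right = fftconvolve(speech, brir_right)      # headphone feed, right ear
binaural = np.stack([left, right], axis=1)
print(binaural.shape)
```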
Je, Yub; Lee, Haksue; Park, Jongkyu; Moon, Wonkyu
2010-06-01
An ultrasonic radiator is developed to generate a difference frequency sound from two frequencies of ultrasound in air with a parametric array. A design method is proposed for an ultrasonic radiator capable of generating highly directive, high-amplitude ultrasonic sound beams at two different frequencies in air based on a modification of the stepped-plate ultrasonic radiator. The stepped-plate ultrasonic radiator was introduced by Gallego-Juarez et al. [Ultrasonics 16, 267-271 (1978)] in their previous study and can effectively generate highly directive, large-amplitude ultrasonic sounds in air, but only at a single frequency. Because parametric array sources must be able to generate sounds at more than one frequency, a design modification is crucial to the application of a stepped-plate ultrasonic radiator as a parametric array source in air. The aforementioned method was employed to design a parametric radiator for use in air. A prototype of this design was constructed and tested to determine whether it could successfully generate a difference frequency sound with a parametric array. The results confirmed that the proposed single small-area transducer was suitable as a parametric radiator in air.
An open access database for the evaluation of heart sound algorithms.
Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D
2016-12-01
In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.
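As a rough illustration of how such recordings are commonly handled, the sketch below reads one phonocardiogram WAV file, band-passes it, and computes a smoothed Shannon-energy envelope, a typical preprocessing step before heart sound segmentation. The file name, filter band, and window length are assumptions; this is not the Challenge's reference segmentation code.

```python
# Minimal sketch: read one PCG recording (the Challenge distributes WAV files)
# and compute a smoothed energy envelope, a common first step before heart
# sound segmentation. The file name "a0001.wav" is illustrative.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, filtfilt

fs, pcg = wavfile.read("a0001.wav")
pcg = pcg.astype(float)
pcg /= np.max(np.abs(pcg)) + 1e-12            # normalise to [-1, 1]

# Band-pass 25-400 Hz, the range that carries most S1/S2 energy (assumed).
b, a = butter(4, [25 / (fs / 2), 400 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, pcg)

# Shannon energy envelope, smoothed with a short moving average.
energy = -filtered**2 * np.log(filtered**2 + 1e-12)
win = int(0.02 * fs)                          # 20 ms window
envelope = np.convolve(energy, np.ones(win) / win, mode="same")
print(envelope.shape)
```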
Tiitinen, Hannu; Salminen, Nelli H; Palomäki, Kalle J; Mäkinen, Ville T; Alku, Paavo; May, Patrick J C
2006-03-20
In an attempt to delineate the assumed 'what' and 'where' processing streams, we studied the processing of spatial sound in the human cortex by using magnetoencephalography in the passive and active recording conditions and two kinds of spatial stimuli: individually constructed, highly realistic spatial (3D) stimuli and stimuli containing interaural time difference (ITD) cues only. The auditory P1m, N1m, and P2m responses of the event-related field were found to be sensitive to the direction of sound source in the azimuthal plane. In general, the right-hemispheric responses to spatial sounds were more prominent than the left-hemispheric ones. The right-hemispheric P1m and N1m responses peaked earlier for sound sources in the contralateral than for sources in the ipsilateral hemifield and the peak amplitudes of all responses reached their maxima for contralateral sound sources. The amplitude of the right-hemispheric P2m response reflected the degree of spatiality of sound, being twice as large for the 3D than ITD stimuli. The results indicate that the right hemisphere is specialized in the processing of spatial cues in the passive recording condition. Minimum current estimate (MCE) localization revealed that temporal areas were activated both in the active and passive condition. This initial activation, taking place at around 100 ms, was followed by parietal and frontal activity at 180 and 200 ms, respectively. The latter activations, however, were specific to attentional engagement and motor responding. This suggests that parietal activation reflects active responding to a spatial sound rather than auditory spatial processing as such.
Beck, Christoph; Garreau, Guillaume; Georgiou, Julius
2016-01-01
Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to determine amplitudes the size of an atom and locate the acoustic stimuli with an accuracy of within 13° based on their neuronal anatomy. We present here a prototype sound source localization system, inspired from this impressive performance. The system presented utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows smaller localization error than those observed in nature.
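For context, a conventional (non-spiking) way to localize a source with a microphone pair is to estimate the time difference of arrival from the peak of the cross-correlation. The sketch below illustrates that baseline on simulated signals; the microphone spacing, sampling rate, and source angle are assumptions, and it does not reproduce the spiking neural model used in the prototype.

```python
# Minimal sketch of conventional time-difference-of-arrival (TDOA) bearing
# estimation for one microphone pair.
import numpy as np
from scipy.signal import correlate

fs = 48000
c = 343.0            # speed of sound in air, m/s
d = 0.10             # microphone spacing, m (assumed)
true_angle = np.deg2rad(30.0)

# Simulate a broadband source arriving at the pair with the matching delay.
delay_s = d * np.sin(true_angle) / c
delay_n = int(round(delay_s * fs))
sig = np.random.randn(fs // 10)
mic1 = sig
mic2 = np.concatenate([np.zeros(delay_n), sig])[: sig.size]

# Estimate the delay from the peak of the cross-correlation.
xc = correlate(mic2, mic1, mode="full")
lag = np.argmax(xc) - (sig.size - 1)
est_angle = np.arcsin(np.clip(lag / fs * c / d, -1, 1))
print("estimated bearing:", np.rad2deg(est_angle), "degrees")
```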
Captive Bottlenose Dolphins Do Discriminate Human-Made Sounds Both Underwater and in the Air
Lima, Alice; Sébilleau, Mélissa; Boye, Martin; Durand, Candice; Hausberger, Martine; Lemasson, Alban
2018-01-01
Bottlenose dolphins (Tursiops truncatus) spontaneously emit individual acoustic signals that identify them to group members. We tested whether these cetaceans could learn artificial individual sound cues played underwater and whether they would generalize this learning to airborne sounds. Dolphins are thought to perceive only underwater sounds and their training depends largely on visual signals. We investigated the behavioral responses of seven dolphins in a group to learned human-made individual sound cues, played underwater and in the air. Dolphins recognized their own sound cue after hearing it underwater as they immediately moved toward the source, whereas when it was airborne they gazed more at the source of their own sound cue but did not approach it. We hypothesize that they perhaps detected modifications of the sound induced by air or were confused by the novelty of the situation, but nevertheless recognized they were being “targeted.” They did not respond when hearing another group member’s cue in either situation. This study provides further evidence that dolphins respond to individual-specific sounds and that these marine mammals possess some capacity for processing airborne acoustic signals. PMID:29445350
The influence of crowd density on the sound environment of commercial pedestrian streets.
Meng, Qi; Kang, Jian
2015-04-01
Commercial pedestrian streets are very common in China and Europe, with many situated in historic or cultural centres. The environments of these streets are important, including their sound environments. The objective of this study is to explore the relationships between the crowd density and the sound environments of commercial pedestrian streets. On-site measurements were performed at the case study site in Harbin, China, and a questionnaire was administered. The sound pressure measurements showed that the crowd density has an insignificant effect on sound pressure below 0.05 persons/m², whereas when the crowd density is greater than 0.05 persons/m², the sound pressure increases with crowd density. The sound sources were analysed, showing that several typical sound sources, such as traffic noise, can be masked by the sounds resulting from dense crowds. The acoustic analysis showed that crowd densities outside the range of 0.10 to 0.25 persons/m² exhibited lower acoustic comfort evaluation scores. In terms of audiovisual characteristics, the subjective loudness increases with greater crowd density, while the acoustic comfort decreases. The results for an indoor underground shopping street are also presented for comparison.
Communicating Earth Science Through Music: The Use of Environmental Sound in Science Outreach
NASA Astrophysics Data System (ADS)
Brenner, C.
2017-12-01
The need for increased public understanding and appreciation of Earth science has taken on growing importance over the last several decades. Human society faces critical environmental challenges, both near-term and future, in areas such as climate change, resource allocation, geohazard threat and the environmental degradation of ecosystems. Science outreach is an essential component to engaging both policymakers and the public in the importance of managing these challenges. However, despite considerable efforts on the part of scientists and outreach experts, many citizens feel that scientific research and methods are both difficult to understand and remote from their everyday experience. As perhaps the most accessible of all art forms, music can provide a pathway through which the public can connect to Earth processes. The Earth is not silent: environmental sound can be sampled and folded into musical compositions, either with or without the additional sounds of conventional or electronic instruments. These compositions can be used in conjunction with other forms of outreach (e.g., as soundtracks for documentary videos or museum installations), or simply stand alone as testament to the beauty of geology and nature. As proof of concept, this presentation will consist of a musical composition that includes sounds from various field recordings of wind, swamps, ice and water (including recordings from the inside of glaciers).
A Review: Characteristics of Noise Absorption Material
NASA Astrophysics Data System (ADS)
Amares, S.; Sujatmika, E.; Hong, T. W.; Durairaj, R.; Hamid, H. S. H. B.
2017-10-01
Noise is generally regarded as a nuisance, and noise pollution in the environment causes discomfort. Engineering design can also contribute to noise propagation. Using materials that absorb sound is a widely adopted remedy, yet the fundamentals of sound-absorption behaviour, the characteristics of absorbing materials, and the factors that govern them receive comparatively little discussion. Likewise, methods for relating sound absorption to the sound absorption coefficient are limited, as many studies report results with little supporting review of the literature. This paper provides a better insight into the importance of sound absorption and the material factors that determine the sound absorption coefficient.
Possibilities of psychoacoustics to determine sound quality
NASA Astrophysics Data System (ADS)
Genuit, Klaus
For some years, acoustic engineers have become increasingly aware of the importance of analyzing and minimizing noise problems not only with regard to the A-weighted sound pressure level, but also with a view to designing sound quality. It is relatively easy to determine the A-weighted SPL according to international standards. However, the objective evaluation to describe subjectively perceived sound quality - taking into account psychoacoustic parameters such as loudness, roughness, fluctuation strength, sharpness and so forth - is more difficult. On the one hand, the psychoacoustic measurement procedures known so far have not yet been standardized. On the other hand, they have only been tested in laboratories by means of listening tests in the free field with a single sound source and simple signals. Therefore, the results achieved cannot be transferred without difficulty to complex sound situations with several spatially distributed sound sources. Due to directional hearing and the selectivity of human hearing, individual sound events can be selected among many. Already in the late seventies a new binaural Artificial Head Measurement System was developed which met the requirements of the automobile industry in terms of measurement technology. The first industrial application of the Artificial Head Measurement System was in 1981. Since that time the system has been further developed, particularly through the cooperation between HEAD acoustics and Mercedes-Benz. In addition to a calibratable Artificial Head Measurement System which is compatible with standard measurement technologies and has transfer characteristics comparable to human hearing, a Binaural Analysis System is now also available. This system permits the analysis of binaural signals using physical and psychoacoustic procedures. Moreover, the signals to be analyzed can be simultaneously monitored through headphones and manipulated in the time and frequency domains so that the signal components responsible for noise annoyance can be found. Especially in complex sound situations with several spatially distributed sound sources, standard one-channel measurement methods cannot adequately determine the sound quality, acoustic comfort, or annoyance of sound events.
Wagner, Chad R.; Fitzgerald, Sharon; Antolino, Dominick J.
2015-12-24
The characterization of water-quality and bed-sediment chemistry in Currituck Sound along the proposed alignment of the Mid-Currituck Bridge summarized herein provides a baseline for determining the effect of bridge construction and bridge deck runoff on environmental conditions in Currituck Sound.
Loiselle, Louise H; Dorman, Michael F; Yost, William A; Cook, Sarah J; Gifford, Rene H
2016-08-01
To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Eleven bilateral listeners with MED-EL (Durham, NC) CIs and 8 listeners with hearing-preservation CIs with symmetrical low-frequency acoustic hearing using the MED-EL or Cochlear device were evaluated using 2 tests designed to engage binaural hearing: sound-source localization and a simulated cocktail party. Access to interaural cues for localization was constrained by the use of low-pass, high-pass, and wideband noise stimuli. Sound-source localization accuracy for listeners with bilateral CIs in response to the high-pass noise stimulus and sound-source localization accuracy for the listeners with hearing-preservation CIs in response to the low-pass noise stimulus did not differ significantly. Speech understanding in a cocktail party listening environment improved for all listeners when interaural cues, either interaural time difference or interaural level difference, were available. The findings of the current study indicate that similar degrees of benefit to sound-source localization and speech understanding in complex listening environments are possible with 2 very different rehabilitation strategies: the provision of bilateral CIs and the preservation of hearing.
Spherical harmonic analysis of the sound radiation from omnidirectional loudspeaker arrays
NASA Astrophysics Data System (ADS)
Pasqual, A. M.
2014-09-01
Omnidirectional sound sources are widely used in room acoustics. These devices are made up of loudspeakers mounted on a spherical or polyhedral cabinet, where the dodecahedral shape prevails. Although such electroacoustic sources have been made readily available to acousticians by many manufacturers, an in-depth investigation of their vibroacoustic behavior has not yet been provided. To fill this gap, this paper presents a theoretical study of the sound radiation from omnidirectional loudspeaker arrays, which is carried out by using a mathematical model based on spherical harmonic analysis. Eight different loudspeaker arrangements on the sphere are considered: the well-known five Platonic solid layouts and three extremal system layouts. The latter possess useful properties for spherical loudspeaker arrays used as directivity-controlled sound sources, so these layouts are included here in order to investigate whether or not they could also be of interest as omnidirectional sources. It is shown through a comparative analysis that the dodecahedral array leads to the lowest error in producing an omnidirectional sound field and to the highest acoustic power, which corroborates the prevalence of such a layout. In addition, if a source with fewer than 12 loudspeakers is required, it is shown that tetrahedra or hexahedra can be used alternatively, whereas the extremal system layouts are not interesting choices for omnidirectional loudspeaker arrays.
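As a simplified illustration of why the dodecahedral layout radiates nearly omnidirectionally at low frequencies, the sketch below models the twelve drivers as in-phase point monopoles at the face centres of a dodecahedron (equivalently, the vertices of an icosahedron) and evaluates how much the summed far-field magnitude varies over direction as ka grows. This point-source model is an assumption made for illustration and is much cruder than the spherical-cap formulation used in the paper.

```python
# Minimal sketch: directivity deviation of 12 in-phase point monopoles placed
# at the vertices of an icosahedron (face centres of a dodecahedral cabinet).
import numpy as np

phi = (1 + np.sqrt(5)) / 2
verts = []
for a in (-1, 1):
    for b in (-phi, phi):
        verts += [(0, a, b), (a, b, 0), (b, 0, a)]
verts = np.array(verts, dtype=float)
verts /= np.linalg.norm(verts, axis=1, keepdims=True)   # 12 unit vectors

def directivity_error(ka, n_dir=2000, rng=np.random.default_rng(1)):
    """Relative std of |p| over random far-field directions at a given ka."""
    dirs = rng.standard_normal((n_dir, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    # Far-field phase of each source is ka times its projection on the
    # observation direction; all sources driven in phase with unit strength.
    p = np.exp(1j * ka * dirs @ verts.T).sum(axis=1)
    mag = np.abs(p)
    return mag.std() / mag.mean()

for ka in (0.5, 1.0, 2.0, 4.0):
    print(f"ka = {ka}: relative directivity deviation = {directivity_error(ka):.3f}")
```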
The use of an active controlled enclosure to attenuate sound radiation from a heavy radiator
NASA Astrophysics Data System (ADS)
Sun, Yao; Yang, Tiejun; Zhu, Minggang; Pan, Jie
2017-03-01
Active structural acoustical control usually experiences difficulty in the control of heavy sources or sources where direct applications of control forces are not practical. To overcome this difficulty, an active controlled enclosure, which forms a cavity with both flexible and open boundary, is employed. This configuration permits indirect implementation of active control in which the control inputs can be applied to subsidiary structures other than the sources. To determine the control effectiveness of the configuration, the vibro-acoustic behavior of the system, which consists of a top plate with an open, a sound cavity and a source panel, is investigated in this paper. A complete mathematical model of the system is formulated involving modified Fourier series formulations and the governing equations are solved using the Rayleigh-Ritz method. The coupling mechanisms of a partly opened cavity and a plate are analysed in terms of modal responses and directivity patterns. Furthermore, to attenuate sound power radiated from both the top panel and the open, two strategies are studied: minimizing the total radiated power and the cancellation of volume velocity. Moreover, three control configurations are compared, using a point force on the control panel (structural control), using a sound source in the cavity (acoustical control) and applying hybrid structural-acoustical control. In addition, the effects of boundary condition of the control panel on the sound radiation and control performance are discussed.
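The strategy of minimizing total radiated power can be cast, in its simplest discrete form, as a regularized least-squares problem for the control source strengths. The sketch below shows that generic formulation with random placeholder transfer functions; it is not the enclosure model or modal formulation developed in the paper.

```python
# Minimal sketch of the standard least-squares formulation used in active
# control: choose secondary source strengths q that minimise the sum of
# squared pressures at a set of field points, given a primary field p_primary
# and a transfer matrix Z from control inputs to those points. The matrices
# here are random placeholders, not the coupled plate-cavity model.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_controls = 32, 3
Z = rng.standard_normal((n_points, n_controls)) + 1j * rng.standard_normal((n_points, n_controls))
p_primary = rng.standard_normal(n_points) + 1j * rng.standard_normal(n_points)

beta = 1e-3                                   # Tikhonov regularisation
q_opt = -np.linalg.solve(Z.conj().T @ Z + beta * np.eye(n_controls),
                         Z.conj().T @ p_primary)

residual = p_primary + Z @ q_opt
print("primary energy   :", np.sum(np.abs(p_primary) ** 2))
print("controlled energy:", np.sum(np.abs(residual) ** 2))
```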
Aeroacoustic analysis of the human phonation process based on a hybrid acoustic PIV approach
NASA Astrophysics Data System (ADS)
Lodermeyer, Alexander; Tautz, Matthias; Becker, Stefan; Döllinger, Michael; Birk, Veronika; Kniesburges, Stefan
2018-01-01
The detailed analysis of sound generation in human phonation is severely limited as the accessibility to the laryngeal flow region is highly restricted. Consequently, the physical basis of the underlying fluid-structure-acoustic interaction that describes the primary mechanism of sound production is not yet fully understood. Therefore, we propose the implementation of a hybrid acoustic PIV procedure to evaluate aeroacoustic sound generation during voice production within a synthetic larynx model. Focusing on the flow field downstream of synthetic, aerodynamically driven vocal folds, we calculated acoustic source terms based on the velocity fields obtained by time-resolved high-speed PIV applied to the mid-coronal plane. The radiation of these sources into the acoustic far field was numerically simulated and the resulting acoustic pressure was finally compared with experimental microphone measurements. We identified the tonal sound to be generated downstream in a small region close to the vocal folds. The simulation of the sound propagation underestimated the tonal components, whereas the broadband sound was well reproduced. Our results demonstrate the feasibility to locate aeroacoustic sound sources inside a synthetic larynx using a hybrid acoustic PIV approach. Although the technique employs a 2D-limited flow field, it accurately reproduces the basic characteristics of the aeroacoustic field in our larynx model. In future studies, not only the aeroacoustic mechanisms of normal phonation will be assessable, but also the sound generation of voice disorders can be investigated more profoundly.
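One common way to turn planar PIV velocity fields into acoustic source terms is through Lighthill's analogy, taking the double divergence of the Reynolds stress term rho0*u_i*u_j. The sketch below evaluates that quantity on a synthetic 2D field; the grid, spacing, and velocity field are assumptions, and the paper's own hybrid source-term formulation may differ from this generic form.

```python
# Minimal sketch: evaluate a Lighthill-analogy source term, the double
# divergence of rho0 * u_i * u_j, on a 2D velocity field such as one obtained
# from planar PIV. The velocity field below is synthetic.
import numpy as np

rho0 = 1.2                                    # air density, kg/m^3
nx, ny, dx = 128, 96, 1e-3                    # grid and spacing (assumed)
x = np.arange(nx) * dx
y = np.arange(ny) * dx
X, Y = np.meshgrid(x, y, indexing="ij")

# Synthetic jet-like velocity field standing in for a PIV snapshot.
u = np.exp(-((Y - y.mean()) / (10 * dx)) ** 2) * (1 + 0.1 * np.sin(2 * np.pi * X / (20 * dx)))
v = 0.05 * np.sin(2 * np.pi * X / (15 * dx)) * np.exp(-((Y - y.mean()) / (10 * dx)) ** 2)

Txx, Txy, Tyy = rho0 * u * u, rho0 * u * v, rho0 * v * v

def ddx(f):
    return np.gradient(f, dx, axis=0)

def ddy(f):
    return np.gradient(f, dx, axis=1)

# Source term q = d^2 Tij / dxi dxj (2D form).
source = ddx(ddx(Txx)) + 2 * ddx(ddy(Txy)) + ddy(ddy(Tyy))
print("peak source magnitude:", np.abs(source).max())
```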
Degerman, Alexander; Rinne, Teemu; Särkkä, Anna-Kaisa; Salmi, Juha; Alho, Kimmo
2008-06-01
Event-related brain potentials (ERPs) and magnetic fields (ERFs) were used to compare brain activity associated with selective attention to sound location or pitch in humans. Sixteen healthy adults participated in the ERP experiment, and 11 adults in the ERF experiment. In different conditions, the participants focused their attention on a designated sound location or pitch, or pictures presented on a screen, in order to detect target sounds or pictures among the attended stimuli. In the Attend Location condition, the location of sounds varied randomly (left or right), while their pitch (high or low) was kept constant. In the Attend Pitch condition, sounds of varying pitch (high or low) were presented at a constant location (left or right). Consistent with previous ERP results, selective attention to either sound feature produced a negative difference (Nd) between ERPs to attended and unattended sounds. In addition, ERPs showed a more posterior scalp distribution for the location-related Nd than for the pitch-related Nd, suggesting partially different generators for these Nds. The ERF source analyses found no source distribution differences between the pitch-related Ndm (the magnetic counterpart of the Nd) and location-related Ndm in the superior temporal cortex (STC), where the main sources of the Ndm effects are thought to be located. Thus, the ERP scalp distribution differences between the location-related and pitch-related Nd effects may have been caused by activity of areas outside the STC, perhaps in the inferior parietal regions.
Sound Explorations from the Ages of 10 to 37 Months: The Ontogenesis of Musical Conducts
ERIC Educational Resources Information Center
Delalande, Francois; Cornara, Silvia
2010-01-01
One of the forms of first musical conduct is the exploration of sound sources. When young children produce sounds with any object, these sounds may surprise them and so they make the sounds again--not exactly the same, but introducing some variation. A process of repetition with slight changes is set in motion which can be analysed, as did Piaget,…
Monitoring the Ocean Using High Frequency Ambient Sound
2008-10-01
even identify specific groups within the resident killer whale type (Puget Sound Southern Resident pods J, K and L) because these groups have...particular, the different populations of killer whales in the NE Pacific Ocean. This has been accomplished by detecting transient sounds with short...high sea state (the sound of spray), general shipping - close and distant, clanking and whale calls and clicking. These sound sources form the basis
NASA Astrophysics Data System (ADS)
Rennoll, V.
2016-02-01
The National Centers for Environmental Information provide public access to a wealth of seafloor mapping data, both from National Ocean Service hydrographic surveys and outside source collections. Utilizing the outside source data to improve nautical charts created by the National Oceanic and Atmospheric Administration (NOAA) is an appealing alternative to traditional surveys, largely in areas with significant data gaps where hydrographic surveys are not planned. However, much of the outside data are collected in transit lines and lack traditional overlapping main scheme lines and crosslines. Spanning multiple years and vessels, these transit line data collections were obtained using disparate operating procedures and have inconsistent qualities. Here, a workflow was developed to ingest these variable depth data within a defined region by assessing their quality and utility for nautical charting. The workflow was evaluated with a navigationally significant area in the Bering Sea, where bathymetric data collected from ten vessels over a period of twelve years were available. The outside data were shown to be of sufficient quality through comparisons with existing NOAA surveys and then used to demonstrate where the data could provide new or updated information on nautical charts, and provide reconnaissance for future hydrographic planning. The utility assessment of the data, however, was hindered by lack of a verified survey-scale sounding database, against which the outside source data could be compared. Having developed the workflow, it is recommended that further outside data is ingested by NOAA's Office of Coast Survey and that a database is developed with full-scale chart soundings for outside data comparisons.
2007-03-29
Development of An Empirical Water Quality Model for Stormwater Based on Watershed Land Use in Puget Sound Valerie I. Cullinan, Christopher W. May...Systems Center, Bremerton, WA) Introduction The Sinclair and Dyes Inlet watershed is located on the west side of Puget Sound in Kitsap County...Washington, U.S.A. (Figure 1). The Puget Sound Naval Shipyard (PSNS), U.S. Environmental Protection Agency (USEPA), the Washington State Department of
Meteorological effects on long-range outdoor sound propagation
NASA Technical Reports Server (NTRS)
Klug, Helmut
1990-01-01
Measurements of sound propagation over distances up to 1000 m were carried out with an impulse sound source offering reproducible, short time signals. Temperature and wind speed at several heights were monitored simultaneously; the meteorological data are used to determine the sound speed gradients according to the Monin-Obukhov similarity theory. The sound speed profile is compared to a corresponding prediction, gained through the measured travel time difference between direct and ground reflected pulse (which depends on the sound speed gradient). Positive sound speed gradients cause bending of the sound rays towards the ground yielding enhanced sound pressure levels. The measured meteorological effects on sound propagation are discussed and illustrated by ray tracing methods.
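The downward ray bending mentioned above follows from a positive vertical gradient of the effective sound speed. As a simplified stand-in for the Monin-Obukhov profiles used in the study, the sketch below builds an effective sound speed profile from a neutral logarithmic wind law for downwind propagation; the roughness length and friction velocity are assumed values.

```python
# Minimal sketch: downwind effective sound speed profile under a neutral
# logarithmic wind profile (a simplification of Monin-Obukhov similarity).
import numpy as np

kappa, u_star, z0 = 0.4, 0.3, 0.05    # von Karman constant, friction velocity (m/s), roughness (m), assumed
c0 = 343.0                            # still-air sound speed, m/s

z = np.linspace(0.5, 50, 100)         # heights, m
u = (u_star / kappa) * np.log(z / z0) # wind speed profile
c_eff = c0 + u                        # downwind propagation: wind adds to sound speed

grad = np.gradient(c_eff, z)
print("near-ground effective sound speed gradient (1/s):", grad[0])
# A positive gradient bends rays back toward the ground, raising levels downwind.
```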
The Problems with "Noise Numbers" for Wind Farm Noise Assessment
ERIC Educational Resources Information Center
Thorne, Bob
2011-01-01
Human perception responds primarily to sound character rather than sound level. Wind farms are unique sound sources and exhibit special audible and inaudible characteristics that can be described as modulating sound or as a tonal complex. Wind farm compliance measures based on a specified noise number alone will fail to address problems with noise…
Exterior sound level measurements of snowcoaches at Yellowstone National Park
DOT National Transportation Integrated Search
2010-04-01
Sounds associated with oversnow vehicles, such as snowmobiles and snowcoaches, are an important management concern at Yellowstone and Grand Teton National Parks. The John A. Volpe National Transportation Systems Centers Environmental Measurement a...
Notification: Review of Puget Sound Action Agenda Grants
Project #OA-FY13-0341, June 26, 2013. The U.S. Environmental Protection Agency, Office of Inspector General, plans to begin preliminary research for an audit of Puget Sound Action Agenda grants in July 2013.
Annoyance from industrial noise: indicators for a wide variety of industrial sources.
Alayrac, M; Marquis-Favre, C; Viollon, S; Morel, J; Le Nost, G
2010-09-01
In the study of noises generated by industrial sources, one issue is the wide variety of sources and, consequently, the complexity of the noises generated. Characterizing the environmental impact of an industrial plant therefore requires a better understanding of the noise annoyance caused by industrial noise sources. To deal with this variety, the proposed approach is organized by type of spectral feature and based on a perceptive typology of steady and permanent industrial noises comprising six categories. For each perceptive category, listening tests based on acoustical factors are performed on noise annoyance. Various indicators are necessary to predict noise annoyance due to the various industrial noise sources. Depending on the spectral features of the industrial noise sources, noise annoyance indicators are thus assessed. In the case of industrial noise sources without prominent spectral features, such as broadband noise, noise annoyance is predicted by the A-weighted sound pressure level L(Aeq) or the loudness level L(N). For industrial noises with spectral components, such as low-frequency noises with a main component at 100 Hz or noises with components in the middle frequencies, indicators are proposed here that allow good prediction of noise annoyance by taking spectral features into account.
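Because L(Aeq) is one of the indicators recommended above for broadband industrial noise, a short example of how an A-weighted level is assembled from octave-band levels may be useful. The band levels below are illustrative, not data from the study; the weighting curve follows the standard IEC 61672 formula.

```python
# Minimal sketch: combine octave-band levels of an industrial noise into an
# A-weighted level. The band levels are illustrative placeholders.
import numpy as np

def a_weight_db(f):
    """IEC 61672 A-weighting in dB at frequency f (Hz)."""
    f = np.asarray(f, dtype=float)
    ra = (12194.0**2 * f**4) / ((f**2 + 20.6**2)
         * np.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
         * (f**2 + 12194.0**2))
    return 20 * np.log10(ra) + 2.00

octave_centres = np.array([63, 125, 250, 500, 1000, 2000, 4000, 8000])
band_levels = np.array([70, 68, 66, 63, 60, 57, 54, 50])   # dB SPL, illustrative

weighted = band_levels + a_weight_db(octave_centres)
l_a = 10 * np.log10(np.sum(10 ** (weighted / 10)))
print(f"A-weighted level: {l_a:.1f} dB(A)")
```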
Status and problems of fusion reactor development.
Schumacher, U
2001-03-01
Thermonuclear fusion of deuterium and tritium constitutes an enormous potential for a safe, environmentally compatible and sustainable energy supply. The fuel source is practically inexhaustible. Further, the safety prospects of a fusion reactor are quite favourable due to the inherently self-limiting fusion process, the limited radiologic toxicity and the passive cooling property. Among a small number of approaches, the concept of toroidal magnetic confinement of fusion plasmas has achieved most impressive scientific and technical progress towards energy release by thermonuclear burn of deuterium-tritium fuels. The status of thermonuclear fusion research activity world-wide is reviewed and present solutions to the complicated physical and technological problems are presented. These problems comprise plasma heating, confinement and exhaust of energy and particles, plasma stability, alpha particle heating, fusion reactor materials, reactor safety and environmental compatibility. The results and the high scientific level of this international research activity provide a sound basis for the realisation of the International Thermonuclear Experimental Reactor (ITER), whose goal is to demonstrate the scientific and technological feasibility of a fusion energy source for peaceful purposes.
Auditory Confrontation Naming in Alzheimer’s Disease
Brandt, Jason; Bakker, Arnold; Maroof, David Aaron
2010-01-01
Naming is a fundamental aspect of language and is virtually always assessed with visual confrontation tests. Tests of the ability to name objects by their characteristic sounds would be particularly useful in the assessment of visually impaired patients, and may be particularly sensitive in Alzheimer’s disease (AD). We developed an Auditory Naming Task, requiring the identification of the source of environmental sounds (i.e., animal calls, musical instruments, vehicles) and multiple-choice recognition of those not identified. In two separate studies, mild-to-moderate AD patients performed more poorly than cognitively normal elderly on the Auditory Naming Task. This task was also more difficult than two versions of a comparable Visual Naming Task, and correlated more highly with Mini-Mental State Exam score. Internal consistency reliability was acceptable, although ROC analysis revealed auditory naming to be slightly less successful than visual confrontation naming in discriminating AD patients from normal subjects. Nonetheless, our Auditory Naming Test may prove useful in research and clinical practice, especially with visually-impaired patients. PMID:20981630
Auditory Task Irrelevance: A Basis for Inattentional Deafness
Scheer, Menja; Bülthoff, Heinrich H.; Chuang, Lewis L.
2018-01-01
Objective: This study investigates the neural basis of inattentional deafness, which could result from task irrelevance in the auditory modality. Background: Humans can fail to respond to auditory alarms under high workload situations. This failure, termed inattentional deafness, is often attributed to high workload in the visual modality, which reduces one's capacity for information processing. Besides this, our capacity for processing auditory information could also be selectively diminished if there is no obvious task relevance in the auditory channel. This could be another contributing factor given the rarity of auditory warnings. Method: Forty-eight participants performed a visuomotor tracking task while auditory stimuli were presented: a frequent pure tone, an infrequent pure tone, and infrequent environmental sounds. Participants were required either to respond to the presentation of the infrequent pure tone (auditory task-relevant) or not (auditory task-irrelevant). We recorded and compared the event-related potentials (ERPs) that were generated by environmental sounds, which were always task-irrelevant for both groups. These ERPs served as an index for our participants' awareness of the task-irrelevant auditory scene. Results: Manipulation of auditory task relevance influenced the brain's response to task-irrelevant environmental sounds. Specifically, the late novelty-P3 to irrelevant environmental sounds, which underlies working memory updating, was found to be selectively enhanced by auditory task relevance independent of visuomotor workload. Conclusion: Task irrelevance in the auditory modality selectively reduces our brain's responses to unexpected and irrelevant sounds regardless of visuomotor workload. Application: Presenting relevant auditory information more often could mitigate the risk of inattentional deafness. PMID:29578754
Dale J. Blahna; Aaron Poe; Courtney Brown; Clare M. Ryan; H. Randy Gimblett
2017-01-01
Following the grounding of the Exxon Valdez in 1989, a sustainable human use framework (human use framework) for Prince William Sound (PWS), AK was developed by the Chugach National Forest after concerns emerged about the social and environmental impacts of expanding human use due to cleanup activities and increased recreation visitation. A practical, issue-based...
Climatic factors related to land-use planning in the Puget Sound basin, Washington
Foxworthy, B.L.; Richardson, Donald
1973-01-01
The purpose of this study is to review available data related to the climate of the Puget Sound basin and to present selected climatic information along with an evaluation of its significance and general adequacy for planning purposes. This is part of continuing efforts aimed at improving the accessibility and usefulness of environmental and other data needed for land-use planning, resource development, and environmental protection.
The influence of musical experience on lateralisation of auditory processing.
Spajdel, Marián; Jariabková, Katarína; Riecanský, Igor
2007-11-01
The influence of musical experience on free-recall dichotic listening to environmental sounds, two-tone sequences, and consonant-vowel (CV) syllables was investigated. A total of 60 healthy right-handed participants were divided into two groups according to their active musical competence ("musicians" and "non-musicians"). In both groups, we found a left ear advantage (LEA) for nonverbal stimuli (environmental sounds and two-tone sequences) and a right ear advantage (REA) for CV syllables. Dichotic listening to environmental sounds was uninfluenced by musical experience. The total accuracy of recall for two-tone sequences was higher in musicians than in non-musicians but the lateralisation was similar in both groups. For CV syllables a lower REA was found in male but not female musicians in comparison to non-musicians. The results indicate a specific sex-dependent effect of musical experience on lateralisation of phonological auditory processing.
Spatial sound field synthesis and upmixing based on the equivalent source method.
Bai, Mingsian R; Hsu, Hoshen; Wen, Jheng-Ciang
2014-01-01
Given a scarce number of recorded signals, spatial sound field synthesis with an extended sweet spot is a challenging problem in acoustic array signal processing. To address the problem, a synthesis and upmixing approach inspired by the equivalent source method (ESM) is proposed. The synthesis procedure is based on the pressure signals recorded by a microphone array and requires no source model. The array geometry can also be arbitrary. Four upmixing strategies are adopted to enhance the resolution of the reproduced sound field when there are more loudspeaker channels than microphones. Multi-channel inverse filtering with regularization is exploited to deal with the ill-posedness in the reconstruction process. The distance between the microphone and loudspeaker arrays is optimized to achieve the best synthesis quality. To validate the proposed system, numerical simulations and subjective listening experiments are performed. The results demonstrated that all upmixing methods improved the quality of the reproduced target sound field over the original reproduction. In particular, the underdetermined ESM interpolation method yielded the best spatial sound field synthesis in terms of the reproduction error, timbral quality, and spatial quality.
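The multi-channel inverse filtering with regularization mentioned above is, at its core, a regularized least-squares solve relating loudspeaker driving signals to target pressures at control points. Below is a minimal frequency-domain sketch of such a Tikhonov-regularized solve; the transfer matrix G, the target pressure vector, and the regularization parameter beta are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def tikhonov_inverse_filter(G, p_target, beta=1e-2):
    """Solve q = argmin ||G q - p_target||^2 + beta ||q||^2 at one frequency bin.

    G        : (M x L) complex transfer matrix, control points x loudspeakers
    p_target : (M,) complex target pressures at the control points
    beta     : Tikhonov regularization parameter (trades accuracy against effort)
    """
    GH = G.conj().T
    # Regularized normal equations: (G^H G + beta I) q = G^H p_target
    return np.linalg.solve(GH @ G + beta * np.eye(G.shape[1]), GH @ p_target)

# Hypothetical example: 8 control points, 12 loudspeaker channels at one frequency
rng = np.random.default_rng(0)
G = rng.standard_normal((8, 12)) + 1j * rng.standard_normal((8, 12))
p_target = rng.standard_normal(8) + 1j * rng.standard_normal(8)
q = tikhonov_inverse_filter(G, p_target)
print(np.linalg.norm(G @ q - p_target))   # residual reproduction error
```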
Optimal wavelet denoising for smart biomonitor systems
NASA Astrophysics Data System (ADS)
Messer, Sheila R.; Agzarian, John; Abbott, Derek
2001-03-01
Future smart-systems promise many benefits for biomedical diagnostics. The ideal is for simple portable systems that display and interpret information from smart integrated probes or MEMS-based devices. In this paper, we will discuss a step towards this vision with a heart bio-monitor case study. An electronic stethoscope is used to record heart sounds and the problem of extracting noise from the signal is addressed via the use of wavelets and averaging. In our example of heartbeat analysis, phonocardiograms (PCGs) have many advantages in that they may be replayed and analysed for spectral and frequency information. Many sources of noise may pollute a PCG including foetal breath sounds if the subject is pregnant, lung and breath sounds, environmental noise and noise from contact between the recording device and the skin. Wavelets can be employed to denoise the PCG. The signal is decomposed by a discrete wavelet transform. Due to the efficient decomposition of heart signals, their wavelet coefficients tend to be much larger than those due to noise. Thus, coefficients below a certain level are regarded as noise and are thresholded out. The signal can then be reconstructed without significant loss of information in the signal. The questions that this study attempts to answer are which wavelet families, levels of decomposition, and thresholding techniques best remove the noise in a PCG. The use of averaging in combination with wavelet denoising is also addressed. Possible applications of the Hilbert Transform to heart sound analysis are discussed.
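A minimal sketch of the wavelet thresholding step described above, using the PyWavelets package. The choice of the 'db4' wavelet, the decomposition level, and the universal soft threshold are illustrative assumptions, not the settings the study ultimately recommends.

```python
import numpy as np
import pywt

def wavelet_denoise(pcg, wavelet="db4", level=5):
    """Denoise a phonocardiogram by soft-thresholding its detail coefficients."""
    coeffs = pywt.wavedec(pcg, wavelet, level=level)
    # Noise estimate from the finest detail band (median absolute deviation)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(pcg)))          # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(pcg)]

# Hypothetical example: a noisy synthetic "heart sound" burst
fs = 4000
t = np.arange(0, 1.0, 1.0 / fs)
clean = np.sin(2 * np.pi * 50 * t) * np.exp(-20 * t)
noisy = clean + 0.2 * np.random.randn(len(t))
print(np.std(noisy - clean), np.std(wavelet_denoise(noisy) - clean))
```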
NASA Technical Reports Server (NTRS)
Johnson, Marty E.; Fuller, Chris R.; Jones, Michael G. (Technical Monitor)
2000-01-01
In this report both a frequency domain method for creating high level harmonic excitation and a time domain inverse method for creating large pulses in a duct are developed. To create controllable, high level sound an axial array of six JBL-2485 compression drivers was used. The pressure downstream is considered as input voltages to the sources filtered by the natural dynamics of the sources and the duct. It is shown that this dynamic behavior can be compensated for by filtering the inputs such that both time delays and phase changes are taken into account. The methods developed maximize the sound output while (i) keeping within the power constraints of the sources and (ii) maintaining a suitable level of reproduction accuracy. Harmonic excitation pressure levels of over 155 dB were created experimentally over a wide frequency range (1000-4000 Hz). For pulse excitation there is a tradeoff between accuracy of reproduction and sound level achieved. However, the accurate reproduction of a pulse with a maximum pressure level over 6500 Pa was achieved experimentally. It was also shown that the throat connecting the driver to the duct makes it difficult to inject sound just below the cut-on of each acoustic mode (pre cut-on loading effect).
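The compensation idea described above, filtering the drive signals so that the natural dynamics of the sources and duct are cancelled, can be sketched as a regularized frequency-domain inverse filter. The transfer function H, the regularization constant eps, and the example signal below are illustrative assumptions, not the report's actual design.

```python
import numpy as np

def compensated_drive(desired_pressure, H, eps=1e-3):
    """Drive-voltage waveform whose output approximates desired_pressure.

    desired_pressure : target pressure time series at the downstream microphone
    H                : complex response from drive voltage to pressure on the same
                       rfft frequency grid as the signal
    eps              : regularization that limits gain where |H| is small
    """
    P = np.fft.rfft(desired_pressure)
    # Regularized inverse filter H* / (|H|^2 + eps) compensates delays and phase
    V = P * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.fft.irfft(V, n=len(desired_pressure))

# Hypothetical usage: a pure 1 ms delay as the "duct" response, fs = 50 kHz
fs, n = 50000, 2048
f = np.fft.rfftfreq(n, 1 / fs)
H = np.exp(-2j * np.pi * f * 0.001)
target = np.zeros(n); target[1000] = 1.0     # unit impulse as the desired pulse
v = compensated_drive(target, H)
```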
Perceptual assessment of quality of urban soundscapes with combined noise sources and water sounds.
Jeon, Jin Yong; Lee, Pyoung Jik; You, Jin; Kang, Jian
2010-03-01
In this study, urban soundscapes containing combined noise sources were evaluated through field surveys and laboratory experiments. The effect of water sounds on masking urban noises was then examined in order to enhance the soundscape perception. Field surveys in 16 urban spaces were conducted through soundwalking to evaluate the annoyance of combined noise sources. Synthesis curves were derived for the relationships between noise levels and the percentage of highly annoyed (%HA) and the percentage of annoyed (%A) for the combined noise sources. Qualitative analysis was also made using semantic scales for evaluating the quality of the soundscape, and it was shown that the perception of acoustic comfort and loudness was strongly related to the annoyance. A laboratory auditory experiment was then conducted in order to quantify the total annoyance caused by road traffic noise and four types of construction noise. It was shown that the annoyance ratings were related to the types of construction noise in combination with road traffic noise and the level of the road traffic noise. Finally, water sounds were determined to be the best sounds to use for enhancing the urban soundscape. The level of the water sounds should be similar to or not less than 3 dB below the level of the urban noises.
Sprague, Mark W; Luczkovich, Joseph J
2016-01-01
This finite-difference time domain (FDTD) model for sound propagation in very shallow water uses pressure and velocity grids with both 3-dimensional Cartesian and 2-dimensional cylindrical implementations. Parameters, including water and sediment properties, can vary in each dimension. Steady-state and transient signals from discrete and distributed sources, such as the surface of a vibrating pile, can be used. The cylindrical implementation uses less computation but requires axial symmetry. The Cartesian implementation allows asymmetry. FDTD calculations compare well with those of a split-step parabolic equation. Applications include modeling the propagation of individual fish sounds, fish aggregation sounds, and distributed sources.
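The FDTD scheme described above advances interleaved pressure and particle-velocity grids in time. Below is a minimal one-dimensional staggered-grid sketch; it omits the sediment layers, absorbing boundaries, and the Cartesian/cylindrical options of the actual model, and the grid spacing, time step, and source pulse are illustrative assumptions.

```python
import numpy as np

# Hypothetical 1-D water column: 100 m long, 0.1 m cells, sound speed 1500 m/s
c, rho = 1500.0, 1000.0          # sound speed (m/s) and density (kg/m^3)
dx = 0.1
dt = 0.5 * dx / c                # time step satisfying the CFL condition
n = 1000
p = np.zeros(n)                  # pressure at cell centers
u = np.zeros(n + 1)              # particle velocity at cell faces

for it in range(2000):
    # Momentum equation: update velocity from the pressure gradient
    u[1:-1] -= (dt / (rho * dx)) * (p[1:] - p[:-1])
    # Continuity equation: update pressure from the velocity divergence
    p -= (rho * c**2 * dt / dx) * (u[1:] - u[:-1])
    # Soft point source: short Gaussian pulse injected at cell 50
    p[50] += np.exp(-((it * dt - 0.005) ** 2) / (2 * 0.001**2))

print(p.max())
```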
Adaptive near-field beamforming techniques for sound source imaging.
Cho, Yong Thung; Roan, Michael J
2009-02-01
Phased array signal processing techniques such as beamforming have a long history in applications such as sonar for detection and localization of far-field sound sources. Two sometimes competing challenges arise in any type of spatial processing; these are to minimize contributions from directions other than the look direction and minimize the width of the main lobe. To tackle this problem a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response and weight optimization to minimize maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity using near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique (both techniques use near-field beamforming weightings focusing at source locations estimated based on spherical wave array manifold vectors with spatial windows). Sound source resolution accuracies of near-field imaging procedures with different weighting strategies are compared using numerical simulations both in anechoic and reverberant environments with random measurement noise. Also, experimental results are given for near-field sound pressure measurements of an enclosed loudspeaker.
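A minimal numerical sketch of the minimum variance distortionless response weighting mentioned above, in its usual form w = R^-1 d / (d^H R^-1 d) with a spherical-wave (near-field) steering vector. The array geometry, frequency, toy cross-spectral matrix, and diagonal loading are hypothetical, not those of the paper.

```python
import numpy as np

def nearfield_steering(mic_pos, focus, k):
    """Spherical-wave array manifold vector focused at a near-field point."""
    r = np.linalg.norm(mic_pos - focus, axis=1)
    return np.exp(-1j * k * r) / r

def mvdr_weights(R, d, loading=1e-3):
    """MVDR: minimize output power subject to unit gain toward the focus point."""
    Rl = R + loading * np.trace(R) / R.shape[0] * np.eye(R.shape[0])
    Rinv_d = np.linalg.solve(Rl, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# Hypothetical 8-microphone line array, 1 kHz, focus 0.3 m in front of the array
c, f = 343.0, 1000.0
k = 2 * np.pi * f / c
mics = np.c_[np.linspace(-0.35, 0.35, 8), np.zeros(8), np.zeros(8)]
d = nearfield_steering(mics, np.array([0.0, 0.3, 0.0]), k)
R = np.eye(8) + 0.1 * np.outer(d, d.conj())   # toy cross-spectral matrix
w = mvdr_weights(R, d)
print(abs(w.conj() @ d))                      # distortionless constraint: ~1
```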
NASA Astrophysics Data System (ADS)
Miner, Nadine Elizabeth
1998-09-01
This dissertation presents a new wavelet-based method for synthesizing perceptually convincing, dynamic sounds using parameterized sound models. The sound synthesis method is applicable to a variety of applications including Virtual Reality (VR), multi-media, entertainment, and the World Wide Web (WWW). A unique contribution of this research is the modeling of the stochastic, or non-pitched, sound components. This stochastic-based modeling approach leads to perceptually compelling sound synthesis. Two preliminary studies conducted provide data on multi-sensory interaction and audio-visual synchronization timing. These results contributed to the design of the new sound synthesis method. The method uses a four-phase development process, including analysis, parameterization, synthesis and validation, to create the wavelet-based sound models. A patent is pending for this dynamic sound synthesis method, which provides perceptually-realistic, real-time sound generation. This dissertation also presents a battery of perceptual experiments developed to verify the sound synthesis results. These experiments are applicable for validation of any sound synthesis technique.
Aircraft laser sensing of sound velocity in water - Brillouin scattering
NASA Technical Reports Server (NTRS)
Hickman, G. D.; Harding, John M.; Carnes, Michael; Pressman, AL; Kattawar, George W.; Fry, Edward S.
1991-01-01
A real-time data source for sound speed in the upper 100 m has been proposed for exploratory development. This data source is planned to be generated via a ship- or aircraft-mounted optical pulsed laser using the spontaneous Brillouin scattering technique. The system should be capable (from a single 10 ns 500 mJ pulse) of yielding range resolved sound speed profiles in water to depths of 75-100 m to an accuracy of 1 m/s. The 100 m profiles will provide the capability of rapidly monitoring the upper-ocean vertical structure. They will also provide an extensive, subsurface-data source for existing real-time, operational ocean nowcast/forecast systems.
Exterior sound level measurements of over-snow vehicles at Yellowstone National Park.
DOT National Transportation Integrated Search
2008-09-30
Sounds associated with oversnow vehicles, such as snowmobiles and snowcoaches, are an important management concern at Yellowstone and Grand Teton National Parks. The John A. Volpe National Transportation Systems Center's Environmental Measureme...
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2010-09-01
BEETIT Project: Penn State is designing a freezer that substitutes the use of sound waves and environmentally benign refrigerant for synthetic refrigerants found in conventional freezers. Called a thermoacoustic chiller, the technology is based on the fact that the pressure oscillations in a sound wave result in temperature changes. Areas of higher pressure raise temperatures and areas of low pressure decrease temperatures. By carefully arranging a series of heat exchangers in a sound field, the chiller is able to isolate the hot and cold regions of the sound waves. Penn State's chiller uses helium gas to replace synthetic refrigerants. Because helium does not burn, explode or combine with other chemicals, it is an environmentally-friendly alternative to other polluting refrigerants. Penn State is working to apply this technology on a large scale.
Constructing "sound science" and "good epidemiology": tobacco, lawyers, and public relations firms.
Ong, E K; Glantz, S A
2001-11-01
The tobacco industry has attacked "junk science" to discredit the evidence that secondhand smoke-among other environmental toxins-causes disease. Philip Morris used public relations firms and lawyers to develop a "sound science" program in the United States and Europe that involved recruiting other industries and issues to obscure the tobacco industry's role. The European "sound science" plans included a version of "good epidemiological practices" that would make it impossible to conclude that secondhand smoke-and thus other environmental toxins-caused diseases. Public health professionals need to be aware that the "sound science" movement is not an indigenous effort from within the profession to improve the quality of scientific discourse, but reflects sophisticated public relations campaigns controlled by industry executives and lawyers whose aim is to manipulate the standards of scientific proof to serve the corporate interests of their clients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-06-01
This project constitutes Phase 2 of the Sound Waste Management Plan and created waste oil collection and disposal facilities, bilge water collection and disposal facilities, recycling storage, and household hazardous waste collection and storage facilities in Prince William Sound. A wide range of waste streams are generated within communities in the Sound, including used oil generated from vehicles and vessels and hazardous wastes generated by households. This project included the design and construction of Environmental Operations Stations buildings in Valdez, Cordova, Whittier, Chenega Bay and Tatitlek to improve the overall management of oily wastes. They will house new equipment to facilitate oily waste collection, treatment and disposal. This project also included completion of used oil management manuals.
Commencement Bay Cumulative Impact Study: Historic Review of Special Aquatic Sites
1991-05-04
is generally defined as a geographic region of south Puget Sound in Washington State extending from Brown's Point to Point Defiance. ...amount of sediment load. [Figure 1: study area map, Commencement Bay Cumulative Impacts Study, Puget Sound.] ...the Puget Sound Environmental Atlas was produced under funding from the Seattle District Corps of Engineers, EPA, and the Puget Sound Water Quality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Page, D.S.; Boehm, P.D.; Douglas, G.S.
Advanced hydrocarbon fingerprinting methods and improved analytical methods make possible the quantitative discrimination of the multiple sources of hydrocarbons in the benthic sediments of Prince William Sound (PWS) and the Gulf of Alaska. These methods measure an extensive range of polycyclic aromatic hydrocarbons (PAH) at detection levels that are as much as two orders of magnitude lower than those obtained by standard Environmental Protection Agency methods. Nineteen hundred thirty six subtidal sediment samples collected in the sound and the eastern Gulf of Alaska in 1989, 1990, and 1991 were analyzed. Fingerprint analyses of gas chromatography-mass spectrometry data reveal a natural background of petrogenic and biogenic PAH. Exxon Valdez crude, its weathering products, and diesel fuel refined from Alaska North Slope crude are readily distinguished from the natural seep petroleum background and from each other because of their distinctive PAH distributions. Mixing models were developed to calculate the PAH contributions from each source to each sediment sample. These calculations show that most of the seafloor in PWS contains no detectable hydrocarbons from the Exxon Valdez spill, although elevated concentrations of PAH from seep sources are widespread. In those areas where they were detected, spill hydrocarbons were generally a small increment to the natural petroleum hydrocarbon background. Low levels of Exxon Valdez crude residue were present in 1989 and again in 1990 in nearshore subtidal sediments off some shorelines that had been heavily oiled. By 1991 these crude residues were heavily degraded and even more sporadically distributed. 58 refs., 18 figs., 5 tabs.
Wang, Zhitao; Wu, Yuping; Duan, Guoqin; Cao, Hanjiang; Liu, Jianchang; Wang, Kexiong; Wang, Ding
2014-01-01
Anthropogenic noise in aquatic environments is a worldwide concern due to its potential adverse effects on the environment and aquatic life. The Hongkong-Zhuhai-Macao Bridge is currently under construction in the Pearl River Estuary, a hot spot for the Indo-Pacific humpbacked dolphin (Sousa chinensis) in China. The OCTA-KONG, the world's largest vibration hammer, is being used during this construction project to drive or extract steel shell piles 22 m in diameter. This activity poses a substantial threat to marine mammals, and an environmental assessment is critically needed. The underwater acoustic properties of the OCTA-KONG were analyzed, and the potential impacts of the underwater acoustic energy on Sousa, including auditory masking and physiological impacts, were assessed. The fundamental frequency of the OCTA-KONG vibration ranged from 15 Hz to 16 Hz, and the noise increments were below 20 kHz, with a dominant frequency and energy below 10 kHz. The resulting sounds are most likely detectable by Sousa over distances of up to 3.5 km from the source. Although Sousa clicks do not appear to be adversely affected, Sousa whistles are susceptible to auditory masking, which may negatively impact this species' social life. Therefore, a safety zone with a radius of 500 m is proposed. Although the zero-to-peak source level (SL) of the OCTA-KONG was lower than the physiological damage level, the maximum root-mean-square SL exceeded the cetacean safety exposure level on several occasions. Moreover, the majority of the unweighted cumulative source sound exposure levels (SSELs) and the cetacean auditory weighted cumulative SSELs exceeded the acoustic threshold levels for the onset of temporary threshold shift, a type of potentially recoverable auditory damage resulting from prolonged sound exposure. These findings may aid in the identification and design of appropriate mitigation methods, such as the use of air bubble curtains, “soft start” and “power down” techniques. PMID:25338113
McClaine, Elizabeth M.; Yin, Tom C. T.
2010-01-01
The precedence effect (PE) is an auditory spatial illusion whereby two identical sounds presented from two separate locations with a delay between them are perceived as a fused single sound source whose position depends on the value of the delay. By training cats using operant conditioning to look at sound sources, we have previously shown that cats experience the PE similarly to humans. For delays less than ±400 μs, cats exhibit summing localization, the perception of a “phantom” sound located between the sources. Consistent with localization dominance, for delays from 400 μs to ∼10 ms, cats orient toward the leading source location only, with little influence of the lagging source. Finally, echo threshold was reached for delays >10 ms, where cats first began to orient to the lagging source. It has been hypothesized by some that the neural mechanisms that produce facets of the PE, such as localization dominance and echo threshold, must likely occur at cortical levels. To test this hypothesis, we measured both pinnae position, which were not under any behavioral constraint, and eye position in cats and found that the pinnae orientations to stimuli that produce each of the three phases of the PE illusion was similar to the gaze responses. Although both eye and pinnae movements behaved in a manner that reflected the PE, because the pinnae moved with strikingly short latencies (∼30 ms), these data suggest a subcortical basis for the PE and that the cortex is not likely to be directly involved. PMID:19889848
Tollin, Daniel J; McClaine, Elizabeth M; Yin, Tom C T
2010-01-01
The precedence effect (PE) is an auditory spatial illusion whereby two identical sounds presented from two separate locations with a delay between them are perceived as a fused single sound source whose position depends on the value of the delay. By training cats using operant conditioning to look at sound sources, we have previously shown that cats experience the PE similarly to humans. For delays less than ±400 μs, cats exhibit summing localization, the perception of a "phantom" sound located between the sources. Consistent with localization dominance, for delays from 400 μs to ∼10 ms, cats orient toward the leading source location only, with little influence of the lagging source. Finally, echo threshold was reached for delays >10 ms, where cats first began to orient to the lagging source. It has been hypothesized by some that the neural mechanisms that produce facets of the PE, such as localization dominance and echo threshold, must likely occur at cortical levels. To test this hypothesis, we measured both pinnae position, which were not under any behavioral constraint, and eye position in cats and found that the pinnae orientations to stimuli that produce each of the three phases of the PE illusion was similar to the gaze responses. Although both eye and pinnae movements behaved in a manner that reflected the PE, because the pinnae moved with strikingly short latencies (∼30 ms), these data suggest a subcortical basis for the PE and that the cortex is not likely to be directly involved.
McIDAS-V: A Data Analysis and Visualization Tool for Global Satellite Data
NASA Astrophysics Data System (ADS)
Achtor, T. H.; Rink, T. D.
2011-12-01
The Man-computer Interactive Data Access System (McIDAS-V) is a Java-based, open-source, freely available system for scientists, researchers and algorithm developers working with atmospheric data. The McIDAS-V software tools provide powerful new data manipulation and visualization capabilities, including 4-dimensional displays, an abstract data model with integrated metadata, user defined computation, and a powerful scripting capability. As such, McIDAS-V is a valuable tool for scientists and researchers within the GEO and GEOSS domains. The advancing polar and geostationary orbit environmental satellite missions conducted by several countries will carry advanced instrumentation and systems that will collect and distribute land, ocean, and atmosphere data. These systems provide atmospheric and sea surface temperatures, humidity sounding, cloud and aerosol properties, and numerous other environmental products. This presentation will display and demonstrate some of the capabilities of McIDAS-V to analyze and display high temporal and spectral resolution data using examples from international environmental satellites.
Investigation of hydraulic transmission noise sources
NASA Astrophysics Data System (ADS)
Klop, Richard J.
Advanced hydrostatic transmissions and hydraulic hybrids show potential in new market segments such as commercial vehicles and passenger cars. Such new applications regard low noise generation as a high priority, thus, demanding new quiet hydrostatic transmission designs. In this thesis, the aim is to investigate noise sources of hydrostatic transmissions to discover strategies for designing compact and quiet solutions. A model has been developed to capture the interaction of a pump and motor working in a hydrostatic transmission and to predict overall noise sources. This model allows a designer to compare noise sources for various configurations and to design compact and inherently quiet solutions. The model describes dynamics of the system by coupling lumped parameter pump and motor models with a one-dimensional unsteady compressible transmission line model. The model has been verified with dynamic pressure measurements in the line over a wide operating range for several system structures. Simulation studies were performed illustrating sensitivities of several design variables and the potential of the model to design transmissions with minimal noise sources. A semi-anechoic chamber has been designed and constructed suitable for sound intensity measurements that can be used to derive sound power. Measurements proved the potential to reduce audible noise by predicting and reducing both noise sources. Sound power measurements were conducted on a series hybrid transmission test bench to validate the model and compare predicted noise sources with sound power.
On the Possible Detection of Lightning Storms by Elephants
Kelley, Michael C.; Garstang, Michael
2013-01-01
Simple Summary We use data similar to that taken by the International Monitoring System for the detection of nuclear explosions to determine whether elephants might be capable of detecting and locating the source of sounds generated by thunderstorms. Knowledge that elephants might be capable of responding to such storms, particularly at the end of the dry season when migrations are initiated, is of considerable interest to management and conservation. Abstract Theoretical calculations suggest that sounds produced by thunderstorms and detected by a system similar to the International Monitoring System (IMS) for the detection of nuclear explosions at distances ≥100 km are at sound pressure levels equal to or greater than 6 × 10⁻³ Pa. Such sound pressure levels are well within the range of elephant hearing. Frequencies carrying these sounds might allow for interaural time delays such that adult elephants could not only hear but could also locate the source of these sounds. Determining whether it is possible for elephants to hear and locate thunderstorms contributes to the question of whether elephant movements are triggered or influenced by these abiotic sounds. PMID:26487406
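For reference, the quoted pressure can be converted to a sound pressure level relative to the standard 20 µPa reference in air; the short sketch below shows that conversion as a rough check only, and the paper's own threshold figures should be consulted for exact values.

```python
import math

p = 6e-3          # sound pressure from the abstract, Pa
p_ref = 20e-6     # standard reference pressure in air, Pa
spl = 20 * math.log10(p / p_ref)
print(f"{spl:.1f} dB SPL")   # roughly 49.5 dB re 20 uPa
```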
Noise levels in neonatal intensive care unit and use of sound absorbing panel in the isolette.
Altuncu, E; Akman, I; Kulekci, S; Akdas, F; Bilgen, H; Ozek, E
2009-07-01
The purposes of this study were to measure the noise level of a busy neonatal intensive care unit (NICU) and to determine the effect of a sound absorbing panel (SAP) on the level of noise inside the isolette. The sound pressure levels (SPL) of background noise, baby crying, alarms and closing of the isolette's door/portholes were measured by a 2235-Brüel&Kjaer Sound Level Meter. Readings were repeated after applying SAP (3D pyramidal shaped open cell polyurethane foam) to the three lateral walls and ceiling of the isolette. The median SPL of background noise inside the NICU was 56 dBA and it decreased to 47 dBA inside the isolette. The median SPL of monitor alarms and baby crying inside the isolette were not different than SPL measured under the radiant warmer (p>0.05). With SAP, the median SPL of the temperature alarm inside the isolette decreased significantly from 82 to 72 dBA, monitor alarm from 64 to 56 dBA, porthole closing from 81 to 74 dBA, and isolette door closing from 80 to 68 dBA (p<0.01). There was a significant reduction in the noise produced by baby crying when SAP was used in the isolette (79 dBA vs 69 dBA, respectively) (p<0.0001). The panel also had a significant attenuating effect on environmental noise. The noise level in our NICU is significantly above the universally recommended levels. Being inside the isolette protects infants from noise sources produced outside the isolette. However, very high noises are produced inside the isolette as well. A sound absorbing panel can be a simple solution, and it attenuated the noise levels inside the isolette.
Minke whale song, spacing, and acoustic communication on the Great Barrier Reef, Australia
NASA Astrophysics Data System (ADS)
Gedamke, Jason
An inquisitive population of minke whale (Balaenoptera acutorostrata ) that concentrates on the Great Barrier Reef during its suspected breeding season offered a unique opportunity to conduct a multi-faceted study of a little-known Balaenopteran species' acoustic behavior. Chapter one investigates whether the minke whale is the source of an unusual, complex, and stereotyped sound recorded, the "star-wars" vocalization. A hydrophone array was towed from a vessel to record sounds from circling whales for subsequent localization of sound sources. These acoustic locations were matched with shipboard and in-water observations of the minke whale, demonstrating the minke whale was the source of this unusual sound. Spectral and temporal features of this sound and the source levels at which it is produced are described. The repetitive "star-wars" vocalization appears similar to the songs of other whale species and has characteristics consistent with reproductive advertisement displays. Chapter two investigates whether song (i.e. the "star-wars" vocalization) has a spacing function through passive monitoring of singer spatial patterns with a moored five-sonobuoy array. Active song playback experiments to singers were also conducted to further test song function. This study demonstrated that singers naturally maintain spatial separations between them through a nearest-neighbor analysis and animated tracks of singer movements. In response to active song playbacks, singers generally moved away and repeated song more quickly suggesting that song repetition interval may help regulate spatial interaction and singer separation. These results further indicate the Great Barrier Reef may be an important reproductive habitat for this species. Chapter three investigates whether song is part of a potentially graded repertoire of acoustic signals. Utilizing both vessel-based recordings and remote recordings from the sonobuoy array, temporal and spectral features, source levels, and associated contextual data of recorded sounds were analyzed. Two categories of sound are described here: (1) patterned song, which was regularly repeated in one of three patterns: slow, fast, and rapid-clustered repetition, and (2) non-patterned "social" sounds recorded from gregarious assemblages of whales. These discrete acoustic signals may comprise a graded system of communication (Slow/fast song → Rapid-clustered song → Social sounds) that is related to the spacing between whales.
Investigation of the sound generation mechanisms for in-duct orifice plates.
Tao, Fuyang; Joseph, Phillip; Zhang, Xin; Stalnov, Oksana; Siercke, Matthias; Scheel, Henning
2017-08-01
Sound generation due to an orifice plate in a hard-walled flow duct which is commonly used in air distribution systems (ADS) and flow meters is investigated. The aim is to provide an understanding of this noise generation mechanism based on measurements of the source pressure distribution over the orifice plate. A simple model based on Curle's acoustic analogy is described that relates the broadband in-duct sound field to the surface pressure cross spectrum on both sides of the orifice plate. This work describes careful measurements of the surface pressure cross spectrum over the orifice plate from which the surface pressure distribution and correlation length is deduced. This information is then used to predict the radiated in-duct sound field. Agreement within 3 dB between the predicted and directly measured sound fields is obtained, providing direct confirmation that the surface pressure fluctuations acting over the orifice plates are the main noise sources. Based on the developed model, the contributions to the sound field from different radial locations of the orifice plate are calculated. The surface pressure is shown to follow a U^3.9 velocity scaling law and the area over which the surface sources are correlated follows a U^1.8 velocity scaling law.
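The velocity scaling laws quoted above (surface pressure roughly proportional to U^3.9, correlated area to U^1.8) are exponents of power-law fits. Below is a minimal sketch of how such an exponent can be estimated from measurements via a log-log least-squares fit; the flow speeds and data values are entirely hypothetical.

```python
import numpy as np

# Hypothetical measurements: flow speeds (m/s) and a quantity following ~U^3.9
U = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
q = 2.0e-4 * U**3.9 * (1 + 0.02 * np.random.default_rng(1).standard_normal(5))

# Fit q = a * U^n, i.e. log q = log a + n log U
n, log_a = np.polyfit(np.log(U), np.log(q), 1)
print(f"estimated exponent n = {n:.2f}")
```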
The Confirmation of the Inverse Square Law Using Diffraction Gratings
ERIC Educational Resources Information Center
Papacosta, Pangratios; Linscheid, Nathan
2014-01-01
Understanding the inverse square law, how for example the intensity of light or sound varies with distance, presents conceptual and mathematical challenges. Students know intuitively that intensity decreases with distance. A light source appears dimmer and sound gets fainter as the distance from the source increases. The difficulty is in…
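The inverse square law the article addresses states that, for a point source radiating power P uniformly, intensity falls off as I = P / (4 π r²). The short sketch below illustrates the resulting drop of about 6 dB per doubling of distance; the power and distances are illustrative values only.

```python
import numpy as np

P = 1.0                               # hypothetical radiated acoustic power, W
r = np.array([1.0, 2.0, 4.0, 8.0])    # distances from the source, m
I = P / (4 * np.pi * r**2)            # inverse square law
level = 10 * np.log10(I / 1e-12)      # intensity level re 1 pW/m^2
print(np.round(level, 1))             # falls ~6 dB for each doubling of distance
```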
An experimental investigation of thermoacoustic lasers operating in audible frequency range
NASA Astrophysics Data System (ADS)
Kolhe, Sanket Anil
Thermoacoustic lasers convert heat from a high-temperature heat source into acoustic power while rejecting waste heat to a low temperature sink. The working fluids involved can be air or noble gases which are nontoxic and environmentally benign. Simple in construction due to absence of moving parts, thermoacoustic lasers can be employed to achieve generation of electricity at individual homes, water-heating for domestic purposes, and to facilitate space heating and cooling. The possibility of utilizing waste heat or solar energy to run thermoacoustic devices makes them technically promising and economically viable to generate large quantities of acoustic energy. The research presented in this thesis deals with the effects of geometric parameters (stack position, stack length, tube length) associated with a thermoacoustic laser on the output sound wave. The effects of varying input power on acoustic output were also studied. Based on the experiments, optimum operating conditions were identified and qualitative and/or quantitative explanations were provided to justify our observations. It was observed that the maximum sound pressure level was generated for the laser with the stack positioned at one-quarter of the resonator length from the closed end. Higher sound pressure levels were recorded for the laser with longer stack lengths and longer resonator lengths. Efforts were also made to develop high-frequency thermoacoustic lasers.
Beranek, Leo
2011-05-01
The parameter "Strength of Sound G" is closely related to loudness. Its magnitude is dependent, inversely, on the total sound absorption in a room. By comparison, the reverberation time (RT) is both inversely related to the total sound absorption in a hall and directly related to its cubic volume. Hence, G and RT in combination are vital in planning the acoustics of a concert hall. A newly proposed "Bass Index" is related to the loudness of the bass sound and equals the value of G at 125 Hz in decibels minus its value at mid-frequencies. Listener envelopment (LEV) is shown for most halls to be directly related to the mid-frequency value of G. The broadening of sound, i.e., apparent source width (ASW), is given by the degree of source broadening (DSB), which is determined from the combined effect of early lateral reflections as measured by binaural quality index (BQI) and strength G. The optimum values and limits of these parameters are discussed.
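The Bass Index defined above, together with the standard Sabine relation behind the statement that RT varies directly with volume and inversely with total absorption, can be written compactly as follows (a sketch only; the Sabine expression is the usual textbook form and not necessarily the exact one the author uses):

```latex
\mathrm{Bass\ Index} = G_{125\,\mathrm{Hz}} - G_{\mathrm{mid}},
\qquad
RT \approx \frac{0.161\, V}{A}
```

where the G values are in decibels, V is the hall volume in cubic metres, and A is the total sound absorption in square-metre sabins.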
The role of long-term familiarity and attentional maintenance in short-term memory for timbre.
Siedenburg, Kai; McAdams, Stephen
2017-04-01
We study short-term recognition of timbre using familiar recorded tones from acoustic instruments and unfamiliar transformed tones that do not readily evoke sound-source categories. Participants indicated whether the timbre of a probe sound matched with one of three previously presented sounds (item recognition). In Exp. 1, musicians better recognised familiar acoustic compared to unfamiliar synthetic sounds, and this advantage was particularly large in the medial serial position. There was a strong correlation between correct rejection rate and the mean perceptual dissimilarity of the probe to the tones from the sequence. Exp. 2 compared musicians' and non-musicians' performance with concurrent articulatory suppression, visual interference, and with a silent control condition. Both suppression tasks disrupted performance by a similar margin, regardless of musical training of participants or type of sounds. Our results suggest that familiarity with sound source categories and attention play important roles in short-term memory for timbre, which rules out accounts solely based on sensory persistence.
Felix II, Richard A.; Gourévitch, Boris; Gómez-Álvarez, Marcelo; Leijon, Sara C. M.; Saldaña, Enrique; Magnusson, Anna K.
2017-01-01
Auditory streaming enables perception and interpretation of complex acoustic environments that contain competing sound sources. At early stages of central processing, sounds are segregated into separate streams representing attributes that later merge into acoustic objects. Streaming of temporal cues is critical for perceiving vocal communication, such as human speech, but our understanding of circuits that underlie this process is lacking, particularly at subcortical levels. The superior paraolivary nucleus (SPON), a prominent group of inhibitory neurons in the mammalian brainstem, has been implicated in processing temporal information needed for the segmentation of ongoing complex sounds into discrete events. The SPON requires temporally precise and robust excitatory input(s) to convey information about the steep rise in sound amplitude that marks the onset of voiced sound elements. Unfortunately, the sources of excitation to the SPON and the impact of these inputs on the behavior of SPON neurons have yet to be resolved. Using anatomical tract tracing and immunohistochemistry, we identified octopus cells in the contralateral cochlear nucleus (CN) as the primary source of excitatory input to the SPON. Cluster analysis of miniature excitatory events also indicated that the majority of SPON neurons receive one type of excitatory input. Precise octopus cell-driven onset spiking coupled with transient offset spiking make SPON responses well-suited to signal transitions in sound energy contained in vocalizations. Targets of octopus cell projections, including the SPON, are strongly implicated in the processing of temporal sound features, which suggests a common pathway that conveys information critical for perception of complex natural sounds. PMID:28620283
Clark, Christopher James
2014-01-01
Models of character evolution often assume a single mode of evolutionary change, such as continuous, or discrete. Here I provide an example in which a character exhibits both types of change. Hummingbirds in the genus Selasphorus produce sound with fluttering tail-feathers during courtship. The ancestral character state within Selasphorus is production of sound with an inner tail-feather, R2, in which the sound usually evolves gradually. Calliope and Allen's Hummingbirds have evolved autapomorphic acoustic mechanisms that involve feather-feather interactions. I develop a source-filter model of these interactions. The ‘source’ comprises feather(s) that are both necessary and sufficient for sound production, and are aerodynamically coupled to neighboring feathers, which act as filters. Filters are unnecessary or insufficient for sound production, but may evolve to become sources. Allen's Hummingbird has evolved to produce sound with two sources, one with feather R3, another frequency-modulated sound with R4, and their interaction frequencies. Allen's R2 retains the ancestral character state, a ∼1 kHz “ghost” fundamental frequency masked by R3, which is revealed when R3 is experimentally removed. In the ancestor to Allen's Hummingbird, the dominant frequency has ‘hopped’ to the second harmonic without passing through intermediate frequencies. This demonstrates that although the fundamental frequency of a communication sound may usually evolve gradually, occasional jumps from one character state to another can occur in a discrete fashion. Accordingly, mapping acoustic characters on a phylogeny may produce misleading results if the physical mechanism of production is not known. PMID:24722049
The EPA Agriculture Resource Directory offers comprehensive, easy-to-understand information about environmental stewardship on farms and ranches; commonsense, flexible approaches that are both environmentally protective and agriculturally sound.
Sound produced by an oscillating arc in a high-pressure gas
NASA Astrophysics Data System (ADS)
Popov, Fedor K.; Shneider, Mikhail N.
2017-08-01
We suggest a simple theory to describe the sound generated by small periodic perturbations of a cylindrical arc in a dense gas. Theoretical analysis was done within the framework of the non-self-consistent channel arc model and supplemented with time-dependent gas dynamic equations. It is shown that an arc with power amplitude oscillations on the order of several percent is a source of sound whose intensity is comparable with external ultrasound sources used in experiments to increase the yield of nanoparticles in the high pressure arc systems for nanoparticle synthesis.
Focusing and directional beaming effects of airborne sound through a planar lens with zigzag slits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Kun; Qiu, Chunyin, E-mail: cyqiu@whu.edu.cn; Lu, Jiuyang
2015-01-14
Based on the Huygens-Fresnel principle, we design a planar lens to efficiently realize the interconversion between a point-like sound source and a Gaussian beam in ambient air. The lens is constructed from a planar plate perforated elaborately with a nonuniform array of zigzag slits, where the slit exits act as subwavelength-sized secondary sources carrying desired sound responses. The experiments, operated in the audible regime, agree well with the theoretical predictions. This compact device could be useful in daily life applications, such as for medical and detection purposes.
High-frequency monopole sound source for anechoic chamber qualification
NASA Astrophysics Data System (ADS)
Saussus, Patrick; Cunefare, Kenneth A.
2003-04-01
Anechoic chamber qualification procedures require the use of an omnidirectional monopole sound source. Required characteristics for these monopole sources are explicitly listed in ISO 3745. Building a high-frequency monopole source that meets these characteristics has proved difficult due to the size limitations imposed by small wavelengths at high frequency. A prototype design developed for use in hemianechoic chambers employs telescoping tubes, which act as an inverse horn. This same design can be used in anechoic chambers, with minor adaptations. A series of gradually decreasing brass telescoping tubes is attached to the throat of a well-insulated high-frequency compression driver. Therefore, all of the sound emitted from the driver travels through the horn and exits through an opening of approximately 2.5 mm. Directivity test data show that this design meets all of the requirements set forth by ISO 3745.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, N.W.T.
2009-10-15
Many perceive the implementation of environmental regulatory policy, especially concerning non-point source pollution from irrigated agriculture, as being less efficient in the United States than in many other countries. This is partly a result of the stakeholder involvement process but is also a reflection of the inability to make effective use of Environmental Decision Support Systems (EDSS) to facilitate technical information exchange with stakeholders and to provide a forum for innovative ideas for controlling non-point source pollutant loading. This paper describes one of the success stories where a standardized Environmental Protection Agency (EPA) methodology was modified to better suit regulation of a trace element in agricultural subsurface drainage and information technology was developed to help guide stakeholders, provide assurances to the public and encourage innovation while improving compliance with State water quality objectives. The geographic focus of the paper is the western San Joaquin Valley where, in 1985, evapoconcentration of selenium in agricultural subsurface drainage water, diverted into large ponds within a federal wildlife refuge, caused teratogenicity in waterfowl embryos and in other sensitive wildlife species. The fallout from this environmental disaster was a concerted attempt by State and Federal water agencies to regulate non-point source loads of the trace element selenium. The complexity of selenium hydrogeochemistry, the difficulty and expense of selenium concentration monitoring and political discord between agricultural and environmental interests created challenges to the regulation process. Innovative policy and institutional constructs, supported by environmental monitoring and the web-based data management and dissemination systems, provided essential decision support, created opportunities for adaptive management and ultimately contributed to project success. The paper provides a retrospective on the contentious planning process and offers suggestions as to how the technical and institutional issues could have been resolved faster through early adoption of some of the core principles of sound EDSS design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Four years after its occurrence rocked the petroleum industry and revitalized the US environmental movement, the Exxon Valdez tanker oil spill off Alaska continues to stir controversy. Conflicting reports abound over whether there is long term damage to the Prince William Sound ecosystem resulting from the March 24, 1989, spill. Government scientists at recent conferences disclosed studies they contend show long term, significant damage to the sound. Exxon this month launched a counteroffensive, disclosing results of studies it funded that it claims show no credible scientific evidence of long term damage. At the same time, the company blasted as flawed the government's data on assessing environmental damage to the sound and charged that test samples from the sound were mishandled. Meantime, Prince William Sound still shows lingering effects from the Exxon Valdez oil spill. But recovery has been so rapid that there is more controversy over how to use $900 million in natural resource recovery funds that Exxon paid than over how badly species are suffering. The paper describes Exxon's studies; faulty data; lingering damage; and an update on tanker safety.
Sigmundsson, Hermundur; Eriksen, Adrian D.; Ofteland, Greta Storm; Haga, Monika
2017-01-01
This study explored whether there is a gender difference in letter-sound knowledge when children start at school. 485 children aged 5–6 years completed assessment of letter-sound knowledge, i.e., large letters; sound of large letters; small letters; sound of small letters. The findings indicate a significant difference between girls and boys in all four factors tested in this study in favor of the girls. There is still no clear explanation for the basis of the presumed gender difference in letter-sound knowledge. An origin in neuro-biological factors cannot be excluded; however, the fact that girls have probably been exposed to more language experience/stimulation than boys lends support to explanations derived from environmental aspects. PMID:28951726
Ooishi, Yuuki
2018-01-01
A sound-induced sympathetic tone has been used as an index for orienting responses to auditory stimuli. The resting testosterone/cortisol ratio is a biomarker of social aggression that drives an approaching behavior in response to environmental stimuli, and a higher testosterone level and a lower cortisol level can facilitate the sympathetic response to environmental stimuli. Therefore, it is possible that the testosterone/cortisol ratio is correlated with the sound-induced sympathetic tone. The current study investigated the relationship between the resting testosterone/cortisol ratio and vasoconstriction induced by listening to sound stimuli. Twenty healthy males aged 29.0 ± 0.53 years (mean ± S.E.M) participated in the study. They came to the laboratory for 3 days and listened to one of three types of sound stimuli for 1 min on each day. Saliva samples were collected for an analysis of salivary testosterone and cortisol levels on the day of each experiment. After the collecting the saliva sample, we measured the blood volume pulse (BVP) amplitude at a fingertip. Since vasoconstriction is mediated by the activation of the sympathetic nerves, the strength of the reduction in BVP amplitude at a fingertip was called the BVP response (finger BVPR). No difference was observed between the sound-induced finger BVPR for the three types of sound stimuli (p = 0.779). The correlation coefficient between the sound-induced finger BVPR and the salivary testosterone/cortisol ratio within participants was significantly different from no correlation (p = 0.011) and there was a trend toward a significance in the correlation between the sound-induced finger BVPR and the salivary testosterone/cortisol ratio between participants (r = 0.39, p = 0.088). These results suggest that the testosterone/cortisol ratio affects the difference in the sound-evoked sympathetic response. PMID:29559922
ENVIRONMENTAL MONITORING FOR PUBLIC ACCESS AND COMMUNITY TRACKING, EMPACT
This project seeks to apply sound science to the collection and presentation of environmental data on Human Exposure to the public so that the public can make informed decisions regarding activities that would affect their exposure to environmental pollutants. The Environmental ...
Assessment of Sound Levels in a Neonatal Intensive Care Unit in Tabriz, Iran
Valizadeh, Sousan; Bagher Hosseini, Mohammad; Alavi, Nasrinsadat; Asadollahi, Malihe; Kashefimehr, Siamak
2013-01-01
Introduction: High levels of sound have several negative effects on premature infants in neonatal intensive care units (NICUs), such as noise-induced hearing loss and delayed growth and development. In order to reduce sound levels, they must first be measured. This study was performed to assess sound levels and determine sources of noise in the NICU of Alzahra Teaching Hospital (Tabriz, Iran). Methods: In a descriptive study, 24 hours across 4 workdays were randomly selected. The equivalent continuous sound level (Leq), the sound level exceeded 10% of the time (L10), the maximum sound level (Lmax), and the peak instantaneous sound pressure level (Lzpeak) were measured with a CEL-440 sound level meter (SLM) at 6 fixed locations in the NICU. Additional data were collected using a questionnaire, and SPSS 13 was used for data analysis. Results: Mean values of Leq, L10, and Lmax were 63.46 dBA, 65.81 dBA, and 71.30 dBA, respectively, all higher than the recommended standards (Leq < 45 dB, L10 ≤ 50 dB, and Lmax ≤ 65 dB). The highest Leq was measured at the time of nurse rounds, and Leq was directly correlated with the number of staff members present in the ward. Finally, the sources of noise were ranked by intensity. Conclusion: Given that sound levels exceeded the standards in the studied NICU, it is necessary to adopt policies to reduce sound levels. PMID:25276706
Statistics of natural binaural sounds.
Młynarski, Wiktor; Jost, Jürgen
2014-01-01
Binaural sound localization is usually considered a discrimination task in which interaural phase (IPD) and level (ILD) disparities in narrowly tuned frequency channels are used to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping, sources. The statistics of binaural cues therefore depend on the acoustic properties and spatial configuration of the environment. The distributions of naturally encountered cues and their dependence on the physical properties of an auditory scene had not been studied before. In the present work we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed the empirical cue distributions from each scene. We found that certain properties, such as the spread of the IPD distributions and the overall shape of the ILD distributions, do not vary strongly between auditory scenes. Moreover, ILD distributions vary much less across frequency channels, and IPDs often attain much higher values, than can be predicted from the filtering properties of the head. To understand the complexity of the binaural hearing task in the natural environment, the sound waveforms were also analyzed with Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions the sound waves at each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
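As a rough illustration of the final analysis step described above, the sketch below applies Independent Component Analysis to a two-channel binaural recording with scikit-learn's FastICA. This is our simplification, not the authors' code: the study learned basis functions over waveform segments, whereas this sketch only unmixes the two ear signals into two components, and the file name and preprocessing are assumptions.

```python
# Sketch: ICA decomposition of a two-channel (binaural) recording.
# "binaural_scene.wav" and all preprocessing choices are illustrative assumptions.
import numpy as np
from scipy.io import wavfile
from sklearn.decomposition import FastICA

rate, stereo = wavfile.read("binaural_scene.wav")   # stereo.shape == (n_samples, 2)
x = stereo.astype(np.float64)
x -= x.mean(axis=0)                                  # remove per-channel DC offset

# Unmix the left/right ear signals into two maximally independent components.
ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(x)                       # (n_samples, 2) estimated sources
mixing = ica.mixing_                                 # (2, 2) estimated mixing matrix

# If the ear signals were generated largely by independent sources, the mixing
# matrix is close to a permuted/scaled identity, i.e. little cross-mixing.
print(mixing)
```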
Statistics of Natural Binaural Sounds
Młynarski, Wiktor; Jost, Jürgen
2014-01-01
Binaural sound localization is usually considered a discrimination task in which interaural phase (IPD) and level (ILD) disparities in narrowly tuned frequency channels are used to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping, sources. The statistics of binaural cues therefore depend on the acoustic properties and spatial configuration of the environment. The distributions of naturally encountered cues and their dependence on the physical properties of an auditory scene had not been studied before. In the present work we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed the empirical cue distributions from each scene. We found that certain properties, such as the spread of the IPD distributions and the overall shape of the ILD distributions, do not vary strongly between auditory scenes. Moreover, ILD distributions vary much less across frequency channels, and IPDs often attain much higher values, than can be predicted from the filtering properties of the head. To understand the complexity of the binaural hearing task in the natural environment, the sound waveforms were also analyzed with Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions the sound waves at each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction. PMID:25285658
Electrophysiological correlates of cocktail-party listening.
Lewald, Jörg; Getzmann, Stephan
2015-10-01
Detecting, localizing, and selectively attending to a particular sound source of interest in complex auditory scenes composed of multiple competing sources is a remarkable capacity of the human auditory system. The neural basis of this so-called "cocktail-party effect" has remained largely unknown. Here, we studied the cortical network engaged in solving the "cocktail-party" problem, using event-related potentials (ERPs) in combination with two tasks demanding horizontal localization of a naturalistic target sound presented either in silence or in the presence of multiple competing sound sources. Presentation of multiple sound sources, as compared to single sources, induced an increased P1 amplitude, a reduction in N1, and a strong N2 component, resulting in a pronounced negativity in the ERP difference waveform (N2d) around 260 ms after stimulus onset. About 100 ms later, the anterior contralateral N2 subcomponent (N2ac) occurred in the multiple-sources condition, as computed from the amplitude difference for targets in the left minus right hemispaces. Cortical source analyses of the ERP modulation, resulting from the contrast of multiple vs. single sources, generally revealed an initial enhancement of electrical activity in right temporo-parietal areas, including auditory cortex, by multiple sources (at P1) that is followed by a reduction, with the primary sources shifting from right inferior parietal lobule (at N1) to left dorso-frontal cortex (at N2d). Thus, cocktail-party listening, as compared to single-source localization, appears to be based on a complex chronology of successive electrical activities within a specific cortical network involved in spatial hearing in complex situations. Copyright © 2015 Elsevier B.V. All rights reserved.
Determination of equivalent sound speed profiles for ray tracing in near-ground sound propagation.
Prospathopoulos, John M; Voutsinas, Spyros G
2007-09-01
The determination of appropriate sound speed profiles for modeling near-ground propagation with ray tracing is investigated using a ray tracing model capable of performing axisymmetric calculations of the sound field around an isolated source. Eigenrays are traced using an iterative procedure which integrates the trajectory equations for each ray launched from the source in a specific direction. Sound energy losses are calculated by introducing into these equations appropriate coefficients representing the effects of ground and atmospheric absorption and the interaction with atmospheric turbulence. The model is validated against analytical and numerical predictions of other methodologies for simple cases, as well as against measurements in non-refracting atmospheric environments. A systematic investigation of near-ground propagation in downward- and upward-refracting atmospheres is made using experimental data. Guidelines for suitable simulation of the wind velocity profile are derived by correlating predictions with measurements.
Acoustic centering of sources measured by surrounding spherical microphone arrays.
Hagai, Ilan Ben; Pollow, Martin; Vorländer, Michael; Rafaely, Boaz
2011-10-01
The radiation patterns of acoustic sources have great significance in a wide range of applications, such as measuring the directivity of loudspeakers and investigating the radiation of musical instruments for auralization. Recently, surrounding spherical microphone arrays have been studied for sound field analysis, facilitating measurement of the pressure around a sphere and the computation of the spherical harmonics spectrum of the sound source. However, the sound radiation pattern may be affected by the location of the source inside the microphone array, which is an undesirable property when aiming to characterize source radiation in a unique manner. This paper presents a theoretical analysis of the spherical harmonics spectrum of spatially translated sources and defines four measures for the misalignment of the acoustic center of a radiating source. Optimization is used to promote optimal alignment based on the proposed measures and the errors caused by numerical and array-order limitations are investigated. This methodology is examined using both simulated and experimental data in order to investigate the performance and limitations of the different alignment methods. © 2011 Acoustical Society of America
NASA Astrophysics Data System (ADS)
Li, Xuebao; Cui, Xiang; Lu, Tiebing; Wang, Donglai
2017-10-01
The directivity and lateral profile of corona-generated audible noise (AN) from a single corona source are measured through experiments carried out in a semi-anechoic laboratory. The experimental results show that the waveform of corona-generated AN consists of a series of random sound pressure pulses whose amplitudes decrease with increasing measurement distance. A single corona source can be regarded as a non-directional AN source, and the A-weighted SPL (sound pressure level) decreases by 6 dB(A) for each doubling of the measurement distance. Qualitative explanations for the rationality of treating the single corona source as a point source are then given on the basis of Ingard's theory of sound generation in corona discharge. Furthermore, we take ground reflection and air attenuation into account to reconstruct the propagation features of AN from the single corona source. The calculated results agree well with the measurements, which validates the propagation model. Finally, the influence of the ground reflection on the SPL is presented.
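The 6 dB(A) decrease per doubling of distance quoted above is exactly what spherical spreading from a point source predicts. In the standard form (a textbook relation, not an equation taken from the paper),

\[ L_{p}(r) = L_{p}(r_{0}) - 20\log_{10}\!\left(\frac{r}{r_{0}}\right) - \alpha\,(r - r_{0}), \]

where \alpha is the air-absorption coefficient in dB/m; with \alpha = 0 the level falls by 20 log10(2) ≈ 6 dB for each doubling of r, and the paper's reconstruction additionally folds in a ground-reflection term.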
A Space Based Internet Protocol System for Launch Vehicle Tracking and Control
NASA Technical Reports Server (NTRS)
Bull, Barton; Grant, Charles; Morgan, Dwayne; Streich, Ron; Bauer, Frank (Technical Monitor)
2001-01-01
Personnel from the Goddard Space Flight Center Wallops Flight Facility (GSFC/WFF) in Virginia are responsible for the overall management of the NASA Sounding Rocket and Scientific Balloon Programs. Payloads are generally in support of NASA's Space Science Enterprise's missions and return a variety of scientific data as well as providing a reasonably economical means of conducting engineering tests for instruments and devices used on satellites and other spacecraft. Sounding rockets used by NASA can carry payloads of various weights to altitudes from 50 km to more than 1,300 km. Scientific balloons can carry a payload weighing as much as 3,630 kg to an altitude of 42 km. Launch activities for both are conducted not only from established ranges, but also from remote locations worldwide, requiring mobile tracking and command equipment to be transported and set up at considerable expense. The advent of low earth orbit (LEO) commercial communications satellites provides an opportunity to dramatically reduce tracking and control costs of these launch vehicles and Unpiloted Aerial Vehicles (UAVs) by reducing or eliminating this ground infrastructure. Additionally, since data transmission is by packetized Internet Protocol (IP), data can be received and commands initiated from practically any location. A low cost Commercial Off The Shelf (COTS) system is currently under development for sounding rockets that also has application to UAVs and scientific balloons. Due to the relatively low data rate (9600 baud) currently available, the system will first be used to provide GPS data for tracking and vehicle recovery. Range safety requirements for launch vehicles usually stipulate at least two independent tracking sources. Most sounding rockets flown by NASA now carry GPS receivers that output position data via the payload telemetry system to the ground station. The Flight Modem can be configured as a completely separate link, thereby eliminating the requirement for tracking radar. The system architecture that integrates antennas, GPS receiver, commercial satellite packet data modem, and a single board computer with custom software is described along with the technical challenges and the plan for their resolution. These include antenna development, high Doppler rates, reliability, environmental ruggedness, hand over between satellites, and data security. An aggressive test plan is included which, in addition to environmental testing, measures bit error rate, latency, and antenna patterns. Actual launches on a sounding rocket and various aircraft flights have taken place. Flight tests are planned for the near future on aircraft, long duration balloons, and sounding rockets. These results, as well as the current status of the project, are reported.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-09
....--Principal Investigator) has applied for an amendment to Scientific Research Permit No. 14534-01. DATES... during studies of sound production, diving, responses to sound, and other behavior. The research is... significant environmental impacts could result from issuance of the proposed scientific research permit. The...
Causal Uncertainty in the Identification of Environmental Sounds
1986-11-01
importance of particular stimulus properties (Chaney & Webster, 1966; Howard, 1977; Mackie, Wylie, Ridihalgh, Shultz, & Seltzer, 1981; Talamo, 1982; Warren...207). Berlin: Abakon Verlagsgesellschaft, 183-207. Talamo, J. D. C. (1982). The perception of machinery indicator sounds. Ergonomics, 25, 41-51
POLLUTION MONITORING OF PUGET SOUND WITH HONEY BEES
To show that honey bees are effective biological monitors of environmental contaminants over large geographic areas, beekeepers of Puget Sound, Washington, collected pollen and bees for chemical analysis. From these data, kriging maps of arsenic, cadmium, and fluoride were genera...
ERIC Educational Resources Information Center
Lalonde, Kaylah; Holt, Rachael Frush
2014-01-01
Purpose: This preliminary investigation explored potential cognitive and linguistic sources of variance in 2- year-olds' speech-sound discrimination by using the toddler change/no-change procedure and examined whether modifications would result in a procedure that can be used consistently with younger 2-year-olds. Method: Twenty typically…
The use of an intraoral electrolarynx for an edentulous patient: a clinical report.
Wee, Alvin G; Wee, Lisa A; Cheng, Ansgar C; Cwynar, Roger B
2004-06-01
This clinical report describes the clinical requirements, treatment sequence, and use of a relatively new intraoral electrolarynx for a completely edentulous patient. This device consists of a sound source attached to the maxilla and a hand-held controller unit that controls the pitch and volume of the intraoral sound source via transmitted radio waves.
Spherical beamforming for spherical array with impedance surface
NASA Astrophysics Data System (ADS)
Tontiwattanakul, Khemapat
2018-01-01
Spherical microphone array beamforming has been a popular research topic in recent years. Owing to their ability to form beams isotropically in three-dimensional space over a certain frequency range, such arrays are widely used in applications such as sound field recording, acoustic beamforming, and noise source localisation. The body of a spherical array is usually considered perfectly rigid. A sound field captured by the sensors on a spherical array can be decomposed into a series of spherical harmonics. In noise source localisation, the amplitude density of the sound sources is estimated and illustrated by means of colour maps. In this work, a rigid spherical array covered by fibrous materials is studied via numerical simulation, and the performance of the spherical beamforming is discussed.
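The spherical-harmonic decomposition mentioned above is conventionally written as (standard array-processing notation, not quoted from the abstract)

\[ p(ka,\theta,\phi) = \sum_{n=0}^{\infty}\sum_{m=-n}^{n} p_{nm}(ka)\, Y_{n}^{m}(\theta,\phi), \qquad p_{nm}(ka) = \int_{\Omega} p(ka,\theta,\phi)\,\left[Y_{n}^{m}(\theta,\phi)\right]^{*}\, d\Omega, \]

where a is the array radius and k the wavenumber; in practice the coefficients p_{nm} are estimated from a finite set of microphone samples, which limits the usable order n and hence the frequency range of the beamformer.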
NASA Astrophysics Data System (ADS)
Sugimoto, Tsuneyoshi; Uechi, Itsuki; Sugimoto, Kazuko; Utagawa, Noriyuki; Katakura, Kageyoshi
The hammering test is widely used to inspect for defects in concrete structures. However, this method is difficult to apply at high places, such as a tunnel ceiling or a bridge girder, and its detection accuracy depends on the tester's experience. We therefore study a non-contact acoustic inspection method for concrete structures using airborne sound waves and a laser Doppler vibrometer. In this method, the concrete surface is excited by airborne sound waves emitted from a long range acoustic device (LRAD), and the vibration velocity of the concrete surface is measured by a laser Doppler vibrometer. A defect is detected by the same flexural resonance exploited in the hammering method. It has already been clearly shown that defects can be detected from a distance of 5 m or more using a concrete test object, and that the method can also be applied to a real concrete structure. However, when a conventional LRAD was used as the sound source, there were problems such as restrictions on the measurement angle and the surrounding noise. In order to solve these problems, a basic examination using a strong ultrasonic sound source was carried out. In the experiment, a concrete test object containing an imitation defect was measured from a distance of 5 m. The experimental results show that, with the ultrasonic sound source, the restrictions on the measurement angle become less severe and the ambient noise also falls dramatically.
A City Looks at Itself and Acts
ERIC Educational Resources Information Center
Eriksen, Clyde H.
1974-01-01
Prompted by a continued environmental awareness, Claremont, California established an Environmental Resource Task Force to determine methods of operating an economically and environmentally sound city. This group of lay persons and professionals concluded that environmental quality is economically possible, and made recommendations on planning,…
75 FR 39915 - Marine Mammals; File No. 15483
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-13
... whales adjust their bearing to avoid received sound pressure levels greater than 120 dB, which would... marine mammals may be taken by Level B harassment as researchers attempt to provoke an avoidance response through sound transmission into their environment. The sound source consists of a transmitter and...
24 CFR 51.103 - Criteria and standards.
Code of Federal Regulations, 2011 CFR
2011-04-01
...-night average sound level produced as the result of the accumulation of noise from all sources contributing to the external noise environment at the site. Day-night average sound level, abbreviated as DNL and symbolized as Ldn, is the 24-hour average sound level, in decibels, obtained after addition of 10...
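For reference, the day-night average sound level whose definition is truncated in the excerpt above is conventionally computed as (the standard definition, not text quoted from the regulation)

\[ L_{dn} = 10\log_{10}\!\left[\frac{1}{24}\left(\sum_{i\,\in\,\mathrm{day}} 10^{L_{i}/10} \;+\; \sum_{i\,\in\,\mathrm{night}} 10^{(L_{i}+10)/10}\right)\right], \]

where the L_i are hourly equivalent sound levels and the 10 dB penalty is applied to the night-time hours (typically 22:00-07:00).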
Characterisation of structure-borne sound source using reception plate method.
Putra, A; Saari, N F; Bakri, H; Ramlan, R; Dan, R M
2013-01-01
A laboratory-based experimental procedure for the reception plate method of structure-borne sound source characterisation is reported in this paper. The method assumes that the input power from the source installed on the plate is equal to the power dissipated by the plate. In this experiment, rectangular plates having high and low mobility relative to that of the source were used as the reception plates, and a small electric fan motor acted as the structure-borne source. The data representing the source characteristics, namely the free velocity and the source mobility, were obtained and compared with those from direct measurement. Assumptions and constraints involved in employing this method are discussed.
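A minimal statement of the power balance that the reception plate method relies on, in standard structure-borne sound notation (our rendering, not the paper's symbols):

\[ P_{\mathrm{in}} = P_{\mathrm{dis}} = \eta\,\omega\,m\,\langle \overline{v^{2}} \rangle, \]

where \eta is the plate loss factor, \omega the angular frequency, m the plate mass, and the bracketed term the space- and time-averaged mean-square velocity of the reception plate; measuring the right-hand side therefore yields the structure-borne power injected by the source under test.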
A Space Based Internet Protocol System for Sub-Orbital Tracking and Control
NASA Technical Reports Server (NTRS)
Bull, Barton; Grant, Charles; Morgan, Dwayne; Streich, Ron; Bauer, Frank (Technical Monitor)
2001-01-01
Personnel from the Goddard Space Flight Center Wallops Flight Facility (GSFC/WFF) in Virginia are responsible for the overall management of the NASA Sounding Rocket Program. Payloads are generally in support of NASA's Space Science Enterprise's missions and return a variety of scientific data as well as providing a reasonably economical means of conducting engineering tests for instruments and devices used on satellites and other spacecraft. The fifteen types of sounding rockets used by NASA can carry payloads of various weights to altitudes from 50 km to more than 1,300 km. Launch activities are conducted not only from established missile ranges, but also from remote locations worldwide, requiring mobile tracking and command equipment to be transported and set up at considerable expense. The advent of low earth orbit (LEO) commercial communications satellites provides an opportunity to dramatically reduce tracking and control costs of launch vehicles and Unpiloted Aerial Vehicles (UAVs) by reducing or eliminating this ground infrastructure. Additionally, since data transmission is by packetized Internet Protocol (IP), data can be received and commands initiated from practically any location. A low cost Commercial Off The Shelf (COTS) system is currently under development for sounding rockets which also has application to UAVs and scientific balloons. Due to the relatively low data rate (9600 baud) currently available, the system will first be used to provide GPS data for tracking and vehicle recovery. Range safety requirements for launch vehicles usually stipulate at least two independent tracking sources. Most sounding rockets flown by NASA now carry GPS receivers that output position data via the payload telemetry system to the ground station. The Flight Modem can be configured as a completely separate link, thereby eliminating the requirement for tracking radar. The system architecture which integrates antennas, GPS receiver, commercial satellite packet data modem, and a single board computer with custom software is described along with the technical challenges and the plan for their resolution. These include antenna development, high Doppler rates, reliability, environmental ruggedness, hand over between satellites, and data security. An aggressive test plan is included which, in addition to environmental testing, measures bit error rate, latency, and antenna patterns. Flight tests are planned for the near future on aircraft, long duration balloons, and sounding rockets; these results, as well as the current status of the project, are reported.
Complete data listings for CSEM soundings on Kilauea Volcano, Hawaii
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kauahikaua, J.; Jackson, D.B.; Zablocki, C.J.
1983-01-01
This document contains complete data from a controlled-source electromagnetic (CSEM) sounding/mapping project at Kilauea volcano, Hawaii. The data were obtained at 46 locations about a fixed-location, horizontal, polygonal loop source in the summit area of the volcano. The data consist of magnetic field amplitudes and phases at excitation frequencies between 0.04 and 8 Hz. The vector components were measured in a cylindrical coordinate system centered on the loop source. 5 references.
The Physiological Basis of Chinese Höömii Generation.
Li, Gelin; Hou, Qian
2017-01-01
The study aimed to investigate the physiological basis of the sound-source vibration modes of several Mongolian höömii singing forms in China. The participant was a Mongolian höömii performing artist recommended by the Chinese Medical Association of Art. He used three types of höömii, namely vibration höömii, whistle höömii, and overtone höömii, which were compared with ordinary comfortable phonation of /i:/ as a control. Phonation was observed during /i:/. A laryngostroboscope (Storz) was used to observe the vibration source and mucosal wave in the larynx. For vibration höömii, the bilateral ventricular folds approximated the midline and made contact at the midline during phonation; the ventricular and vocal folds oscillated together as a single unit to form a composite-vibration (double oscillator) sound source. For whistle höömii, the ventricular folds approximated the midline to cover part of the vocal folds but did not contact each other and produced no mucosal wave; the vocal folds produced a mucosal wave, forming a single-vibration sound source. For overtone höömii, the anterior two-thirds of the ventricular folds touched each other during phonation while the remaining one-third produced a mucosal wave; the vocal folds produced a mucosal wave at the same time, again a composite-vibration (double oscillator) sound source mode. Höömii singing forms involving mixed voices and multiple voices were related to the presence of dual vibration sound sources, and the high-overtone form of singing (whistle höömii) was related to stenosis at the initiation site of the resonance chambers (the ventricular fold level). Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Bearing Stake Exercise: Sound Speed and other Environmental Variability
1978-09-01
for acoustic assessment. All bathymetric data analyzed by NORDA were corrected for the speed of sound in seawater using the tables of Matthews (1939... the 1497- to 1520-m/sec isopleths at ranges greater than 50 nm (about 90 km) followed by a more gradual isopleth inclination at ranges greater than...irregular and clearly showed the effects of intermixing of SSW, RSIW, and AAW. VII. (U) ENVIRONMENTAL VARIABILITY AT SITE 5 (C) Site 5 lay at the
2009-09-01
Environmental Analysis and Prediction of Transmission Loss in the Region of the New England Shelfbreak. By Heather Rend Hornick, B.S., University of... ... analysis of the ocean sound speed field defined a set of perturbations to the background sound speed field for each of the NEST Scanfish surveys
Modeling the utility of binaural cues for underwater sound localization.
Schneider, Jennifer N; Lloyd, David R; Banks, Patchouly N; Mercado, Eduardo
2014-06-01
The binaural cues used by terrestrial animals for sound localization in azimuth may not always suffice for accurate sound localization underwater. The purpose of this research was to examine the theoretical limits of interaural timing and level differences available underwater using computational and physical models. A paired-hydrophone system was used to record sounds transmitted underwater and recordings were analyzed using neural networks calibrated to reflect the auditory capabilities of terrestrial mammals. Estimates of source direction based on temporal differences were most accurate for frequencies between 0.5 and 1.75 kHz, with greater resolution toward the midline (2°), and lower resolution toward the periphery (9°). Level cues also changed systematically with source azimuth, even at lower frequencies than expected from theoretical calculations, suggesting that binaural mechanical coupling (e.g., through bone conduction) might, in principle, facilitate underwater sound localization. Overall, the relatively limited ability of the model to estimate source position using temporal and level difference cues underwater suggests that animals such as whales may use additional cues to accurately localize conspecifics and predators at long distances. Copyright © 2014 Elsevier B.V. All rights reserved.
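A minimal sketch of the kind of interaural-time-difference estimate that underlies the directional analysis described above (our illustration, not the authors' model; the sample rate, hydrophone spacing, sound speed, and test signal are assumptions):

```python
# Estimate a bearing from the time delay between two hydrophone channels
# via cross-correlation; all parameters are illustrative assumptions.
import numpy as np

fs = 48000          # sample rate (Hz)
c = 1500.0          # nominal underwater sound speed (m/s)
d = 0.3             # hydrophone separation (m)

def estimate_azimuth(left, right):
    """Return the bearing (degrees) implied by the inter-channel delay."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)          # delay in samples
    itd = lag / fs                                    # delay in seconds
    # Far-field approximation: itd = (d / c) * sin(theta)
    return np.degrees(np.arcsin(np.clip(itd * c / d, -1.0, 1.0)))

# Example with a synthetic tone delayed by 3 samples between channels.
t = np.arange(0, 0.05, 1 / fs)
sig = np.sin(2 * np.pi * 1000 * t)
print(estimate_azimuth(sig[3:], sig[:-3]))
```

Note that with a narrowband tone the correlation peak is ambiguous modulo the signal period; broadband signals, or the neural-network approach of the paper, avoid that ambiguity.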
The meaning of city noises: Investigating sound quality in Paris (France)
NASA Astrophysics Data System (ADS)
Dubois, Daniele; Guastavino, Catherine; Maffiolo, Valerie; Guastavino, Catherine; Maffiolo, Valerie
2004-05-01
The sound quality of Paris (France) was investigated using field inquiries in actual environments (open questionnaires) and recordings presented under laboratory conditions (free-sorting tasks). Cognitive categories of soundscapes were inferred by means of psycholinguistic analyses of verbal data and mathematical analyses of similarity judgments. The results show that auditory judgments mainly rely on source identification; the appraisal of urban noise therefore depends on the qualitative evaluation of noise sources. The salience of human sounds in public spaces was demonstrated in relation to pleasantness judgments: soundscapes with a human presence tend to be perceived as more pleasant than soundscapes consisting solely of mechanical sounds. Furthermore, human sounds are qualitatively processed as indicators of human outdoor activities, such as open markets, pedestrian areas, and sidewalk cafe districts, that reflect city life. In contrast, mechanical noises (mainly traffic noise) are commonly described in terms of the physical properties (temporal structure, intensity) of a permanent background noise that also characterizes urban areas. This points to the need to consider both quantitative and qualitative descriptions to account for the diversity of cognitive interpretations of urban soundscapes, since subjective evaluations depend both on the meaning attributed to noise sources and on inherent properties of the acoustic signal.
Acoustic positioning for space processing experiments
NASA Technical Reports Server (NTRS)
Whymark, R. R.
1974-01-01
An acoustic positioning system is described that is adaptable to a range of processing chambers and furnace systems. Operation at temperatures exceeding 1000 C is demonstrated in experiments involving the levitation of liquid and solid glass materials weighing up to several ounces. The system consists of a single source of sound that is beamed at a reflecting surface placed some distance away. Stable levitation is achieved at a succession of discrete energy minima distributed throughout the volume between the reflector and the sound source. Several specimens can be handled at one time. Metal discs up to 3 inches in diameter can be levitated, as can solid spheres of dense material up to 0.75 inch in diameter; liquids can be freely suspended in 1-g in the form of near-spherical droplets up to 0.25 inch in diameter, or as flattened liquid discs up to 0.6 inch in diameter. Larger specimens may be handled by increasing the size of the sound source or by reducing the sound frequency.
Sound source localization on an axial fan at different operating points
NASA Astrophysics Data System (ADS)
Zenger, Florian J.; Herold, Gert; Becker, Stefan; Sarradj, Ennes
2016-08-01
A generic fan with unskewed fan blades is investigated using a microphone array method. The relative motion of the fan with respect to the stationary microphone array is compensated by interpolating the microphone data to a virtual rotating array with the same rotational speed as the fan. Hence, beamforming algorithms with deconvolution, in this case CLEAN-SC, could be applied. Sound maps and integrated spectra of sub-components are evaluated for five operating points. At selected frequency bands, the presented method yields sound maps featuring a clear circular source pattern corresponding to the nine fan blades. Depending on the adjusted operating point, sound sources are located on the leading or trailing edges of the fan blades. Integrated spectra show that in most cases leading edge noise is dominant for the low-frequency part and trailing edge noise for the high-frequency part. The shift from leading to trailing edge noise is strongly dependent on the operating point and frequency range considered.
Puget Sound sediment-trap data: 1980-1985. Data report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paulson, A.J.; Baker, E.T.; Feely, R.A.
1991-12-01
In 1979, scientists at the Pacific Marine Environmental Laboratory began investigating the sources, transformation, transport and fate of pollutants in Puget Sound and its watershed under Sec. 202 of the Marine Protection, Research and Sanctuaries Act of 1971 (P.L. 92-532), which called in part for '...a comprehensive and continuing program of research with respect to the possible long range effects of pollution, overfishing, and man-induced changes of ocean ecosystems...' The effort was called the Long-Range Effects Research Program (L-RERP) after language in the Act and was later called the PMEL Marine Environmental Quality Program. The Long-Range Effects Research Program consisted of (1) sampling dissolved and particulate constituents in the water column by bottle sampling, (2) sampling settling particles by sediment trap, and (3) sampling sediments by grab, box, gravity and Kasten corers. In the Data Report, a variety of data from particles collected in 104 traps deployed on 34 moorings in open waters between 1980 and 1985 are presented. The text of the data report begins with the sampling and analytical methods and the accompanying quality control/quality assurance data. The data sections summarize the available data and the published literature in which the data are interpreted, along with a catalogue of the data available in the Appendix (on microfiche located in the back pocket of the data report).
The Influence of refractoriness upon comprehension of non-verbal auditory stimuli.
Crutch, Sebastian J; Warrington, Elizabeth K
2008-01-01
An investigation of non-verbal auditory comprehension in two patients with global aphasia following stroke is reported. The primary aim of the investigation was to establish whether refractory access disorders can affect non-verbal input modalities. All previous reports of refractoriness, a cognitive syndrome characterized by response inconsistency, sensitivity to temporal factors and insensitivity to item frequency, have involved comprehension tasks which have a verbal component. Two main experiments are described. The first consists of a novel sound-to-picture and sound-to-word matching task in which comprehension of environmental sounds is probed under conditions of semantic relatedness and semantic unrelatedness. In addition to the two stroke patients, the performance of a group of 10 control patients with non-vascular pathology is reported, along with evidence of semantic relatedness effects in sound comprehension. The second experiment examines environmental sound comprehension within a repetitive probing paradigm which affords assessment of the effects of semantic relatedness, response consistency and presentation rate. It is demonstrated that the two stroke patients show a significant increase in error rate across multiple probes of the same set of sound stimuli, indicating the presence of refractoriness within this non-verbal domain. The implications of the results are discussed with reference to our current understanding of the mechanisms of refractoriness.
NASA Technical Reports Server (NTRS)
Groza, A.; Calciu, J.; Nicola, I.; Ionasek, A.
1974-01-01
Sound level measurements of noise sources on buses are used to observe the effect of sound-proofing, applied during a complete overhaul, in attenuating the acoustic pressure levels inside the bus. A spectral analysis of the sound level as a function of engine speed, bus speed along the road, and road category is reported.
Nystuen, Jeffrey A; Moore, Sue E; Stabeno, Phyllis J
2010-07-01
Ambient sound in the ocean contains quantifiable information about the marine environment. A passive aquatic listener (PAL) was deployed at a long-term mooring site in the southeastern Bering Sea from 27 April through 28 September 2004. This was a chain mooring with lots of clanking. However, the sampling strategy of the PAL filtered through this noise and allowed the background sound field to be quantified for natural signals. Distinctive signals include the sound from wind, drizzle and rain. These sources dominate the sound budget and their intensity can be used to quantify wind speed and rainfall rate. The wind speed measurement has an accuracy of ±0.4 m/s when compared to a buoy-mounted anemometer. The rainfall rate measurement is consistent with a land-based measurement in the Aleutian chain at Cold Bay, AK (170 km south of the mooring location). Other identifiable sounds include ships and short transient tones. The PAL was designed to reject transients in the range important for quantification of wind speed and rainfall, but serendipitously recorded peaks in the sound spectrum between 200 Hz and 3 kHz. Some of these tones are consistent with whale calls, but most are apparently associated with mooring self-noise.
40 CFR 201.22 - Measurement instrumentation.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Measurement instrumentation. 201.22 Section 201.22 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) NOISE ABATEMENT... Criteria § 201.22 Measurement instrumentation. (a) A sound level meter or alternate sound level measurement...
40 CFR 201.22 - Measurement instrumentation.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Measurement instrumentation. 201.22 Section 201.22 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) NOISE ABATEMENT... Criteria § 201.22 Measurement instrumentation. (a) A sound level meter or alternate sound level measurement...
40 CFR 201.22 - Measurement instrumentation.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Measurement instrumentation. 201.22 Section 201.22 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) NOISE ABATEMENT... Criteria § 201.22 Measurement instrumentation. (a) A sound level meter or alternate sound level measurement...
Sound Naming in Neurodegenerative Disease
ERIC Educational Resources Information Center
Chow, Maggie L.; Brambati, Simona M.; Gorno-Tempini, Maria Luisa; Miller, Bruce L.; Johnson, Julene K.
2010-01-01
Modern cognitive neuroscientific theories and empirical evidence suggest that brain structures involved in movement may be related to action-related semantic knowledge. To test this hypothesis, we examined the naming of environmental sounds in patients with corticobasal degeneration (CBD) and progressive supranuclear palsy (PSP), two…
DOT National Transportation Integrated Search
2004-01-01
Tire-pavement interaction noise is one of the significant environmental problems in highly populated urban areas situated near busy highways. Traditionally, this problem was reduced through the use of sound barriers; but this method has limitations. ...
Functional morphology of the sound-generating labia in the syrinx of two songbird species.
Riede, Tobias; Goller, Franz
2010-01-01
In songbirds, two sound sources inside the syrinx are used to produce the primary sound. Laterally positioned labia are passively set into vibration, thus interrupting a passing air stream. Together with subsyringeal pressure, the size and tension of the labia determine the spectral characteristics of the primary sound. Very little is known about how the histological composition and morphology of the labia affect their function as sound generators. Here we related the size and microstructure of the labia to their acoustic function in two songbird species with different acoustic characteristics, the white-crowned sparrow and zebra finch. Histological serial sections of the syrinx and different staining techniques were used to identify collagen, elastin and hyaluronan as extracellular matrix components. The distribution and orientation of elastic fibers indicated that the labia in white-crowned sparrows are multi-layered structures, whereas they are more uniformly structured in the zebra finch. Collagen and hyaluronan were evenly distributed in both species. A multi-layered composition could give rise to complex viscoelastic properties of each sound source. We also measured labia size. Variability was found along the dorso-ventral axis in both species. Lateral asymmetry was identified in some individuals but not consistently at the species level. Different size between the left and right sound sources could provide a morphological basis for the acoustic specialization of each sound generator, but only in some individuals. The inconsistency of its presence requires the investigation of alternative explanations, e.g. differences in viscoelastic properties of the labia of the left and right syrinx. Furthermore, we identified attachments of syringeal muscles to the labia as well as to bronchial half rings and suggest a mechanism for their biomechanical function.
Functional morphology of the sound-generating labia in the syrinx of two songbird species
Riede, Tobias; Goller, Franz
2010-01-01
In songbirds, two sound sources inside the syrinx are used to produce the primary sound. Laterally positioned labia are passively set into vibration, thus interrupting a passing air stream. Together with subsyringeal pressure, the size and tension of the labia determine the spectral characteristics of the primary sound. Very little is known about how the histological composition and morphology of the labia affect their function as sound generators. Here we related the size and microstructure of the labia to their acoustic function in two songbird species with different acoustic characteristics, the white-crowned sparrow and zebra finch. Histological serial sections of the syrinx and different staining techniques were used to identify collagen, elastin and hyaluronan as extracellular matrix components. The distribution and orientation of elastic fibers indicated that the labia in white-crowned sparrows are multi-layered structures, whereas they are more uniformly structured in the zebra finch. Collagen and hyaluronan were evenly distributed in both species. A multi-layered composition could give rise to complex viscoelastic properties of each sound source. We also measured labia size. Variability was found along the dorso-ventral axis in both species. Lateral asymmetry was identified in some individuals but not consistently at the species level. Different size between the left and right sound sources could provide a morphological basis for the acoustic specialization of each sound generator, but only in some individuals. The inconsistency of its presence requires the investigation of alternative explanations, e.g. differences in viscoelastic properties of the labia of the left and right syrinx. Furthermore, we identified attachments of syringeal muscles to the labia as well as to bronchial half rings and suggest a mechanism for their biomechanical function. PMID:19900184
Theory of acoustic design of opera house and a design proposal
NASA Astrophysics Data System (ADS)
Ando, Yoichi
2004-05-01
First, the theory of subjective preference for sound fields based on a model of the auditory-brain system is briefly reviewed. It consists of temporal factors and spatial factors associated with the left and right cerebral hemispheres, respectively. The temporal criteria are the initial time delay gap between the direct sound and the first reflection (Δt1) and the subsequent reverberation time (Tsub). The preferred conditions for these are related to the minimum value of the effective duration of the running autocorrelation function of the source signals, (τe)min. The spatial criteria are the binaural listening level (LL) and the IACC, which may be extracted from the interaural cross-correlation function. In an opera house there are two different kinds of sound sources: the vocal source on the stage, with relatively short values of (τe)min, and the orchestra in the pit, with long values of (τe)min. For these sources, a design proposal is made here.
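The effective duration referred to above is conventionally obtained from the running autocorrelation function of the source signal; in the usual notation of this theory (our rendering of the symbols),

\[ \Phi_{p}(\tau; t, T) = \frac{1}{2T}\int_{t-T}^{t+T} p'(s)\,p'(s+\tau)\,ds, \qquad \phi_{p}(\tau) = \frac{\Phi_{p}(\tau)}{\Phi_{p}(0)}, \]

where p'(t) is the (frequency-weighted) source signal, and the effective duration τe is taken as the delay at which the envelope of the normalized function |φp(τ)| decays to 0.1 (−10 dB), usually extrapolated from its initial decay.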
Miller, Patrick J O
2006-05-01
Signal source intensity and detection range, which integrates source intensity with propagation loss, background noise and receiver hearing abilities, are important characteristics of communication signals. Apparent source levels were calculated for 819 pulsed calls and 24 whistles produced by free-ranging resident killer whales by triangulating the angles-of-arrival of sounds on two beamforming arrays towed in series. Levels in the 1-20 kHz band ranged from 131 to 168 dB re 1 microPa at 1 m, with differences in the means of different sound classes (whistles: 140.2+/-4.1 dB; variable calls: 146.6+/-6.6 dB; stereotyped calls: 152.6+/-5.9 dB), and among stereotyped call types. Repertoire diversity carried through to estimates of active space, with "long-range" stereotyped calls all containing overlapping, independently-modulated high-frequency components (mean estimated active space of 10-16 km in sea state zero) and "short-range" sounds (5-9 km) included all stereotyped calls without a high-frequency component, whistles, and variable calls. Short-range sounds are reported to be more common during social and resting behaviors, while long-range stereotyped calls predominate in dispersed travel and foraging behaviors. These results suggest that variability in sound pressure levels may reflect diverse social and ecological functions of the acoustic repertoire of killer whales.
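The active-space estimates above can be summarized by the standard passive-sonar balance (a textbook form, not an equation quoted from the paper): a call of source level SL (dB re 1 μPa at 1 m) remains detectable out to the range r at which

\[ SL - TL(r) = NL + DT, \qquad TL(r) \approx 20\log_{10} r \ \ (\text{spherical spreading}), \]

where NL is the background noise level in the relevant band and DT the receiver's detection threshold (receiver directivity is omitted here); quieter sound classes or higher sea states shift the balance toward shorter ranges, as reported above.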
Sounds and source levels from bowhead whales off Pt. Barrow, Alaska.
Cummings, W C; Holliday, D V
1987-09-01
Sounds were recorded from bowhead whales migrating past Pt. Barrow, AK, to the Canadian Beaufort Sea. They mainly consisted of various low-frequency (25- to 900-Hz) moans and well-defined sound sequences organized into "song" (20-5000 Hz) recorded with our 2.46-km hydrophone array suspended from the ice. Songs were composed of up to 20 repeated phrases (mean, 10) which lasted up to 146 s (mean, 66.3). Several bowhead whales often were within acoustic range of the array at once, but usually only one sang at a time. Vocalizations exhibited diurnal peaks of occurrence (0600-0800, 1600-1800 h). Sounds which were located in the horizontal plane had peak source spectrum levels as follows--44 moans: 129-178 dB re: 1 microPa, 1 m (median, 159); 3 garglelike utterances: 152, 155, and 169 dB; 33 songs: 158-189 dB (median, 177), all presumably from different whales. Based on ambient noise levels, measured total propagation loss, and whale sound source levels, our detection of whale sounds was theoretically noise-limited beyond 2.5 km (moans) and beyond 10.7 km (songs), a model supported by actual localizations. This study showed that over much of the shallow Arctic and sub-Arctic waters, underwater communications of the bowhead whale would be limited to much shorter ranges than for other large whales in lower latitude, deep-water regions.
Sound field separation with sound pressure and particle velocity measurements.
Fernandez-Grande, Efren; Jacobsen, Finn; Leclère, Quentin
2012-12-01
In conventional near-field acoustic holography (NAH) it is not possible to distinguish between sound from the two sides of the array, thus, it is a requirement that all the sources are confined to only one side and radiate into a free field. When this requirement cannot be fulfilled, sound field separation techniques make it possible to distinguish between outgoing and incoming waves from the two sides, and thus NAH can be applied. In this paper, a separation method based on the measurement of the particle velocity in two layers and another method based on the measurement of the pressure and the velocity in a single layer are proposed. The two methods use an equivalent source formulation with separate transfer matrices for the outgoing and incoming waves, so that the sound from the two sides of the array can be modeled independently. A weighting scheme is proposed to account for the distance between the equivalent sources and measurement surfaces and for the difference in magnitude between pressure and velocity. Experimental and numerical studies have been conducted to examine the methods. The double layer velocity method seems to be more robust to noise and flanking sound than the combined pressure-velocity method, although it requires an additional measurement surface. On the whole, the separation methods can be useful when the disturbance of the incoming field is significant. Otherwise the direct reconstruction is more accurate and straightforward.
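A compact way to write the equivalent-source formulation described above (our notation; the proposed weighting scheme and regularization details are omitted):

\[ \begin{pmatrix} \mathbf{p} \\ \mathbf{v} \end{pmatrix} = \begin{pmatrix} \mathbf{A}_{p,\mathrm{out}} & \mathbf{A}_{p,\mathrm{in}} \\ \mathbf{A}_{v,\mathrm{out}} & \mathbf{A}_{v,\mathrm{in}} \end{pmatrix} \begin{pmatrix} \mathbf{q}_{\mathrm{out}} \\ \mathbf{q}_{\mathrm{in}} \end{pmatrix}, \]

where p and v are the measured pressure and particle velocity, the A blocks are transfer matrices from the two sets of equivalent sources placed on either side of the array, and q are the source strengths; solving this (regularized) least-squares problem for q separates the outgoing field, reconstructed from q_out alone, from the incoming disturbance.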
Delgutte, Bertrand
2015-01-01
At lower levels of sensory processing, the representation of a stimulus feature in the response of a neural population can vary in complex ways across different stimulus intensities, potentially changing the amount of feature-relevant information in the response. How higher-level neural circuits could implement feature decoding computations that compensate for these intensity-dependent variations remains unclear. Here we focused on neurons in the inferior colliculus (IC) of unanesthetized rabbits, whose firing rates are sensitive to both the azimuthal position of a sound source and its sound level. We found that the azimuth tuning curves of an IC neuron at different sound levels tend to be linear transformations of each other. These transformations could either increase or decrease the mutual information between source azimuth and spike count with increasing level for individual neurons, yet population azimuthal information remained constant across the absolute sound levels tested (35, 50, and 65 dB SPL), as inferred from the performance of a maximum-likelihood neural population decoder. We harnessed evidence of level-dependent linear transformations to reduce the number of free parameters in the creation of an accurate cross-level population decoder of azimuth. Interestingly, this decoder predicts monotonic azimuth tuning curves, broadly sensitive to contralateral azimuths, in neurons at higher levels in the auditory pathway. PMID:26490292
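A minimal sketch of a maximum-likelihood population decoder of the kind referred to above, under an independent-Poisson spiking assumption (our illustration; the tuning curves and spike counts below are synthetic placeholders, not data from the study):

```python
# Maximum-likelihood decoding of source azimuth from spike counts, assuming
# independent Poisson neurons with known rate-vs-azimuth tuning curves.
import numpy as np

azimuths = np.linspace(-90, 90, 181)      # candidate azimuths (degrees)
n_neurons = 50
rng = np.random.default_rng(0)

# Synthetic sigmoidal tuning curves, one row per neuron (spikes/s vs azimuth).
centers = rng.uniform(-60, 60, n_neurons)
slopes = rng.uniform(0.05, 0.2, n_neurons)
rates = 5 + 40 / (1 + np.exp(-slopes[:, None] * (azimuths[None, :] - centers[:, None])))

def decode(counts, duration=0.05):
    """Return the azimuth maximizing the Poisson log-likelihood of the counts."""
    lam = rates * duration                           # expected counts per azimuth
    loglik = counts[:, None] * np.log(lam) - lam     # count-factorial term is constant
    return azimuths[np.argmax(loglik.sum(axis=0))]

# Simulate one 50-ms trial with the source at +30 degrees and decode it.
true_idx = np.argmin(np.abs(azimuths - 30))
counts = rng.poisson(rates[:, true_idx] * 0.05)
print(decode(counts))
```

A cross-level decoder in the spirit of the paper would additionally parameterize how each tuning curve transforms with sound level rather than fitting one curve per level.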
Active noise control using a steerable parametric array loudspeaker.
Tanaka, Nobuo; Tanaka, Motoki
2010-06-01
Active noise control enables sound suppression at designated control points, but the sound pressure at locations other than those targeted is likely to increase. The reason is clear: a control source normally radiates sound omnidirectionally. To cope with this problem, this paper introduces a parametric array loudspeaker (PAL), which produces a spatially focused sound beam owing to the ultrasound used as the carrier waves, thereby allowing the sound pressure to be suppressed at the designated point without causing spillover in the rest of the sound field. First, the fundamental characteristics of the PAL are reviewed. The scattered near-field pressure contributed by the source strength of the PAL, which is needed for the design of an active noise control system, is then described. Furthermore, the optimal control law for minimizing the sound pressure at the control points is derived, and the control effect is investigated analytically and experimentally. With a view to tracking a moving target point, a steerable PAL based upon a phased array scheme is presented, so that a moving zone of quiet can be generated without mechanically rotating the PAL. An experiment is finally conducted, demonstrating the validity of the proposed method.
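A minimal sketch of the kind of optimal control law referred to above: choosing secondary-source strengths to minimize the sum of squared pressures at the control points reduces to a least-squares problem (a generic formulation; the transfer matrix and primary field below are random placeholders, not the paper's PAL model):

```python
# Least-squares optimal control-source strengths minimizing the squared
# pressure at a set of control points; Z and p_primary are placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_control, n_sources = 8, 2

# Complex transfer matrix from control sources to control points, and the
# primary (disturbance) pressure at those points, at a single frequency.
Z = rng.normal(size=(n_control, n_sources)) + 1j * rng.normal(size=(n_control, n_sources))
p_primary = rng.normal(size=n_control) + 1j * rng.normal(size=n_control)

# Total pressure is p_primary + Z @ q; the minimizer of ||p_primary + Z q||^2
# is q_opt = -(Z^H Z)^{-1} Z^H p_primary, obtained here via a least-squares solve.
q_opt, *_ = np.linalg.lstsq(Z, -p_primary, rcond=None)

residual = p_primary + Z @ q_opt
print("attenuation at control points (dB):",
      10 * np.log10(np.sum(np.abs(p_primary) ** 2) / np.sum(np.abs(residual) ** 2)))
```

The spatially confined beam of the PAL is what keeps this local minimization from raising the pressure elsewhere in the field.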
A mechanism study of sound wave-trapping barriers.
Yang, Cheng; Pan, Jie; Cheng, Li
2013-09-01
The performance of a sound barrier is usually degraded if a large reflecting surface is placed on the source side. A wave-trapping barrier (WTB), with its inner surface covered by wedge-shaped structures, has been proposed to confine waves within the area between the barrier and the reflecting surface and thus improve the performance. In this paper, the deterioration in performance of a conventional sound barrier due to the reflecting surface is first explained in terms of the resonance effect of the trapped modes. At each resonance frequency, a strong, mode-controlled sound field is generated by the noise source both within and in the vicinity outside the region bounded by the sound barrier and the reflecting surface. It is found that the peak sound pressures in the barrier's shadow zone, which correspond to the minimum values of the barrier's insertion loss, are largely determined by the resonance frequencies and by the shapes and losses of the trapped modes. These peak pressures usually produce a strong sound intensity component impinging normally on the barrier surface near the top. The WTB can alter the sound wave diffraction at the top of the barrier if the wavelengths of the sound are comparable to or smaller than the dimensions of the wedges. In this case, the modified barrier profile is capable of re-organizing the pressure distribution within the bounded domain and altering the acoustic properties near the top of the sound barrier.
Surface acoustical intensity measurements on a diesel engine
NASA Technical Reports Server (NTRS)
Mcgary, M. C.; Crocker, M. J.
1980-01-01
The use of surface intensity measurements as an alternative to the conventional selective-wrapping technique for noise source identification and ranking on diesel engines was investigated. A six-cylinder, in-line, turbocharged, 350-horsepower diesel engine was used. Sound power was measured under anechoic conditions for eight separate parts of the engine at steady-state operating conditions using the conventional technique. Sound power measurements were then repeated on five separate parts of the engine using the surface intensity method at the same steady-state operating conditions. The results of the two methods were compared by plotting sound power level against frequency and by comparing the resulting noise source rankings.
NASA Astrophysics Data System (ADS)
Cowan, James
This chapter summarizes and explains key concepts of building acoustics. These issues include the behavior of sound waves in rooms, the most commonly used rating systems for sound and sound control in buildings, the most common noise sources found in buildings, practical noise control methods for these sources, and the specific topic of office acoustics. Common noise issues for multi-dwelling units can be derived from most of the sections of this chapter. Books can be and have been written on each of these topics, so the purpose of this chapter is to summarize this information and provide appropriate resources for further exploration of each topic.
NASA Astrophysics Data System (ADS)
Bank, M. S.
2017-12-01
The Minamata Convention on Mercury was recently ratified and will go into effect on August 16, 2017. As noted in the convention text, fish are an important source of nutrition to consumers worldwide and several marine and freshwater species represent important links in the global source-receptor dynamics of methylmercury. However, despite its importance, a coordinated global program for marine and freshwater fish species using accredited laboratories, reproducible data and reliable models is still lacking. In recent years fish mercury science has evolved significantly with its use of advanced technologies and computational models to address this complex and ubiquitous environmental and public health issue. These advances in the field have made it essential that transparency be enhanced to ensure that fish mercury studies used in support of the convention are truly reproducible and scientifically sound. One primary goal of this presentation is to evaluate fish bioinformatics and methods, results and inferential reproducibility as it relates to aggregated uncertainty in mercury fish research models, science, and biomonitoring. I use models, environmental intelligence networks and simulations of the effects of a changing climate on methylmercury in marine and freshwater fish to examine how climate change and the convention itself may create further uncertainties for policymakers to consider. Lastly, I will also present an environmental intelligence framework for fish mercury bioaccumulation models and biomonitoring in support of the evaluation of the effectiveness of the Minamata Convention on Mercury.
Schramm, Michael P.; Bevelhimer, Mark; Scherelis, Constantin
2017-02-04
The development of hydrokinetic energy technologies (e.g., tidal turbines) has raised concern over the potential impacts of underwater sound produced by hydrokinetic turbines on fish species likely to encounter these turbines. To assess the potential for behavioral impacts, we exposed four species of fish to varying intensities of recorded hydrokinetic turbine sound in a semi-natural environment. Although we tested freshwater species (redhorse suckers [Moxostoma spp], freshwater drum [Aplodinotus grunniens], largemouth bass [Micropterus salmoides], and rainbow trout [Oncorhynchus mykiss]), these species are also representative of the hearing physiology and sensitivity of estuarine species that would be affected at tidal energy sites. Here, we evaluated changes in fish position relative to different intensities of turbine sound as well as trends in location over time with linear mixed-effects and generalized additive mixed models. We also evaluated changes in the proportion of near-source detections relative to sound intensity and exposure time with generalized linear mixed models and generalized additive models. Models indicated that redhorse suckers may respond to sustained turbine sound by increasing distance from the sound source. Freshwater drum models suggested a mixed response to turbine sound, and largemouth bass and rainbow trout models did not indicate any likely responses to turbine sound. Lastly, the findings highlight the importance for future research of using accurate localization systems, different species, and validated sound transmission distances, and of considering different types of behavioral responses to different turbine designs and to the cumulative sound of arrays of multiple turbines.
On Identifying the Sound Sources in a Turbulent Flow
NASA Technical Reports Server (NTRS)
Goldstein, M. E.
2008-01-01
A space-time filtering approach is used to divide an unbounded turbulent flow into its radiating and non-radiating components. The result is then used to clarify a number of issues including the possibility of identifying the sources of the sound in such flows. It is also used to investigate the efficacy of some of the more recent computational approaches.
Port, Jesse A; Wallace, James C; Griffith, William C; Faustman, Elaine M
2012-01-01
Human-health relevant impacts on marine ecosystems are increasing on both spatial and temporal scales. Traditional indicators for environmental health monitoring and microbial risk assessment have relied primarily on single-species analyses and have provided only limited spatial and temporal information. More high-throughput, broad-scale approaches to evaluate these impacts are therefore needed to provide a platform for informing public health. This study uses shotgun metagenomics to survey the taxonomic composition and antibiotic resistance determinant content of surface water bacterial communities in the Puget Sound estuary. Metagenomic DNA was collected and pyrosequenced from six sites in Puget Sound and from one wastewater treatment plant (WWTP) that discharges into the Sound. A total of ~550 Mbp (1.4 million reads) were obtained, 22 Mbp of which could be assembled into contigs. While the taxonomic and resistance determinant profiles across the open Sound samples were similar, unique signatures were identified when comparing these profiles across the open Sound, a nearshore marina, and WWTP effluent. The open Sound was dominated by α-Proteobacteria (in particular Rhodobacterales sp.), γ-Proteobacteria, and Bacteroidetes, while the marina and effluent had increased abundances of Actinobacteria, β-Proteobacteria, and Firmicutes. There was a significant increase in the antibiotic resistance gene signal from the open Sound to the marina to the WWTP effluent, suggestive of a potential link to human impacts. Mobile genetic elements associated with environmental and pathogenic bacteria were also differentially abundant across the samples. This study is the first comparative metagenomic survey of Puget Sound and provides baseline data for further assessments of community composition and antibiotic resistance determinants in the environment using next-generation sequencing technologies. In addition, these genomic signals of potential human impact can be used to guide initial public health monitoring as well as more targeted and functionally based investigations.
The sound field of a rotating dipole in a plug flow.
Wang, Zhao-Huan; Belyaev, Ivan V; Zhang, Xiao-Zheng; Bi, Chuan-Xing; Faranosov, Georgy A; Dowell, Earl H
2018-04-01
An analytical far field solution for a rotating point dipole source in a plug flow is derived. The shear layer of the jet is modelled as an infinitely thin cylindrical vortex sheet and the far field integral is calculated by the stationary phase method. Four numerical tests are performed to validate the derived solution as well as to assess the effects of sound refraction from the shear layer. First, the calculated results using the derived formulations are compared with the known solution for a rotating dipole in a uniform flow to validate the present model in this fundamental test case. After that, the effects of sound refraction for different rotating dipole sources in the plug flow are assessed. Then the refraction effects on different frequency components of the signal at the observer position, as well as the effects of the motion of the source and of the type of source are considered. Finally, the effect of different sound speeds and densities outside and inside the plug flow is investigated. The solution obtained may be of particular interest for propeller and rotor noise measurements in open jet anechoic wind tunnels.
A Robust Sound Source Localization Approach for Microphone Array with Model Errors
NASA Astrophysics Data System (ADS)
Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong
In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements. It can be used with arbitrary planar array geometries. Second, a subspace model-error estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-error estimation algorithm estimates the unknown parameters of the array model, i.e., the gain and phase perturbations and the positions of the elements, with high accuracy. The performance of this algorithm improves as the SNR or the number of snapshots increases. The W2D-MUSIC algorithm, based on the improved array model, is applied to locate sound sources. Together, these two algorithms constitute the robust sound source localization approach. More accurate steering vectors can then be provided for further processing, such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
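For readers unfamiliar with MUSIC-style localization, the sketch below shows the basic subspace idea on which W2D-MUSIC builds: estimate the sample covariance, split its eigenvectors into signal and noise subspaces, and scan steering vectors for directions that are nearly orthogonal to the noise subspace. This is standard narrowband, far-field MUSIC on a uniform linear array, not the paper's broadband, near-field W2D-MUSIC; the gain, phase, and position error estimation is omitted, and every parameter is illustrative.

```python
# Minimal sketch of standard narrowband, far-field MUSIC on a uniform linear
# array -- the subspace idea underlying W2D-MUSIC, with all calibration steps
# omitted and illustrative parameters throughout.
import numpy as np
from scipy.signal import find_peaks

c, f = 343.0, 1000.0                  # speed of sound (m/s), source frequency (Hz)
lam = c / f
M, d, snapshots = 8, lam / 2, 200     # 8 microphones at half-wavelength spacing
true_doas = np.deg2rad([-20.0, 35.0])

def steering(theta):
    """Far-field steering vector of the uniform linear array."""
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta) / lam)

# Simulate snapshots: two uncorrelated sources plus white noise.
rng = np.random.default_rng(0)
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
A = np.column_stack([steering(t) for t in true_doas])
N = 0.1 * (rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots)))
X = A @ S + N

# Sample covariance and its eigendecomposition; the noise subspace is spanned
# by the eigenvectors with the smallest eigenvalues (two sources assumed known).
R = X @ X.conj().T / snapshots
eigvals, eigvecs = np.linalg.eigh(R)
En = eigvecs[:, : M - 2]

# MUSIC pseudo-spectrum: peaks where steering vectors are nearly orthogonal
# to the noise subspace.
grid = np.deg2rad(np.linspace(-90, 90, 721))
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])
peaks, _ = find_peaks(p)
top = peaks[np.argsort(p[peaks])[-2:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[top])).round(1))
```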
Conceptual Sound System Design for Clifford Odets' "GOLDEN BOY"
NASA Astrophysics Data System (ADS)
Yang, Yen Chun
There are two different aspects to the process of sound design, "Arts" and "Science". In my opinion, sound design should engage both aspects strongly and in interaction with each other. I started the process of designing the sound for GOLDEN BOY by building the city soundscape of New York City in 1937. The scenic design for this piece is in the round, putting the audience all around the stage; this gave me a great opportunity to use surround and spatialization techniques to transform the space into a different sonic world. My spatialization design is composed of two subsystems -- one is the four (4) speaker center cluster diffusing towards the four (4) sections of audience, and the other is the four (4) speakers at the four (4) corners of the theatre. The outside ring provides rich sound source localization and the inside ring provides more support for control of the spatialization details. In my design, four (4) lavalier microphones are hung under the center iron cage from the four (4) corners of the stage. Each microphone is ten (10) feet above the stage. The signal from each microphone is sent to the two (2) center speakers in the cluster diagonally opposite the microphone. With the appropriate level adjustment of the microphones, the audience will not notice the amplification of the voices; however, through my spatialization system, the presence and location of the voices of all actors are preserved clearly for all audience members. With such vocal reinforcement provided by the microphones, I no longer need to worry about the underscoring overwhelming the dialogue on stage. A successful sound system design should not only provide a functional system, but also take responsibility for bringing the actors' voices to the audience and engaging the audience with the world that we create on stage. By designing a system which reinforces the actors' voices while at the same time providing control over the localization and movement of sound effects, I was able not only to make the text present and clear for the audience, but also to support the storyline strongly through my composed music, environmental soundscapes, and underscoring.
Active Exhaust Silencing System for the Management of Auxiliary Power Unit Sound Signatures
2014-08-01
conceptual mass-less pistons are introduced into the system before and after the injection site, such that they will move exactly with the plane wave... either the primary source or the injected source. It is assumed that the pistons are 'close... source, it causes both pistons to move identically. The pressures induced by the flow on the pistons do not affect the flow generated by the...
The rotary subwoofer: a controllable infrasound source.
Park, Joseph; Garcés, Milton; Thigpen, Bruce
2009-04-01
The rotary subwoofer is a novel acoustic transducer capable of projecting infrasonic signals at high sound pressure levels. The projector produces higher acoustic particle velocities than conventional transducers, which translate into higher radiated sound pressure levels. This paper characterizes the measured performance of a rotary subwoofer and presents a model to predict its sound pressure levels.
Physics of thermo-acoustic sound generation
NASA Astrophysics Data System (ADS)
Daschewski, M.; Boehm, R.; Prager, J.; Kreutzbruck, M.; Harrer, A.
2013-09-01
We present a generalized analytical model of thermo-acoustic sound generation based on the analysis of thermally induced energy density fluctuations and their propagation into the adjacent matter. The model provides exact analytical prediction of the sound pressure generated in fluids and solids; consequently, it can be applied to arbitrary thermal power sources such as thermophones, plasma firings, laser beams, and chemical reactions. Unlike existing approaches, our description also includes acoustic near-field effects and sound-field attenuation. Analytical results are compared with measurements of sound pressures generated by thermo-acoustic transducers in air for frequencies up to 1 MHz. The tested transducers consist of titanium and indium tin oxide coatings on quartz glass and polycarbonate substrates. The model reveals that thermo-acoustic efficiency increases linearly with the supplied thermal power and quadratically with thermal excitation frequency. Comparison of the efficiency of our thermo-acoustic transducers with those of piezoelectric-based airborne ultrasound transducers using impulse excitation showed comparable sound pressure values. The present results show that thermo-acoustic transducers can be applied as broadband, non-resonant, high-performance ultrasound sources.
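Reading the reported scaling at face value (this is an inference from the summary above, not a formula stated in it), and using the standard relation that radiated acoustic power scales with the square of the rms sound pressure, the stated efficiency law would imply:

```latex
\eta \;=\; \frac{P_{\mathrm{ac}}}{P_{\mathrm{th}}} \;\propto\; P_{\mathrm{th}}\, f^{2}
\quad\Longrightarrow\quad
P_{\mathrm{ac}} \;\propto\; P_{\mathrm{th}}^{2}\, f^{2},
\qquad
p_{\mathrm{rms}} \;\propto\; \sqrt{P_{\mathrm{ac}}} \;\propto\; P_{\mathrm{th}}\, f
```

where \(P_{\mathrm{th}}\) is the supplied thermal power, \(P_{\mathrm{ac}}\) the radiated acoustic power, \(f\) the thermal excitation frequency, and \(\eta\) the thermo-acoustic conversion efficiency.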
Majdak, Piotr; Goupell, Matthew J; Laback, Bernhard
2010-02-01
The ability to localize sound sources in three-dimensional space was tested in humans. In Experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE; darkness or virtual VE). The localization performance was not significantly different between the pointing methods. The virtual VE significantly improved the horizontal precision and reduced the number of front-back confusions. These results show the benefit of using a virtual VE in sound localization tasks. In Experiment 2, subjects were provided with sound localization training. Over the course of training, the performance improved for all subjects, with the largest improvements occurring during the first 400 trials. The improvements beyond the first 400 trials were smaller. After the training, there was still no significant effect of pointing method, showing that the choice of either head- or manual-pointing method plays a minor role in sound localization performance. The results of Experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies.
Optical and Acoustic Sensor-Based 3D Ball Motion Estimation for Ball Sport Simulators.
Seo, Sang-Woo; Kim, Myunggyu; Kim, Yejin
2018-04-25
Estimation of the motion of ball-shaped objects is essential for the operation of ball sport simulators. In this paper, we propose an estimation system for 3D ball motion, including speed and angle of projection, using acoustic vector and infrared (IR) scanning sensors. Our system comprises three steps: sound-based ball firing detection, sound source localization, and IR scanning for motion analysis. First, an impulsive sound classification based on the mel-frequency cepstrum and a feed-forward neural network is introduced to detect the ball launch sound. Impulsive sound source localization using a 2D array of microelectromechanical system (MEMS) microphones and delay-and-sum beamforming is then presented to estimate the firing position. Finally, the time and position of the ball in 3D space are determined with a high-speed infrared scanning method. Our experimental results demonstrate that sound-based estimation of ball motion allows a wider activity area than comparable camera-based methods. Thus, it can be practically applied to various simulations in sports such as soccer and baseball.
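As a rough illustration of the delay-and-sum step only, the sketch below scans candidate azimuths for a small square microphone array and picks the direction that maximizes the delay-compensated summed energy of a simulated impulse. The 5 cm array geometry, sample rate, and simulation are hypothetical and much simpler than the paper's MEMS array and localization pipeline.

```python
# Hypothetical illustration of delay-and-sum azimuth estimation for an
# impulsive sound on a 5 cm square array of four microphones.
import numpy as np

fs, c = 48_000, 343.0
mics = np.array([[0.00, 0.00], [0.05, 0.00], [0.00, 0.05], [0.05, 0.05]])  # metres

def arrival_delays(azimuth_deg):
    """Per-microphone arrival-time offsets (s) for a far-field source at the
    given azimuth; microphones closer to the source receive the wave earlier."""
    u = np.array([np.cos(np.radians(azimuth_deg)), np.sin(np.radians(azimuth_deg))])
    return -(mics @ u) / c

def simulate_impulse(azimuth_deg, n=2048):
    x = np.zeros((len(mics), n))
    for i, tau in enumerate(arrival_delays(azimuth_deg)):
        x[i, 512 + int(round(tau * fs))] = 1.0        # integer-sample delays
    return x

def delay_and_sum_azimuth(x):
    best_az, best_energy = None, -np.inf
    for az in np.arange(0.0, 360.0, 1.0):
        shifted = [np.roll(x[i], -int(round(tau * fs)))
                   for i, tau in enumerate(arrival_delays(az))]
        energy = np.sum(np.sum(shifted, axis=0) ** 2)
        if energy > best_energy:
            best_az, best_energy = az, energy
    return best_az

# Integer-sample delays over a 5 cm aperture limit resolution to a few degrees.
print("estimated azimuth (deg):", delay_and_sum_azimuth(simulate_impulse(130.0)))
```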
A computational engine for bringing environmental consequence analysis into aviation decision-making
DOT National Transportation Integrated Search
2010-04-21
This presentation looks at methods for ambient masking of non-natural sounds. Masking of sounds is most effective when the masker spectrum overlaps the signal spectrum; this is more likely to occur if the masker is broadband in nature. Land vehicles ...
Composing Sound Identity in Taiko Drumming
ERIC Educational Resources Information Center
Powell, Kimberly A.
2012-01-01
Although sociocultural theories emphasize the mutually constitutive nature of persons, activity, and environment, little attention has been paid to environmental features organized across sensory dimensions. I examine sound as a dimension of learning and practice, an organizing presence that connects the sonic with the social. This ethnographic…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lefkovitz, L.F.; Cullinan, V.I.; Crecelius, E.A.
The purpose of the study is to (1) continue monitoring historical trends in the concentration of contaminants in Puget Sound sediments and (2) quantify recent trends in the recovery of contaminated sediments. Results from this study can be compared with those obtained in the 1982 study to determine whether sediment quality is still improving and to estimate the rate of recovery. A statistically significant reduction in sediment contamination over the past 20 years would provide empirical evidence that environmental regulation has had a positive impact on water quality in Puget Sound. Chemical trends were evaluated from six age-dated sediment cores collected from the main basin of Puget Sound. Chemical analyses included metals, polynuclear aromatic hydrocarbons (PAHs), PCBs and chlorinated pesticides, nutrients (total nitrogen (N) and phosphorus (P)), butyl tins, and total organic carbon (TOC). Sedimentation rates (cm/yr) and deposition rates (g/sq cm/yr) were estimated using a steady-state Pb-210 dating technique.
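As background on what a steady-state Pb-210 estimate involves, the sketch below applies the constant-flux, constant-sedimentation reading of an excess Pb-210 profile: excess activity decays exponentially with depth, so the slope of ln(activity) versus depth yields the sedimentation rate. The activity values are invented for illustration, and the report's actual dating procedure may differ (for example, a constant-rate-of-supply model).

```python
# Minimal sketch (illustrative values, not the report's data or procedure):
# constant-flux, constant-sedimentation interpretation of an excess Pb-210
# profile. Half-life of Pb-210 is taken as ~22.3 yr.
import numpy as np

depth_cm  = np.array([1, 3, 5, 7, 9, 11, 13, 15], dtype=float)
excess_pb = np.array([12.0, 9.1, 6.9, 5.2, 4.0, 3.0, 2.3, 1.7])   # hypothetical (dpm/g)

lam = np.log(2) / 22.3                      # Pb-210 decay constant, 1/yr
slope, intercept = np.polyfit(depth_cm, np.log(excess_pb), 1)

sed_rate = -lam / slope                     # cm/yr, since ln(A) = ln(A0) - (lam/s) * z
ages = depth_cm / sed_rate                  # approximate age at each depth, yr
print(f"sedimentation rate ~ {sed_rate:.2f} cm/yr")
print("approximate ages (yr):", np.round(ages, 1))
```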
NASA Astrophysics Data System (ADS)
Stocking, Jessica; Bishop, Mary Anne; Arab, Ali
2018-01-01
Understanding bird distributions outside of the breeding season may help to identify important criteria for winter refuge. We surveyed marine birds in Prince William Sound, Alaska, USA over nine winters from 2007 to 2016. Our objectives were twofold: to examine the seasonal patterns of piscivorous species overwintering in Prince William Sound, and to explore the relationships between spatial covariates and bird distributions, accounting for inherent spatial structure. We used hurdle models to examine nine species groups of piscivorous seabirds: loons, grebes, cormorants, mergansers, large gulls, small gulls, kittiwakes, Brachyramphus murrelets, and murres. Seven groups showed pronounced seasonal patterns. The models with the most support identified water depth and distance to shore as key environmental covariates, while habitat type, wave exposure, sea surface temperature and seafloor slope had less support. Environmental associations are consistent with the available knowledge of forage fish distribution during this time, but studies that address habitat associations of prey fish in winter could strengthen our understanding of processes in Prince William Sound.
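A hurdle model separates whether any birds were seen from how many were seen given presence. The sketch below shows that two-part structure with simulated data and two of the covariates named above (water depth, distance to shore); it is not the survey's model, it ignores the spatial structure the authors account for, and it uses a plain Poisson GLM on positive counts rather than a properly zero-truncated distribution.

```python
# Minimal two-part "hurdle" sketch on simulated seabird count data:
# (1) logistic model for presence/absence, (2) Poisson GLM on positive counts.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "depth_m":       rng.uniform(5, 200, n),
    "dist_shore_km": rng.uniform(0.1, 10, n),
})
# Simulated counts: deeper, more offshore transects hold fewer birds.
p_present = 1 / (1 + np.exp(-(1.5 - 0.01 * df.depth_m - 0.2 * df.dist_shore_km)))
mu = np.exp(2.0 - 0.005 * df.depth_m)
df["count"] = rng.binomial(1, p_present) * rng.poisson(mu)

X = sm.add_constant(df[["depth_m", "dist_shore_km"]])

# Part 1: does the transect hold any birds at all?
occ = sm.Logit((df["count"] > 0).astype(int), X).fit(disp=False)

# Part 2: given presence, how many birds? (positive counts only)
pos = df["count"] > 0
abund = sm.GLM(df.loc[pos, "count"], X.loc[pos], family=sm.families.Poisson()).fit()

print(occ.params, abund.params, sep="\n")
```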
Piazza, Bryan P.; LaPeyre, Megan K.; Keim, B.D.
2010-01-01
Climate creates environmental constraints (filters) that affect the abundance and distribution of species. In estuaries, these constraints often result from variability in water flow properties and environmental conditions (i.e., water flow, salinity, water temperature) and can have significant effects on the abundance and distribution of commercially important nekton species. We investigated links between large-scale climate variability and juvenile brown shrimp Farfantepenaeus aztecus abundance in Breton Sound estuary, Louisiana (USA). Our goals were to (1) determine if a teleconnection exists between local juvenile brown shrimp abundance and the El Niño Southern Oscillation (ENSO) and (2) relate that linkage to environmental constraints that may affect juvenile brown shrimp recruitment to, and survival in, the estuary. Our results identified a teleconnection between winter ENSO conditions and juvenile brown shrimp abundance in Breton Sound estuary the following spring. The physical connection results from the impact of ENSO on winter weather conditions in Breton Sound (air pressure, temperature, and precipitation). The effect on juvenile brown shrimp abundance lagged ENSO by 3 months: lower-than-average abundances of juvenile brown shrimp were caught in springs following winter El Niño events, and higher-than-average abundances of brown shrimp were caught in springs following La Niña winters. Salinity was the dominant ENSO-forced environmental filter for juvenile brown shrimp. Spring salinity was cumulatively forced by winter river discharge, winter wind forcing, and spring precipitation. Thus, predicting brown shrimp abundance requires incorporating climate variability into models.
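The teleconnection itself reduces to a lagged association: a winter ENSO index paired with the following spring's abundance anomaly, one pair per year. A minimal sketch with made-up numbers (the index name and all values are hypothetical, not the study's data):

```python
# Illustrative lagged correlation: winter ENSO index (e.g., a Dec-Feb mean
# index) vs. the following spring's juvenile brown shrimp catch anomaly.
import numpy as np

winter_enso  = np.array([ 1.2, -0.8,  0.4,  1.6, -1.3,  0.1, -0.5,  2.0, -1.0,  0.6])
spring_catch = np.array([-0.9,  0.7, -0.2, -1.1,  1.0,  0.0,  0.4, -1.4,  0.8, -0.3])

r = np.corrcoef(winter_enso, spring_catch)[0, 1]
print(f"winter ENSO vs. following-spring abundance: r = {r:.2f}")
# A negative r is consistent with the reported pattern: lower shrimp abundance
# after El Nino winters, higher after La Nina winters.
```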
Numerical Modelling of the Sound Fields in Urban Streets with Diffusely Reflecting Boundaries
NASA Astrophysics Data System (ADS)
KANG, J.
2002-12-01
A radiosity-based theoretical/computer model has been developed to study the fundamental characteristics of the sound fields in urban streets resulting from diffusely reflecting boundaries, and to investigate the effectiveness of architectural changes and urban design options on noise reduction. Comparison between the theoretical prediction and the measurement in a scale model of an urban street shows very good agreement. Computations using the model in hypothetical rectangular streets demonstrate that though the boundaries are diffusely reflective, the sound attenuation along the length is significant, typically at 20-30 dB/100 m. The sound distribution in a cross-section is generally even unless the cross-section is very close to the source. In terms of the effectiveness of architectural changes and urban design options, it has been shown that over 2-4 dB extra attenuation can be obtained either by increasing boundary absorption evenly or by adding absorbent patches on the façades or the ground. Reducing building height has a similar effect. A gap between buildings can provide about 2-3 dB extra sound attenuation, especially in the vicinity of the gap. The effectiveness of air absorption on increasing sound attenuation along the length could be 3-9 dB at high frequencies. If a treatment is effective with a single source, it is also effective with multiple sources. In addition, it has been demonstrated that if the façades in a street are diffusely reflective, the sound field of the street does not change significantly whether the ground is diffusely or geometrically reflective.
2015-09-30
soundscapes, and unit of analysis methodology. The study has culminated in a complex analysis of all environmental factors that could be predictors of... regional soundscapes. To build the correlation matrices from ambient sound recordings, the raw data was first converted into a series of sound... sounds. To compare two different soundscape time periods, the correlation matrices for the two periods were then subtracted from each other.
Röder, Brigitte; Rösler, Frank
2003-10-01
Several recent reports suggest compensatory performance changes in blind individuals. It has, however, been argued that the lack of visual input leads to impoverished semantic networks resulting in the use of data-driven rather than conceptual encoding strategies on memory tasks. To test this hypothesis, congenitally blind and sighted participants encoded environmental sounds either physically or semantically. In the recognition phase, both conceptually as well as physically distinct and physically distinct but conceptually highly related lures were intermixed with the environmental sounds encountered during study. Participants indicated whether or not they had heard a sound in the study phase. Congenitally blind adults showed elevated memory both after physical and semantic encoding. After physical encoding blind participants had lower false memory rates than sighted participants, whereas the false memory rates of sighted and blind participants did not differ after semantic encoding. In order to address the question if compensatory changes in memory skills are restricted to critical periods during early childhood, late blind adults were tested with the same paradigm. When matched for age, they showed similarly high memory scores as the congenitally blind. These results demonstrate compensatory performance changes in long-term memory functions due to the loss of a sensory system and provide evidence for high adaptive capabilities of the human cognitive system.
Slevc, L Robert; Shell, Alison R
2015-01-01
Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.
Locating arbitrarily time-dependent sound sources in three dimensional space in real time.
Wu, Sean F; Zhu, Na
2010-08-01
This paper presents a method for locating arbitrarily time-dependent acoustic sources in a free field in real time using only four microphones. The method is capable of handling a wide variety of acoustic signals, including broadband, narrowband, impulsive, and continuous sound over the entire audible frequency range, produced by multiple sources in three-dimensional (3D) space. Locations of acoustic sources are given in Cartesian coordinates. The underlying principle is a hybrid approach consisting of modeling acoustic radiation from a point source in a free field, triangulation, and de-noising to enhance the signal-to-noise ratio (SNR). Numerical simulations are conducted to study the impact of SNR, microphone spacing, source distance, and frequency on the spatial resolution and accuracy of source localization. Based on these results, a simple device consisting of four microphones mounted on three mutually orthogonal axes at an optimal distance, a four-channel signal conditioner, and a camera is fabricated. Experiments are conducted in different environments to assess its effectiveness in locating sources that produce arbitrarily time-dependent acoustic signals, regardless of whether a sound source is stationary or moving in space, even when it moves behind the measurement microphones. Practical limitations of the method are discussed.
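The point-source/triangulation idea can be illustrated with a standard two-step sketch: estimate time differences of arrival (TDOA) by cross-correlation against a reference microphone, then solve for the source position that best reproduces those differences. The code below is such a sketch, not the authors' algorithm; the microphone spacing, sample rate, test click, and initial guess are made up, and the de-noising stage is omitted.

```python
# Minimal TDOA sketch: four microphones on three mutually orthogonal axes
# (hypothetical 0.5 m spacing), cross-correlation delay estimates, and a
# nonlinear least-squares solve for the source position.
import numpy as np
from scipy.optimize import least_squares
from scipy.signal import correlate

fs, c = 96_000, 343.0
mics = np.array([[0, 0, 0], [0.5, 0, 0], [0, 0.5, 0], [0, 0, 0.5]], dtype=float)
src = np.array([2.0, -1.5, 1.0])                      # "unknown" source position (m)

# Simulate a short broadband click arriving at each microphone with the
# geometric propagation delay (rounded to integer samples for simplicity).
rng = np.random.default_rng(3)
click, n = rng.standard_normal(64), 4096
signals = np.zeros((4, n))
for i, m in enumerate(mics):
    k = int(round(np.linalg.norm(src - m) / c * fs))
    signals[i, 1000 + k : 1000 + k + 64] = click

# TDOA of each microphone relative to microphone 0 via cross-correlation peaks.
def tdoa(x, ref):
    lags = np.arange(-len(ref) + 1, len(x))
    return lags[np.argmax(correlate(x, ref))] / fs

tdoas = np.array([tdoa(signals[i], signals[0]) for i in range(1, 4)])

# Solve for the source position whose geometry best reproduces the TDOAs.
def residuals(p):
    d = np.linalg.norm(p - mics, axis=1)
    return (d[1:] - d[0]) / c - tdoas

# With quantized delays the recovered position is close to, not exactly,
# the true one; x0 is just a rough initial guess.
est = least_squares(residuals, x0=np.array([1.0, -1.0, 1.0])).x
print("estimated source position (m):", np.round(est, 2))
```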
Impact of hospital-based environmental exposures on neurodevelopmental outcomes of preterm infants.
Santos, Janelle; Pearce, Sarah E; Stroustrup, Annemarie
2015-04-01
Over 300,000 infants are hospitalized in a neonatal intensive care unit (NICU) in the United States annually during a developmental period critical to later neurobehavioral function. Environmental exposures during the fetal period and infancy have been shown to impact long-term neurobehavioral outcomes. This review summarizes evidence linking NICU-based environmental exposures to neurodevelopmental outcomes of children born preterm. Preterm infants experience multiple exposures important to neurodevelopment during the NICU hospitalization. The physical layout of the NICU, management of light and sound, social interactions with parents and NICU staff, and chemical exposures via medical equipment are important to long-term neurobehavioral outcomes in this highly vulnerable population. Existing research documents NICU-based exposure to neurotoxic chemicals, aberrant light, excess sound, and restricted social interaction. In total, this creates an environment of co-existing excesses (chemicals, light, sound) and deprivation (touch, speech). The full impact of these co-exposures on the long-term neurodevelopment of preterm infants has not been adequately elucidated. Research into the importance of the NICU from an environmental health perspective is in its infancy, but could provide understanding about critical modifiable factors impacting the neurobehavioral health of hundreds of thousands of children each year.