Sample records for sounding system BBSS

  1. Meteorological Automatic Weather Station (MAWS) Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holdridge, Donna J; Kyrouac, Jenni A

    The Meteorological Automatic Weather Station (MAWS) is a surface meteorological station, manufactured by Vaisala, Inc., dedicated to the balloon-borne sounding system (BBSS), providing surface measurements of the thermodynamic state of the atmosphere and the wind speed and direction for each radiosonde profile. These data are automatically provided to the BBSS during the launch procedure and included in the radiosonde profile as the surface measurements of record for the sounding. The MAWS core set of measurements is: Barometric Pressure (hPa), Temperature (°C), Relative Humidity (%), Arithmetic-Averaged Wind Speed (m/s), and Vector-Averaged Wind Direction (deg). The sensors that collect the core variables are mounted at the standard heights defined for each variable.
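    The two averaging schemes named above can be illustrated with a minimal NumPy sketch (the function name and the speed-weighted vector average are illustrative assumptions, not MAWS internals):

```python
import numpy as np

def average_wind(speeds_ms, directions_deg):
    """Arithmetic-average the wind speed; vector-average the direction.

    Directions use the meteorological convention (degrees the wind
    blows FROM, clockwise from north)."""
    speeds = np.asarray(speeds_ms, dtype=float)
    theta = np.radians(np.asarray(directions_deg, dtype=float))
    # Speed-weighted unit-vector components for the direction average
    u = np.mean(speeds * np.sin(theta))
    v = np.mean(speeds * np.cos(theta))
    mean_speed = speeds.mean()                       # arithmetic average
    mean_dir = np.degrees(np.arctan2(u, v)) % 360.0  # vector average
    return mean_speed, mean_dir
```

    Vector averaging avoids the artifact of naive arithmetic averaging, where winds from 350° and 10° would average to a spurious 180°.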

  2. Study of meteorological parameters over the central Himalayan region using balloon-borne sensor

    NASA Astrophysics Data System (ADS)

    Shrivastava, Rahul; Naja, Manish; Gwal, A. K.

    2013-06-01

    In this paper we summarize recent advances in atmospheric research based on analysis of meteorological data. We calculated meteorological parameters over the central Himalayan region at Nainital (longitude 79.45° E, latitude 29.35° N), a high-altitude site (1951 m) well suited to such measurements. The work was carried out under the Ganges Valley Aerosol Experiment (GVAX), an Indo-US project designed to capture pre-monsoon through post-monsoon conditions and establish a comprehensive baseline for advancing the study of the effects of atmospheric conditions in the Ganges Valley. The balloon-borne sounding system (BBSS) technique was also used for in situ measurements of meteorological parameters.

  3. Enhancing pesticide degradation using indigenous microorganisms isolated under high pesticide load in bioremediation systems with vermicomposts.

    PubMed

    Castillo Diaz, Jean Manuel; Delgado-Moreno, Laura; Núñez, Rafael; Nogales, Rogelio; Romero, Esperanza

    2016-08-01

    In biobed bioremediation systems (BBSs) with vermicomposts exposed to a high load of pesticides, 6 bacterial and 4 fungal strains were isolated, identified, and investigated to enhance the removal of pesticides. Three different mixtures of BBSs composed of vermicomposts made from greenhouse (GM), olive-mill (OM) and winery (WM) wastes were contaminated, inoculated, and incubated for one month (GMI, OMI and WMI). Inoculum maintenance was evaluated by DGGE and Q-PCR. Pesticides were monitored by HPLC-DAD. The highest bacterial and fungal abundance was observed in WMI and OMI, respectively. In WMI, the consortia improved the removal of tebuconazole, metalaxyl, and oxyfluorfen by 1.6-, 3.8-, and 7.7-fold, respectively. The dissipation of oxyfluorfen was also accelerated in OMI, with less than 30% remaining after 30 d. One metabolite for metalaxyl and 4 for oxyfluorfen were identified by GC-MS. The isolates could be suitable to improve the efficiency of bioremediation systems. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Baha implant as a hearing solution for single-sided deafness after retrosigmoid approach for the vestibular schwannoma: audiological results.

    PubMed

    Bouček, Jan; Vokřál, Jan; Černý, Libor; Chovanec, Martin; Zábrodský, Michal; Zvěřina, Eduard; Betka, Jan; Skřivan, Jiří

    2017-01-01

    Skull base tumors and, in particular, vestibular schwannoma (VS) are among the etiological reasons for single-sided deafness (SSD). Patients with SSD have problems understanding speech in a noisy environment and cannot localize the direction of sounds. For the majority, this is the handicap for which they try to find a solution. Apart from CROS hearing aids, Baha is one of the most frequently used systems for SSD compensation. Thirty-eight patients with single-sided deafness after retrosigmoid removal of a vestibular schwannoma underwent testing with a Baha softband from September 2010 to August 2014. Sixteen patients (42 %) ultimately decided to accept Baha implantation. Subjective experience with the Baha softband was evaluated by patients using the BBSS questionnaire immediately after testing. Objective evaluation of the effect was performed by measuring the sentence discrimination score in noise and side horizontal discrimination without a Baha, and again 6 weeks and 12 months after sound processor fitting. There was a significant improvement in sentence discrimination at the 6-week (64.0 %) and 1-year (74.6 %) follow-up intervals in comparison with understanding without Baha (24.0 %, p = 0.001) when sentences came from the side of the non-hearing ear and noise from the contralateral side at an SNR of -5 dB. Baha can significantly improve sentence discrimination in complex listening situations in patients with SSD after VS surgery.

  5. Temporary placement of fully covered self-expandable metal stents in benign biliary strictures.

    PubMed

    Ryu, Choong Heon; Kim, Myung Hwan; Lee, Sang Soo; Park, Do Hyun; Seo, Dong Wan; Lee, Sung Koo

    2013-07-01

    Benign biliary strictures (BBSs) have been endoscopically managed with plastic stent placement. However, data regarding fully covered self-expandable metal stents (FCSEMSs) in BBS patients remain scarce in Korea. Forty-one patients (21 men, 65.9%) with BBSs underwent FCSEMS placement between February 2007 and July 2010 at Asan Medical Center. Efficacy and safety were evaluated retrospectively. Patients were considered to have resolution if they showed evidence of stricture resolution on cholangiography and if an inflated retrieval balloon passed easily through the stricture at FCSEMS removal. The mean FCSEMS placement time was 3.2 (1.9-6.2) months. Patients were followed for a mean of 10.2 (1.0-32.0) months after FCSEMS removal. BBS resolution was confirmed in 38 of 41 (92.7%) patients who underwent FCSEMS removal. After FCSEMS removal, 6 of 38 (15.8%) patients experienced symptomatic recurrent stricture and repeat stenting was performed. In a breakdown by etiology of stricture, 14 of 15 (93.3%) patients with chronic pancreatitis, 17 of 19 (89.5%) with gallstone-related disease, 4 of 4 (100%) with surgical procedures, and 2 of 2 (100%) with BBSs of other etiologies had resolution at FCSEMS removal. Complications related to stent therapy occurred in 12 (29%) patients, including post-ERCP pancreatitis (n=4), proximal migration (n=3), distal migration (n=3), and occlusion (n=2). Temporary FCSEMS placement in BBS patients offers a potential alternative to plastic stenting. However, because of the significant complications and modest resolution rates, the potential benefits and risks should be evaluated in further investigations.

  6. Comparison of precipitable water vapor measurements obtained by microwave radiometry and radiosondes at the Southern Great ...

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lesht, B.M.; Liljegren, J.C.

    1996-12-31

    Comparisons between the precipitable water vapor (PWV) estimated by passive microwave radiometers (MWRs) and that obtained by integrating the vertical profile of water vapor density measured by radiosondes (BBSS) have generally shown good agreement. These comparisons, however, have usually been done over rather short time periods and consequently within limited ranges of total PWV and with limited numbers of radiosondes. We have been making regular comparisons between MWR and BBSS estimates of PWV at the Southern Great Plains Cloud and Radiation Testbed (SGP/CART) site since late 1992 as part of an ongoing quality measurement experiment (QME). This suite of comparisons spans three annual cycles and a relatively wide range of total PWV amounts. Our findings show that although for the most part the agreement is excellent, differences between the two measurements occur. These differences may be related to the MWR retrieval of PWV and to calibration variations between radiosonde batches.
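    The BBSS-side quantity in this comparison, PWV from an integrated water-vapor-density profile, can be sketched as follows (assumptions: density in g/m³, height in m, simple trapezoidal integration; real processing would also handle data gaps and the profile above balloon burst):

```python
import numpy as np

def precipitable_water_cm(height_m, vapor_density_gm3):
    """Integrate a radiosonde water-vapor-density profile over height.

    A column loading of 1 kg/m^2 equals 1 mm of liquid water, so the
    integral in kg/m^2 is millimeters of PWV; divide by 10 for cm."""
    h = np.asarray(height_m, dtype=float)
    rho = np.asarray(vapor_density_gm3, dtype=float) / 1000.0  # g/m^3 -> kg/m^3
    # Trapezoidal rule over the (possibly unevenly spaced) levels
    column_kgm2 = 0.5 * np.sum((rho[1:] + rho[:-1]) * np.diff(h))
    return column_kgm2 / 10.0  # mm of water -> cm
```

    For example, a uniform 10 g/m³ layer over the lowest 2 km integrates to 20 kg/m², i.e. 2 cm of PWV.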

  7. Benign biliary strictures refractory to standard bilioplasty treated using polydioxanone biodegradable biliary stents: retrospective multicentric data analysis on 107 patients.

    PubMed

    Mauri, Giovanni; Michelozzi, Caterina; Melchiorre, Fabio; Poretti, Dario; Pedicini, Vittorio; Salvetti, Monica; Criado, Eva; Falcò Fages, Joan; De Gregorio, Miguel Ángel; Laborda, Alicia; Sonfienza, Luca Maria; Cornalba, Gianpaolo; Monfardini, Lorenzo; Panek, Jiri; Andrasina, Tomas; Gimenez, Mariano

    2016-11-01

    To assess mid-term outcome of biodegradable biliary stents (BBSs) to treat benign biliary strictures refractory to standard bilioplasty. Institutional review board approval was obtained and patient consent was waived. 107 patients (61 males, 46 females, mean age 59 ± 16 years) were treated. Technical success and complications were recorded. Ninety-seven patients (55 males, 42 females, aged 57 ± 17 years) were considered for follow-up analysis (mean follow-up 23 ± 12 months). Fisher's exact test and Mann-Whitney U tests were used and a Kaplan-Meier curve was calculated. The procedure was always feasible. In 2/107 cases (2 %), stent migration occurred (technical success 98 %). 4/107 patients (4 %) experienced mild haemobilia. No major complications occurred. In 19/97 patients (18 %), stricture recurrence occurred; in this group, higher rates of subsequent cholangitis (84.2 % vs. 12.8 %, p = 0.001) and biliary stones (26.3 % vs. 2.5 %, p = 0.003) were noted. Estimated mean time to stricture recurrence was 38 months (95 % CI 34-42 months). Estimated stricture recurrence rates at 1, 2, and 3 years were respectively 7.2 %, 26.4 %, and 29.4 %. Percutaneous placement of a BBS is a feasible and safe strategy to treat benign biliary strictures refractory to standard bilioplasty, with promising results in the mid-term period. • Percutaneous placement of a BBS is 100 % feasible. • The procedure appears free from major complications, with few minor complications. • BBSs offer promising results in the mid-term period. • With a BBS, the external catheter/drainage can be removed early. • BBSs represent a new option in treating benign biliary stenosis.

  8. Speech intelligibility and subjective benefit in single-sided deaf adults after cochlear implantation.

    PubMed

    Finke, Mareike; Strauß-Schier, Angelika; Kludt, Eugen; Büchner, Andreas; Illg, Angelika

    2017-05-01

    Treatment with cochlear implants (CIs) in single-sided deaf individuals started less than a decade ago. CIs can successfully reduce incapacitating tinnitus on the deaf ear and allow, to some extent, the restoration of binaural hearing. Until now, systematic evaluations of subjective CI benefit in post-lingually single-sided deaf individuals and analyses of speech intelligibility outcome for the CI in isolation have been lacking. For the prospective part of this study, the Bern Benefit in Single-Sided Deafness Questionnaire (BBSS) was administered to 48 single-sided deaf CI users to evaluate the subjectively perceived CI benefit across different listening situations. In the retrospective part, speech intelligibility outcome with the CI up to 12 months post-activation was compared between 100 single-sided deaf CI users and 125 bilaterally implanted CI users (2nd implant). The positive median ratings in the BBSS differed significantly from zero for all items, suggesting that most individuals with single-sided deafness rate their CI as beneficial across listening situations. The speech perception scores in quiet and noise improved significantly over time in both groups of CI users. Speech intelligibility with the CI in isolation was significantly better in bilaterally implanted CI users (2nd implant) compared to the scores obtained from single-sided deaf CI users. Our results indicate that CI users with single-sided deafness can reach open-set speech understanding with their CI in isolation, encouraging the extension of the CI indication to individuals with normal hearing on the contralateral ear. Compared to the performance reached with bilateral CI users' second implant, speech reception thresholds are lower, indicating an aural preference for and dominance of the normal-hearing ear. The results from the BBSS suggest good satisfaction with the CI across several listening situations. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. U.S. Fish and Wildlife Service breeding bird surveys: How can they be used in forest management?

    Treesearch

    William F. Laudenslayer

    1988-01-01

    Since 1965, the U.S. Fish and Wildlife and Canadian Wildlife Services have sponsored annual Breeding Bird Surveys (BBSs), which are done in the United States and Canada using standard procedures. Data resulting from individual surveys may have potential to answer certain management questions or serve to fill information gaps for relatively small geographic areas. A BBS...

  10. Combining public participatory surveillance and occupancy modelling to predict the distributional response of Ixodes scapularis to climate change.

    PubMed

    Lieske, David J; Lloyd, Vett K

    2018-03-01

    Ixodes scapularis, a known vector of Borrelia burgdorferi sensu stricto (Bbss), is undergoing range expansion in many parts of Canada. The province of New Brunswick, which borders jurisdictions with established populations of I. scapularis, constitutes a range expansion zone for this species. To better understand the current and potential future distribution of this tick under climate change projections, this study applied occupancy modelling to distributional records of adult ticks that successfully overwintered, obtained through passive surveillance. This study indicates that I. scapularis occurs throughout the southern-most portion of the province, in close proximity to coastlines and major waterways. Milder winter conditions, as indicated by the number of degree days <0 °C, were determined to be a strong predictor of tick occurrence, as were, to a lesser degree, rising levels of annual precipitation, leading to a final model with a predictive accuracy of 0.845 (range: 0.828-0.893). Both RCP 4.5 and RCP 8.5 climate projections predict that a significant proportion of the province (roughly a quarter to a third) will be highly suitable for I. scapularis by the 2080s. Comparison with cases of canine infection shows good spatial agreement with baseline model predictions, but the presence of canine Borrelia infections beyond the climate envelope, defined by the highest probabilities of tick occurrence, suggests the presence of Bbss-carrying ticks distributed by long-range dispersal events. This research demonstrates that predictive statistical modelling of multi-year surveillance information is an efficient way to identify areas where I. scapularis is most likely to occur, and can be used to guide subsequent active sampling efforts in order to better understand fine-scale species distributional patterns. Copyright © 2018 The Authors. Published by Elsevier GmbH. All rights reserved.
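    The prediction step of an occupancy model like the one described above reduces to a logistic function of the climate covariates; a sketch (coefficient names and values here are hypothetical placeholders, not the fitted model from the study):

```python
import math

def occupancy_probability(dd_below_0, annual_precip_mm, b0, b_dd, b_pr):
    """Logistic occupancy prediction: P(tick present) from winter
    severity (degree-days < 0 C) and annual precipitation.

    b0, b_dd, b_pr are placeholder coefficients that would come from
    fitting the model to the surveillance records."""
    z = b0 + b_dd * dd_below_0 + b_pr * annual_precip_mm
    return 1.0 / (1.0 + math.exp(-z))
```

    With a negative coefficient on degree-days below 0 °C, milder winters (fewer such degree-days) raise the predicted probability of occurrence, matching the study's finding.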

  11. HIV prevalence among men who have sex with men in Brazil: results of the 2nd national survey using respondent-driven sampling.

    PubMed

    Kerr, Ligia; Kendall, Carl; Guimarães, Mark Drew Crosland; Salani Mota, Rosa; Veras, Maria Amélia; Dourado, Inês; Maria de Brito, Ana; Merchan-Hamann, Edgar; Pontes, Alexandre Kerr; Leal, Andréa Fachel; Knauth, Daniela; Castro, Ana Rita Coimbra Motta; Macena, Raimunda Hermelinda Maia; Lima, Luana Nepomuceno Costa; Oliveira, Lisangela Cristina; Cavalcantee, Maria do Socorro; Benzaken, Adele Schwartz; Pereira, Gerson; Pimenta, Cristina; Pascom, Ana Roberta Pati; Bermudez, Ximena Pamela Diaz; Moreira, Regina Célia; Brígido, Luis Fernando Macedo; Camillo, Ana Cláudia; McFarland, Willi; Johnston, Lisa G

    2018-05-01

    This paper reports human immunodeficiency virus (HIV) prevalence in the 2nd National Biological and Behavioral Surveillance Survey (BBSS) among men who have sex with men (MSM) in 12 cities in Brazil using respondent-driven sampling (RDS). Following formative research, RDS was applied in 12 cities in the 5 macroregions of Brazil between June and December 2016 to recruit MSM for the BBSS. The target sample size was 350 per city. Five to 6 seeds were initially selected to initiate recruitment, and coupons and interviews were managed online. On-site rapid testing was used for HIV screening and confirmed by a 2nd test. Participants were weighted using the Gile estimator. Data from all 12 cities were merged and analyzed with Stata 14.0 complex survey data analysis tools, in which each city was treated as its own stratum. Missing data for those who did not test were imputed as HIV+ if they reported testing positive before and were taking antiretroviral therapy. A total of 4176 men were recruited in the 12 cities. The average time to completion was 10.2 weeks. The longest chain length varied from 8 to 21 waves. The sample size was achieved in all but 2 cities. A total of 3958 of the 4176 respondents agreed to test for HIV (90.2%). For results without imputation, 17.5% (95%CI: 14.7-20.7) of our sample was HIV positive. With imputation, 18.4% (95%CI: 15.4-21.7) were seropositive. HIV prevalence increased beyond expectations, from 12.1% (95%CI: 10.0-14.5) in the 2009 survey to 18.4% (95%CI: 15.4-21.7) in 2016. This increase accompanies Brazil's focus on the treatment-as-prevention strategy and a decrease in support for community-based organizations and community prevention programs.
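    The imputation rule described above — counting non-tested respondents as positive if they reported a prior positive test and ART use — amounts to simple arithmetic; an unweighted sketch (the survey itself applied Gile's RDS weights, which this toy function omits):

```python
def hiv_prevalence(tested_pos, tested_total, imputed_pos=0):
    """Crude (unweighted) prevalence, optionally adding respondents who
    did not test but were imputed positive (prior positive test + ART).

    Imputed positives enter both numerator and denominator."""
    return (tested_pos + imputed_pos) / (tested_total + imputed_pos)
```

    Because each imputed respondent adds to both numerator and denominator, imputation can only raise the estimate, consistent with 17.5% without imputation versus 18.4% with it.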

  13. Los Alamos National Laboratory Science Education Programs. Progress report, October 1, 1994--December 31, 1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gill, D.H.

    During the 1994 summer institute, NTEP teachers worked in coordination with LANL, Los Alamos Middle School, and Mountain Elementary School to gain experience in communicating on-line, in gathering information from the Internet, and in using electronic Bulletin Board Systems (BBSs) to exchange ideas with other teachers. To build on their telecommunications skills, NTEP teachers participated in the International Telecommunications In Education Conference (Tel*ED '94) at the Albuquerque Convention Center on November 11 & 12, 1994. They attended the multimedia keynote address, various workshops highlighting many aspects of educational telecommunications skills, and the Telecomm Rodeo sponsored by Los Alamos National Laboratory. The Rodeo featured many presentations by Laboratory personnel and educational institutions on ways in which telecommunications technologies can be used in the classroom. Many were of the "hands-on" type, so that teachers were able to try out methods and equipment and evaluate their usefulness in their own schools and classrooms. Some of the presentations featured were the Geonet educational BBS system, the Supercomputing Challenge, and the Sunrise Project, all sponsored by LANL; the "CU-SeeMe" live video software; various simulation software packages; networking help; and many other interesting and useful exhibits.

  14. [Application of the computer-based respiratory sound analysis system based on Mel-frequency cepstral coefficient and dynamic time warping in healthy children].

    PubMed

    Yan, W Y; Li, L; Yang, Y G; Lin, X L; Wu, J Z

    2016-08-01

    We designed a computer-based respiratory sound analysis system to identify normal lung sounds in children, and set out to verify its validity. First, we downloaded standard lung sounds from a network database (website: http://www.easyauscultation.com/lung-sounds-reference-guide) and recorded 3 samples of abnormal lung sounds (rhonchi, wheeze and crackles) from three patients of the Department of Pediatrics, the First Affiliated Hospital of Xiamen University. We regarded these as "reference lung sounds". The "test lung sounds" were recorded from 29 children from the Kindergarten of Xiamen University. Lung sounds were recorded with a portable electronic stethoscope, and valid lung sounds were selected by manual identification. We introduced Mel-frequency cepstral coefficients (MFCC) to extract lung sound features and dynamic time warping (DTW) for signal classification. We had 39 standard lung sounds and recorded 58 test lung sounds. The system was applied to the 58 test recordings, identifying 52 correctly and 6 incorrectly, for an accuracy of 89.7%. Based on MFCC and DTW, our computer-based respiratory sound analysis system can effectively identify healthy lung sounds in children (accuracy 89.7%), demonstrating the reliability of the lung sound analysis system.
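    The matching step named in this abstract, DTW over MFCC frame sequences, can be sketched in pure NumPy (the MFCC extraction itself would typically come from a signal-processing library such as librosa, which is our assumption; the abstract does not name one):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences
    (frames x coefficients), using Euclidean frame-to-frame cost."""
    a, b = np.atleast_2d(a), np.atleast_2d(b)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # Best of insertion, deletion, and match moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(test_seq, references):
    """Return the label of the nearest reference template under DTW."""
    return min(references, key=lambda lbl: dtw_distance(test_seq, references[lbl]))
```

    A test recording is labeled with whichever reference template (normal, rhonchi, wheeze, crackles) it warps to most cheaply.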

  15. Geometric Constraints on Human Speech Sound Inventories

    PubMed Central

    Dunbar, Ewan; Dupoux, Emmanuel

    2016-01-01

    We investigate the idea that the languages of the world have developed coherent sound systems in which having one sound increases or decreases the chances of having certain other sounds, depending on shared properties of those sounds. We investigate the geometries of sound systems that are defined by the inherent properties of sounds. We document three typological tendencies in sound system geometries: economy, a tendency for the differences between sounds in a system to be definable on a relatively small number of independent dimensions; local symmetry, a tendency for sound systems to have relatively large numbers of pairs of sounds that differ only on one dimension; and global symmetry, a tendency for sound systems to be relatively balanced. The finding of economy corroborates previous results; the two symmetry properties have not been previously documented. We also investigate the relation between the typology of inventory geometries and the typology of individual sounds, showing that the frequency distribution with which individual sounds occur across languages works in favor of both local and global symmetry. PMID:27462296

  16. Characterizing, synthesizing, and/or canceling out acoustic signals from sound sources

    DOEpatents

    Holzrichter, John F [Berkeley, CA; Ng, Lawrence C [Danville, CA

    2007-03-13

    A system for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate and animate sound sources. Electromagnetic sensors monitor excitation sources in sound producing systems, such as the human voice, machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The systems disclosed enable accurate calculation of transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
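    The central step, estimating a transfer function from a measured excitation to the acoustical output, can be sketched with a segment-averaged cross-spectral estimate (a textbook H1-style estimator in bare NumPy; this is an illustration of the general technique, not the patent's actual implementation):

```python
import numpy as np

def estimate_transfer_function(excitation, output, nfft=256):
    """Estimate H(f) = S_xy / S_xx by averaging spectra over
    non-overlapping segments (a bare-bones Welch-style estimate)."""
    x = np.asarray(excitation, dtype=float)
    y = np.asarray(output, dtype=float)
    nseg = len(x) // nfft
    Sxx = np.zeros(nfft, dtype=complex)  # input auto-spectrum
    Sxy = np.zeros(nfft, dtype=complex)  # cross-spectrum
    for k in range(nseg):
        X = np.fft.fft(x[k * nfft:(k + 1) * nfft])
        Y = np.fft.fft(y[k * nfft:(k + 1) * nfft])
        Sxx += X * np.conj(X)
        Sxy += Y * np.conj(X)
    return Sxy / Sxx
```

    Once H(f) is known, multiplying a new excitation spectrum by H(f) synthesizes the expected acoustic output, and driving a loudspeaker with its inverse phase supports cancellation.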

  17. System and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources

    DOEpatents

    Holzrichter, John F.; Burnett, Greg C.; Ng, Lawrence C.

    2003-01-01

    A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.

  18. System and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources

    DOEpatents

    Holzrichter, John F; Burnett, Greg C; Ng, Lawrence C

    2013-05-21

    A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.

  19. System and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources

    DOEpatents

    Holzrichter, John F.; Burnett, Greg C.; Ng, Lawrence C.

    2007-10-16

    A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.

  20. Acoustic Performance of a Real-Time Three-Dimensional Sound-Reproduction System

    NASA Technical Reports Server (NTRS)

    Faller, Kenneth J., II; Rizzi, Stephen A.; Aumann, Aric R.

    2013-01-01

    The Exterior Effects Room (EER) is a 39-seat auditorium at the NASA Langley Research Center and was built to support psychoacoustic studies of aircraft community noise. The EER has a real-time simulation environment which includes a three-dimensional sound-reproduction system. This system requires real-time application of equalization filters to compensate for spectral coloration of the sound reproduction due to installation and room effects. This paper describes the efforts taken to develop the equalization filters for use in the real-time sound-reproduction system and the subsequent analysis of the system's acoustic performance. The acoustic performance of the compensated and uncompensated sound-reproduction system is assessed for its crossover performance, its performance under stationary and dynamic conditions, the maximum spatialized sound pressure level it can produce from a single virtual source, and the spatial uniformity of a generated sound field. Additionally, application examples are given to illustrate the compensated sound-reproduction system performance using recorded aircraft flyovers.

  1. A simple computer-based measurement and analysis system of pulmonary auscultation sounds.

    PubMed

    Polat, Hüseyin; Güler, Inan

    2004-12-01

    Listening to various lung sounds has proven to be an important diagnostic tool for detecting and monitoring certain types of lung diseases. In this study, a computer-based system has been designed for easy measurement and analysis of lung sounds using the software package DasyLAB. The designed system presents the following features: it is able to digitally record lung sounds, which are captured with an electronic stethoscope plugged into a sound card on a portable computer; display the lung sound waveform for auscultation sites; record the lung sound in ASCII format; acoustically reproduce the lung sound; edit and print the sound waveforms; display its time-expanded waveform; compute the Fast Fourier Transform (FFT); and display the power spectrum and spectrogram.
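    The FFT/power-spectrum step in the pipeline above can be sketched as follows (a minimal NumPy version with a Hann window; DasyLAB's own processing blocks are not reproduced here):

```python
import numpy as np

def power_spectrum(signal, fs):
    """One-sided power spectrum of a recorded (mono) sound.

    Returns (frequencies_hz, power) for the positive-frequency bins."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                          # remove DC offset
    spec = np.fft.rfft(x * np.hanning(len(x)))  # Hann window reduces leakage
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, np.abs(spec) ** 2
```

    Applying the same transform to successive short frames of the recording and stacking the results yields the spectrogram the system displays.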

  2. Judging sound rotation when listeners and sounds rotate: Sound source localization is a multisystem process.

    PubMed

    Yost, William A; Zhong, Xuan; Najam, Anbar

    2015-11-01

    In four experiments, listeners were rotated or were stationary. Sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuth array. When either sounds or listeners rotate, the auditory cues used for sound source localization change, but in the everyday world listeners perceive sound rotation only when sounds rotate, not when listeners rotate. In the everyday world, sound source locations are referenced to positions in the environment (a world-centric reference system). The auditory cues for sound source location indicate locations relative to the head (a head-centric reference system), not locations relative to the world. This paper deals with a general hypothesis that the world-centric location of sound sources requires the auditory system to have information about the auditory cues used for sound source location and cues about head position. The use of visual and vestibular information in determining rotating head position in sound rotation perception was investigated. The experiments show that sound rotation perception when sources and listeners rotate was based on acoustic, visual, and, perhaps, vestibular information. The findings are consistent with the general hypothesis and suggest that sound source localization is not based just on acoustics. It is a multisystem process.

  3. Multichannel sound reinforcement systems at work in a learning environment

    NASA Astrophysics Data System (ADS)

    Malek, John; Campbell, Colin

    2003-04-01

    Many people have experienced the entertaining benefits of a surround sound system, either in their own home or in a movie theater, but another application exists for multichannel sound that has for the most part gone unused. This is the application of multichannel sound systems to the learning environment. By incorporating a 7.1 surround processor and a touch panel interface programmable control system, the main lecture hall at the University of Michigan Taubman College of Architecture and Urban Planning has been converted from an ordinary lecture hall to a working audiovisual laboratory. The multichannel sound system is used in a wide variety of experiments, including exposure to sounds to test listeners' aural perception of the tonal characteristics of varying pitch, reverberation, speech transmission index, and sound-pressure level. The touch panel's custom interface allows a variety of user groups to control different parts of the AV system and provides preset capability that allows for numerous system configurations.

  4. Psychoacoustics

    NASA Astrophysics Data System (ADS)

    Moore, Brian C. J.

    Psychoacoustics is concerned with the relationships between the physical characteristics of sounds and their perceptual attributes. This chapter describes: the absolute sensitivity of the auditory system for detecting weak sounds and how that sensitivity varies with frequency; the frequency selectivity of the auditory system (the ability to resolve or hear out the sinusoidal components in a complex sound) and its characterization in terms of an array of auditory filters; the processes that influence the masking of one sound by another; the range of sound levels that can be processed by the auditory system; the perception and modeling of loudness; level discrimination; the temporal resolution of the auditory system (the ability to detect changes over time); the perception and modeling of pitch for pure and complex tones; the perception of timbre for steady and time-varying sounds; the perception of space and sound localization; and the mechanisms underlying auditory scene analysis that allow the construction of percepts corresponding to individual sound sources when listening to complex mixtures of sounds.

  5. [Realization of Heart Sound Envelope Extraction Implemented on LabVIEW Based on Hilbert-Huang Transform].

    PubMed

    Tan, Zhixiang; Zhang, Yi; Zeng, Deping; Wang, Hua

    2015-04-01

    This paper presents a heart sound envelope extraction system implemented in LabVIEW and based on the Hilbert-Huang transform (HHT). A sound card was first used to collect the heart sound, and the complete program for signal acquisition, preprocessing, and envelope extraction was then implemented in LabVIEW following HHT theory. Finally, a case study demonstrated that the system can collect heart sounds, preprocess them, and extract the envelope easily. The system preserves and displays the characteristics of the heart sound envelope well, and its program and methods are relevant to other research areas, such as vibration and voice analysis.
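
    The full HHT first decomposes the signal by empirical mode decomposition; the envelope step itself can be sketched with the Hilbert transform alone. Below is a Python sketch on a synthetic amplitude-modulated tone (the sampling rate and signal are assumptions, not the paper's recordings):

```python
import numpy as np
from scipy.signal import hilbert

fs = 2000  # Hz, assumed heart-sound sampling rate
t = np.arange(0, 1.0, 1 / fs)
# synthetic stand-in for a heart sound: a 50 Hz tone with a slow 1 Hz swell
envelope_true = 0.5 * (1 - np.cos(2 * np.pi * 1 * t))
x = envelope_true * np.sin(2 * np.pi * 50 * t)

# the magnitude of the analytic signal is the instantaneous envelope
env = np.abs(hilbert(x))

# compare against the known envelope, ignoring edge transients
err = np.max(np.abs(env[100:-100] - envelope_true[100:-100]))
```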

  6. A Flexible 360-Degree Thermal Sound Source Based on Laser Induced Graphene

    PubMed Central

    Tao, Lu-Qi; Liu, Ying; Ju, Zhen-Yi; Tian, He; Xie, Qian-Yi; Yang, Yi; Ren, Tian-Ling

    2016-01-01

    A flexible sound source is essential in a whole flexible system. It’s hard to integrate a conventional sound source based on a piezoelectric part into a whole flexible system. Moreover, the sound pressure from the back side of a sound source is usually weaker than that from the front side. With the help of direct laser writing (DLW) technology, the fabrication of a flexible 360-degree thermal sound source becomes possible. A 650-nm low-power laser was used to reduce the graphene oxide (GO). The stripped laser induced graphene thermal sound source was then attached to the surface of a cylindrical bottle so that it could emit sound in a 360-degree direction. The sound pressure level and directivity of the sound source were tested, and the results were in good agreement with the theoretical results. Because of its 360-degree sound field, high flexibility, high efficiency, low cost, and good reliability, the 360-degree thermal acoustic sound source will be widely applied in consumer electronics, multi-media systems, and ultrasonic detection and imaging. PMID:28335239

  7. 49 CFR 325.37 - Location and operation of sound level measurement system; highway operations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 5 2011-10-01 2011-10-01 false Location and operation of sound level measurement...; Highway Operations § 325.37 Location and operation of sound level measurement system; highway operations. (a) The microphone of a sound level measurement system that conforms to the rules in § 325.23 of this...

  8. Statistical properties of Chinese phonemic networks

    NASA Astrophysics Data System (ADS)

    Yu, Shuiyuan; Liu, Haitao; Xu, Chunshan

    2011-04-01

    The study of properties of speech sound systems is of great significance in understanding the human cognitive mechanism and the working principles of speech sound systems. Some properties of speech sound systems, such as the listener-oriented feature and the talker-oriented feature, have been unveiled with the statistical study of phonemes in human languages and the research of the interrelations between human articulatory gestures and the corresponding acoustic parameters. With all the phonemes of speech sound systems treated as a coherent whole, our research, which focuses on the dynamic properties of speech sound systems in operation, investigates some statistical parameters of Chinese phoneme networks based on real text and dictionaries. The findings are as follows: phonemic networks have high connectivity degrees and short average distances; the degrees obey normal distribution and the weighted degrees obey power law distribution; vowels enjoy higher priority than consonants in the actual operation of speech sound systems; the phonemic networks have high robustness against targeted attacks and random errors. In addition, for investigating the structural properties of a speech sound system, a statistical study of dictionaries is conducted, which shows the higher frequency of shorter words and syllables and the tendency that the longer a word is, the shorter the syllables composing it are. From these structural properties and dynamic properties one can derive the following conclusion: the static structure of a speech sound system tends to promote communication efficiency and save articulation effort while the dynamic operation of this system gives preference to reliable transmission and easy recognition. In short, a speech sound system is an effective, efficient and reliable communication system optimized in many aspects.
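
    The network measures mentioned above (connectivity degree and average distance) can be illustrated on a toy graph; the adjacency below is hypothetical and stands in for the paper's Chinese phoneme data:

```python
from collections import deque

# toy undirected phoneme co-occurrence network (hypothetical adjacency)
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "e"), ("a", "e")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

# connectivity degree of each node
degrees = {node: len(nbrs) for node, nbrs in adj.items()}

def avg_distance(adj):
    """Average shortest-path length over all ordered node pairs (BFS)."""
    nodes = list(adj)
    total, pairs = 0, 0
    for start in nodes:
        dist = {start: 0}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        for other in nodes:
            if other != start:
                total += dist[other]
                pairs += 1
    return total / pairs
```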

  9. Tuning the cognitive environment: Sound masking with 'natural' sounds in open-plan offices

    NASA Astrophysics Data System (ADS)

    DeLoach, Alana

    Despite the growing popularity of open-plan office design and engineering efforts to achieve acoustical comfort for building occupants, a majority of workers still report dissatisfaction with their workplace environment. Office acoustics influence organizational effectiveness, efficiency, and satisfaction through meeting appropriate requirements for speech privacy and ambient sound levels. Implementing a sound masking system is one tried-and-true method of achieving privacy goals. Although each sound masking system is tuned for its specific environment, the signal itself, random steady-state electronic noise, has remained the same for decades. This work explores how 'natural' sounds may be used as an alternative to the standard masking signal employed so ubiquitously in sound masking systems in the contemporary office environment. As an unobtrusive background sound possessing the appropriate spectral characteristics, this proposed use of 'natural' sounds for masking challenges the convention that masking sounds should be as meaningless as possible. Through the pilot study presented in this work, we hypothesize that 'natural' sounds used as maskers will be as effective at masking distracting background noise as the conventional masking sound, will enhance cognitive functioning, and will increase participant (worker) satisfaction.
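
    The "appropriate spectral characteristics" of natural maskers typically mean energy tilted toward low frequencies. A sketch of shaping white noise toward a 1/f ("pink") spectrum, a rough stand-in for rain- or stream-like maskers (sampling rate, bands, and seed are illustrative assumptions):

```python
import numpy as np

fs = 16000
n = fs * 2
rng = np.random.default_rng(1)
white = rng.standard_normal(n)

# shape white noise toward a 1/f power spectrum in the frequency domain
spec = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n, 1 / fs)
scale = np.ones_like(freqs)
scale[1:] = 1 / np.sqrt(freqs[1:])  # amplitude ~ 1/sqrt(f) -> power ~ 1/f
pink = np.fft.irfft(spec * scale, n)

def band_power(x, lo, hi):
    """Total spectral power of x between lo and hi Hz."""
    s = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(x.size, 1 / fs)
    return s[(f >= lo) & (f < hi)].sum()

low = band_power(pink, 50, 500)
high = band_power(pink, 2000, 8000)
```

The pink masker concentrates energy at low frequencies, whereas white noise puts most of its power in the wide high-frequency band.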

  10. Automated analysis of blood pressure measurements (Korotkov sound)

    NASA Technical Reports Server (NTRS)

    Golden, D. P.; Hoffler, G. W.; Wolthuis, R. A.

    1972-01-01

    Automatic system for noninvasive measurements of arterial blood pressure is described. System uses Korotkov sound processor logic ratios to identify Korotkov sounds. Schematic diagram of system is provided to show components and method of operation.

  11. Second Sound in Systems of One-Dimensional Fermions

    DOE PAGES

    Matveev, K. A.; Andreev, A. V.

    2017-12-27

    We study sound in Galilean invariant systems of one-dimensional fermions. At low temperatures, we find a broad range of frequencies in which in addition to the waves of density there is a second sound corresponding to ballistic propagation of heat in the system. The damping of the second sound mode is weak, provided the frequency is large compared to a relaxation rate that is exponentially small at low temperatures. At lower frequencies the second sound mode is damped, and the propagation of heat is diffusive.

  12. Second Sound in Systems of One-Dimensional Fermions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matveev, K. A.; Andreev, A. V.

    We study sound in Galilean invariant systems of one-dimensional fermions. At low temperatures, we find a broad range of frequencies in which in addition to the waves of density there is a second sound corresponding to ballistic propagation of heat in the system. The damping of the second sound mode is weak, provided the frequency is large compared to a relaxation rate that is exponentially small at low temperatures. At lower frequencies the second sound mode is damped, and the propagation of heat is diffusive.

  13. Second Sound in Systems of One-Dimensional Fermions

    NASA Astrophysics Data System (ADS)

    Matveev, K. A.; Andreev, A. V.

    2017-12-01

    We study sound in Galilean invariant systems of one-dimensional fermions. At low temperatures, we find a broad range of frequencies in which in addition to the waves of density there is a second sound corresponding to the ballistic propagation of heat in the system. The damping of the second sound mode is weak, provided the frequency is large compared to a relaxation rate that is exponentially small at low temperatures. At lower frequencies, the second sound mode is damped, and the propagation of heat is diffusive.

  14. 40 CFR 205.54-2 - Sound data acquisition system.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... meets the “fast” dynamic requirement of a precision sound level meter indicating meter system for the... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Sound data acquisition system. 205.54... data acquisition system. (a) Systems employing tape recorders and graphic level recorders may be...

  15. 49 CFR 325.57 - Location and operation of sound level measurement systems; stationary test.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 5 2012-10-01 2012-10-01 false Location and operation of sound level measurement...; Stationary Test § 325.57 Location and operation of sound level measurement systems; stationary test. (a) The microphone of a sound level measurement system that conforms to the rules in § 325.23 shall be located at a...

  16. 49 CFR 325.57 - Location and operation of sound level measurement systems; stationary test.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 5 2013-10-01 2013-10-01 false Location and operation of sound level measurement...; Stationary Test § 325.57 Location and operation of sound level measurement systems; stationary test. (a) The microphone of a sound level measurement system that conforms to the rules in § 325.23 shall be located at a...

  17. 49 CFR 325.57 - Location and operation of sound level measurement systems; stationary test.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 5 2011-10-01 2011-10-01 false Location and operation of sound level measurement...; Stationary Test § 325.57 Location and operation of sound level measurement systems; stationary test. (a) The microphone of a sound level measurement system that conforms to the rules in § 325.23 shall be located at a...

  18. Intensity-invariant coding in the auditory system.

    PubMed

    Barbour, Dennis L

    2011-11-01

    The auditory system faithfully represents sufficient details from sound sources such that downstream cognitive processes are capable of acting upon this information effectively even in the face of signal uncertainty, degradation or interference. This robust sound source representation leads to an invariance in perception vital for animals to interact effectively with their environment. Due to unique nonlinearities in the cochlea, sound representations early in the auditory system exhibit a large amount of variability as a function of stimulus intensity. In other words, changes in stimulus intensity, such as for sound sources at differing distances, create a unique challenge for the auditory system to encode sounds invariantly across the intensity dimension. This challenge and some strategies available to sensory systems to eliminate intensity as an encoding variable are discussed, with a special emphasis upon sound encoding. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Performance of active feedforward control systems in non-ideal, synthesized diffuse sound fields.

    PubMed

    Misol, Malte; Bloch, Christian; Monner, Hans Peter; Sinapius, Michael

    2014-04-01

    The acoustic performance of passive or active panel structures is usually tested in sound transmission loss facilities. A reverberant sending room, equipped with one or a number of independent sound sources, is used to generate a diffuse sound field excitation which acts as a disturbance source on the structure under investigation. The spatial correlation and coherence of such a synthesized non-ideal diffuse-sound-field excitation, however, might deviate significantly from the ideal case. This has consequences for the operation of an active feedforward control system which heavily relies on the acquisition of coherent disturbance source information. This work, therefore, evaluates the spatial correlation and coherence of ideal and non-ideal diffuse sound fields and considers the implications on the performance of a feedforward control system. The system under consideration is an aircraft-typical double panel system, equipped with an active sidewall panel (lining), which is realized in a transmission loss facility. Experimental results for different numbers of sound sources in the reverberation room are compared to simulation results of a comparable generic double panel system excited by an ideal diffuse sound field. It is shown that the number of statistically independent noise sources acting on the primary structure of the double panel system depends not only on the type of diffuse sound field but also on the sample lengths of the processed signals. The experimental results show that the number of reference sensors required for a defined control performance exhibits an inverse relationship to control filter length.
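
    The spatial coherence of an ideal diffuse field between two points a distance d apart follows a sinc law, which is the benchmark against which synthesized sound fields like the one above are compared. A small sketch:

```python
import numpy as np

def diffuse_coherence(freq_hz, spacing_m, c=343.0):
    """Magnitude-squared coherence of an ideal diffuse sound field between
    two points spaced d apart: sinc^2(k d), with k = 2*pi*f/c."""
    kd = 2 * np.pi * freq_hz * spacing_m / c
    return np.sinc(kd / np.pi) ** 2  # np.sinc(x) = sin(pi x) / (pi x)
```

Coherence is perfect at zero frequency and decays with both frequency and sensor spacing, which is why closely spaced reference sensors capture more coherent disturbance information.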

  20. [Synchronous playing and acquiring of heart sounds and electrocardiogram based on LabVIEW].

    PubMed

    Dan, Chunmei; He, Wei; Zhou, Jing; Que, Xiaosheng

    2008-12-01

    This paper describes a comprehensive system that acquires heart sounds and the electrocardiogram (ECG) in parallel and synchronizes their display and playback, so that auscultation and phonocardiogram inspection can be carried out together. The hardware system, with a C8051F340 microcontroller at its core, acquires the heart sound and ECG synchronously and then sends them to their respective indicators. Heart sounds are displayed and played simultaneously by controlling the moments of writing to the indicator and the sound output device. In clinical testing, heart sounds were successfully located with the ECG and played in real time.

  1. Controlling sound radiation through an opening with secondary loudspeakers along its boundaries.

    PubMed

    Wang, Shuping; Tao, Jiancheng; Qiu, Xiaojun

    2017-10-17

    We propose a virtual sound barrier system that blocks sound transmission through openings without affecting access, light and air circulation. The proposed system applies active control technique to cancel sound transmission with a double layered loudspeaker array at the edge of the opening. Unlike traditional transparent glass windows, recently invented double-glazed ventilation windows and planar active sound barriers or any other metamaterials designed to reduce sound transmission, secondary loudspeakers are put only along the boundaries of the opening, which provides the possibility to make it invisible. Simulation and experimental results demonstrate its feasibility for broadband sound control, especially for low frequency sound which is usually hard to attenuate with existing methods.
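
    Active cancellation at the opening relies on the secondary loudspeakers producing an inverted replica of the transmitted sound; the residual level is set by how accurately gain and phase are matched. A sketch with a single low-frequency tone (the amplitudes and mismatch values are illustrative assumptions, not the paper's measurements):

```python
import numpy as np

fs = 8000
t = np.arange(0, 0.5, 1 / fs)
primary = np.sin(2 * np.pi * 100 * t)  # tone leaking through the opening

def rms(x):
    return np.sqrt(np.mean(x ** 2))

# ideal secondary (anti-noise) signal: equal amplitude, opposite phase
residual_ideal = primary + (-primary)

# a 5% amplitude and 5-degree phase mismatch leaves a finite residual
secondary = -1.05 * np.sin(2 * np.pi * 100 * t + np.deg2rad(5))
attenuation_db = 20 * np.log10(rms(primary + secondary) / rms(primary))
```

Even this modest mismatch limits the attenuation to roughly 20 dB, which is why low-frequency control (where matching is easier) works best.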

  2. Development of Virtual Auditory Interfaces

    DTIC Science & Technology

    2001-03-01

    [Abstract garbled by two-column text extraction. Recoverable fragments mention using real-world experience as a reference for comparing sound in the VE; lessons from the entertainment industry; a first evaluated system using a portable Sony TCD-D8 DAT recorder; a data set including sound recordings and sound measurements; and a system called "Fantasound" which wrapped musical compositions and sound.]

  3. Calibration of Clinical Audio Recording and Analysis Systems for Sound Intensity Measurement.

    PubMed

    Maryn, Youri; Zarowski, Andrzej

    2015-11-01

    Sound intensity is an important acoustic feature of voice/speech signals. Yet recordings are performed with different microphone, amplifier, and computer configurations, and it is therefore crucial to calibrate sound intensity measures of clinical audio recording and analysis systems on the basis of output of a sound-level meter. This study was designed to evaluate feasibility, validity, and accuracy of calibration methods, including audiometric speech noise signals and human voice signals under typical speech conditions. Calibration consisted of 3 comparisons between data from 29 measurement microphone-and-computer systems and data from the sound-level meter: signal-specific comparison with audiometric speech noise at 5 levels, signal-specific comparison with natural voice at 3 levels, and cross-signal comparison with natural voice at 3 levels. Intensity measures from recording systems were then linearly converted into calibrated data on the basis of these comparisons, and validity and accuracy of calibrated sound intensity were investigated. Very strong correlations and quasisimilarity were found between calibrated data and sound-level meter data across calibration methods and recording systems. Calibration of clinical sound intensity measures according to this method is feasible, valid, accurate, and representative for a heterogeneous set of microphones and data acquisition systems in real-life circumstances with distinct noise contexts.
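
    The linear conversion of recording-system readings into calibrated sound intensity can be sketched as a least-squares fit against sound-level-meter data; the paired readings below are hypothetical, not the study's measurements:

```python
import numpy as np

# hypothetical paired readings: recording-system level (dB, uncalibrated)
# versus sound-level meter (dB SPL) at several calibration levels
system_db = np.array([-30.0, -20.0, -10.0, 0.0])
slm_db = np.array([60.2, 70.1, 80.3, 89.9])

# least-squares linear conversion: calibrated = slope * measured + offset
slope, offset = np.polyfit(system_db, slm_db, 1)

def calibrate(measured_db):
    """Map an uncalibrated system reading onto the sound-level-meter scale."""
    return slope * measured_db + offset
```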

  4. 33 CFR 62.47 - Sound signals.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...

  5. 33 CFR 62.47 - Sound signals.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...

  6. 33 CFR 62.47 - Sound signals.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 1 2012-07-01 2012-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...

  7. 33 CFR 62.47 - Sound signals.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 1 2014-07-01 2014-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...

  8. 33 CFR 62.47 - Sound signals.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 1 2013-07-01 2013-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...

  9. Conversion of environmental data to a digital-spatial database, Puget Sound area, Washington

    USGS Publications Warehouse

    Uhrich, M.A.; McGrath, T.S.

    1997-01-01

    Data and maps from the Puget Sound Environmental Atlas, compiled for the U.S. Environmental Protection Agency, the Puget Sound Water Quality Authority, and the U.S. Army Corps of Engineers, have been converted into a digital-spatial database using a geographic information system. Environmental data for the Puget Sound area, collected from sources other than the Puget Sound Environmental Atlas by different Federal, State, and local agencies, also have been converted into this digital-spatial database. Background on the geographic-information-system planning process, the design and implementation of the geographic-information-system database, and the reasons for conversion to this digital-spatial database are included in this report. The Puget Sound Environmental Atlas data layers include information about seabird nesting areas, eelgrass and kelp habitat, marine mammal and fish areas, and shellfish resources and bed certification. Data layers from sources other than the Puget Sound Environmental Atlas include the Puget Sound shoreline, the water-body system, shellfish growing areas, recreational shellfish beaches, sewage-treatment outfalls, upland hydrography, watershed and political boundaries, and geographic names. The sources of data, descriptions of the data layers, and the steps and errors of processing associated with conversion to a digital-spatial database used in development of the Puget Sound Geographic Information System also are included in this report. The appendixes contain data dictionaries for each of the resource layers and error values for the conversion of Puget Sound Environmental Atlas data.

  10. Computer-aided auscultation learning system for nursing technique instruction.

    PubMed

    Hou, Chun-Ju; Chen, Yen-Ting; Hu, Ling-Chen; Chuang, Chih-Chieh; Chiu, Yu-Hsien; Tsai, Ming-Shih

    2008-01-01

    Pulmonary auscultation is a physical assessment skill learned by nursing students for examining the respiratory system. Generally, a mannequin equipped with a sound simulator is used to teach auscultation techniques to groups via classroom demonstration. However, nursing students cannot readily duplicate this learning environment for self-study. Advances in electronic and digital signal processing technologies make it feasible to simulate this learning environment. This study aims to develop a computer-aided auscultation learning system for assisting teachers and nursing students in auscultation teaching and learning. The system provides teachers with signal recording and processing of lung sounds and immediate playback of lung sounds for students. A graphical user interface allows teachers to control the measuring device, draw lung sound waveforms, highlight lung sound segments of interest, and include descriptive text. Effects on learning lung sound auscultation were evaluated to verify the feasibility of the system. Fifteen nursing students voluntarily participated in the repeated experiment. The results of a paired t test showed that the auscultative abilities of the students were significantly improved by using the computer-aided auscultation learning system.
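
    The paired t test used to evaluate learning gains can be reproduced directly; the pre/post scores below are hypothetical, not the study's data:

```python
import math

# hypothetical pre/post auscultation test scores for 15 students
pre  = [62, 70, 55, 68, 74, 60, 58, 65, 72, 66, 59, 63, 69, 61, 57]
post = [75, 78, 70, 80, 82, 72, 69, 77, 85, 76, 70, 74, 81, 73, 68]

diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
mean_d = sum(diffs) / n
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
t_stat = mean_d / math.sqrt(var_d / n)

# two-sided 5% critical value for n - 1 = 14 degrees of freedom is about 2.145
significant = abs(t_stat) > 2.145
```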

  11. Possibilities of psychoacoustics to determine sound quality

    NASA Astrophysics Data System (ADS)

    Genuit, Klaus

    For some years, acoustic engineers have increasingly become aware of the importance of analyzing and minimizing noise problems not only with regard to the A-weighted sound pressure level, but also with regard to designed sound quality. It is relatively easy to determine the A-weighted SPL according to international standards. However, the objective evaluation of subjectively perceived sound quality, taking into account psychoacoustic parameters such as loudness, roughness, fluctuation strength, and sharpness, is more difficult. On the one hand, the psychoacoustic measurement procedures known so far have not yet been standardized. On the other hand, they have only been tested in laboratories by means of listening tests in the free field with a single sound source and simple signals. Therefore, the results cannot readily be transferred to complex sound situations with several spatially distributed sound sources. Due to the directional hearing and selectivity of human hearing, individual sound events can be selected from among many. As early as the late 1970s, a new binaural Artificial Head Measurement System was developed that met the measurement-technology requirements of the automobile industry. The first industrial application of the Artificial Head Measurement System was in 1981. Since then the system has been further developed, particularly through the cooperation between HEAD acoustics and Mercedes-Benz. In addition to a calibratable Artificial Head Measurement System that is compatible with standard measurement technologies and has transfer characteristics comparable to human hearing, a Binaural Analysis System is now also available. This system permits the analysis of binaural signals with respect to physical and psychoacoustic measures. Moreover, the signals to be analyzed can be simultaneously monitored through headphones and manipulated in the time and frequency domains so that the signal components responsible for noise annoyance can be found. Especially in complex sound situations with several spatially distributed sound sources, standard one-channel measurement methods cannot adequately determine the sound quality, acoustic comfort, or annoyance of sound events.

  12. Sound synthesis and evaluation of interactive footsteps and environmental sounds rendering for virtual reality applications.

    PubMed

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-09-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based on physical models. Two experiments were conducted. In the first experiment, the ability of subjects to recognize the surface they were exposed to was assessed. In the second experiment, the sound synthesis engine was enhanced with environmental sounds. Results show that, in some conditions, adding a soundscape significantly improves the recognition of the simulated environment.
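
    Estimating a GRF-like control envelope from a microphone signal and using it to drive a noise-based synthesis engine can be sketched as rectify-and-smooth followed by amplitude control. All signals below are synthetic assumptions, not the study's recordings:

```python
import numpy as np

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(2)

# stand-in for a microphone capture of two footstep impacts
impacts = np.zeros_like(t)
for onset in (0.2, 0.6):
    i = int(onset * fs)
    impacts[i:i + 400] = np.exp(-np.linspace(0, 6, 400))  # decaying burst
mic = impacts * rng.standard_normal(t.size)

# crude GRF-like envelope: rectify, then smooth with a 25 ms moving average
window = np.ones(200) / 200
envelope = np.convolve(np.abs(mic), window, mode="same")

# the envelope then drives a noise-based engine (a gravel-like texture here)
synth = envelope * rng.standard_normal(t.size)
```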

  13. An automated computerized auscultation and diagnostic system for pulmonary diseases.

    PubMed

    Abbas, Ali; Fahim, Atef

    2010-12-01

    Respiratory sounds are of significance as they provide valuable information on the health of the respiratory system. Sounds emanating from the respiratory system are uneven, and vary significantly from one individual to another and for the same individual over time. In and of themselves they are not a direct proof of an ailment, but rather an inference that one exists. Auscultation diagnosis is an art/skill that is acquired and honed by practice; hence it is common to seek confirmation using invasive and potentially harmful imaging diagnosis techniques like X-rays. This research focuses on developing an automated auscultation diagnostic system that overcomes the limitations inherent in traditional auscultation techniques. The system uses a front end sound signal filtering module that uses adaptive Neural Networks (NN) noise cancellation to eliminate spurious sound signals like those from the heart, intestine, and ambient noise. To date, the core diagnosis module is capable of identifying lung sounds from non-lung sounds, normal lung sounds from abnormal ones, and identifying wheezes from crackles as indicators of different ailments.
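
    The adaptive noise-cancellation front end can be sketched with a basic LMS filter; the paper uses neural-network cancellation, so LMS is shown here only as the classic stand-in, with a hypothetical 3-tap noise path and synthetic signals:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
lung = np.sin(2 * np.pi * 0.01 * np.arange(n))  # stand-in for the lung sound
noise_ref = rng.standard_normal(n)              # reference sensor: noise only
# noise reaches the chest microphone through an unknown path (hypothetical taps)
path = np.array([0.6, 0.3, 0.1])
noise_at_mic = np.convolve(noise_ref, path)[:n]
mic = lung + noise_at_mic

# LMS adaptive filter: predict the noise from the reference and subtract it
taps, mu = 8, 0.001
w = np.zeros(taps)
out = np.zeros(n)
for i in range(taps, n):
    x = noise_ref[i - taps + 1:i + 1][::-1]  # newest reference sample first
    e = mic[i] - w @ x                        # error = cleaned-signal estimate
    w += 2 * mu * e * x                       # LMS weight update
    out[i] = e

# after convergence the residual tracks the lung component
err = np.mean((out[-5000:] - lung[-5000:]) ** 2)
```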

  14. Hyperspectral Remote Sensing of Atmospheric Profiles from Satellites and Aircraft

    NASA Technical Reports Server (NTRS)

    Smith, W. L.; Zhou, D. K.; Harrison, F. W.; Revercomb, H. E.; Larar, A. M.; Huang, H. L.; Huang, B.

    2001-01-01

    A future hyperspectral resolution remote imaging and sounding system, called the GIFTS (Geostationary Imaging Fourier Transform Spectrometer), is described. An airborne system, which produces the type of hyperspectral resolution sounding data to be achieved with the GIFTS, has been flown on high altitude aircraft. Results from simulations and from the airborne measurements are presented to demonstrate the revolutionary remote sounding capabilities to be realized with future satellite hyperspectral remote imaging/sounding systems.

  15. Development of Prototype of Whistling Sound Counter based on Piezoelectric Bone Conduction

    NASA Astrophysics Data System (ADS)

    Mori, Mikio; Ogihara, Mitsuhiro; Kyuu, Ten; Taniguchi, Shuji; Kato, Shozo; Araki, Chikahiro

    Recently, some professional whistlers have set up music schools that teach musical whistling. As in singing, in musical whistling the sound should not break, even when the whistle is sustained for more than 3 min. To this end, it is advisable to practice the “Pii” sound, whistling it continuously 100 times at the same pitch. When practicing alone, however, a whistler finds it difficult to count his or her own whistling sounds. In this paper, we propose a whistling sound counter based on piezoelectric bone conduction. The system consists of five parts; the gain of its amplifier section is variable, as is the center frequency (f0) of its band-pass filter (BPF) section. We developed a prototype of the system and tested it by simultaneously counting the whistling sounds of nine people. The prototype performed well in a noisy environment. We also propose an examination system for awarding grades in musical whistling, which administers the musical-whistling license examination on a personal computer; the proposed system can be used to administer the 5th-grade exam.
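
    Counting whistle repetitions reduces to detecting bursts of band energy. A sketch using short-time energy on a synthetic recording (the real device also band-passes around the whistle pitch via its BPF section; the threshold and signal parameters here are assumptions):

```python
import numpy as np

fs = 8000
rng = np.random.default_rng(4)

# synthetic recording: three "Pii" whistle bursts (1 kHz tone) with pauses
t_burst = np.arange(0, 0.3, 1 / fs)
burst = np.sin(2 * np.pi * 1000 * t_burst)   # 0.3 s whistle
pause = np.zeros(int(0.2 * fs))              # 0.2 s silence
x = np.concatenate([np.concatenate([burst, pause]) for _ in range(3)])
x = x + 0.05 * rng.standard_normal(x.size)   # background noise

# short-time energy with a threshold; each rising edge is one whistle
frame = int(0.02 * fs)  # 20 ms frames
energy = np.array([np.mean(x[i:i + frame] ** 2)
                   for i in range(0, x.size - frame + 1, frame)])
active = energy > 0.1
count = int(active[0]) + int(np.sum(active[1:] & ~active[:-1]))
```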

  16. [A focused sound field measurement system by LabVIEW].

    PubMed

    Jiang, Zhan; Bai, Jingfeng; Yu, Ying

    2014-05-01

In this paper, according to the requirements of focused sound field measurement, a focused sound field measurement system was established based on the LabVIEW virtual instrument platform. The system can automatically search for the focus position of the sound field and adjust the scanning path according to the size of the focal region. Three-dimensional sound field scanning time was reduced from 888 hours with a uniform step to 9.25 hours with a variable step, improving the efficiency of focused sound field measurement. There is a certain deviation between the measurement results and the theoretical calculations: the difference rate of the focal-plane -6 dB width was 3.691%, and that of the beam-axis -6 dB length was 12.937%.
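    The abstract does not give the scanning algorithm in detail; as an illustrative sketch only, the following fragment shows the general idea of a variable-step scan: locate the focus with a coarse pass, then restrict the fine pass to the neighborhood of the -6 dB (half-amplitude) focal region. The 1-D Gaussian field model, step sizes, and all names below are assumptions, not taken from the paper.

```python
import math

def pressure(x, focus=0.0, width=2.0):
    """Model focal field: Gaussian pressure amplitude (arbitrary units)."""
    return math.exp(-((x - focus) / width) ** 2)

def frange(lo, hi, step):
    """Simple float range helper."""
    return [lo + i * step for i in range(int((hi - lo) / step))]

def variable_step_scan(lo, hi, coarse=1.0, fine=0.05):
    """Coarse pass to find the focus, then a fine pass restricted to the
    -6 dB (half-amplitude) focal region found by the coarse pass."""
    xs = frange(lo, hi, coarse)
    ps = [pressure(x) for x in xs]
    half = [x for x, p in zip(xs, ps) if p >= max(ps) / 2]
    xf = frange(min(half) - coarse, max(half) + coarse, fine)
    return xf, [pressure(x) for x in xf]

xf, pf = variable_step_scan(-20.0, 20.0)
best = xf[pf.index(max(pf))]   # fine-scan estimate of the focus position
```

    The fine pass covers only a small fraction of the region a uniform fine scan would visit, which is the source of the large time saving reported in the abstract.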

  17. 49 CFR 325.25 - Calibration of measurement systems.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Standard Institute Standard Methods for Measurements of Sound Pressure Levels (ANSI S1.13-1971) for field... sound level measurement system must be calibrated and appropriately adjusted at one or more frequencies... 5-15 minutes thereafter, until it has been determined that the sound level measurement system has...

  18. 49 CFR 325.25 - Calibration of measurement systems.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Standard Institute Standard Methods for Measurements of Sound Pressure Levels (ANSI S1.13-1971) for field... sound level measurement system must be calibrated and appropriately adjusted at one or more frequencies... 5-15 minutes thereafter, until it has been determined that the sound level measurement system has...

  19. 33 CFR 164.43 - Automatic Identification System Shipborne Equipment-Prince William Sound.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false Automatic Identification System Shipborne Equipment-Prince William Sound. 164.43 Section 164.43 Navigation and Navigable Waters COAST GUARD... Automatic Identification System Shipborne Equipment—Prince William Sound. (a) Until December 31, 2004, each...

  20. 33 CFR 164.43 - Automatic Identification System Shipborne Equipment-Prince William Sound.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false Automatic Identification System Shipborne Equipment-Prince William Sound. 164.43 Section 164.43 Navigation and Navigable Waters COAST GUARD... Automatic Identification System Shipborne Equipment—Prince William Sound. (a) Until December 31, 2004, each...

  1. 33 CFR 164.43 - Automatic Identification System Shipborne Equipment-Prince William Sound.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false Automatic Identification System Shipborne Equipment-Prince William Sound. 164.43 Section 164.43 Navigation and Navigable Waters COAST GUARD... Automatic Identification System Shipborne Equipment—Prince William Sound. (a) Until December 31, 2004, each...

  2. A Methodology to Objectively Assess the Performance of Sound Field Amplification Systems Demonstrated Using 50 Physical Simulations of Classroom Conditions

    PubMed Central

    Dance, Stephen; Backus, Bradford; Morales, Lorenzo

    2018-01-01

    Introduction: The effect of a sound reinforcement system, in terms of speech intelligibility, has been systematically determined under realistic conditions. Different combinations of ambient and reverberant conditions representative of a classroom environment have been investigated. Materials and Methods: By comparing the measured speech transmission index metric with and without the system in the same space under different room acoustics conditions, it was possible to determine when the system was most effective. A new simple criterion, equivalent noise reduction (ENR), was introduced to determine the effectiveness of the sound reinforcement system which can be used to predict the speech transmission index based on the ambient sound pressure and reverberation time with and without amplification. Results: This criterion had a correlation, R2 > 0.97. It was found that sound reinforcement provided no benefit if the competing noise level was less than 40 dBA. However, the maximum benefit of such a system was equivalent to a 7.7 dBA noise reduction. Conclusion: Using the ENR model, it would be possible to determine the suitability of implementing sound reinforcement systems in any room, thus providing a tool to determine if natural acoustic treatment or sound field amplification would be of most benefit to the occupants of any particular room. PMID:29785972

  3. A methodology to objectively assess the performance of sound field amplification systems demonstrated using 50 physical simulations of classroom conditions.

    PubMed

    Dance, Stephen; Backus, Bradford; Morales, Lorenzo

    2018-01-01

The effect of a sound reinforcement system, in terms of speech intelligibility, has been systematically determined under realistic conditions. Different combinations of ambient and reverberant conditions representative of a classroom environment have been investigated. By comparing the measured speech transmission index metric with and without the system in the same space under different room acoustics conditions, it was possible to determine when the system was most effective. A new simple criterion, equivalent noise reduction (ENR), was introduced to determine the effectiveness of the sound reinforcement system which can be used to predict the speech transmission index based on the ambient sound pressure and reverberation time with and without amplification. This criterion had a correlation, R² > 0.97. It was found that sound reinforcement provided no benefit if the competing noise level was less than 40 dBA. However, the maximum benefit of such a system was equivalent to a 7.7 dBA noise reduction. Using the ENR model, it would be possible to determine the suitability of implementing sound reinforcement systems in any room, thus providing a tool to determine if natural acoustic treatment or sound field amplification would be of most benefit to the occupants of any particular room.

  4. Virtual targeting in three-dimensional space with sound and light interference

    NASA Astrophysics Data System (ADS)

    Chua, Florence B.; DeMarco, Robert M.; Bergen, Michael T.; Short, Kenneth R.; Servatius, Richard J.

    2006-05-01

Law enforcement and the military are critically concerned with the targeting and firing accuracy of opponents. Stimuli that impede an opponent's targeting and firing accuracy can be incorporated into defense systems. An automated virtual firing range was developed to assess human targeting accuracy under conditions of sound and light interference, while avoiding the dangers associated with live fire. This system can quantify the effects of sound and light interference on targeting and firing accuracy in three dimensions. This was achieved by developing a hardware and software system that presents the subject with a sound or light target, preceded by sound or light interference. Sony Xplod™ 4-way speakers present the sound interference and sound targets. The Martin® MiniMAC™ Profile operates as a source of light interference, while a red laser light serves as a target. A tracking system was created to monitor toy-gun movement and firing in three-dimensional space. Data are collected via the Ascension® Flock of Birds™ tracking system and a custom National Instruments® LabVIEW™ 7.0 program that monitors gun movement and firing. A test protocol examined the system parameters. Results confirm that the system enables tracking of virtual shots from a fired simulation gun to determine shot accuracy and location in three dimensions.

  5. Evaluation of a low-cost 3D sound system for immersive virtual reality training systems.

    PubMed

    Doerr, Kai-Uwe; Rademacher, Holger; Huesgen, Silke; Kubbat, Wolfgang

    2007-01-01

    Since Head Mounted Displays (HMD), datagloves, tracking systems, and powerful computer graphics resources are nowadays in an affordable price range, the usage of PC-based "Virtual Training Systems" becomes very attractive. However, due to the limited field of view of HMD devices, additional modalities have to be provided to benefit from 3D environments. A 3D sound simulation can improve the capabilities of VR systems dramatically. Unfortunately, realistic 3D sound simulations are expensive and demand a tremendous amount of computational power to calculate reverberation, occlusion, and obstruction effects. To use 3D sound in a PC-based training system as a way to direct and guide trainees to observe specific events in 3D space, a cheaper alternative has to be provided, so that a broader range of applications can take advantage of this modality. To address this issue, we focus in this paper on the evaluation of a low-cost 3D sound simulation that is capable of providing traceable 3D sound events. We describe our experimental system setup using conventional stereo headsets in combination with a tracked HMD device and present our results with regard to precision, speed, and used signal types for localizing simulated sound events in a virtual training environment.

  6. Peripheral mechanisms for vocal production in birds - differences and similarities to human speech and singing.

    PubMed

    Riede, Tobias; Goller, Franz

    2010-10-01

    Song production in songbirds is a model system for studying learned vocal behavior. As in humans, bird phonation involves three main motor systems (respiration, vocal organ and vocal tract). The avian respiratory mechanism uses pressure regulation in air sacs to ventilate a rigid lung. In songbirds sound is generated with two independently controlled sound sources, which reside in a uniquely avian vocal organ, the syrinx. However, the physical sound generation mechanism in the syrinx shows strong analogies to that in the human larynx, such that both can be characterized as myoelastic-aerodynamic sound sources. Similarities include active adduction and abduction, oscillating tissue masses which modulate flow rate through the organ and a layered structure of the oscillating tissue masses giving rise to complex viscoelastic properties. Differences in the functional morphology of the sound producing system between birds and humans require specific motor control patterns. The songbird vocal apparatus is adapted for high speed, suggesting that temporal patterns and fast modulation of sound features are important in acoustic communication. Rapid respiratory patterns determine the coarse temporal structure of song and maintain gas exchange even during very long songs. The respiratory system also contributes to the fine control of airflow. Muscular control of the vocal organ regulates airflow and acoustic features. The upper vocal tract of birds filters the sounds generated in the syrinx, and filter properties are actively adjusted. Nonlinear source-filter interactions may also play a role. The unique morphology and biomechanical system for sound production in birds presents an interesting model for exploring parallels in control mechanisms that give rise to highly convergent physical patterns of sound generation. More comparative work should provide a rich source for our understanding of the evolution of complex sound producing systems. Copyright © 2009 Elsevier Inc. 
All rights reserved.

  7. Microphone array measurement system for analysis of directional and spatial variations of sound fields.

    PubMed

    Gover, Bradford N; Ryan, James G; Stinson, Michael R

    2002-11-01

    A measurement system has been developed that is capable of analyzing the directional and spatial variations in a reverberant sound field. A spherical, 32-element array of microphones is used to generate a narrow beam that is steered in 60 directions. Using an omnidirectional loudspeaker as excitation, the sound pressure arriving from each steering direction is measured as a function of time, in the form of pressure impulse responses. By subsequent analysis of these responses, the variation of arriving energy with direction is studied. The directional diffusion and directivity index of the arriving sound can be computed, as can the energy decay rate in each direction. An analysis of the 32 microphone responses themselves allows computation of the point-to-point variation of reverberation time and of sound pressure level, as well as the spatial cross-correlation coefficient, over the extent of the array. The system has been validated in simple sound fields in an anechoic chamber and in a reverberation chamber. The system characterizes these sound fields as expected, both quantitatively from the measures and qualitatively from plots of the arriving energy versus direction. It is anticipated that the system will be of value in evaluating the directional distribution of arriving energy and the degree and diffuseness of sound fields in rooms.
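    The array processing behind such a steered narrow beam is not specified in the abstract; the simplest such technique, a delay-and-sum beamformer, can be sketched as follows. The three-element linear array, sample rate, and impulse test signal below are illustrative assumptions, not the paper's 32-element spherical array.

```python
import math

C = 343.0     # speed of sound, m/s
FS = 48000    # sample rate, Hz

def steering_delays(mics, direction):
    """Per-microphone delay in samples for a far-field plane wave
    arriving from the unit vector `direction`."""
    return [(x * direction[0] + y * direction[1] + z * direction[2]) / C * FS
            for (x, y, z) in mics]

def delay_and_sum(signals, delays):
    """Average the channels after advancing each by its steering delay;
    signals arriving from the steered direction add coherently."""
    n = len(signals[0])
    out = [0.0] * n
    for sig, d in zip(signals, delays):
        shift = int(round(d))
        for i in range(n):
            if 0 <= i + shift < n:
                out[i] += sig[i + shift]
    return [v / len(signals) for v in out]

# Toy example: three mics on the x-axis, an impulse arriving along +x.
mics = [(0.0, 0.0, 0.0), (0.3, 0.0, 0.0), (0.6, 0.0, 0.0)]
N = 1024
sigs = []
for d in steering_delays(mics, (1.0, 0.0, 0.0)):
    s = [0.0] * N
    s[500 + int(round(d))] = 1.0   # impulse, delayed per microphone
    sigs.append(s)

on_beam = delay_and_sum(sigs, steering_delays(mics, (1.0, 0.0, 0.0)))
off_beam = delay_and_sum(sigs, steering_delays(mics, (-1.0, 0.0, 0.0)))
```

    Steered at the true arrival direction, the impulses align and the averaged peak reaches 1.0; steered the opposite way, they scatter in time and the peak drops to about one third, which is the directional discrimination the measurement system exploits.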

  8. Verification of a Proposed Clinical Electroacoustic Test Protocol for Personal Digital Modulation Receivers Coupled to Cochlear Implant Sound Processors.

    PubMed

    Nair, Erika L; Sousa, Rhonda; Wannagot, Shannon

Guidelines established by the AAA currently recommend behavioral testing when fitting frequency modulated (FM) systems to individuals with cochlear implants (CIs). A protocol for completing electroacoustic measures has not yet been validated for personal FM systems or digital modulation (DM) systems coupled to CI sound processors. In response, some professionals have used or altered the AAA electroacoustic verification steps for fitting FM systems to hearing aids when fitting FM systems to CI sound processors. More recently, steps were outlined in a proposed protocol. The purpose of this research is to review and compare the electroacoustic test measures outlined in a 2013 article by Schafer and colleagues in the Journal of the American Academy of Audiology, titled "A Proposed Electroacoustic Test Protocol for Personal FM Receivers Coupled to Cochlear Implant Sound Processors," to the AAA electroacoustic verification steps for fitting FM systems to hearing aids when fitting DM systems to CI users. Electroacoustic measures were conducted on 71 CI sound processors and Phonak Roger DM systems using a proposed protocol and an adapted AAA protocol. Phonak's recommended default receiver gain setting was used for each CI sound processor manufacturer and adjusted if necessary to achieve transparency. Electroacoustic measures were conducted on Cochlear and Advanced Bionics (AB) sound processors. In this study, 28 Cochlear Nucleus 5/CP810 sound processors, 26 Cochlear Nucleus 6/CP910 sound processors, and 17 AB Naida CI Q70 sound processors were coupled in various combinations to Phonak Roger DM dedicated receivers (25 Phonak Roger 14 receivers, the dedicated Cochlear receiver, and 9 Phonak Roger 17 receivers, the dedicated AB receiver) and 20 Phonak Roger Inspiro transmitters.
Employing both the AAA and the Schafer et al protocols, electroacoustic measurements were conducted with the Audioscan Verifit in a clinical setting on 71 CI sound processors and Phonak Roger DM systems to determine transparency and verify FM advantage, comparing speech inputs (65 dB SPL) in an effort to achieve equal outputs. If transparency was not achieved at Phonak's recommended default receiver gain, adjustments were made to the receiver gain. The integrity of the signal was monitored with the appropriate manufacturer's monitor earphones. Using the AAA hearing aid protocol, 50 of the 71 CI sound processors achieved transparency, and 59 of the 71 CI sound processors achieved transparency when using the proposed protocol at Phonak's recommended default receiver gain. After the receiver gain was adjusted, 3 of 21 CI sound processors still did not meet transparency using the AAA protocol, and 2 of 12 CI sound processors still did not meet transparency using the Schafer et al proposed protocol. Both protocols were shown to be effective in taking reliable electroacoustic measurements and demonstrate transparency. Both protocols are felt to be clinically feasible and to address the needs of populations that are unable to reliably report regarding the integrity of their personal DM systems. American Academy of Audiology

  9. 49 CFR 325.23 - Type of measurement systems which may be used.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... may be used. The sound level measurement system must meet or exceed the requirements of American National Standard Specification for Sound Level Meters (ANSI S1.4-1971), approved April 27, 1971, issued by..., New York, New York, 10018. (a) A Type 1 sound level meter; (b) A Type 2 sound level meter; or (c) A...

  10. Comparison of sound speed measurements on two different ultrasound tomography devices

    NASA Astrophysics Data System (ADS)

    Sak, Mark; Duric, Neb; Littrup, Peter; Bey-Knight, Lisa; Sherman, Mark; Gierach, Gretchen; Malyarenko, Antonina

    2014-03-01

Ultrasound tomography (UST) employs sound waves to produce three-dimensional images of breast tissue and precisely measures the attenuation of sound speed secondary to breast tissue composition. High breast density is a strong breast cancer risk factor, and sound speed is directly proportional to breast density. UST provides a quantitative measure of breast density based on three-dimensional imaging without compression, thereby overcoming the shortcomings of many other imaging modalities. The quantitative nature of the UST breast density measures is tied to an external standard, so sound speed measurement in breast tissue should be independent of specific hardware. The work presented here compares breast sound speed measurements obtained with two different UST devices. The Computerized Ultrasound Risk Evaluation (CURE) system located at the Karmanos Cancer Institute in Detroit, Michigan was recently replaced with the SoftVue ultrasound tomographic device. Ongoing clinical trials have used images generated from both sets of hardware, so maintaining consistency in sound speed measurements is important. During an overlap period when both systems were in the same exam room, a total of 12 patients had one or both of their breasts imaged on both systems on the same day. There were 22 sound speed scans analyzed from each system, and the average breast sound speeds were compared. Images were either reconstructed using saved raw data (for both CURE and SoftVue) or were created during the image acquisition (saved in DICOM format for SoftVue scans only). The sound speed measurements from each system were strongly and positively correlated with each other. The average difference in sound speed between the two sets of data was on the order of 1-2 m/s, and this result was not statistically significant. The only sets of images that showed a statistical difference were the DICOM images created during the SoftVue scan compared to the SoftVue images reconstructed from the raw data.
However, the discrepancy between the sound speed values could be easily handled by uniformly increasing the DICOM sound speed by approximately 0.5 m/s. These results suggest that there is no fundamental difference in sound speed measurement for the two systems and support combining data generated with these instruments in future studies.

  11. Portable system for auscultation and lung sound analysis.

    PubMed

    Nabiev, Rustam; Glazova, Anna; Olyinik, Valery; Makarenkova, Anastasiia; Makarenkov, Anatolii; Rakhimov, Abdulvosid; Felländer-Tsai, Li

    2014-01-01

A portable system for auscultation and lung sound analysis has been developed, comprising an original electronic stethoscope coupled with mobile devices and special algorithms for the automated analysis of pulmonary sound signals. The developed system is intended for monitoring the health status of patients with various pulmonary diseases.

  12. The impact of sound-field systems on learning and attention in elementary school classrooms.

    PubMed

    Dockrell, Julie E; Shield, Bridget

    2012-08-01

The authors evaluated the installation and use of sound-field systems to investigate the impact of these systems on teaching and learning in elementary school classrooms. The evaluation included acoustic surveys of classrooms, questionnaire surveys of students and teachers, and experimental testing of students with and without the use of sound-field systems. In this article, the authors report students' perceptions of classroom environments and objective data evaluating change in performance on cognitive and academic assessments with amplification over a 6-month period. Teachers were positive about the use of sound-field systems in improving children's listening and attention to verbal instructions. Over time, students in amplified classrooms did not differ from those in nonamplified classrooms in their reports of listening conditions, nor did their performance differ in measures of numeracy, reading, or spelling. Use of sound-field systems in the classrooms resulted in significantly larger gains in performance in the number of correct items on the nonverbal measure of speed of processing and the measure of listening comprehension. Analysis controlling for classroom acoustics indicated that students' listening comprehension scores improved significantly in amplified classrooms with poorer acoustics but not in amplified classrooms with better acoustics. Both teacher ratings and student performance on standardized tests indicated that sound-field systems improved performance on children's understanding of spoken language. However, academic attainments showed no benefits from the use of sound-field systems. Classroom acoustics were a significant factor influencing the efficacy of sound-field systems; children in classes with poorer acoustics benefited in listening comprehension, whereas there was no additional benefit for children in classrooms with better acoustics.

  13. Heart Sound Biometric System Based on Marginal Spectrum Analysis

    PubMed Central

    Zhao, Zhidong; Shen, Qinqin; Ren, Fangqin

    2013-01-01

    This work presents a heart sound biometric system based on marginal spectrum analysis, which is a new feature extraction technique for identification purposes. This heart sound identification system is comprised of signal acquisition, pre-processing, feature extraction, training, and identification. Experiments on the selection of the optimal values for the system parameters are conducted. The results indicate that the new spectrum coefficients result in a significant increase in the recognition rate of 94.40% compared with that of the traditional Fourier spectrum (84.32%) based on a database of 280 heart sounds from 40 participants. PMID:23429515

  14. Issues in Humanoid Audition and Sound Source Localization by Active Audition

    NASA Astrophysics Data System (ADS)

    Nakadai, Kazuhiro; Okuno, Hiroshi G.; Kitano, Hiroaki

In this paper, we present an active audition system implemented on the humanoid robot "SIG the humanoid". The audition system for highly intelligent humanoids localizes sound sources and recognizes auditory events in the auditory scene. The active audition reported in this paper enables SIG to track sources by integrating audition, vision, and motor movements. Given multiple sound sources in the auditory scene, SIG actively moves its head to improve localization by aligning its microphones orthogonal to a sound source and by capturing possible sound sources by vision. However, such active head movement inevitably creates motor noise. The system adaptively cancels motor noise using motor control signals and the cover acoustics. The experimental results demonstrate that active audition, by integrating audition, vision, and motor control, attains sound source tracking in a variety of conditions.

  15. An integrated system for dynamic control of auditory perspective in a multichannel sound field

    NASA Astrophysics Data System (ADS)

    Corey, Jason Andrew

    An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. 
All of the parameters are grouped and controlled together to create a perceptually strong impression of source location and movement within a simulated space.

  16. Simulation of prenatal maternal sounds in NICU incubators: a pilot safety and feasibility study.

    PubMed

    Panagiotidis, John; Lahav, Amir

    2010-10-01

    This pilot study evaluated the safety and feasibility of an innovative audio system for transmitting maternal sounds to NICU incubators. A sample of biological sounds, consisting of voice and heartbeat, were recorded from a mother of a premature infant admitted to our unit. The maternal sounds were then played back inside an unoccupied incubator via a specialized audio system originated and compiled in our lab. We performed a series of evaluations to determine the safety and feasibility of using this system in NICU incubators. The proposed audio system was found to be safe and feasible, meeting criteria for humidity and temperature resistance, as well as for safe noise levels. Simulation of maternal sounds using this system seems achievable and applicable and received local support from medical staff. Further research and technology developments are needed to optimize the design of the NICU incubators to preserve the acoustic environment of the womb.

  17. Utilizing the Cyberforest live sound system with social media to remotely conduct woodland bird censuses in Central Japan.

    PubMed

    Saito, Kaoru; Nakamura, Kazuhiko; Ueta, Mutsuyuki; Kurosawa, Reiko; Fujiwara, Akio; Kobayashi, Hill Hiroki; Nakayama, Masaya; Toko, Ayako; Nagahama, Kazuyo

    2015-11-01

    We have developed a system that streams and archives live sound from remote areas across Japan via an unmanned automatic camera. The system was used to carry out pilot bird censuses in woodland; this allowed us to examine the use of live sound transmission and the role of social media as a mediator in remote scientific monitoring. The system has been streaming sounds 8 h per day for more than five years. We demonstrated that: (1) the transmission of live sound from a remote woodland could be used effectively to monitor birds in a remote location; (2) the simultaneous involvement of several participants via Internet Relay Chat to listen to live sound transmissions could enhance the accuracy of census data collection; and (3) interactions through Twitter allowed members of the public to engage or help with the remote monitoring of birds and experience inaccessible nature through the use of novel technologies.

  18. Low-cost compact ECG with graphic LCD and phonocardiogram system design.

    PubMed

    Kara, Sadik; Kemaloğlu, Semra; Kirbaş, Samil

    2006-06-01

To date, many different ECG devices have been made in developing countries. In this study, a low-cost, small-sized, portable ECG device with an LCD screen and a phonocardiograph were designed. With the designed system, heart sounds acquired synchronously with the ECG signal can be heard distinctly. The improved system consists of three units: Unit 1, the ECG circuit with its filter and amplifier stages; Unit 2, the heart sound acquisition circuit; and Unit 3, the microcontroller, graphic LCD, and the unit that sends the ECG signal to a computer. Our system can be used easily in different departments of hospitals, health institutions and clinics, village clinics, and also in homes because of its small size and other benefits. In this way, it is possible to see the ECG signal and hear the heart sounds synchronously and distinctly. In conclusion, the heart sounds can be heard by both doctor and patient because the sounds are played into the room through a small speaker. Thus, the patient knows and hears his/her own heart sounds and is informed by the doctor about his/her health condition.

  19. Noise Reduction in Breath Sound Files Using Wavelet Transform Based Filter

    NASA Astrophysics Data System (ADS)

    Syahputra, M. F.; Situmeang, S. I. G.; Rahmat, R. F.; Budiarto, R.

    2017-04-01

The development of science and technology in the field of healthcare increasingly provides convenience in diagnosing respiratory system problems. Recording breath sounds is one example of these developments. Breath sounds are recorded using a digital stethoscope and then stored in an audio file. These breath sounds are analyzed by health practitioners to diagnose the symptoms of disease or illness. However, the breath sounds are not free from interference signals. Therefore, a noise filter, or signal interference reduction system, is required so that the breath sound component that carries the information signal can be clarified. In this study, we designed a wavelet transform based filter using a Daubechies wavelet with four wavelet transform coefficients. Based on testing with ten types of breath sound data, the largest SNR, 74.3685 dB, was obtained for bronchial sounds.
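    The study's filter uses a four-coefficient Daubechies wavelet; as a dependency-free illustration of the same idea, the sketch below soft-thresholds the detail coefficients of a two-level Haar transform (the Haar wavelet is the simplest member of the Daubechies family, db1). The test signal, noise level, and threshold are assumptions, not the paper's data.

```python
import math
import random

def haar_dwt(x):
    """One level: orthonormal Haar approximation and detail coefficients."""
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def haar_idwt(a, d):
    """Exact inverse of haar_dwt."""
    out = []
    for ai, di in zip(a, d):
        out += [(ai + di) / math.sqrt(2), (ai - di) / math.sqrt(2)]
    return out

def soft(v, t):
    """Soft threshold: shrink the coefficient toward zero by t."""
    return math.copysign(max(abs(v) - t, 0.0), v)

def denoise(x, threshold, levels=2):
    """Recursively threshold the detail bands; broadband noise spreads
    into them, while a smooth signal concentrates in the approximation."""
    if levels == 0 or len(x) < 2:
        return list(x)
    a, d = haar_dwt(x)
    a = denoise(a, threshold, levels - 1)
    return haar_idwt(a, [soft(v, threshold) for v in d])

def snr_db(clean, est):
    sig = sum(c * c for c in clean)
    err = sum((c - e) ** 2 for c, e in zip(clean, est))
    return 10 * math.log10(sig / err)

random.seed(0)
n = 1024
clean = [math.sin(2 * math.pi * 5 * i / n) for i in range(n)]
noisy = [c + random.gauss(0, 0.2) for c in clean]
denoised = denoise(noisy, threshold=0.3)
```

    With this setup the denoised SNR comes out several dB above the noisy input's; the 74 dB figure in the abstract depends on its own recordings and wavelet and is not reproduced here.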

  20. 49 CFR Appendix E to Part 222 - Requirements for Wayside Horns

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., indicates that the system is not operating as intended; 4. Horn system must provide a minimum sound level of... locomotive engineer to sound the locomotive horn for at least 15 seconds prior to arrival at the crossing in...; 5. Horn system must sound at a minimum of 15 seconds prior to the train's arrival at the crossing...

  1. 49 CFR Appendix E to Part 222 - Requirements for Wayside Horns

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., indicates that the system is not operating as intended; 4. Horn system must provide a minimum sound level of... locomotive engineer to sound the locomotive horn for at least 15 seconds prior to arrival at the crossing in...; 5. Horn system must sound at a minimum of 15 seconds prior to the train's arrival at the crossing...

  2. Second sound tracking system

    NASA Astrophysics Data System (ADS)

    Yang, Jihee; Ihas, Gary G.; Ekdahl, Dan

    2017-10-01

    It is common for a physical system to resonate at a particular frequency that depends on physical parameters which may change in time. Often, one would like to automatically track this signal as the frequency changes, measuring, for example, its amplitude. In scientific research, one would also like to use standard methods, such as lock-in amplifiers, to improve the signal-to-noise ratio. We present a complete He ii second sound system that uses positive feedback to generate a sinusoidal signal of constant amplitude via automatic gain control. This signal is used to produce temperature/entropy waves (second sound) in superfluid helium-4 (He ii). A lock-in amplifier limits the oscillation to a desired frequency and demodulates the received sound signal. Using this tracking system, a second sound signal probed turbulent decay in He ii. We present results showing that the tracking system is more reliable than a conventional fixed-frequency method; there is less correlation with temperature (frequency) fluctuation when the tracking system is used.
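    The lock-in demodulation step mentioned above can be sketched in a few lines. This is a generic dual-phase lock-in, not the authors' instrument; names and parameters are illustrative:

```python
import numpy as np

def lock_in(signal, fs, f_ref):
    # Dual-phase lock-in: multiply by quadrature references at f_ref,
    # then low-pass (here: a simple average) to recover the in-phase (x)
    # and quadrature (y) components; amplitude = sqrt(x^2 + y^2).
    t = np.arange(signal.size) / fs
    x = 2.0 * np.mean(signal * np.cos(2 * np.pi * f_ref * t))
    y = 2.0 * np.mean(signal * np.sin(2 * np.pi * f_ref * t))
    return np.hypot(x, y)
```

    For a tone of amplitude A at f_ref, the returned value converges to A as the averaging window grows, while uncorrelated noise averages away; this is the signal-to-noise advantage the abstract refers to.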

  3. Recognition of Isolated Non-Speech Sounds.

    DTIC Science & Technology

    1987-05-31

    stapler could be presented within a set of paper shuffling sounds and within a set of sounds characteristic of entering a room. The former context...should act in a top down manner to suggest a stapler event for the sound whereas the latter context will suggest that a light has been switched on.

  4. 50 CFR 27.71 - Motion or sound pictures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 6 2010-10-01 2010-10-01 false Motion or sound pictures. 27.71 Section 27... (CONTINUED) THE NATIONAL WILDLIFE REFUGE SYSTEM PROHIBITED ACTS Disturbing Violations: Light and Sound Equipment § 27.71 Motion or sound pictures. The taking or filming of any motion or sound pictures on a...

  5. 50 CFR 27.71 - Motion or sound pictures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 9 2012-10-01 2012-10-01 false Motion or sound pictures. 27.71 Section 27... (CONTINUED) THE NATIONAL WILDLIFE REFUGE SYSTEM PROHIBITED ACTS Disturbing Violations: Light and Sound Equipment § 27.71 Motion or sound pictures. The taking or filming of any motion or sound pictures on a...

  6. 50 CFR 27.71 - Motion or sound pictures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 8 2011-10-01 2011-10-01 false Motion or sound pictures. 27.71 Section 27... (CONTINUED) THE NATIONAL WILDLIFE REFUGE SYSTEM PROHIBITED ACTS Disturbing Violations: Light and Sound Equipment § 27.71 Motion or sound pictures. The taking or filming of any motion or sound pictures on a...

  7. Development and use of a spherical microphone array for measurement of spatial properties of reverberant sound fields

    NASA Astrophysics Data System (ADS)

    Gover, Bradford Noel

    The problem of hands-free speech pick-up is introduced, and it is shown how details of the spatial properties of the reverberant field may inform enhanced microphone-array design. From this motivation, a broadly applicable measurement system has been developed for analyzing directional and spatial variations in reverberant sound fields. Two spherical, 32-element arrays of microphones are used to generate narrow beams over two different frequency ranges, together covering 300–3300 Hz. Using an omnidirectional loudspeaker as excitation in a room, the pressure impulse response in each of 60 steering directions is measured, and the variation of arriving energy with direction is studied through analysis of these responses. The system was first validated in simple sound fields in an anechoic chamber and in a reverberation chamber; it characterizes these sound fields as expected, both quantitatively through numerical descriptors and qualitatively from plots of arriving energy versus direction. The system was then used to measure the sound fields in several actual rooms. Through both qualitative and quantitative output, these sound fields were seen to be highly anisotropic, influenced greatly by the direct sound and early-arriving reflections. Furthermore, the rate of sound decay was not independent of direction: sound was absorbed more rapidly in some directions than in others. These results are discussed in the context of the original motivation, and methods for applying them to enhanced speech pick-up using microphone arrays are proposed.
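    The abstract does not specify the array's beamforming algorithm; as a generic sketch of the underlying idea (delay-and-sum steering with integer-sample delays, all names hypothetical), forming a narrow beam from multiple microphones looks like:

```python
import numpy as np

def delay_and_sum(signals, delays):
    # signals: array of shape (n_mics, n_samples)
    # delays:  per-microphone arrival delay, in whole samples, for the
    #          steered direction; advancing each channel by its delay
    #          aligns the wavefront so the steered source adds coherently
    out = np.zeros(signals.shape[1])
    for sig, d in zip(signals, delays):
        out += np.roll(sig, -d)
    return out / len(signals)
```

    Sounds arriving from other directions are misaligned after the delays and partially cancel, which is what narrows the beam.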

  8. 49 CFR 210.25 - Measurement criteria and procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... American National Standard Institute Standards, “Method for Measurement of Sound Pressure Levels,” (ANSI S1... measurement indicating a violation. (ii) The sound level measurement system shall be checked not less than... calibrator of the microphone coupler type designed for the sound level measurement system in use shall be...

  9. Yes, You Can Learn Foreign Language Pronunciation by Sight!

    ERIC Educational Resources Information Center

    Richmond, Edmun B.; And Others

    1979-01-01

    Describes the Envelope Vowel Approximation System (EVAS), a foreign language pronunciation learning system which allows students to see as well as hear a pedagogical model of a sound, and to compare their own utterances of that sound to the model as they pronounce the same sound. (Author/CMV)

  10. Development of sound measurement systems for auditory functional magnetic resonance imaging.

    PubMed

    Nam, Eui-Cheol; Kim, Sam Soo; Lee, Kang Uk; Kim, Sang Sik

    2008-06-01

    Auditory functional magnetic resonance imaging (fMRI) requires quantification of sound stimuli in the magnetic environment and adequate isolation of background noise. We report the development of two novel sound measurement systems that accurately measure the sound intensity inside the ear while simultaneously providing similar or greater scanner-noise protection than earmuffs. First, we placed a 2.6 x 2.6-mm microphone in an insert phone connected to a headphone [microphone-integrated, foam-tipped insert-phone with a headphone (MIHP)]. This attenuated scanner noise by 37.8+/-4.6 dB, better than the reference attenuation obtained using earmuffs. Second, a nonmetallic optical microphone was integrated with a headphone [optical microphone in a headphone (OMHP)]; it effectively detected the change in sound intensity caused by variable compression of the headphone cushions. Wearing the OMHP reduced noise by 28.5+/-5.9 dB and did not affect echo-planar magnetic resonance images. We also performed an auditory fMRI study using the MIHP system and observed an increase in auditory cortical activation following a 10-dB increment in the intensity of sound stimulation. These two newly developed sound measurement systems achieved accurate quantification of sound stimuli while maintaining noise protection similar to wearing earmuffs in the auditory fMRI experiment.

  11. Real time sound analysis for medical remote monitoring.

    PubMed

    Istrate, Dan; Binet, Morgan; Cheng, Sreng

    2008-01-01

    The aging of the European population means more people living alone at home, with an increased risk of home accidents or falls. To prevent or detect a distress situation for an elderly person living alone, a remote monitoring system based on analysis of the sound environment can be used. We have previously proposed a system that monitors the sound environment and identifies everyday-life sounds and distress expressions in order to contribute to an alarm decision. This first system used a classical sound card on a PC or embedded PC, allowing only single-channel monitoring. In this paper, we propose a new architecture for the remote monitoring system, relying on a real-time multichannel implementation based on a USB acquisition card. This structure allows monitoring of eight channels, enough to cover all the rooms of an apartment. In addition, SNR estimation currently drives adaptation of the recognition models to the environment.

  12. High Definition Sounding System Test and Integration with NASA Atmospheric Science Program Aircraft

    DTIC Science & Technology

    2013-09-30

    of the High Definition Sounding System (HDSS) on NASA high altitude Airborne Science Program platforms, specifically the NASA P-3 and NASA WB-57. When...demonstrate the system reliability in a Global Hawk’s 62000’ altitude regime of thin air and very cold temperatures. APPROACH: Mission Profile One or more WB...57 test flights will prove airworthiness and verify the High Definition Sounding System (HDSS) is safe and functional at high altitudes, essentially

  13. Limited receptive area neural classifier for recognition of swallowing sounds using continuous wavelet transform.

    PubMed

    Makeyev, Oleksandr; Sazonov, Edward; Schuckers, Stephanie; Lopez-Meyer, Paulo; Melanson, Ed; Neuman, Michael

    2007-01-01

    In this paper we propose a sound recognition technique based on the limited receptive area (LIRA) neural classifier and the continuous wavelet transform (CWT). The LIRA neural classifier was developed as a multipurpose image recognition system; previous tests demonstrated good results in different image recognition tasks, including handwritten digit recognition, face recognition, metal surface texture recognition, and micro-workpiece shape recognition. We propose a sound recognition technique in which scalograms of sound instances serve as inputs to the LIRA neural classifier. The methodology was tested on recognition of swallowing sounds. Swallowing sound recognition may be employed in systems for automated swallowing assessment and diagnosis of swallowing disorders. The experimental results suggest high efficiency and reliability of the proposed approach.
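    A scalogram of the kind used as classifier input above can be approximated by convolving the signal with scaled Morlet wavelets. This is a generic CWT sketch, not the authors' implementation; the wavelet parameters and names are assumptions:

```python
import numpy as np

def morlet(n, w=5.0, s=1.0):
    # Complex Morlet wavelet: a Gaussian-windowed complex exponential,
    # sampled over n points at scale s
    t = (np.arange(n) - n // 2) / s
    return np.exp(1j * w * t) * np.exp(-t**2 / 2) / np.sqrt(s)

def scalogram(x, scales):
    # |CWT| via direct convolution; each row is one scale (pseudo-frequency)
    rows = []
    for s in scales:
        n = min(10 * int(s) + 1, x.size)
        rows.append(np.abs(np.convolve(x, morlet(n, s=s), mode='same')))
    return np.array(rows)  # shape: (len(scales), len(x))
```

    The resulting time-scale magnitude image is what a 2-D image classifier such as LIRA would then consume.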

  14. Experimental Simulation of Active Control With On-line System Identification on Sound Transmission Through an Elastic Plate

    NASA Technical Reports Server (NTRS)

    1998-01-01

    An adaptive control algorithm with on-line system identification capability has been developed. A great advantage of this scheme is that no additional system identification mechanism, such as an uncorrelated random signal generator serving as the identification source, is required. A time-varying plate-cavity system is used to demonstrate the control performance of the algorithm. The time-varying system consists of a stainless-steel plate bolted onto a rigid cavity opening, where the cavity depth is changed with respect to time by varying the water level inside. For a given externally located harmonic sound excitation, system identification and control are executed simultaneously to minimize the transmitted sound in the cavity. The control performance of the algorithm is examined for two cases. In the first, with the water fully drained, the external disturbance frequency is swept at 1 Hz/s; the result shows excellent frequency-tracking capability, with 40 dB suppression of the sound inside the cavity. In the second case, the cavity is initially empty and the water level is then raised to 3/20 full in 60 seconds while the external sound excitation is fixed at a single frequency, so the cavity resonant frequency decreases and passes through the excitation frequency. The algorithm shows 40 dB suppression of the transmitted noise without compromising its system-identification tracking capability.

  15. On the Possible Detection of Lightning Storms by Elephants

    PubMed Central

    Kelley, Michael C.; Garstang, Michael

    2013-01-01

    Simple Summary We use data similar to those taken by the International Monitoring System for the detection of nuclear explosions to determine whether elephants might be capable of detecting and locating the source of sounds generated by thunderstorms. Knowledge that elephants might be capable of responding to such storms, particularly at the end of the dry season when migrations are initiated, is of considerable interest for management and conservation. Abstract Theoretical calculations suggest that sounds produced by thunderstorms and detected by a system similar to the International Monitoring System (IMS) for the detection of nuclear explosions at distances ≥100 km are at sound pressure levels equal to or greater than 6 × 10−3 Pa. Such sound pressure levels are well within the range of elephant hearing. Frequencies carrying these sounds might allow for interaural time delays such that adult elephants could not only hear but also locate the source of these sounds. Determining whether it is possible for elephants to hear and locate thunderstorms contributes to the question of whether elephant movements are triggered or influenced by these abiotic sounds. PMID:26487406
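    The quoted pressure of 6 × 10−3 Pa can be converted to a sound pressure level using the standard definition SPL = 20·log10(p/p0) with reference pressure p0 = 20 µPa in air (a textbook formula, not specific to this paper):

```python
import math

def spl_db(pressure_pa, p_ref=20e-6):
    # Sound pressure level in dB: SPL = 20 * log10(p / p_ref),
    # with the standard airborne reference p_ref = 20 micropascals
    return 20.0 * math.log10(pressure_pa / p_ref)

# spl_db(6e-3) ≈ 49.5 dB SPL
```

    A level of roughly 49.5 dB SPL is consistent with the abstract's claim that such thunderstorm-generated sounds lie well within the range of elephant hearing.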

  16. Synthesis of Systemic Functional Theory & Dynamical Systems Theory for Socio-Cultural Modeling

    DTIC Science & Technology

    2011-01-26

    is, language and other resources (e.g. images and sound resources) are conceptualised as inter-locking systems of meaning which realise four...hierarchical ranks and strata (e.g. sounds, word groups, clauses, and complex discourse structures in language, and elements, figures and episodes in images ...integrating platform for describing how language and other resources (e.g. images and sound) work together to fulfil particular objectives. While

  17. An Intelligent Pattern Recognition System Based on Neural Network and Wavelet Decomposition for Interpretation of Heart Sounds

    DTIC Science & Technology

    2001-10-25

    wavelet decomposition of signals and classification using neural network. Inputs to the system are the heart sound signals acquired by a stethoscope in a...Proceedings. pp. 415–418, 1990. [3] G. Ergun, “An intelligent diagnostic system for interpretation of antepartum fetal heart rate tracings based on ANNs and...AN INTELLIGENT PATTERN RECOGNITION SYSTEM BASED ON NEURAL NETWORK AND WAVELET DECOMPOSITION FOR INTERPRETATION OF HEART SOUNDS I. TURKOGLU1, A

  18. Physiological correlates of sound localization in a parasitoid fly, Ormia ochracea

    NASA Astrophysics Data System (ADS)

    Oshinsky, Michael Lee

    A major focus of research in the nervous system is the investigation of neural circuits. The question of how neurons connect to form functional units has driven modern neuroscience research from its inception. From the beginning, the neural circuits of the auditory system, and specifically of sound localization, were used as a model system for investigating neural connectivity and computation. Sound localization lends itself to this task because there is no mapping of spatial information on a receptor sheet as in vision. With only one eye, an animal would still have positional information for objects. Since the receptor sheet in the ear is frequency-oriented rather than spatially oriented, positional information for a sound source does not exist with only one ear: the nervous system computes the location of a sound source from differences in the physiology of the two ears. In this study, I investigated the neural circuits for sound localization in a fly, Ormia ochracea (Diptera, Tachinidae, Ormiini), which is a parasitoid of crickets. This fly possesses a unique mechanically coupled hearing organ. The two ears are contained in one air sac, and a cuticular bridge with a flexible spring-like structure at its center connects them. This mechanical coupling preprocesses the sound before it is detected by the nervous system and provides the fly with directional information. The subject of this study is the neural coding of the location of sound stimuli by a mechanically coupled auditory system. In chapter 1, I present the natural history of an acoustic parasitoid and review the peripheral processing of sound by the Ormian ear. In chapter 2, I describe the anatomy and physiology of the auditory afferents, presenting this physiology in the context of sound localization. In chapter 3, I describe the direction-dependent physiology of the thoracic local and ascending acoustic interneurons.
In chapter 4, I quantify the threshold and I detail the kinematics of the phonotactic walking behavior in Ormia ochracea. I also quantify the angular resolution of the phonotactic turning behavior. Using a model, I show that the temporal coding properties of the afferents provide most of the information required by the fly to localize a singing cricket.

  19. Sound source measurement by using a passive sound insulation and a statistical approach

    NASA Astrophysics Data System (ADS)

    Dragonetti, Raffaele; Di Filippo, Sabato; Mercogliano, Francesco; Romano, Rosario A.

    2015-10-01

    This paper describes a measurement technique developed by the authors that allows acoustic measurements to be carried out inside noisy environments while reducing background-noise effects. The proposed method integrates a traditional passive noise insulation system with a statistical approach applied to the signals picked up by the usual sensors (microphones and accelerometers) equipping the passive sound insulation system. The statistical approach improves, at low frequencies, on the sound insulation provided by the passive system alone. The technique has been validated by means of numerical simulations and measurements carried out inside a real noisy environment. For the case studies reported here, an average improvement of about 10 dB was obtained in a frequency range up to about 250 Hz. Considerations on the lowest sound pressure level that can be measured by applying the proposed method, and on the measurement error related to its application, are reported as well.

  20. Physics and Psychophysics of High-Fidelity Sound. Part III: The Components of a Sound-Reproducing System: Amplifiers and Loudspeakers.

    ERIC Educational Resources Information Center

    Rossing, Thomas D.

    1980-01-01

    Described are the components for a high-fidelity sound-reproducing system which focuses on various program sources, the amplifier, and loudspeakers. Discussed in detail are amplifier power and distortion, air suspension, loudspeaker baffles and enclosures, bass-reflex enclosure, drone cones, rear horn and acoustic labyrinth enclosures, horn…

  1. A neurally inspired musical instrument classification system based upon the sound onset.

    PubMed

    Newton, Michael J; Smith, Leslie S

    2012-06-01

    Physiological evidence suggests that sound onset detection in the auditory system may be performed by specialized neurons as early as the cochlear nucleus. Psychoacoustic evidence shows that the sound onset can be important for the recognition of musical sounds. Here the sound onset is used in isolation to form tone descriptors for a musical instrument classification task. The task involves 2085 isolated musical tones from the McGill dataset across five instrument categories. A neurally inspired tone descriptor is created using a model of the auditory system's response to sound onset. A gammatone filterbank and spiking onset detectors, built from dynamic synapses and leaky integrate-and-fire neurons, create parallel spike trains that emphasize the sound onset. These are coded as a descriptor called the onset fingerprint. Classification uses a time-domain neural network, the echo state network. Reference strategies, based upon mel-frequency cepstral coefficients, evaluated either over the whole tone or only during the sound onset, provide context to the method. Classification success rates for the neurally-inspired method are around 75%. The cepstral methods perform between 73% and 76%. Further testing with tones from the Iowa MIS collection shows that the neurally inspired method is considerably more robust when tested with data from an unrelated dataset.
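    The spiking onset detectors described above are built from dynamic synapses and leaky integrate-and-fire neurons. A minimal, generic LIF sketch (parameters are illustrative, not those of the paper) shows how a rising drive, such as a sound onset, produces spikes:

```python
import numpy as np

def lif_spikes(drive, dt=1e-3, tau=0.01, threshold=1.0):
    # Leaky integrate-and-fire neuron: the membrane potential v leaks
    # toward zero with time constant tau, integrates the input drive,
    # and emits a spike (then resets) when it crosses the threshold.
    v, spikes = 0.0, []
    for i, x in enumerate(drive):
        v += dt * (-v / tau + x)
        if v >= threshold:
            spikes.append(i)
            v = 0.0  # reset after spike
    return spikes
```

    Because the leak forgets slowly varying input, sustained portions of a tone drive the neuron less effectively than the abrupt rise at onset, which is what makes a filterbank of such units an onset fingerprint.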

  2. Single brand, fully-covered, self-expandable metal stent for the treatment of benign biliary disease: when should stents be removed?

    PubMed

    Mangiavillano, Benedetto; Khashab, Mouen A; Eusebi, Leonardo H; Tarantino, Ilaria; Bianchetti, Mario; Semeraro, Rossella; Pellicano, Rinaldo; Traina, Mario; Repici, Alessandro

    2018-05-31

    The two most relevant endoscopically treatable benign biliary diseases (BBD) are benign biliary strictures (BBSs) and biliary leaks (BLs), often associated with high morbidity. The most common endoscopic treatment for biliary strictures involves placement of multiple plastic stents (PSs), with or without balloon dilation, followed by planned exchange of the stents. Thus, there continues to be high interest in pursuing alternative endoscopic approaches that may achieve better results with fewer interventions. In this setting, the use of a fully-covered, self-expandable metal stent (FCSEMS) is an attractive alternative to single or multiple PSs for the treatment of BBDs. A single metal stent can remain in place for a longer period of time before removal; however, the maximum time the stent can remain in place is still not well defined. The aim of this review is to determine the removal time of the TaeWoong® FCSEMS placed for BBD. According to our data analysis, given the absence of loss of the stent covering and of any adverse events during and after stent removal, leaving the TaeWoong Medical FCSEMS in situ for an 8-month period seems acceptable for benign biliary diseases. Further studies need to evaluate removability at 1 year.

  3. Comparisons of MRI images, and auditory-related and vocal-related protein expressions in the brain of echolocation bats and rodents.

    PubMed

    Hsiao, Chun-Jen; Hsu, Chih-Hsiang; Lin, Ching-Lung; Wu, Chung-Hsin; Jen, Philip Hung-Sun

    2016-08-17

    Although echolocating bats and other mammals share the basic design of laryngeal apparatus for sound production and auditory system for sound reception, they have a specialized laryngeal mechanism for ultrasonic sound emissions as well as a highly developed auditory system for processing species-specific sounds. Because the sounds used by bats for echolocation and rodents for communication are quite different, there must be differences in the central nervous system devoted to producing and processing species-specific sounds between them. The present study examines the difference in the relative size of several brain structures and expression of auditory-related and vocal-related proteins in the central nervous system of echolocation bats and rodents. Here, we report that bats using constant frequency-frequency-modulated sounds (CF-FM bats) and FM bats for echolocation have a larger volume of midbrain nuclei (inferior and superior colliculi) and cerebellum relative to the size of the brain than rodents (mice and rats). However, the former have a smaller volume of the cerebrum and olfactory bulb, but greater expression of otoferlin and forkhead box protein P2 than the latter. Although the size of both midbrain colliculi is comparable in both CF-FM and FM bats, CF-FM bats have a larger cerebrum and greater expression of otoferlin and forkhead box protein P2 than FM bats. These differences in brain structure and protein expression are discussed in relation to their biologically relevant sounds and foraging behavior.

  4. Human brain detects short-time nonlinear predictability in the temporal fine structure of deterministic chaotic sounds

    NASA Astrophysics Data System (ADS)

    Itoh, Kosuke; Nakada, Tsutomu

    2013-04-01

    Deterministic nonlinear dynamical processes are ubiquitous in nature. Chaotic sounds generated by such processes may appear irregular and random in waveform, but they are mathematically distinguished from random stochastic sounds in that they contain deterministic short-time predictability in their temporal fine structure. We show that the human brain distinguishes deterministic chaotic sounds from spectrally matched stochastic sounds in neural processing and perception. Deterministic chaotic sounds, even without being attended to, elicited greater cerebral cortical responses than the surrogate control sounds from about 150 ms latency after sound onset. Listeners also clearly discriminated these sounds in perception. The results support the hypothesis that the human auditory system is sensitive to the subtle short-time predictability embedded in the temporal fine structure of sounds.
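    The distinction the study relies on can be illustrated with the logistic map, a standard example of deterministic chaos (the paper's actual stimulus-generation procedure is not given here): consecutive samples of a chaotic series obey an exact one-step rule, while a shuffled surrogate with the same sample values does not.

```python
import numpy as np

def logistic_series(n, r=3.99, x0=0.5):
    # Deterministic chaos: x[t+1] = r * x[t] * (1 - x[t])
    x = np.empty(n)
    x[0] = x0
    for t in range(n - 1):
        x[t + 1] = r * x[t] * (1 - x[t])
    return x

def one_step_determinism(x):
    # For the logistic map, x[t+1] / (x[t] * (1 - x[t])) is the constant r;
    # for a shuffled (stochastic) surrogate the ratio fluctuates.
    return x[1:] / (x[:-1] * (1 - x[:-1]))
```

    The waveform looks irregular, yet each sample is fully predictable from the previous one; shuffling destroys exactly this short-time predictability while preserving the amplitude distribution.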

  5. Vocal Imitations of Non-Vocal Sounds

    PubMed Central

    Houix, Olivier; Voisin, Frédéric; Misdariis, Nicolas; Susini, Patrick

    2016-01-01

    Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes at no surprise that human speakers use a lot of imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g. the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g. a car driving by) through a different system (the voice apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational “auditory sketches” (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to the best auditory sketches for most categories of sounds, and even to the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. Analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. 
These results offer perspectives for understanding how human listeners store and access long-term sound representations, and set the stage for the development of human-computer interfaces based on vocalizations. PMID:27992480

  6. The Specificity of Sound Symbolic Correspondences in Spoken Language

    ERIC Educational Resources Information Center

    Tzeng, Christina Y.; Nygaard, Lynne C.; Namy, Laura L.

    2017-01-01

    Although language has long been regarded as a primarily arbitrary system, "sound symbolism," or non-arbitrary correspondences between the sound of a word and its meaning, also exists in natural language. Previous research suggests that listeners are sensitive to sound symbolism. However, little is known about the specificity of these…

  7. 33 CFR 161.60 - Vessel Traffic Service Prince William Sound.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... William Sound. 161.60 Section 161.60 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND... Movement Reporting System Areas and Reporting Points § 161.60 Vessel Traffic Service Prince William Sound... Cape Hinchinbrook Light to Schooner Rock Light, comprising that portion of Prince William Sound between...

  8. 33 CFR 161.60 - Vessel Traffic Service Prince William Sound.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... William Sound. 161.60 Section 161.60 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND... Movement Reporting System Areas and Reporting Points § 161.60 Vessel Traffic Service Prince William Sound... Cape Hinchinbrook Light to Schooner Rock Light, comprising that portion of Prince William Sound between...

  9. 33 CFR 161.60 - Vessel Traffic Service Prince William Sound.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... William Sound. 161.60 Section 161.60 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND... Movement Reporting System Areas and Reporting Points § 161.60 Vessel Traffic Service Prince William Sound... Cape Hinchinbrook Light to Schooner Rock Light, comprising that portion of Prince William Sound between...

  10. 33 CFR 161.60 - Vessel Traffic Service Prince William Sound.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... William Sound. 161.60 Section 161.60 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND... Movement Reporting System Areas and Reporting Points § 161.60 Vessel Traffic Service Prince William Sound... Cape Hinchinbrook Light to Schooner Rock Light, comprising that portion of Prince William Sound between...

  11. Noise Attenuation Performance Assessment of the Joint Helmet Mounted Cueing System (JHMCS)

    DTIC Science & Technology

    2010-08-01

    Flash Drive (CFD) memory (Figure 9) and Sound Professionals SP-TFB-2 Miniature Binaural Microphones with the Sound Professionals SP-SPSB-1 Slim-line...flight noise. Sound Professionals binaural microphones were placed to record both internal and external sounds. One microphone was attached to the

  12. 33 CFR 161.60 - Vessel Traffic Service Prince William Sound.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... William Sound. 161.60 Section 161.60 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND... Movement Reporting System Areas and Reporting Points § 161.60 Vessel Traffic Service Prince William Sound... Cape Hinchinbrook Light to Schooner Rock Light, comprising that portion of Prince William Sound between...

  13. Analysis of sound absorption performance of an electroacoustic absorber using a vented enclosure

    NASA Astrophysics Data System (ADS)

    Cho, Youngeun; Wang, Semyung; Hyun, Jaeyub; Oh, Seungjae; Goo, Seongyeol

    2018-03-01

    The sound absorption performance of an electroacoustic absorber (EA) is primarily influenced by the dynamic characteristics of the loudspeaker that acts as the actuator of the EA system. Therefore, the sound absorption performance of the EA is maximum at the resonance frequency of the loudspeaker and tends to degrade in the low-frequency and high-frequency bands based on this resonance frequency. In this study, to adjust the sound absorption performance of the EA system in the low-frequency band of approximately 20-80 Hz, an EA system using a vented enclosure that has previously been used to enhance the radiating sound pressure of a loudspeaker in the low-frequency band, is proposed. To verify the usefulness of the proposed system, two acoustic environments are considered. In the first acoustic environment, the vent of the vented enclosure is connected to an external sound field that is distinct from the sound field coupled to the EA. In this case, the acoustic effect of the vented enclosure on the performance of the EA is analyzed through an analytical approach using dynamic equations and an impedance-based equivalent circuit. Then, it is verified through numerical and experimental approaches. Next, in the second acoustic environment, the vent is connected to the same external sound field as the EA. In this case, the effect of the vented enclosure on the EA is investigated through an analytical approach and finally verified through a numerical approach. As a result, it is confirmed that the characteristics of the sound absorption performances of the proposed EA system using the vented enclosure in the two acoustic environments considered in this study are different from each other in the low-frequency band of approximately 20-80 Hz. Furthermore, several case studies on the change tendency of the performance of the EA using the vented enclosure according to the critical design factors or vent number for the vented enclosure are also investigated. 
In the future, even if the proposed EA system using a vented enclosure is extended to a large number of arrays required for 3D sound field control, it is expected to be an attractive solution that can contribute to an improvement in low-frequency noise reduction without causing economic and system complexity problems.
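
    The low-frequency role of the vented enclosure can be illustrated with the standard lumped-element (Helmholtz) estimate of the port resonance. This is a generic textbook relation, not the paper's impedance-based equivalent circuit, and the dimensions below are made-up example values:

```python
import math

def helmholtz_resonance(c, port_area, port_length, box_volume):
    """Lumped-element estimate of the port (Helmholtz) resonance of a
    vented box: f = (c / 2*pi) * sqrt(S / (V * L)).

    c: speed of sound (m/s); port_area S (m^2);
    port_length L (m, including end corrections); box_volume V (m^3).
    """
    return (c / (2 * math.pi)) * math.sqrt(port_area / (box_volume * port_length))

# Example values (assumptions, not from the paper): a 50 L box with a
# small port lands the resonance inside the 20-80 Hz band of interest.
f_port = helmholtz_resonance(c=343.0, port_area=2e-3,
                             port_length=0.12, box_volume=0.05)
```

    Tuning the port dimensions moves this resonance, which is how a vented enclosure can extend the absorber's useful band below the loudspeaker's own resonance.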

  14. Evaluation of auto incident recording system (AIRS).

    DOT National Transportation Integrated Search

    2005-05-01

    The Auto Incident Recording System (AIRS) is a sound-actuated video recording system. It automatically records potential incidents when activated by sound (horns, clashing metal, squealing tires, etc.). The purpose is to detect patterns of crashes at...

  15. [Functional anatomy of the cochlear nerve and the central auditory system].

    PubMed

    Simon, E; Perrot, X; Mertens, P

    2009-04-01

    The auditory pathways are a system of afferent fibers (through the cochlear nerve) and efferent fibers (through the vestibular nerve), which are not limited to simple information transmission but perform a veritable integration of the sound stimulus at each level, analyzing its three fundamental elements: frequency (pitch), intensity, and spatial localization of the sound source. From the cochlea to the primary auditory cortex, the auditory fibers are organized anatomically according to the characteristic frequency of the sound signal that they transmit (tonotopy). Coding of the intensity of the sound signal is based on temporal recruitment (the number of action potentials) and spatial recruitment (the number of inner hair cells recruited near the cell whose characteristic frequency matches the stimulus). Spatial localization of the sound source is possible because of binaural hearing, commissural pathways at each level of the auditory system, and integration of the phase shift and intensity difference between the signals coming from the two ears. Finally, through the efferent fibers in the vestibular nerve, higher centers exercise control over the activity of the cochlea, adjusting the peripheral hearing organ to external sound conditions, thus protecting the auditory system or increasing sensitivity through the attention given to the signal.

  16. Active/Passive Control of Sound Radiation from Panels using Constrained Layer Damping

    NASA Technical Reports Server (NTRS)

    Gibbs, Gary P.; Cabell, Randolph H.

    2003-01-01

    A hybrid passive/active noise control system utilizing constrained layer damping and model predictive feedback control is presented. This system is used to control the sound radiation of panels due to broadband disturbances. To facilitate the hybrid system design, a methodology for placement of constrained layer damping which targets selected modes based on their relative radiated sound power is developed. The placement methodology is utilized to determine two constrained layer damping configurations for experimental evaluation of a hybrid system. The first configuration targets the (4,1) panel mode which is not controllable by the piezoelectric control actuator, and the (2,3) and (5,2) panel modes. The second configuration targets the (1,1) and (3,1) modes. The experimental results demonstrate the improved reduction of radiated sound power using the hybrid passive/active control system as compared to the active control system alone.

  17. Loudness of steady sounds - A new theory

    NASA Technical Reports Server (NTRS)

    Howes, W. L.

    1979-01-01

    A new mathematical theory for calculating the loudness of steady sounds from power summation and frequency interaction, based on psychoacoustic and physiological information, assumes that loudness is a subjective measure of the electrical energy transmitted along the auditory nerve to the central nervous system. The auditory system consists of a mechanical part, modeled by a bandpass filter with a transfer function dependent on the sound pressure, and an electrical part, where the signal is transformed into a half-wave reproduction represented by the electrical power in impulsive discharges transmitted along the neurons comprising the auditory nerve. In the electrical part the neurons are distributed among artificial parallel channels with frequency bandwidths equal to 'critical bandwidths for loudness', within which loudness is constant for constant sound pressure. The total energy transmitted to the central nervous system is the sum of the energy transmitted in all channels, and the loudness is proportional to the square root of the total filtered sound energy distributed over all channels. The theory explains many psychoacoustic phenomena, such as audible beats resulting from closely spaced tones, loudness interaction between sound stimuli that affect the same neurons, and individually subliminal sounds becoming audible when they lie within the same critical band.
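
    The summation rule described in the abstract (energy summed over critical-band channels, loudness proportional to its square root) can be sketched directly; the proportionality constant and band energies below are arbitrary illustrative values:

```python
import math

def loudness(band_energies, k=1.0):
    """Loudness as k * sqrt(sum of filtered sound energy over all
    critical-band channels), per the theory's summation rule
    (k is an arbitrary proportionality constant)."""
    return k * math.sqrt(sum(band_energies))

# Two equal components falling in the SAME critical band pool their
# energy, so the pair is sqrt(2) times as loud as either alone; this is
# how individually subliminal sounds can become audible together.
single = loudness([1.0])
pair_same_band = loudness([2.0])
```
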

  18. Experiments in Sound and Structural Vibrations Using an Air-Analog Model Ducted Propulsion System

    DTIC Science & Technology

    2007-08-01

    Final Technical Report: Experiments in Sound and Structural Vibrations Using an Air-Analog Model Ducted Propulsion System. Prepared by Scott C. Morris, Assistant Professor, Department of Aerospace and Mechanical Engineering. Grant Number N00014-1-0522.

  19. Cross-Polarization Optical Coherence Tomography with Active Maintenance of the Circular Polarization of a Sounding Wave in a Common Path System

    NASA Astrophysics Data System (ADS)

    Gelikonov, V. M.; Romashov, V. N.; Shabanov, D. V.; Ksenofontov, S. Yu.; Terpelov, D. A.; Shilyagin, P. A.; Gelikonov, G. V.; Vitkin, I. A.

    2018-05-01

    We consider a cross-polarization optical coherence tomography system with a common path for the sounding and reference waves and active maintenance of the circular polarization of the sounding wave. The system is based on the formation of birefringent characteristics of the total optical path, which are equivalent to a quarter-wave plate with a 45° orientation of its optical axes with respect to the linearly polarized reference wave. Conditions under which any light-polarization state can be obtained using a two-element phase controller are derived. The dependence of the local cross-scattering coefficient of light in a model medium and biological tissue on the sounding-wave polarization state is demonstrated. Active maintenance of the circular polarization of the sounding wave in this common-path system (which includes a flexible probe) is shown to be necessary to realize uniform optimal conditions for cross-polarization studies of biological tissue.
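
    The quarter-wave-plate-at-45° configuration described here can be checked with elementary Jones calculus. The sketch below is generic polarization optics, not the paper's phase controller:

```python
import numpy as np

def quarter_wave_plate(theta):
    """Jones matrix of a quarter-wave plate with its fast axis at angle
    theta, built by rotating a quarter-wave retarder diag(1, i)."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    retarder = np.array([[1.0, 0.0], [0.0, 1.0j]])
    return rot @ retarder @ rot.T

# A linearly polarized reference wave through the plate at 45 degrees
# comes out circular: equal amplitudes, 90-degree phase difference.
E_linear = np.array([1.0, 0.0])
E_out = quarter_wave_plate(np.pi / 4) @ E_linear
```
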

  20. Propagation of sound through the Earth's atmosphere. 1: Measurement of sound absorption in the air. 2: Measurement of ground impedance

    NASA Technical Reports Server (NTRS)

    Becher, J.; Meredith, R. W.; Zuckerwar, A. J.

    1981-01-01

    The fabrication of parts for the acoustic ground impedance meter was completed, and the instrument was tested. The acoustic ground impedance meter, the automatic data processing system, the cooling system for the resonant tube, and the final results on sound absorption in N2-H2O gas mixtures at elevated temperatures are described.

  1. Inside-in, alternative paradigms for sound spatialization

    NASA Astrophysics Data System (ADS)

    Bahn, Curtis; Moore, Stephan

    2003-04-01

    Arrays of widely spaced mono-directional loudspeakers (P.A.-style stereo configurations or ``outside-in'' surround-sound systems) have long provided the dominant paradigms for electronic sound diffusion. So prevalent are these models that alternatives have largely been ignored and electronic sound, regardless of musical aesthetic, has come to be inseparably associated with single-channel speakers, or headphones. We recognize the value of these familiar paradigms, but believe that electronic sound can and should have many alternative, idiosyncratic voices. Through the design and construction of unique sound diffusion structures, one can reinvent the nature of electronic sound; when allied with new sensor technologies, these structures offer alternative modes of interaction with techniques of sonic computation. This paper describes several recent applications of spherical speakers (multichannel, outward-radiating geodesic speaker arrays) and Sensor-Speaker-Arrays (SenSAs: combinations of various sensor devices with outward-radiating multi-channel speaker arrays). This presentation introduces the development of four generations of spherical speakers (over a hundred individual speakers of various configurations) and their use in many different musical situations including live performance, recording, and sound installation. We describe the design and construction of these systems, and, more generally, the new ``voices'' they give to electronic sound.

  2. Device for recording the 20 Hz - 200 kHz sound frequency spectrum using teletransmission

    NASA Technical Reports Server (NTRS)

    Baciu, I.

    1974-01-01

    The device described consists of two distinct parts: (1) The sound pickup system consisting of the wide-frequency band condenser microphone which contains in the same assembly the frequency-modulated oscillator and the output stage. Being transistorized and small, this system can be easily moved, so that sounds can be picked up even in places that are difficult to reach with larger devices. (2) The receiving and recording part is separate and can be at a great distance from the sound pickup system. This part contains a 72 MHz input stage, a frequency changer that gives an intermediate frequency of 30 MHz and a multichannel analyzer coupled to an oscilloscope and a recorder.

  3. Development of a Novel Noise Delivery System for JP-8 Ototoxicity Studies

    DTIC Science & Technology

    2010-03-01

    kHz. The authors indicated that at the time of the article they were not aware of any other studies that investigated simultaneous exposure to...Sound Study.” 4.4 The audiology sound program will be loaded which allows creation of the correct sound levels. 5.0 CREATING SOUND AND COLLECTING

  4. 75 FR 56873 - Digital Performance Right in Sound Recordings and Ephemeral Recordings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-17

    ..., Intercollegiate Broadcasting System, Inc. (``IBS'') and SoundExchange, Inc. (``SoundExchange'') presented... received one comment from IBS. The Final Rule for the minimum fee to be paid by Commercial Webcasters was published. 75 FR 6097 (February 8, 2010). Following the filing of Written Direct Statements by IBS and Sound...

  5. Interferometric imaging of acoustical phenomena using high-speed polarization camera and 4-step parallel phase-shifting technique

    NASA Astrophysics Data System (ADS)

    Ishikawa, K.; Yatabe, K.; Ikeda, Y.; Oikawa, Y.; Onuma, T.; Niwa, H.; Yoshii, M.

    2017-02-01

    Imaging of sound aids the understanding of acoustical phenomena such as propagation, reflection, and diffraction, which is strongly required for various acoustical applications. The imaging of sound is commonly done using a microphone array, whereas optical methods have recently attracted interest due to their contactless nature. The optical measurement of sound exploits the phase modulation of light caused by sound. Since light propagating through a sound field changes its phase in proportion to the sound pressure, optical phase measurement techniques can be used for sound measurement. Several methods, including laser Doppler vibrometry and the Schlieren method, have been proposed for this purpose. However, their sensitivities decrease as the frequency of the sound decreases. In contrast, since the sensitivity of the phase-shifting technique does not depend on the frequency of the sound, that technique is suitable for imaging sounds in the low-frequency range. The principle of imaging sound using parallel phase-shifting interferometry was reported by the authors (K. Ishikawa et al., Optics Express, 2016). The measurement system consists of a high-speed polarization camera made by Photron Ltd. and a polarization interferometer. This paper reviews the principle briefly and demonstrates the high-speed imaging of acoustical phenomena. The results suggest that the proposed system can be applied to various industrial problems in acoustical engineering.
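
    The linear pressure-to-phase relation underlying these optical methods can be sketched as follows. The value of dn/dp for air is an approximate assumption of this sketch, and a uniform pressure along the optical path is assumed:

```python
import math

def optical_phase_shift(pressure_pa, path_m, wavelength_m=532e-9,
                        dn_dp=2.1e-9):
    """Phase shift (rad) of light crossing a sound field of uniform
    pressure: delta_phi = (2*pi / lambda) * (dn/dp) * p * L.
    dn_dp is roughly 2.1e-9 per Pa for air at standard conditions
    (an assumption of this sketch)."""
    return (2 * math.pi / wavelength_m) * dn_dp * pressure_pa * path_m

# Linearity: doubling the sound pressure doubles the phase shift, which
# is why the optical phase can stand in for the sound pressure itself.
phi_1 = optical_phase_shift(1.0, 0.1)
phi_2 = optical_phase_shift(2.0, 0.1)
```
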

  6. Responses of auditory-cortex neurons to structural features of natural sounds.

    PubMed

    Nelken, I; Rotman, Y; Bar Yosef, O

    1999-01-14

    Sound-processing strategies that use the highly non-random structure of natural sounds may confer evolutionary advantage to many species. Auditory processing of natural sounds has been studied almost exclusively in the context of species-specific vocalizations, although these form only a small part of the acoustic biotope. To study the relationships between properties of natural soundscapes and neuronal processing mechanisms in the auditory system, we analysed sound from a range of different environments. Here we show that for many non-animal sounds and background mixtures of animal sounds, energy in different frequency bands is coherently modulated. Co-modulation of different frequency bands in background noise facilitates the detection of tones in noise by humans, a phenomenon known as co-modulation masking release (CMR). We show that co-modulation also improves the ability of auditory-cortex neurons to detect tones in noise, and we propose that this property of auditory neurons may underlie behavioural CMR. This correspondence may represent an adaptation of the auditory system for the use of an attribute of natural sounds to facilitate real-world processing tasks.
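
    The co-modulation measured here, energy in different frequency bands rising and falling together, can be probed with a simple envelope-correlation index. The band edges, smoothing window, and modulator below are arbitrary choices for illustration, not the paper's analysis parameters:

```python
import numpy as np

def band_envelope(x, sr, lo, hi, smooth_s=0.01):
    """Rectified-and-smoothed envelope of x band-passed to [lo, hi] Hz
    (band-passing by zeroing FFT bins, smoothing by moving average)."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    spec[(freqs < lo) | (freqs > hi)] = 0
    band = np.fft.irfft(spec, len(x))
    k = int(sr * smooth_s)
    return np.convolve(np.abs(band), np.ones(k) / k, mode='same')

def comodulation(x, sr):
    """Correlation between the envelopes of two disjoint bands: near 1
    for strongly co-modulated sound, near 0 for plain white noise."""
    e1 = band_envelope(x, sr, 500, 1000)
    e2 = band_envelope(x, sr, 2000, 3000)
    return np.corrcoef(e1, e2)[0, 1]

rng = np.random.default_rng(0)
sr, dur = 8000, 4.0
t = np.arange(int(sr * dur)) / sr
modulator = 1.0 + 0.9 * np.sin(2 * np.pi * 8 * t)   # slow common envelope
co_mod = comodulation(rng.standard_normal(len(t)) * modulator, sr)
plain = comodulation(rng.standard_normal(len(t)), sr)
```
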

  7. Improved Blackbody Temperature Sensors for a Vacuum Furnace

    NASA Technical Reports Server (NTRS)

    Farmer, Jeff; Coppens, Chris; O'Dell, J. Scott; McKechnie, Timothy N.; Schofield, Elizabeth

    2009-01-01

    Some improvements have been made in the design and fabrication of blackbody sensors (BBSs) used to measure the temperature of a heater core in a vacuum furnace. Each BBS consists of a ring of thermally conductive, high-melting-temperature material with two tantalum-sheathed thermocouples attached at diametrically opposite points. The name "blackbody sensor" reflects the basic principle of operation. Heat is transferred between the ring and the furnace heater core primarily by blackbody radiation, heat is conducted through the ring to the thermocouples, and the temperature of the ring (and, hence, the temperature of the heater core) is measured by use of the thermocouples. Two main requirements have guided the development of these BBSs: (1) The rings should have as high an emissivity as possible in order to maximize the heat-transfer rate and thereby maximize temperature-monitoring performance and (2) the thermocouples must be joined to the rings in such a way as to ensure long-term, reliable intimate thermal contact. The problem of fabricating a BBS to satisfy these requirements is complicated by an application-specific prohibition against overheating and thereby damaging nearby instrumentation leads through the use of conventional furnace brazing or any other technique that involves heating the entire BBS and its surroundings. The problem is further complicated by another application-specific prohibition against damaging the thin tantalum thermocouple sheaths through the use of conventional welding to join the thermocouples to the ring. The first BBS rings were made of graphite. The tantalum-sheathed thermocouples were attached to the graphite rings by use of high-temperature graphite cements. The ring/thermocouple bonds thus formed were found to be weak and unreliable, and so graphite rings and graphite cements were abandoned. Now, each BBS ring is made from one of two materials: either tantalum or a molybdenum/titanium/zirconium alloy. 
The tantalum-sheathed thermocouples are bonded to the ring by laser brazing. The primary advantage of laser brazing over furnace brazing is that in laser brazing, it is possible to form a brazed connection locally, without heating nearby parts to the flow temperature of the brazing material. Hence, it is possible to comply with the prohibition against overheating nearby instrumentation leads. Also, in laser brazing, unlike in furnace brazing, it is possible to exert control over the thermal energy to such a high degree that it becomes possible to braze the thermocouples to the ring without burning through the thin tantalum sheaths on the thermocouples. The brazing material used in the laser brazing process is a titanium-boron paste. This brazing material can withstand use at temperatures up to about 1,400 C. In thermal-cycling tests performed thus far, no debonding between the rings and thermocouples has been observed. Emissivity coatings about 0.001 in. (.0.025 mm) thick applied to the interior surfaces of the rings have been found to improve the performance of the BBS sensors by raising the apparent emissivities of the rings. In thermal-cycling tests, the coatings were found to adhere well to the rings.

  8. Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis

    NASA Astrophysics Data System (ADS)

    Büchler, Michael; Allegro, Silvia; Launer, Stefan; Dillier, Norbert

    2005-12-01

    A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features that are inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as Bayes classifier, neural network, and hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.
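
    A stripped-down version of such a classifier: two hand-picked features plus a minimum-distance rule. The real system's feature set and sound classes are far richer; the features and signals below are illustrative stand-ins only:

```python
import numpy as np

def features(signal, sr):
    """Two toy features in the spirit of the paper's set: amplitude-
    modulation depth (std/mean of the rectified signal) and spectral
    centroid (Hz)."""
    env = np.abs(signal)
    mod_depth = env.std() / (env.mean() + 1e-12)
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    centroid = float((freqs * spec).sum() / (spec.sum() + 1e-12))
    return np.array([mod_depth, centroid])

def classify(x, class_means):
    """Minimum-distance classifier: nearest class-mean feature vector."""
    return min(class_means, key=lambda c: np.linalg.norm(x - class_means[c]))

# Train on one example per class, then classify a fresh tone.
sr = 8000
t = np.arange(sr) / sr
rng = np.random.default_rng(1)
means = {"tone": features(np.sin(2 * np.pi * 200 * t), sr),
         "noise": features(rng.standard_normal(sr), sr)}
label = classify(features(np.sin(2 * np.pi * 220 * t), sr), means)
```

    More capable classifiers (Bayes, neural network, hidden Markov model) compared in the paper replace the distance rule, but the feature-then-decide structure is the same.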

  9. System and method to determine thermophysical properties of a multi-component gas

    DOEpatents

    Morrow, Thomas B.; Behring, II, Kendricks A.

    2003-08-05

    A system and method to characterize natural gas hydrocarbons using a single inferential property, such as standard sound speed, when the concentrations of the diluent gases (e.g., carbon dioxide and nitrogen) are known. The system to determine a thermophysical property of a gas having a first plurality of components comprises a sound velocity measurement device, a concentration measurement device, and a processor to determine a thermophysical property as a function of a correlation between the thermophysical property, the speed of sound, and the concentration measurements, wherein the number of concentration measurements is less than the number of components in the gas. The method includes the steps of determining the speed of sound in the gas, determining a plurality of gas component concentrations in the gas, and determining the thermophysical property as a function of a correlation between the thermophysical property, the speed of sound, and the plurality of concentrations.
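
    The patented correlation itself is not given in this record, but the inferential principle, that a measured sound speed constrains the bulk properties of the gas, can be illustrated with the ideal-gas relation (gamma and the example sound speed below are rough assumed values, not from the patent):

```python
R = 8.314  # J/(mol*K), universal gas constant

def molar_mass_from_sound_speed(c, gamma=1.30, temp_k=288.15):
    """For an ideal gas, c = sqrt(gamma * R * T / M), so a measured sound
    speed constrains the mean molar mass of the mixture:
    M = gamma * R * T / c**2. gamma = 1.30 is a rough value for
    methane-rich natural gas (an assumption of this sketch)."""
    return gamma * R * temp_k / c ** 2

# Near-pure methane at 15 C has a sound speed of roughly 446 m/s; the
# inferred mean molar mass should come out near methane's 16 g/mol.
M = molar_mass_from_sound_speed(446.0)   # kg/mol
```
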

  10. Auditory Cortical Processing in Real-World Listening: The Auditory System Going Real

    PubMed Central

    Bizley, Jennifer; Shamma, Shihab A.; Wang, Xiaoqin

    2014-01-01

    The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well. PMID:25392481

  11. SoundView: an auditory guidance system based on environment understanding for the visually impaired people.

    PubMed

    Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao

    2009-01-01

    Without visual information, the blind people live in various hardships with shopping, reading, finding objects and etc. Therefore, we developed a portable auditory guide system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit and an earphone, working with built-in customizable auditory coding algorithms. Employing environment understanding techniques, SoundView processes the images from a camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals for the blind though an earphone. The user would be able to recognize the type, motion state and location of the interested objects with the help of SoundView. Compared with other visual assistant techniques, SoundView is object-oriented and has the advantages of cheap cost, smaller size, light weight, low power consumption and easy customization.

  12. Auditory cortical processing in real-world listening: the auditory system going real.

    PubMed

    Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A; Wang, Xiaoqin

    2014-11-12

    The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well. Copyright © 2014 the authors.

  13. The influence of acoustic emissions for underwater data transmission on the behaviour of harbour porpoises (Phocoena phocoena) in a floating pen.

    PubMed

    Kastelein, R A; Verboom, W C; Muijsers, M; Jennings, N V; van der Heul, S

    2005-05-01

    To prevent grounding of ships and collisions between ships in shallow coastal waters, an underwater data collection and communication network is currently under development: Acoustic Communication network for Monitoring of underwater Environment in coastal areas (ACME). Marine mammals might be affected by ACME sounds since they use sounds of similar frequencies (around 12 kHz) for communication, orientation, and prey location. If marine mammals tend to avoid the vicinity of the transmitters, they may be kept away from ecologically important areas by ACME sounds. One marine mammal species that may be affected in the North Sea is the harbour porpoise. Therefore, as part of an environmental impact assessment program, two captive harbour porpoises were subjected to four sounds, three of which may be used in the underwater acoustic data communication network. The effect of each sound was judged by comparing the animals' positions and respiration rates during a test period with those during a baseline period. Each of the four sounds could be made a deterrent by increasing the amplitude of the sound. The porpoises reacted by swimming away from the sounds and by slightly, but significantly, increasing their respiration rate. From the sound pressure level distribution in the pen, and the distribution of the animals during test sessions, discomfort sound level thresholds were determined for each sound. In combination with information on sound propagation in the areas where the communication system may be deployed, the extent of the 'discomfort zone' can be estimated for several source levels (SLs). The discomfort zone is defined as the area around a sound source that harbour porpoises are expected to avoid. Based on these results, SLs can be selected that have an acceptable effect on harbour porpoises in particular areas. 
The discomfort zone of a communication sound depends on the selected sound, the selected SL, and the propagation characteristics of the area in which the sound system is operational. In shallow, winding coastal water courses, with sandbanks, etc., the type of habitat in which the ACME sounds will be produced, propagation loss cannot be accurately estimated by using a simple propagation model, but should be measured on site. The SL of the communication system should be adapted to each area (taking into account bounding conditions created by narrow channels, sound propagation variability due to environmental factors, and the importance of an area to the affected species). The discomfort zone should not prevent harbour porpoises from spending sufficient time in ecologically important areas (for instance feeding areas), or routes towards these areas.

  14. Verification of the Hydrodynamic and Sediment Transport Hybrid Modeling System for Cumberland Sound and Kings Bay Navigation Channel, Georgia

    DTIC Science & Technology

    1989-07-01

    Technical Report HL-89-14: Verification of the Hydrodynamic and Sediment Transport Hybrid Modeling System for Cumberland Sound and Kings Bay Navigation Channel, Georgia. Personal author: Granat... Hydrodynamic results from RMA-2V were used in the numerical sediment transport code STUDH in modeling the interaction of the flow transport and...

  15. Cutting sound enhancement system for mining machines

    DOEpatents

    Leigh, Michael C.; Kwitowski, August J.

    1992-01-01

    A cutting sound enhancement system (10) for transmitting an audible signal from the cutting head (101) of a piece of mine machinery (100) to an operator at a remote station (200), wherein the operator, using a headphone unit (14), can monitor the difference in sounds being made solely by the cutting head (101) to determine the location of the roof, floor, and walls of a coal seam (50).

  16. MO-FG-BRA-02: A Feasibility Study of Integrating Breathing Audio Signal with Surface Surrogates for Respiratory Motion Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, Y; Zhu, X; Zheng, D

    Purpose: Tracking a surrogate placed on the patient's skin surface sometimes yields problematic signals for certain patients, such as shallow breathers, which in turn impairs 4D CT image quality and dosimetric accuracy. In this pilot study, we explored the feasibility of monitoring human breathing motion by integrating a breathing sound signal with surface surrogates. Methods: The breathing sound signals were acquired through a microphone attached adjacent to the volunteer's nostrils, and the breathing curve was extracted using a low-pass filter. Simultaneously, the Real-time Position Management™ (RPM) system from Varian was employed on a volunteer to monitor respiratory motion in both shallow and deep breathing modes. A similar experiment was performed using the Calypso system, with three beacons taped on the volunteer's abdominal region to capture breathing motion. The period of each breathing curve was calculated with autocorrelation functions. The coherence and consistency between breathing signals from the different acquisition methods were examined. Results: Clear breathing patterns were revealed by the sound signal, which was coherent with the signals obtained from both the RPM system and the Calypso system. For shallow breathing, the periods of the breathing cycle were 3.00±0.19 s (sound) and 3.00±0.21 s (RPM); for deep breathing, the periods were 3.49±0.11 s (sound) and 3.49±0.12 s (RPM). Compared with the 4.54±0.66 s period recorded by the Calypso system, the sound measurement gave 4.64±0.54 s. The additional sound signal could supplement surface monitoring and provide new parameters to model hysteresis in lung motion. Conclusion: Our preliminary study shows that the breathing sound signal can provide a way comparable to the RPM system to evaluate respiratory motion. Its instantaneous and robust characteristics make it suitable for use either independently or as an auxiliary method to manage respiratory motion in radiotherapy.
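
    The period-from-autocorrelation step can be sketched as follows. The plausible search window for breathing periods is an assumption of this sketch, not a parameter reported in the abstract:

```python
import numpy as np

def breathing_period(signal, sr, min_period=1.0, max_period=10.0):
    """Dominant period (s) of a respiration trace: the lag of the highest
    autocorrelation peak inside a plausible window of breathing periods."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]
    lo, hi = int(min_period * sr), int(max_period * sr)
    return (lo + int(np.argmax(ac[lo:hi]))) / sr

# Synthetic trace with a 3 s cycle, matching the shallow-breathing
# periods (about 3.00 s) reported in the abstract.
sr = 50
t = np.arange(0, 30, 1 / sr)
period = breathing_period(np.sin(2 * np.pi * t / 3.0), sr)
```
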

  17. Optical and Acoustic Sensor-Based 3D Ball Motion Estimation for Ball Sport Simulators †.

    PubMed

    Seo, Sang-Woo; Kim, Myunggyu; Kim, Yejin

    2018-04-25

    Estimation of the motion of ball-shaped objects is essential for the operation of ball sport simulators. In this paper, we propose an estimation system for 3D ball motion, including the speed and angle of projection, using acoustic vector and infrared (IR) scanning sensors. Our system comprises three steps to estimate a ball motion: sound-based ball firing detection, sound source localization, and IR scanning for motion analysis. First, an impulsive-sound classification based on the mel-frequency cepstrum and a feed-forward neural network is introduced to detect the ball launch sound. An impulsive sound source localization using a 2D microelectromechanical systems (MEMS) microphone array and delay-and-sum beamforming is presented to estimate the firing position. The time and position of the ball in 3D space are determined from a high-speed infrared scanning method. Our experimental results demonstrate that estimation of ball motion based on sound allows a wider activity area than similar camera-based methods. Thus, it can be practically applied to various simulations in sports such as soccer and baseball.
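
    The localization step rests on delay-and-sum beamforming; a generic textbook version for a linear array is sketched below (the paper's 2D MEMS array and processing details differ, and all array dimensions here are made-up example values):

```python
import numpy as np

def delay_and_sum(signals, mic_x, angle, sr, c=343.0):
    """Far-field delay-and-sum for a linear array: undo each microphone's
    geometric delay for the steering angle, then average.

    signals: (n_mics, n_samples); mic_x: mic positions (m) on one axis;
    angle: steering angle (rad) from broadside."""
    out = np.zeros(signals.shape[1])
    for sig, x in zip(signals, mic_x):
        shift = int(round(x * np.sin(angle) / c * sr))
        out += np.roll(sig, -shift)
    return out / len(signals)

# Simulate a 1 kHz plane wave arriving from 0.5 rad on a 4-mic array,
# then pick the steering angle that maximizes output power.
sr, freq, true_angle = 48000, 1000.0, 0.5
mic_x = np.array([0.0, 0.05, 0.10, 0.15])
n = np.arange(sr)
sigs = np.stack([np.sin(2 * np.pi * freq * (n / sr - x * np.sin(true_angle) / 343.0))
                 for x in mic_x])
angles = np.linspace(-1.0, 1.0, 81)
powers = [np.mean(delay_and_sum(sigs, mic_x, a, sr) ** 2) for a in angles]
best = angles[int(np.argmax(powers))]
```

    Scanning the steering angle for the power maximum is the simplest way to turn the beamformer into a direction estimator, as in the firing-position step above.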

  18. Personal sound zone reproduction with room reflections

    NASA Astrophysics Data System (ADS)

    Olik, Marek

    Loudspeaker-based sound systems, capable of a convincing reproduction of different audio streams to listeners in the same acoustic enclosure, are a convenient alternative to headphones. Such systems aim to generate "sound zones" in which target sound programmes are to be reproduced with minimum interference from any alternative programmes. This can be achieved with appropriate filtering of the source (loudspeaker) signals, so that the target sound's energy is directed to the chosen zone while being attenuated elsewhere. The existing methods are unable to produce the required sound energy ratio (acoustic contrast) between the zones with a small number of sources when strong room reflections are present. Optimization of parameters is therefore required for systems with practical limitations to improve their performance in reflective acoustic environments. One important parameter is positioning of sources with respect to the zones and room boundaries. The first contribution of this thesis is a comparison of the key sound zoning methods implemented on compact and distributed geometrical source arrangements. The study presents previously unpublished detailed evaluation and ranking of such arrangements for systems with a limited number of sources in a reflective acoustic environment similar to a domestic room. Motivated by the requirement to investigate the relationship between source positioning and performance in detail, the central contribution of this thesis is a study on optimizing source arrangements when strong individual room reflections occur. Small sound zone systems are studied analytically and numerically to reveal relationships between the geometry of source arrays and performance in terms of acoustic contrast and array effort (related to system efficiency). Three novel source position optimization techniques are proposed to increase the contrast, and geometrical means of reducing the effort are determined. 
Contrary to previously published case studies, this work presents a systematic examination of the key problem of first order reflections and proposes general optimization techniques, thus forming an important contribution. The remaining contribution considers evaluation and comparison of the proposed techniques with two alternative approaches to sound zone generation under reflective conditions: acoustic contrast control (ACC) combined with anechoic source optimization and sound power minimization (SPM). The study provides a ranking of the examined approaches which could serve as a guideline for method selection for rooms with strong individual reflections.

  19. Development and modification of a Gaussian and non-Gaussian noise exposure system

    NASA Astrophysics Data System (ADS)

    Schlag, Adam W.

    Millions of people across the world currently have noise-induced hearing loss, and many work in conditions with both continuous Gaussian and non-Gaussian noise that could affect their hearing. It was hypothesized that the energy of the noise was the cause of hearing loss and did not depend on the temporal pattern of the noise; this was referred to as the equal energy hypothesis. This hypothesis has been shown to have limitations, however: the temporal character of the noise a person receives makes a difference to the induced hearing loss, so a system that can easily mimic various noise conditions is needed for research. This study builds a system that can produce both non-Gaussian impulse/impact noise and continuous Gaussian noise. The peak sound pressure level of the system could reach well above the 120 dB needed to represent acoustic trauma, and the system could produce levels well above the 85 dB A-weighted sound pressure level associated with gradually developing hearing loss; it reached a maximum peak sound pressure level of 150 dB and a maximum A-weighted sound pressure level of 133 dB. Various parameters can easily be adjusted to control the sound, such as the high and low cutoff frequencies used to center the sound at 4 kHz. The system can easily be adjusted to create numerous sound conditions and will be modified and improved, in hopes of eventual use in animal studies leading to a method to treat or prevent noise-induced hearing loss.
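    The distinction between the two noise classes above is commonly quantified by kurtosis: a Gaussian exposure has kurtosis near 3, while impulse/impact content drives it higher. The following is a minimal illustrative sketch, not the authors' hardware system: it synthesizes band-limited Gaussian noise (e.g., a band centered near 4 kHz) and superimposes random impulses to make it non-Gaussian. The function names and parameter values are our own assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt
from scipy.stats import kurtosis

def band_limited_gaussian(fs, duration, f_lo, f_hi, seed=0):
    """Continuous Gaussian noise band-passed between f_lo and f_hi (Hz)."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(int(fs * duration))
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, white)

def add_impulses(noise, fs, rate_hz, amplitude, seed=1):
    """Superimpose random +/- impacts to make the exposure non-Gaussian."""
    rng = np.random.default_rng(seed)
    out = noise.copy()
    n_impulses = int(rate_hz * len(noise) / fs)
    idx = rng.integers(0, len(noise), n_impulses)
    out[idx] += amplitude * rng.choice([-1.0, 1.0], n_impulses)
    return out
```

    With `kurtosis(x, fisher=False)` from scipy, the Gaussian condition stays near 3 and the impulsive condition rises well above it, which is one common way to characterize how far an exposure departs from the equal energy hypothesis's assumptions.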

  20. Hybrid mode-scattering/sound-absorbing segmented liner system and method

    NASA Technical Reports Server (NTRS)

    Walker, Bruce E. (Inventor); Hersh, Alan S. (Inventor); Rice, Edward J. (Inventor)

    1999-01-01

    A hybrid mode-scattering/sound-absorbing segmented liner system and method in which an initial sound field within a duct is steered or scattered into higher-order modes in a first mode-scattering segment such that it is more readily and effectively absorbed in a second sound-absorbing segment. The mode-scattering segment is preferably a series of active control components positioned along the annulus of the duct, each of which includes a controller and a resonator into which a piezoelectric transducer generates the steering noise. The sound-absorbing segment is positioned acoustically downstream of the mode-scattering segment, and preferably comprises a honeycomb-backed passive acoustic liner. The invention is particularly adapted for use in turbofan engines, both in the inlet and exhaust.

  1. Elementary Yoruba: Sound Drills and Greetings. Occasional Publication No. 18.

    ERIC Educational Resources Information Center

    Armstrong, Robert G.; Awujoola, Robert L.

    This introduction to elementary Yoruba is divided into two parts. The first section is on sound drills, and the second section concerns Yoruba greetings. The first part includes exercises to enable the student to master the Yoruba sound system. Emphasis is on pronunciation and recognition of the sounds and tones, but not memorization. A tape is…

  2. 49 CFR 325.71 - Scope of the rules in this subpart.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the sound level generated by a motor vehicle, as displayed on a sound level measurement system, during the measurement of the motor vehicle's sound level emissions at a test site which is not a standard site. (b) The purpose of adding or subtracting a correction factor is to equate the sound level reading...

  3. 49 CFR 325.71 - Scope of the rules in this subpart.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... the sound level generated by a motor vehicle, as displayed on a sound level measurement system, during the measurement of the motor vehicle's sound level emissions at a test site which is not a standard site. (b) The purpose of adding or subtracting a correction factor is to equate the sound level reading...

  4. Speech-sound duration processing in a second language is specific to phonetic categories.

    PubMed

    Nenonen, Sari; Shestakova, Anna; Huotilainen, Minna; Näätänen, Risto

    2005-01-01

    The mismatch negativity (MMN) component of the auditory event-related potential was used to determine the effect of native language, Russian, on the processing of speech-sound duration in a second language, Finnish, that uses duration as a cue for phonological distinction. The native-language effect was compared with Finnish vowels that either can or cannot be categorized using the Russian phonological system. The results showed that the duration-change MMN for the Finnish sounds that could be categorized through Russian was reduced in comparison with that for the Finnish sounds having no Russian equivalent. In the Finnish sounds that can be mapped through the Russian phonological system, the facilitation of the duration processing may be inhibited by the native Russian language. However, for the sounds that have no Russian equivalent, new vowel categories independent of the native Russian language have apparently been established, enabling a native-like duration processing of Finnish.

  5. A System for Heart Sounds Classification

    PubMed Central

    Redlarski, Grzegorz; Gradolewski, Dawid; Palkowski, Aleksander

    2014-01-01

    The future of quick and efficient disease diagnosis lies in the development of reliable non-invasive methods. For cardiac diseases – one of the major causes of death around the globe – an electronic stethoscope equipped with an automatic heart tone identification system appears to be the best solution. Thanks to advances in technology, the quality of phonocardiography signals is no longer an issue. However, appropriate algorithms for auto-diagnosis of heart disease, capable of distinguishing most known pathological states, have not yet been developed. The main issues are the non-stationary character of phonocardiography signals and the wide range of distinguishable pathological heart sounds. In this paper a new heart sound classification technique, which might find use in medical diagnostic systems, is presented. It is shown that by combining Linear Predictive Coding coefficients, used for feature extraction, with a classifier built upon a Support Vector Machine and the Modified Cuckoo Search algorithm, an improvement in the performance of the diagnostic system, in terms of accuracy, complexity, and range of distinguishable heart sounds, can be achieved. The developed system achieved accuracy above 93% for all considered cases, including simultaneous identification of twelve different heart sound classes. The system is compared with four major classification methods, demonstrating its reliability. PMID:25393113
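    The abstract names Linear Predictive Coding (LPC) coefficients as the features. The paper's exact pipeline (windowing, model order, SVM and Modified Cuckoo Search settings) is not given here, but the LPC step itself can be sketched with the standard autocorrelation method and Levinson-Durbin recursion. The function name and test signal below are illustrative only, not taken from the paper.

```python
import numpy as np

def lpc_coefficients(frame, order):
    """LPC via the autocorrelation method and Levinson-Durbin recursion.
    Returns [1, a1, ..., a_order] such that frame[n] + a1*frame[n-1] + ...
    is approximately white prediction error."""
    n = len(frame)
    r = np.array([frame[:n - k] @ frame[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + a[1:i] @ r[i - 1:0:-1]  # model's prediction of r[i]
        k = -acc / err                        # reflection coefficient
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]   # update lower-order coefficients
        a[i] = k
        err *= (1.0 - k * k)                  # shrink the residual error
    return a
```

    In a pipeline like the one described, each windowed phonocardiogram frame would yield such a coefficient vector (dropping the leading 1) as the feature fed to the classifier.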

  6. Observing system simulations using synthetic radiances and atmospheric retrievals derived for the AMSU and HIRS in a mesoscale model. [Advanced Microwave Sounding Unit

    NASA Technical Reports Server (NTRS)

    Diak, George R.; Huang, Hung-Lung; Kim, Dongsoo

    1990-01-01

    The paper addresses the concept of synthetic satellite imagery as a visualization and diagnostic tool for understanding future satellite sensors and details preliminary results on the quality of soundings from current sensors, specifically the combination of the High-Resolution Infrared Radiometer Sounder and the Advanced Microwave Sounding Unit. Results are also presented on the first Observing System Simulation Experiment using these data in a mesoscale numerical prediction model.

  7. Sound-field reproduction systems using fixed-directivity loudspeakers.

    PubMed

    Poletti, M; Fazi, F M; Nelson, P A

    2010-06-01

    Sound reproduction systems using open arrays of loudspeakers in rooms suffer from degradations due to room reflections. These reflections can be reduced by pre-compensation of the loudspeaker signals, but this requires calibration of the array in the room and is processor-intensive. This paper examines 3D sound reproduction systems using spherical arrays of fixed-directivity loudspeakers, which reduce the sound field radiated outside the array. A generalized form of the simple source formulation and a mode-matching solution are derived for the required loudspeaker weights. The exterior field is derived, together with expressions for the exterior power and the direct-to-reverberant ratio. The theoretical results and simulations confirm that minimum interference occurs for loudspeakers with hyper-cardioid polar responses.

  8. Hologlyphics: volumetric image synthesis performance system

    NASA Astrophysics Data System (ADS)

    Funk, Walter

    2008-02-01

    This paper describes a novel volumetric image synthesis system and artistic technique, which generate moving volumetric images in real-time, integrated with music. The system, called the Hologlyphic Funkalizer, is performance based, wherein the images and sound are controlled by a live performer, for the purposes of entertaining a live audience and creating a performance art form unique to volumetric and autostereoscopic images. While currently configured for a specific parallax barrier display, the Hologlyphic Funkalizer's architecture is completely adaptable to various volumetric and autostereoscopic display technologies. Sound is distributed through a multi-channel audio system; currently a quadraphonic speaker setup is implemented. The system controls volumetric image synthesis, production of music and spatial sound via acoustic analysis and human gestural control, using a dedicated control panel, motion sensors, and multiple musical keyboards. Music can be produced by external acoustic instruments, pre-recorded sounds or custom audio synthesis integrated with the volumetric image synthesis. Aspects of the sound can control the evolution of images and vice versa. Sounds can be associated with and interact with images; for example, voice synthesis can be combined with an animated volumetric mouth, where nuances of generated speech modulate the mouth's expressiveness. Different images can be sent to up to 4 separate displays. The system applies many novel volumetric special effects, and extends several film and video special effects into the volumetric realm. Extensive and various content has been developed and shown to live audiences by a live performer. Real-world applications will be explored, with feedback on the human factors.

  9. Acoustic signatures of sound source-tract coupling.

    PubMed

    Arneodo, Ezequiel M; Perl, Yonatan Sanz; Mindlin, Gabriel B

    2011-04-01

    Birdsong is a complex behavior, which results from the interaction between a nervous system and a biomechanical peripheral device. While much has been learned about how complex sounds are generated in the vocal organ, little has been learned about the signature on the vocalizations of the nonlinear effects introduced by the acoustic interactions between a sound source and the vocal tract. The variety of morphologies among bird species makes birdsong a most suitable model to study phenomena associated with the production of complex vocalizations. Inspired by the sound production mechanisms of songbirds, in this work we study a mathematical model of a vocal organ, in which a simple sound source interacts with a tract, leading to a delay differential equation. We explore the system numerically, and by taking it to the weakly nonlinear limit, we are able to examine its periodic solutions analytically. By these means we are able to explore the dynamics of oscillatory solutions of a sound source-tract coupled system, which are qualitatively different from those of a sound source-filter model of a vocal organ. Nonlinear features of the solutions are proposed as the underlying mechanisms of observed phenomena in birdsong, such as unilaterally produced "frequency jumps," enhancement of resonances, and the shift of the fundamental frequency observed in heliox experiments. ©2011 American Physical Society

  10. Acoustic signatures of sound source-tract coupling

    PubMed Central

    Arneodo, Ezequiel M.; Perl, Yonatan Sanz; Mindlin, Gabriel B.

    2014-01-01

    Birdsong is a complex behavior, which results from the interaction between a nervous system and a biomechanical peripheral device. While much has been learned about how complex sounds are generated in the vocal organ, little has been learned about the signature on the vocalizations of the nonlinear effects introduced by the acoustic interactions between a sound source and the vocal tract. The variety of morphologies among bird species makes birdsong a most suitable model to study phenomena associated with the production of complex vocalizations. Inspired by the sound production mechanisms of songbirds, in this work we study a mathematical model of a vocal organ, in which a simple sound source interacts with a tract, leading to a delay differential equation. We explore the system numerically, and by taking it to the weakly nonlinear limit, we are able to examine its periodic solutions analytically. By these means we are able to explore the dynamics of oscillatory solutions of a sound source-tract coupled system, which are qualitatively different from those of a sound source-filter model of a vocal organ. Nonlinear features of the solutions are proposed as the underlying mechanisms of observed phenomena in birdsong, such as unilaterally produced “frequency jumps,” enhancement of resonances, and the shift of the fundamental frequency observed in heliox experiments. PMID:21599213

  11. Stage separation study of Nike-Black Brant V Sounding Rocket System

    NASA Technical Reports Server (NTRS)

    Ferragut, N. J.

    1976-01-01

    A new Sounding Rocket System has been developed. It consists of a Nike Booster and a Black Brant V Sustainer with slanted fins that extend beyond its nozzle exit plane. A cursory look was taken at the factors that must be considered when studying a passive separation system, that is, one with no mechanical constraint in the axial direction, which allows separation through the drag-differential acceleration between the Booster and the Sustainer. The equations of motion were derived for rigid-body motions, and exact solutions were obtained. The analysis developed could be applied to any other staging problem of a Sounding Rocket System.

  12. Design and Implementation of Sound Searching Robots in Wireless Sensor Networks

    PubMed Central

    Han, Lianfu; Shen, Zhengguang; Fu, Changfeng; Liu, Chao

    2016-01-01

    A sound target-searching robot system is described that includes a 4-channel microphone array for sound collection, a magneto-resistive sensor for declination measurement, and a wireless sensor network (WSN) for exchanging information. It has an embedded sound signal enhancement, recognition, and location method, and a sound searching strategy based on a digital signal processor (DSP). As the wireless network nodes, three robots form the WSN with a personal computer (PC) in order to search for three different sound targets in task-oriented collaboration. An improved spectral subtraction method is used for noise reduction. The Mel-frequency cepstral coefficient (MFCC) is extracted as the audio-signal feature. Based on the K-nearest neighbor classification method, the trained feature template is matched to recognize the sound signal type. This paper utilizes an improved generalized cross correlation method to estimate the time delay of arrival (TDOA), and then employs spherical interpolation for sound location according to the TDOA and the geometrical position of the microphone array. A new mapping has been proposed to direct the motors to search for sound targets flexibly. As the sink node, the PC receives and displays the results processed in the WSN, and it also makes the final decision on the received results in order to improve their accuracy. The experimental results show that the designed three-robot system implements the sound-target-searching function without collisions and performs well. PMID:27657088

  13. Design and Implementation of Sound Searching Robots in Wireless Sensor Networks.

    PubMed

    Han, Lianfu; Shen, Zhengguang; Fu, Changfeng; Liu, Chao

    2016-09-21

    A sound target-searching robot system is described that includes a 4-channel microphone array for sound collection, a magneto-resistive sensor for declination measurement, and a wireless sensor network (WSN) for exchanging information. It has an embedded sound signal enhancement, recognition, and location method, and a sound searching strategy based on a digital signal processor (DSP). As the wireless network nodes, three robots form the WSN with a personal computer (PC) in order to search for three different sound targets in task-oriented collaboration. An improved spectral subtraction method is used for noise reduction. The Mel-frequency cepstral coefficient (MFCC) is extracted as the audio-signal feature. Based on the K-nearest neighbor classification method, the trained feature template is matched to recognize the sound signal type. This paper utilizes an improved generalized cross correlation method to estimate the time delay of arrival (TDOA), and then employs spherical interpolation for sound location according to the TDOA and the geometrical position of the microphone array. A new mapping has been proposed to direct the motors to search for sound targets flexibly. As the sink node, the PC receives and displays the results processed in the WSN, and it also makes the final decision on the received results in order to improve their accuracy. The experimental results show that the designed three-robot system implements the sound-target-searching function without collisions and performs well.
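    The "improved generalized cross correlation" used for TDOA above is not specified in the abstract; the widely used PHAT-weighted variant (GCC-PHAT) illustrates the step and is sketched below. The function name and parameters are our own, not the authors' implementation.

```python
import numpy as np

def gcc_phat_delay(x, y, fs):
    """Estimate the delay of y relative to x (seconds) with GCC-PHAT.
    The phase transform whitens the cross-spectrum so the correlation
    peak stays sharp under reverberation."""
    n = len(x) + len(y)                      # zero-pad to avoid circular wrap
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    R = Y * np.conj(X)
    R /= np.abs(R) + 1e-15                   # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs
```

    Given several microphone pairs, the resulting TDOAs together with the array geometry feed a spherical-interpolation location step like the one the abstract describes.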

  14. Feasibility of an electronic stethoscope system for monitoring neonatal bowel sounds.

    PubMed

    Dumas, Jasmine; Hill, Krista M; Adrezin, Ronald S; Alba, Jorge; Curry, Raquel; Campagna, Eric; Fernandes, Cecilia; Lamba, Vineet; Eisenfeld, Leonard

    2013-09-01

    Bowel dysfunction remains a major problem in neonates. Traditional auscultation of bowel sounds as a diagnostic aid in neonatal gastrointestinal complications is limited by the examiner's skill and by the inability to document and reassess. Consequently, we built a unique prototype to investigate the feasibility of an electronic monitoring system for continuous assessment of bowel sounds. Institutional Review Board approval was obtained for the investigational study to test our system. The system incorporated a prototype stethoscope head with a built-in microphone connected to a digital recorder. Recordings made over extended periods were evaluated for quality. We also considered the acoustic environment of the hospital where the stethoscope was used. The stethoscope head was attached to the abdomen with a hydrogel patch designed especially for this purpose. We used the system to obtain recordings from eight healthy, full-term babies. A scoring system was used to determine loudness, clarity, and ease of recognition in comparison with a traditional stethoscope. The recording duration was initially two hours and was increased to a maximum of eight hours. Median duration of attachment was three hours (3.75, 2.68). Based on the scoring, the bowel sound recording was perceived to be as loud and clear in sound reproduction as a traditional stethoscope. We determined that room noise and other noises were significant forms of interference in the recordings, which at times prevented analysis. However, no drift in sound quality was noted in the recordings, and no patient discomfort was observed. Minimal erythema was observed over the fixation site, which subsided within one hour. We demonstrated long-term recording of infant bowel sounds. Our contributions include a prototype stethoscope head affixed with a specially designed hydrogel adhesive patch. Such a recording can be reviewed and reassessed, a new capability and an improvement over current practice. The use of this system should also, in principle, reduce the risk of infection. Based on our research, we concluded that while automatic assessment of bowel sounds is feasible over an extended period, there will be times when analysis is not possible; one limitation is noise interference. Our larger goals include producing a meaningful vital sign characterizing bowel sounds in real time, as well as providing automatic control of patient feeding pumps.

  15. Why Do People Like Loud Sound? A Qualitative Study.

    PubMed

    Welch, David; Fremaux, Guy

    2017-08-11

    Many people choose to expose themselves to potentially dangerous sounds such as loud music, either via speakers, personal audio systems, or at clubs. The Conditioning, Adaptation and Acculturation to Loud Music (CAALM) Model has proposed a theoretical basis for this behaviour. To compare the model to data, we interviewed a group of people who were either regular nightclub-goers or who controlled the sound levels in nightclubs (bar managers, musicians, DJs, and sound engineers) about loud sound. Results showed four main themes relating to the enjoyment of loud sound: arousal/excitement, facilitation of socialisation, masking of both external sound and unwanted thoughts, and an emphasis and enhancement of personal identity. Furthermore, an interesting incidental finding was that sound levels appeared to increase gradually over the course of the evening until they plateaued at approximately 97 dBA Leq around midnight. Consideration of the data generated by the analysis revealed a complex of influential factors that support people in wanting exposure to loud sound. Findings were considered in terms of the CAALM Model and could be explained in terms of its principles. From a health promotion perspective, the Social Ecological Model was applied to consider how the themes identified might influence behaviour. They were shown to influence people on multiple levels, providing a powerful system which health promotion approaches struggle to address.

  16. Sound source localization and segregation with internally coupled ears: the treefrog model

    PubMed Central

    Christensen-Dalsgaard, Jakob

    2016-01-01

    Acoustic signaling plays key roles in mediating many of the reproductive and social behaviors of anurans (frogs and toads). Moreover, acoustic signaling often occurs at night, in structurally complex habitats, such as densely vegetated ponds, and in dense breeding choruses characterized by high levels of background noise and acoustic clutter. Fundamental to anuran behavior is the ability of the auditory system to determine accurately the location from where sounds originate in space (sound source localization) and to assign specific sounds in the complex acoustic milieu of a chorus to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating directions for future research on these animals that will require the collaborative efforts of biologists, physicists, and roboticists. PMID:27730384

  17. 49 CFR 325.57 - Location and operation of sound level measurement systems; stationary test.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the vehicle at an angle that is consistent with the recommendation of the system's manufacturer. If... systems; stationary test. 325.57 Section 325.57 Transportation Other Regulations Relating to...; Stationary Test § 325.57 Location and operation of sound level measurement systems; stationary test. (a) The...

  18. 49 CFR 325.37 - Location and operation of sound level measurement system; highway operations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... recommendation of the manufacturer of the sound level measurement system. (2) In no case shall the holder or... angle that is consistent with the recommendation of the system's manufacturer. If the manufacturer of... system; highway operations. 325.37 Section 325.37 Transportation Other Regulations Relating to...

  19. Embedded System Implementation of Sound Localization in Proximal Region

    NASA Astrophysics Data System (ADS)

    Iwanaga, Nobuyuki; Matsumura, Tomoya; Yoshida, Akihiro; Kobayashi, Wataru; Onoye, Takao

    A sound localization method for the proximal region is proposed, based on a low-cost 3D sound localization algorithm using head-related transfer functions (HRTFs). The auditory parallax model is applied to the current algorithm so that more accurate HRTFs can be used for sound localization in the proximal region. In addition, head-shadowing effects based on a rigid-sphere model are reproduced in the proximal region by means of a second-order IIR filter. A subjective listening test demonstrates the effectiveness of the proposed method. An embedded-system implementation of the proposed method is also described, showing that it improves sound effects in the proximal region with only a 5.1% increase in memory capacity and an 8.3% increase in computational cost.
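    The paper's second-order IIR head-shadow coefficients are not reproduced in the abstract. As an illustrative companion to the rigid-sphere model it mentions, the sketch below computes the interaural time difference from Woodworth's classical rigid-sphere formula and applies it as an integer-sample delay. The default head radius and the function names are assumptions, not values from the paper.

```python
import numpy as np

def woodworth_itd(azimuth_rad, head_radius=0.0875, c=343.0):
    """Interaural time difference (seconds) for a rigid spherical head,
    Woodworth's distant-source approximation: (a/c) * (theta + sin theta)."""
    return (head_radius / c) * (azimuth_rad + np.sin(azimuth_rad))

def apply_itd(mono, fs, azimuth_rad):
    """Render mono to (left, right) by delaying the far ear by the
    integer-sample ITD; positive azimuth places the source to the right."""
    d = int(round(abs(woodworth_itd(azimuth_rad)) * fs))
    delayed = np.concatenate((np.zeros(d), mono[:len(mono) - d]))
    return (delayed, mono) if azimuth_rad >= 0 else (mono, delayed)
```

    A proximal-region system such as the one described would additionally vary level and spectral shaping with source distance; this fragment covers only the delay cue.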

  20. Sound-induced Interfacial Dynamics in a Microfluidic Two-phase Flow

    NASA Astrophysics Data System (ADS)

    Mak, Sze Yi; Shum, Ho Cheung

    2014-11-01

    Retrieving sound waves by fluidic means is challenging due to the difficulty of visualizing the minute sound-induced fluid motion. This work studies the interfacial response of multiphase systems to fluctuations in the flow. We demonstrate a direct visualization of music in the form of ripples at a microfluidic aqueous-aqueous interface with an ultra-low interfacial tension. The interface shows a passive response to sound of different frequencies with sufficiently precise time resolution, enabling the recording of musical notes and even their subsequent reconstruction with high fidelity. This suggests that sensing and transmitting vibrations as tiny as those induced by sound could be realized in low-interfacial-tension systems. The robust control of the interfacial dynamics could be adopted for droplet and complex-fiber generation.

  1. Sound localization and auditory response capabilities in round goby (Neogobius melanostomus)

    NASA Astrophysics Data System (ADS)

    Rollo, Audrey K.; Higgs, Dennis M.

    2005-04-01

    A fundamental role of vertebrate auditory systems is determining the direction of a sound source. While fish show directional responses to sound, sound localization in fishes remains in dispute. The species used in the current study, Neogobius melanostomus (round goby), uses sound in reproductive contexts, with both male and female gobies showing directed movement towards a calling male. A two-choice laboratory experiment (active versus quiet speaker) was used to analyze the behavior of gobies in response to sound stimuli. When conspecific male spawning sounds were played, gobies moved in a direct path to the active speaker, suggesting true localization of sound. Of the animals that responded to conspecific sounds, 85% of the females and 66% of the males moved directly to the sound source. Auditory playback of natural and synthetic sounds showed differential behavioral specificity: of the gobies that responded, 89% were attracted to the speaker playing Padogobius martensii sounds, 87% to a 100 Hz tone, 62% to white noise, and 56% to Gobius niger sounds. Swimming speed, as well as mean path angle to the speaker, will also be presented. Results suggest strong localization by the round goby to a sound source, with some differential sound specificity.

  2. Contaminant Concentrations in Storm Water Entering the Sinclair/Dyes Inlet Subasin of the Puget Sound, USA During Storm Event and Baseflow Conditions

    DTIC Science & Technology

    2007-03-01

    Contaminant Concentrations in Storm Water Entering the Sinclair/Dyes Inlet Subasin of the Puget Sound, USA During Storm Event and Baseflow Conditions. …Johnston (Space and Naval Warfare Systems Center, Bremerton, WA, USA), Dwight E. Leisle, Bruce Beckwith, and Gerald Sherrell (Puget Sound Naval Shipyard). The Sinclair and Dyes Inlet watershed is located on the west side of Puget Sound in Kitsap County, Washington, U.S.A. (Figure 1). Puget Sound Naval

  3. Improvement of impact noise in a passenger car utilizing sound metric based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Lee, Sang-Kwon; Kim, Ho-Wuk; Na, Eun-Woo

    2010-08-01

    A new sound metric for impact sound is developed based on the continuous wavelet transform (CWT), a useful tool for the analysis of non-stationary signals such as impact noise. Together with the new metric, two other conventional sound metrics, related to sound modulation and fluctuation, are also considered. In all, three sound metrics are employed to develop impact sound quality indexes for several specific impact courses on the road. Impact sounds are evaluated subjectively by 25 jurors. The indexes are verified by comparing the correlation between the index output and the results of a subjective evaluation based on a jury test. These indexes are successfully applied to an objective evaluation for improvement of the impact sound quality in cases where some parts of the test car's suspension system are modified.
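    The CWT underlying such an impact-sound metric can be sketched as follows. This is not the paper's metric itself, only an illustrative Morlet scalogram from which impact-related time-frequency energy could be read; the wavelet parameter w0 and the function name are our own choices.

```python
import numpy as np

def morlet_scalogram(x, fs, freqs, w0=6.0):
    """Magnitude CWT of x with a Morlet wavelet, one row per analysis
    frequency, computed as FFT-based correlation with the wavelet."""
    n = len(x)
    t = (np.arange(n) - n // 2) / fs           # wavelet centred in the frame
    X = np.fft.fft(x)
    rows = []
    for f in freqs:
        s = w0 / (2.0 * np.pi * f)             # scale giving centre frequency f
        psi = np.exp(1j * w0 * t / s) * np.exp(-t**2 / (2.0 * s**2)) / np.sqrt(s)
        rows.append(np.abs(np.fft.ifft(X * np.conj(np.fft.fft(psi)))))
    return np.array(rows)
```

    Because the Morlet wavelet is well localized in both time and frequency, a short impact event shows up as a compact burst in the scalogram, which is what makes a CWT-based metric suited to non-stationary impact noise.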

  4. Effects of input processing and type of personal frequency modulation system on speech-recognition performance of adults with cochlear implants.

    PubMed

    Wolfe, Jace; Schafer, Erin; Parkinson, Aaron; John, Andrew; Hudson, Mary; Wheeler, Julie; Mucci, Angie

    2013-01-01

    The objective of this study was to compare speech recognition in quiet and in noise for cochlear implant recipients using two different types of personal frequency modulation (FM) systems (directly coupled [direct auditory input] versus induction neckloop) with each of two sound processors (Cochlear Nucleus Freedom versus Cochlear Nucleus 5). Two different experiments were conducted within this study. In both these experiments, mixing of the FM signal within the Freedom processor was implemented via the same scheme used clinically for the Freedom sound processor. In Experiment 1, the aforementioned comparisons were conducted with the Nucleus 5 programmed so that the microphone and FM signals were mixed and then the mixed signals were subjected to autosensitivity control (ASC). In Experiment 2, comparisons between the two FM systems and processors were conducted again with the Nucleus 5 programmed to provide a more complex multistage implementation of ASC during the preprocessing stage. This study was a within-subject, repeated-measures design. Subjects were recruited from the patient population at the Hearts for Hearing Foundation in Oklahoma City, OK. Fifteen subjects participated in Experiment 1, and 16 subjects participated in Experiment 2. Subjects were adults who had used either unilateral or bilateral cochlear implants for at least 1 year. In this experiment, no differences were found in speech recognition in quiet obtained with the two different FM systems or the various sound-processor conditions. With each sound processor, speech recognition in noise was better with the directly coupled direct auditory input system relative to the neckloop system. The multistage ASC processing of the Nucleus 5 sound processor provided better performance than the single-stage approach for the Nucleus 5 and the Nucleus Freedom sound processor. 
Speech recognition in noise is substantially affected by the type of sound processor, FM system, and implementation of ASC used by a cochlear implant recipient.

  5. Structure-based modeling of head-related transfer functions towards interactive customization of binaural sound systems

    NASA Astrophysics Data System (ADS)

    Gupta, Navarun

    2003-10-01

One of the most popular techniques for creating spatialized virtual sounds is based on the use of Head-Related Transfer Functions (HRTFs). HRTFs are signal processing models that represent the modifications undergone by the acoustic signal as it travels from a sound source to each of the listener's eardrums. These modifications are due to the interaction of the acoustic waves with the listener's torso, shoulders, head and pinnae, or outer ears. As such, HRTFs are somewhat different for each listener. For a listener to perceive synthesized 3-D sound cues correctly, the synthesized cues must be similar to the listener's own HRTFs. One can measure individual HRTFs using specialized recording systems; however, these systems are prohibitively expensive and restrict the portability of the 3-D sound system. HRTF-based systems also face several computational challenges. This dissertation presents an alternative method for the synthesis of binaural spatialized sounds. The sound entering the pinna undergoes several reflective, diffractive and resonant phenomena, which determine the HRTF. Using signal processing tools, such as Prony's signal modeling method, an appropriate set of time delays and a resonant frequency were used to approximate the measured Head-Related Impulse Responses (HRIRs). Statistical analysis was used to derive empirical equations describing how the reflections and resonances are determined by the shape and size of the pinna features, obtained from 3D images of the 15 experimental subjects modeled in the project. These equations were used to yield "Model HRTFs" that can create elevation effects. Listening tests conducted on 10 subjects show that these model HRTFs are 5% more effective than generic HRTFs when it comes to localizing sounds in the frontal plane. 
The number of reversals (perception of a sound source above the horizontal plane when it is actually below the plane, and vice versa) was also reduced by 5.7%, showing the perceptual effectiveness of this approach. The model is simple, yet versatile, because it relies on easy-to-measure parameters to create an individualized HRTF. This low-order parameterized model also reduces the computational and storage demands, while maintaining a sufficient number of perceptually relevant spectral cues.

  6. Estimating occupant satisfaction of HVAC system noise using quality assessment index.

    PubMed

    Forouharmajd, Farhad; Nassiri, Parvin; Monazzam, Mohammad R; Yazdchi, Mohammadreza

    2012-01-01

Noise may be defined as any unwanted sound. Sound becomes noise when it is too loud, unexpected, uncontrolled, happens at the wrong time, contains unwanted pure tones, or is unpleasant. In addition to being annoying, loud noise can cause hearing loss and, depending on other factors, can affect stress level, sleep patterns and heart rate. The primary object in determining subjective estimations of loudness is to present sounds to a sample of listeners under controlled conditions. In heating, ventilation and air conditioning (HVAC) systems, only the ventilation fan industry (e.g., bathroom exhaust and sidewall propeller fans) uses loudness ratings. Estimating occupant satisfaction with noise exposure is therefore valuable for personnel working in these areas. The room criterion (RC) method has been defined by ANSI standard S12.2; it is based on measured levels of HVAC system noise in spaces and is used primarily as a diagnostic tool. The RC method consists of a family of criteria curves and a rating procedure. RC measures background noise in the building over the frequency range of 16-4000 Hz. This rating system requires determination of the mid-frequency average level and of the perceived balance between high-frequency (HF) and low-frequency (LF) sound. The arithmetic average of the sound levels in the 500, 1000 and 2000 Hz octave bands is 44.6 dB; therefore, the RC 45 curve is selected as the reference for spectrum quality evaluation. The spectral deviation factors in the LF, medium-frequency and HF regions are 2.9, 7.5 and -2.3, respectively, giving a Quality Assessment Index (QAI) of 9.8. It is concluded that the QAI is useful in estimating an occupant's probable reaction when the system design does not produce optimum sound quality. Thus, a QAI between 5 and 10 dB represents a marginal situation in which acceptance by an occupant is questionable. 
However, when sound pressure levels in the 16 or 31.5 Hz octave bands exceed 65 dB, vibration in lightweight office construction is possible.
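The QAI arithmetic quoted in the abstract above can be checked directly: the index is the spread between the largest and smallest of the spectral deviation factors. A minimal sketch in Python (the function name is illustrative, not from the standard):

```python
# Sketch of the Quality Assessment Index (QAI) calculation described above,
# assuming QAI is the range (max - min) of the LF/MF/HF spectral deviation
# factors per ANSI S12.2. Function name is a hypothetical illustration.

def quality_assessment_index(deviation_factors):
    """QAI = spread between the largest and smallest deviation factors."""
    return max(deviation_factors) - min(deviation_factors)

# Deviation factors quoted in the abstract: LF = 2.9, MF = 7.5, HF = -2.3
qai = quality_assessment_index([2.9, 7.5, -2.3])
print(round(qai, 1))  # → 9.8, matching the value reported above
```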

  7. Development of multichannel analyzer using sound card ADC for nuclear spectroscopy system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Maslina Mohd; Yussup, Nolida; Lombigit, Lojius

This paper describes the development of a Multi-Channel Analyzer (MCA) using a sound card analogue-to-digital converter (ADC) for a nuclear spectroscopy system. The system is divided into a hardware module and a software module. The hardware module consists of a 2″ × 2″ NaI(Tl) detector, a Pulse Shaping Amplifier (PSA), and the built-in ADC chip readily available in any computer's sound system. The software module is divided into two parts: pre-processing of the raw digital input and the development of the MCA software. A band-pass filter and baseline stabilization and correction were implemented for the pre-processing. For the MCA development, the pulse height analysis method was used to process the signal before displaying it using a histogram technique. The development and test results of using the sound card as an MCA are discussed.
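The pulse-height analysis and histogram step described in this record can be sketched as follows, assuming digitized pulses arrive as arrays of ADC samples; the function name, channel count, and full-scale value are hypothetical illustrations, not taken from the paper:

```python
# Minimal sketch of MCA pulse-height analysis: bin each pulse's peak
# amplitude into a channel histogram. Parameters are assumed, not the
# paper's actual values.

def pulse_height_spectrum(pulses, n_channels=1024, full_scale=1.0):
    """Accumulate a pulse-height spectrum from shaped detector pulses."""
    spectrum = [0] * n_channels
    for pulse in pulses:
        height = max(pulse)  # peak amplitude after shaping/baseline correction
        channel = min(int(height / full_scale * n_channels), n_channels - 1)
        spectrum[channel] += 1
    return spectrum

# Two toy pulses with peaks at 0.5 and 0.25 of full scale, 8 channels
spec = pulse_height_spectrum([[0.0, 0.2, 0.5, 0.3], [0.0, 0.1, 0.25, 0.1]],
                             n_channels=8)
print(spec)  # → [0, 0, 1, 0, 1, 0, 0, 0]
```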

  8. A Generalized Mechanism for Perception of Pitch Patterns

    PubMed Central

    Loui, Psyche; Wu, Elaine H.; Wessel, David L.; Knight, Robert T.

    2009-01-01

    Surviving in a complex and changeable environment relies upon the ability to extract probable recurring patterns. Here we report a neurophysiological mechanism for rapid probabilistic learning of a new system of music. Participants listened to different combinations of tones from a previously-unheard system of pitches based on the Bohlen-Pierce scale, with chord progressions that form 3:1 ratios in frequency, notably different from 2:1 frequency ratios in existing musical systems. Event-related brain potentials elicited by improbable sounds in the new music system showed emergence over a one-hour period of physiological signatures known to index sound expectation in standard Western music. These indices of expectation learning were eliminated when sound patterns were played equiprobably, and co-varied with individual behavioral differences in learning. These results demonstrate that humans utilize a generalized probability-based perceptual learning mechanism to process novel sound patterns in music. PMID:19144845

  9. Microscopic theory of longitudinal sound velocity in charge ordered manganites.

    PubMed

    Rout, G C; Panda, S

    2009-10-14

A microscopic theory of longitudinal sound velocity in a manganite system is reported here. The manganite system is described by a model Hamiltonian consisting of a charge density wave (CDW) interaction in the e(g) band, an exchange interaction between the spins of the itinerant e(g) band electrons and the core t(2g) electrons, and the Heisenberg interaction of the core level spins. The magnetization and the CDW order parameters are treated within mean-field approximations. The phonon Green's function was calculated by Zubarev's technique, from which the longitudinal sound velocity was obtained for the manganite system. The results show that the elastic spring involved in the velocity of sound exhibits strong stiffening in the CDW phase with decreasing temperature, as observed in experiments.

  10. An Interactive Neural Network System for Acoustic Signal Classification

    DTIC Science & Technology

    1990-02-28

of environmental sounds. These include machinery noise (Talamo, 1982), the sounds of metallic (Howard, 1983) and non-metallic impacts (Warren...backscattering of sound by spherical and elongated objects. JASA, 86, 1499-1510. Talamo, J. D. C. (1982). The perception of machinery indicator

  11. Head related transfer functions measurement and processing for the purpose of creating a spatial sound environment

    NASA Astrophysics Data System (ADS)

    Pec, Michał; Bujacz, Michał; Strumiłło, Paweł

    2008-01-01

The use of Head Related Transfer Functions (HRTFs) in audio processing is a popular method of obtaining spatialized sound. HRTFs describe disturbances caused in the sound wave by the human body, especially by the head and the ear pinnae. Since these shapes are unique, HRTFs differ greatly from person to person. For this reason, measurement of personalized HRTFs is justified. Measured HRTFs also need further processing to be utilized in a system producing spatialized sound. This paper describes a system designed for efficient collection of Head Related Transfer Functions, as well as the measurement, interpolation and verification procedures.

  12. Linking the Shapes of Alphabet Letters to Their Sounds: The Case of Hebrew

    ERIC Educational Resources Information Center

    Treiman, Rebecca; Levin, Iris; Kessler, Brett

    2012-01-01

    Learning the sounds of letters is an important part of learning a writing system. Most previous studies of this process have examined English, focusing on variations in the phonetic iconicity of letter names as a reason why some letter sounds (such as that of b, where the sound is at the beginning of the letter's name) are easier to learn than…

  13. An Inexpensive Group FM Amplification System for the Classroom.

    ERIC Educational Resources Information Center

    Worner, William A.

    1988-01-01

    An inexpensive FM amplification system was developed to enhance auditory learning in classrooms for the hearing impaired. Evaluation indicated that the system equalizes the sound pressure level throughout the room, with the increased sound pressure level falling in the range of 70 to 73 decibels. (Author/DB)

  14. 46 CFR 197.332 - PVHO-Decompression chambers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... dogs, from both sides of a closed hatch; (e) Have interior illumination sufficient to allow visual... (m) Have a sound-powered headset or telephone as a backup to the communications system required by § 197.328(c) (5) and (6), except when that communications system is a sound-powered system. ...

  15. 49 CFR 325.37 - Location and operation of sound level measurement system; highway operations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... system; highway operations. 325.37 Section 325.37 Transportation Other Regulations Relating to...; Highway Operations § 325.37 Location and operation of sound level measurement system; highway operations..., the holder must orient himself/herself relative to the highway in a manner consistent with the...

  16. 49 CFR 325.37 - Location and operation of sound level measurement system; highway operations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... system; highway operations. 325.37 Section 325.37 Transportation Other Regulations Relating to...; Highway Operations § 325.37 Location and operation of sound level measurement system; highway operations..., the holder must orient himself/herself relative to the highway in a manner consistent with the...

  17. 49 CFR 325.37 - Location and operation of sound level measurement system; highway operations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... system; highway operations. 325.37 Section 325.37 Transportation Other Regulations Relating to...; Highway Operations § 325.37 Location and operation of sound level measurement system; highway operations..., the holder must orient himself/herself relative to the highway in a manner consistent with the...

  18. Minimizing noise in fiberglass aquaculture tanks: Noise reduction potential of various retrofits

    USGS Publications Warehouse

    Davidson, J.; Frankel, A.S.; Ellison, W.T.; Summerfelt, S.; Popper, A.N.; Mazik, P.; Bebak, J.

    2007-01-01

Equipment used in intensive aquaculture systems, such as pumps and blowers, can produce underwater sound levels and frequencies within the range of fish hearing. The impacts of underwater noise on fish are not well known, but limited research suggests that subjecting fish to noise could result in impairment of the auditory system, reduced growth rates, and increased stress. Consequently, reducing sound in fish tanks could result in advantages for cultured species and increased productivity for the aquaculture industry. The objective of this study was to evaluate the noise reduction potential of various retrofits to fiberglass fish culture tanks. The following structural changes were applied to tanks to reduce underwater noise: (1) inlet piping was suspended to avoid contact with the tank, (2) effluent piping was disconnected from a common drain line, (3) effluent piping was insulated beneath tanks, and (4) tanks were elevated on cement blocks and seated on insulated padding. Four combinations of the aforementioned structural changes were evaluated in duplicate and two tanks were left unchanged as controls. Control tanks had sound levels of 120.6 dB re 1 μPa. Each retrofit contributed to a reduction of underwater sound. As structural changes were combined, a cumulative reduction in sound level was observed. Tanks designed with a combination of retrofits had sound levels of 108.6 dB re 1 μPa, a four-fold reduction in sound pressure level. Sound frequency spectra indicated that the greatest sound reductions occurred between 2 and 100 Hz and demonstrated that nearby pumps and blowers created tonal frequencies that were transmitted into the tanks. The tank modifications used during this study were simple and inexpensive and could be applied to existing systems or considered when designing aquaculture facilities. © 2007 Elsevier B.V. All rights reserved.
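The "four-fold reduction in sound pressure level" reported in this record follows directly from the 12 dB drop (120.6 to 108.6 dB), since the pressure ratio for a level difference is 10^(ΔdB/20); a quick check:

```python
# Verify the abstract's "four-fold" claim: a 12 dB drop in sound pressure
# level corresponds to a pressure ratio of 10^(12/20).

delta_db = 120.6 - 108.6          # level difference reported in the abstract
pressure_ratio = 10 ** (delta_db / 20)
print(round(pressure_ratio, 2))   # ≈ 3.98, i.e. roughly four-fold
```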

  19. A lung sound classification system based on the rational dilation wavelet transform.

    PubMed

    Ulukaya, Sezer; Serbes, Gorkem; Sen, Ipek; Kahya, Yasemin P

    2016-08-01

In this work, a wavelet-based classification system that aims to discriminate crackle, normal and wheeze lung sounds is presented. While previous work on this problem used constant low Q-factor wavelets, which have limited frequency resolution and cannot cope with oscillatory signals, the proposed system employs the Rational Dilation Wavelet Transform, whose Q-factors can be tuned. The proposed system yields an accuracy of 95% for crackle, 97% for wheeze, 93.50% for normal and 95.17% across all sound signal types using the energy feature subset, and the proposed approach is superior to conventional low Q-factor wavelet analysis.

  20. Coding of sounds in the auditory system and its relevance to signal processing and coding in cochlear implants.

    PubMed

    Moore, Brian C J

    2003-03-01

    To review how the properties of sounds are "coded" in the normal auditory system and to discuss the extent to which cochlear implants can and do represent these codes. Data are taken from published studies of the response of the cochlea and auditory nerve to simple and complex stimuli, in both the normal and the electrically stimulated ear. REVIEW CONTENT: The review describes: 1) the coding in the normal auditory system of overall level (which partly determines perceived loudness), spectral shape (which partly determines perceived timbre and the identity of speech sounds), periodicity (which partly determines pitch), and sound location; 2) the role of the active mechanism in the cochlea, and particularly the fast-acting compression associated with that mechanism; 3) the neural response patterns evoked by cochlear implants; and 4) how the response patterns evoked by implants differ from those observed in the normal auditory system in response to sound. A series of specific issues is then discussed, including: 1) how to compensate for the loss of cochlear compression; 2) the effective number of independent channels in a normal ear and in cochlear implantees; 3) the importance of independence of responses across neurons; 4) the stochastic nature of normal neural responses; 5) the possible role of across-channel coincidence detection; and 6) potential benefits of binaural implantation. Current cochlear implants do not adequately reproduce several aspects of the neural coding of sound in the normal auditory system. Improved electrode arrays and coding systems may lead to improved coding and, it is hoped, to better performance.

  1. Building Value-Added Assessment into Michigan's Accountability System: Lessons from Other States. Research Report 1

    ERIC Educational Resources Information Center

    Lee, Kwangyhuyn; Weimer, Debbi

    2002-01-01

    Michigan is designing a new accountability system that combines high standards and statewide testing within a school accreditation framework. Sound assessment techniques are critical if the accountability system is to provide relevant information to schools and policymakers. One important component of a sound assessment system is measurement of…

  2. Early sound patterns in the speech of two Brazilian Portuguese speakers.

    PubMed

    Teixeira, Elizabeth Reis; Davis, Barbara L

    2002-06-01

    Sound patterns in the speech of two Brazilian-Portuguese speaking children are compared with early production patterns in English-learning children as well as English and Brazilian-Portuguese (BP) characteristics. The relationship between production system effects and ambient language influences in the acquisition of early sound patterns is of primary interest, as English and BP are characterized by differing phonological systems. Results emphasize the primacy of production system effects in early acquisition, although even the earliest word forms show evidence of perceptual effects from the ambient language in both BP children. Use of labials and coronals and low and midfront vowels in simple syllable shapes is consistent with acquisition data for this period across languages. However, potential ambient language influences include higher frequencies of dorsals, use of multisyllabic words, and different phone types in syllable-offset position. These results suggest that to fully understand early acquisition of sound systems one must account for both production system effects and perceptual effects from the ambient language.

  3. Left-Right Asymmetry in Spectral Characteristics of Lung Sounds Detected Using a Dual-Channel Auscultation System in Healthy Young Adults.

    PubMed

    Tsai, Jang-Zern; Chang, Ming-Lang; Yang, Jiun-Yue; Kuo, Dar; Lin, Ching-Hsiung; Kuo, Cheng-Deng

    2017-06-07

Though lung sounds auscultation is important for the diagnosis and monitoring of lung diseases, the spectral characteristics of lung sounds have not been fully understood. This study compared the spectral characteristics of lung sounds between the right and left lungs and between healthy male and female subjects using a dual-channel auscultation system. Forty-two subjects aged 18-22 years without smoking habits and any known pulmonary diseases participated in this study. The lung sounds were recorded from seven pairs of auscultation sites on the chest wall simultaneously. We found that in four out of seven auscultation pairs, the lung sounds from the left lung had a higher total power (PT) than those from the right lung. The PT of male subjects was higher than that of female ones in most auscultation pairs. The ratio of inspiration power to expiration power (RI/E) of lung sounds from the right lung was greater than that from the left lung at auscultation pairs on the anterior chest wall, while this phenomenon was reversed at auscultation pairs on the posterior chest wall in combined subjects, and similarly in both male and female subjects. Though the frequency corresponding to maximum power density of lung sounds (FMPD) from the left and right lungs was not significantly different, the frequency that equally divided the power spectrum of lung sounds (F50) from the left lung was significantly smaller than that from the right lung at auscultation sites on the anterior and lateral chest walls, while it was significantly larger than that from the right lung at auscultation sites on the posterior chest wall. In conclusion, significant differences in the PT, FMPD, F50, and RI/E between the left and right lungs at some auscultation pairs were observed by using a dual-channel auscultation system in this study. 
Structural differences between the left and the right lungs, between the female and male subjects, and between anterior and posterior lungs might account for the observed differences in the spectral characteristics of lung sounds. The dual-channel auscultation system might be useful for future development of digital stethoscopes and power spectral analysis of lung sounds in patients with various kinds of cardiopulmonary diseases.

  4. Left–Right Asymmetry in Spectral Characteristics of Lung Sounds Detected Using a Dual-Channel Auscultation System in Healthy Young Adults

    PubMed Central

    Tsai, Jang-Zern; Chang, Ming-Lang; Yang, Jiun-Yue; Kuo, Dar; Lin, Ching-Hsiung; Kuo, Cheng-Deng

    2017-01-01

Though lung sounds auscultation is important for the diagnosis and monitoring of lung diseases, the spectral characteristics of lung sounds have not been fully understood. This study compared the spectral characteristics of lung sounds between the right and left lungs and between healthy male and female subjects using a dual-channel auscultation system. Forty-two subjects aged 18–22 years without smoking habits and any known pulmonary diseases participated in this study. The lung sounds were recorded from seven pairs of auscultation sites on the chest wall simultaneously. We found that in four out of seven auscultation pairs, the lung sounds from the left lung had a higher total power (PT) than those from the right lung. The PT of male subjects was higher than that of female ones in most auscultation pairs. The ratio of inspiration power to expiration power (RI/E) of lung sounds from the right lung was greater than that from the left lung at auscultation pairs on the anterior chest wall, while this phenomenon was reversed at auscultation pairs on the posterior chest wall in combined subjects, and similarly in both male and female subjects. Though the frequency corresponding to maximum power density of lung sounds (FMPD) from the left and right lungs was not significantly different, the frequency that equally divided the power spectrum of lung sounds (F50) from the left lung was significantly smaller than that from the right lung at auscultation sites on the anterior and lateral chest walls, while it was significantly larger than that from the right lung at auscultation sites on the posterior chest wall. In conclusion, significant differences in the PT, FMPD, F50, and RI/E between the left and right lungs at some auscultation pairs were observed by using a dual-channel auscultation system in this study. 
Structural differences between the left and the right lungs, between the female and male subjects, and between anterior and posterior lungs might account for the observed differences in the spectral characteristics of lung sounds. The dual-channel auscultation system might be useful for future development of digital stethoscopes and power spectral analysis of lung sounds in patients with various kinds of cardiopulmonary diseases. PMID:28590447

  5. An integrated experimental and computational approach to material selection for a sound proof, thermally insulated enclosure of a power generation system

    NASA Astrophysics Data System (ADS)

    Waheed, R.; Tarar, W.; Saeed, H. A.

    2016-08-01

Sound proof canopies for diesel power generators are fabricated with a layer of sound-absorbing material applied to all the inner walls. The physical properties of most commercially available sound proofing materials reveal that a material with a high sound absorption coefficient has very low thermal conductivity; consequently, a good sound absorber is also a good heat insulator. In this research it was found through various experiments that ordinary sound proofing materials tend to raise the temperature inside the enclosure of certain turbo engines by trapping the heat produced by the engine and preventing its transfer to the atmosphere. The same phenomenon was studied by creating a finite element model of the sound proof enclosure and performing steady-state and transient thermal analyses. The prospects of using aluminium foam as a sound proofing material were studied, and it was found that the temperature inside the enclosure can be brought down to the safe working temperature of the power generator engine without compromising sound proofing.

  6. Reducing audio stimulus presentation latencies across studies, laboratories, and hardware and operating system configurations.

    PubMed

    Babjack, Destiny L; Cernicky, Brandon; Sobotka, Andrew J; Basler, Lee; Struthers, Devon; Kisic, Richard; Barone, Kimberly; Zuccolotto, Anthony P

    2015-09-01

    Using differing computer platforms and audio output devices to deliver audio stimuli often introduces (1) substantial variability across labs and (2) variable time between the intended and actual sound delivery (the sound onset latency). Fast, accurate audio onset latencies are particularly important when audio stimuli need to be delivered precisely as part of studies that depend on accurate timing (e.g., electroencephalographic, event-related potential, or multimodal studies), or in multisite studies in which standardization and strict control over the computer platforms used is not feasible. This research describes the variability introduced by using differing configurations and introduces a novel approach to minimizing audio sound latency and variability. A stimulus presentation and latency assessment approach is presented using E-Prime and Chronos (a new multifunction, USB-based data presentation and collection device). The present approach reliably delivers audio stimuli with low latencies that vary by ≤1 ms, independent of hardware and Windows operating system (OS)/driver combinations. The Chronos audio subsystem adopts a buffering, aborting, querying, and remixing approach to the delivery of audio, to achieve a consistent 1-ms sound onset latency for single-sound delivery, and precise delivery of multiple sounds that achieves standard deviations of 1/10th of a millisecond without the use of advanced scripting. Chronos's sound onset latencies are small, reliable, and consistent across systems. Testing of standard audio delivery devices and configurations highlights the need for careful attention to consistency between labs, experiments, and multiple study sites in their hardware choices, OS selections, and adoption of audio delivery systems designed to sidestep the audio latency variability issue.

  7. Why Do People Like Loud Sound? A Qualitative Study

    PubMed Central

    Welch, David; Fremaux, Guy

    2017-01-01

    Many people choose to expose themselves to potentially dangerous sounds such as loud music, either via speakers, personal audio systems, or at clubs. The Conditioning, Adaptation and Acculturation to Loud Music (CAALM) Model has proposed a theoretical basis for this behaviour. To compare the model to data, we interviewed a group of people who were either regular nightclub-goers or who controlled the sound levels in nightclubs (bar managers, musicians, DJs, and sound engineers) about loud sound. Results showed four main themes relating to the enjoyment of loud sound: arousal/excitement, facilitation of socialisation, masking of both external sound and unwanted thoughts, and an emphasis and enhancement of personal identity. Furthermore, an interesting incidental finding was that sound levels appeared to increase gradually over the course of the evening until they plateaued at approximately 97 dBA Leq around midnight. Consideration of the data generated by the analysis revealed a complex of influential factors that support people in wanting exposure to loud sound. Findings were considered in terms of the CAALM Model and could be explained in terms of its principles. From a health promotion perspective, the Social Ecological Model was applied to consider how the themes identified might influence behaviour. They were shown to influence people on multiple levels, providing a powerful system which health promotion approaches struggle to address. PMID:28800097

  8. Modelling of human low frequency sound localization acuity demonstrates dominance of spatial variation of interaural time difference and suggests uniform just-noticeable differences in interaural time difference.

    PubMed

    Smith, Rosanna C G; Price, Stephen R

    2014-01-01

    Sound source localization is critical to animal survival and for identification of auditory objects. We investigated the acuity with which humans localize low frequency, pure tone sounds using timing differences between the ears. These small differences in time, known as interaural time differences or ITDs, are identified in a manner that allows localization acuity of around 1° at the midline. Acuity, a relative measure of localization ability, displays a non-linear variation as sound sources are positioned more laterally. All species studied localize sounds best at the midline and progressively worse as the sound is located out towards the side. To understand why sound localization displays this variation with azimuthal angle, we took a first-principles, systemic, analytical approach to model localization acuity. We calculated how ITDs vary with sound frequency, head size and sound source location for humans. This allowed us to model ITD variation for previously published experimental acuity data and determine the distribution of just-noticeable differences in ITD. Our results suggest that the best-fit model is one whereby just-noticeable differences in ITDs are identified with uniform or close to uniform sensitivity across the physiological range. We discuss how our results have several implications for neural ITD processing in different species as well as development of the auditory system.
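The abstract above does not state its ITD model explicitly, but the commonly used spherical-head (Woodworth-style) far-field approximation illustrates how ITD varies non-linearly with azimuth; the head radius and speed of sound below are assumed textbook values, not the paper's parameters:

```python
import math

# Hedged sketch of ITD as a function of azimuth using the spherical-head
# approximation ITD = (a/c) * (theta + sin(theta)). Values are assumptions
# (a = 8.75 cm head radius, c = 343 m/s), not taken from the study.

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Interaural time difference for a far-field source at a given azimuth."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# ITD is zero at the midline and grows toward the side, approaching the
# maximum physiological value near 90 degrees.
print(round(itd_seconds(0) * 1e6))   # → 0 µs at the midline
print(round(itd_seconds(90) * 1e6))  # → 656 µs at 90°
```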

  9. Sound Clocks and Sonic Relativity

    NASA Astrophysics Data System (ADS)

    Todd, Scott L.; Menicucci, Nicolas C.

    2017-10-01

Sound propagation within certain non-relativistic condensed matter models obeys a relativistic wave equation despite such systems admitting entirely non-relativistic descriptions. A natural question that arises upon consideration of this is, "do devices exist that will experience the relativity in these systems?" We describe a thought experiment in which 'acoustic observers' possess devices called sound clocks that can be connected to form chains. Careful investigation shows that appropriately constructed chains of stationary and moving sound clocks are perceived by observers on the other chain as undergoing the relativistic phenomena of length contraction and time dilation by the Lorentz factor, γ, with c the speed of sound. Sound clocks within moving chains actually tick less frequently than stationary ones and must be separated by a shorter distance than when stationary to satisfy simultaneity conditions. Stationary sound clocks appear to be length contracted and time dilated to moving observers due to their misunderstanding of their own state of motion with respect to the laboratory. Observers restricted to using sound clocks describe a universe kinematically consistent with the theory of special relativity, despite the preferred frame of their universe in the laboratory. Such devices show promise in further probing analogue relativity models, for example in investigating phenomena that require careful consideration of the proper time elapsed for observers.
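The sonic Lorentz factor invoked in this record can be evaluated numerically, taking c as the speed of sound rather than light; the 343 m/s value for air is an assumed illustration, not from the paper:

```python
import math

# Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2) with c reinterpreted as the
# speed of sound, as in the sound-clock thought experiment above.

def sonic_gamma(v, c=343.0):
    """Time-dilation factor for a sound-clock chain moving at speed v."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# A chain moving at 100 m/s relative to the medium ticks ~4.5% slower
print(round(sonic_gamma(100.0), 4))  # → 1.0454
```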

  10. Using K-Nearest Neighbor Classification to Diagnose Abnormal Lung Sounds

    PubMed Central

    Chen, Chin-Hsing; Huang, Wen-Tzeng; Tan, Tan-Hsu; Chang, Cheng-Chun; Chang, Yuan-Jen

    2015-01-01

    A reported 30% of people worldwide have abnormal lung sounds, including crackles, rhonchi, and wheezes. To date, the traditional stethoscope remains the most popular tool used by physicians to diagnose such abnormal lung sounds; however, many problems arise with the use of a stethoscope, including the effects of environmental noise, the inability to record and store lung sounds for follow-up or tracking, and the physician’s subjective diagnostic experience. This study has developed a digital stethoscope to help physicians overcome these problems when diagnosing abnormal lung sounds. In this digital system, mel-frequency cepstral coefficients (MFCCs) were used to extract the features of lung sounds, and then the K-means algorithm was used for feature clustering, to reduce the amount of data for computation. Finally, the K-nearest neighbor method was used to classify the lung sounds. The proposed system can also be used for home care: if the percentage of abnormal lung sound frames is > 30% of the whole test signal, the system can automatically warn the user to visit a physician for diagnosis. We also used bend sensors together with an amplification circuit, Bluetooth, and a microcontroller to implement a respiration detector. The respiratory signal extracted by the bend sensors can be transmitted to the computer via Bluetooth to calculate the respiratory cycle, for real-time assessment. If an abnormal status is detected, the device will warn the user automatically. Experimental results indicated that the error in respiratory cycles between measured and actual values was only 6.8%, illustrating the potential of our detector for home care applications. PMID:26053756
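
    The final classification stage described above can be sketched as a plain k-nearest-neighbour vote. The 2-D feature vectors below are invented stand-ins for the paper's MFCC-derived cluster centroids, not real data:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Majority vote among the k nearest neighbours (Euclidean distance),
    as in the paper's final classification stage.
    `train` is a list of (feature_vector, label) pairs."""
    dists = sorted((math.dist(vec, query), label) for vec, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Invented 2-D stand-ins for MFCC-derived cluster centroids.
train = [
    ((0.10, 0.20), "normal"), ((0.20, 0.10), "normal"), ((0.15, 0.25), "normal"),
    ((0.90, 0.80), "wheeze"), ((0.80, 0.90), "wheeze"), ((0.85, 0.95), "wheeze"),
]
print(knn_classify(train, (0.82, 0.85)))  # -> wheeze

# The home-care rule from the abstract: warn if > 30% of frames are abnormal.
frames = ["normal", "wheeze", "wheeze", "normal", "wheeze"]
abnormal_fraction = sum(f != "normal" for f in frames) / len(frames)
print(abnormal_fraction > 0.30)  # -> True
```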

  11. Wake Vortex Avoidance System and Method

    NASA Technical Reports Server (NTRS)

    Shams, Qamar A. (Inventor); Zuckerwar, Allan J. (Inventor); Knight, Howard K. (Inventor)

    2017-01-01

    A wake vortex avoidance system includes a microphone array configured to detect low frequency sounds. A signal processor determines a geometric mean coherence based on the detected low frequency sounds. A display displays wake vortices based on the determined geometric mean coherence.

  12. A Theory for the Function of the Spermaceti Organ of the Sperm Whale (Physeter Catodon L.)

    NASA Technical Reports Server (NTRS)

    Norris, K. S.; Harvey, G. W.

    1972-01-01

    The function of the spermaceti organ of the sperm whale is studied using a model of its acoustic system. Suggested functions of the system include: (1) action as an acoustic resonating and sound focussing chamber to form and process burst-pulsed clicks; (2) use of nasal passages in the forehead for repeated recycling of air for phonation during dives and to provide mirrors for sound reflection and signal processing; and (3) use of the entire system to allow sound signal production especially useful for long-range echolocation in the deep sea.

  13. The National Aeronautics and Space Administration (NASA)/Goddard Space Flight Center (GSFC) sounding-rocket program

    NASA Technical Reports Server (NTRS)

    Guidotti, J. G.

    1976-01-01

    An overall introduction to the NASA sounding rocket program as managed by the Goddard Space Flight Center is presented. The various sounding rockets, auxiliary systems (telemetry, guidance, etc.), launch sites, and services which NASA can provide are briefly described.

  14. CONCENTRATIONS AND ENANTIOMERIC FRACTIONS OF CHLORDANE IN SEDIMENTS FROM LONG ISLAND SOUND

    EPA Science Inventory

    Long Island Sound (LIS) is one of the largest estuarine systems on the Atlantic coast of the United States, providing vital transportation and rich fishing and shell-fishing grounds for commercial interests. The Sound, however, has been contaminated with various pollutants, in...

  15. Somatotopic Semantic Priming and Prediction in the Motor System

    PubMed Central

    Grisoni, Luigi; Dreyer, Felix R.; Pulvermüller, Friedemann

    2016-01-01

    The recognition of action-related sounds and words activates motor regions, reflecting the semantic grounding of these symbols in action information; in addition, motor cortex exerts causal influences on sound perception and language comprehension. However, proponents of classic symbolic theories still dispute the role of modality-preferential systems such as the motor cortex in the semantic processing of meaningful stimuli. To clarify whether the motor system carries semantic processes, we investigated neurophysiological indexes of semantic relationships between action-related sounds and words. Event-related potentials revealed that action-related words produced significantly larger stimulus-evoked (Mismatch Negativity-like) and predictive brain responses (Readiness Potentials) when presented in body-part-incongruent sound contexts (e.g., “kiss” in footstep sound context; “kick” in whistle context) than in body-part-congruent contexts, a pattern reminiscent of neurophysiological correlates of semantic priming. Cortical generators of the semantic relatedness effect were localized in areas traditionally associated with semantic memory, including left inferior frontal cortex and temporal pole, and, crucially, in motor areas, where body-part congruency of action sound–word relationships was indexed by a somatotopic pattern of activation. As our results show neurophysiological manifestations of action-semantic priming in the motor cortex, they prove semantic processing in the motor system and thus in a modality-preferential system of the human brain. PMID:26908635

  16. Humpback whale bioacoustics: From form to function

    NASA Astrophysics Data System (ADS)

    Mercado, Eduardo, III

    This thesis investigates how humpback whales produce, perceive, and use sounds from a comparative and computational perspective. Biomimetic models are developed within a systems-theoretic framework and then used to analyze the properties of humpback whale sounds. First, sound transmission is considered in terms of possible production mechanisms and the propagation characteristics of shallow water environments frequented by humpback whales. A standard source-filter model (used to describe human sound production) is shown to be well suited for characterizing sound production by humpback whales. Simulations of sound propagation based on normal mode theory reveal that optimal frequencies for long range propagation are higher than the frequencies used most often by humpbacks, and that sounds may contain spectral information indicating how far they have propagated. Next, sound reception is discussed. A model of human auditory processing is modified to emulate humpback whale auditory processing as suggested by cochlear anatomical dimensions. This auditory model is used to generate visual representations of humpback whale sounds that more clearly reveal what features are likely to be salient to listening whales. Additionally, the possibility that an unusual sensory organ (the tubercle) plays a role in acoustic processing is assessed. Spatial distributions of tubercles are described that suggest tubercles may be useful for localizing sound sources. Finally, these models are integrated with self-organizing feature maps to create a biomimetic sound classification system, and a detailed analysis of individual sounds and sound patterns in humpback whale 'songs' is performed. This analysis provides evidence that song sounds and sound patterns vary substantially in terms of detectability and propagation potential, suggesting that they do not all serve the same function. 
New quantitative techniques are also presented that allow for more objective characterizations of the long term acoustic features of songs. The quantitative framework developed in this thesis provides a basis for theoretical consideration of how humpback whales (and other cetaceans) might use sound. Evidence is presented suggesting that vocalizing humpbacks could use sounds not only to convey information to other whales, but also to collect information about other whales. In particular, it is suggested that some sounds currently believed to be primarily used as communicative signals, might be primarily used as sonar signals. This theoretical framework is shown to be generalizable to other baleen whales and to toothed whales.
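
    The source-filter view mentioned above can be sketched as a pulse-train source driving a two-pole resonant filter. All rates and frequencies below are illustrative placeholders, not measurements from humpback sounds:

```python
import math

def resonator(excitation, f_res, bw, fs):
    """Two-pole resonant filter: the 'filter' half of a source-filter model.
    f_res is the resonance frequency (Hz), bw its bandwidth (Hz)."""
    r = math.exp(-math.pi * bw / fs)
    theta = 2 * math.pi * f_res / fs
    a1, a2 = 2 * r * math.cos(theta), -r * r
    y = [0.0, 0.0]
    for x in excitation:
        y.append(x + a1 * y[-1] + a2 * y[-2])
    return y[2:]

fs = 8000
# Source: a 100 Hz pulse train standing in for the vocal source.
excitation = [1.0 if n % 80 == 0 else 0.0 for n in range(800)]
# Filter: a resonance at 400 Hz imposes a formant-like envelope on the source.
out = resonator(excitation, f_res=400.0, bw=100.0, fs=fs)
print(len(out), round(max(out), 2))
```

    The design choice mirrors the human speech model the thesis borrows: source and filter are independent, so the same excitation can be reshaped by different resonances.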

  17. The Impact of Sound-Field Systems on Learning and Attention in Elementary School Classrooms

    ERIC Educational Resources Information Center

    Dockrell, Julie E.; Shield, Bridget

    2012-01-01

    Purpose: The authors evaluated the installation and use of sound-field systems to investigate the impact of these systems on teaching and learning in elementary school classrooms. Methods: The evaluation included acoustic surveys of classrooms, questionnaire surveys of students and teachers, and experimental testing of students with and without…

  18. What Types of Policies Are Required for a Constitutionally Sound, Efficient Educational System of Common Schools?

    ERIC Educational Resources Information Center

    La Brecque, Richard

    This paper clarifies core concepts in a Kentucky judge's decision that the State General Assembly has failed to provide an efficient system of common schools. Connecting "efficiency" of educational systems to "equality of educational opportunity," the paper argues that the realization of a constitutionally sound, efficient…

  19. Can You Hear Me Now? Come in Loud and Clear with a Wireless Classroom Audio System

    ERIC Educational Resources Information Center

    Smith, Mark

    2006-01-01

    As school performance under NCLB becomes increasingly important, districts can not afford to have barriers to learning. That is where wireless sound-field amplification systems come into play. Wireless sound-field amplification systems come in two types: radio frequency (RF) and infrared (IR). RF systems are based on FCC-approved FM and UHF bands…

  20. Real World Audio

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Crystal River Engineering was originally featured in Spinoff 1992 with the Convolvotron, a high-speed digital audio processing system that delivers three-dimensional sound over headphones. The Convolvotron was developed for Ames' research on virtual acoustic displays. Crystal River is now a subsidiary of Aureal Semiconductor, Inc., and together they develop and market the technology, which is a 3-D (three-dimensional) audio technology known commercially today as Aureal 3D (A-3D). The technology has been incorporated into video games, surround sound systems, and sound cards.


  1. Sound stream segregation: a neuromorphic approach to solve the “cocktail party problem” in real-time

    PubMed Central

    Thakur, Chetan Singh; Wang, Runchun M.; Afshar, Saeed; Hamilton, Tara J.; Tapson, Jonathan C.; Shamma, Shihab A.; van Schaik, André

    2015-01-01

    The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the “cocktail party effect.” It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). 
This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for sound segregation and speech recognition. PMID:26388721
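
    The grouping rule described above — channels whose envelopes are strongly positively correlated belong to the same stream — can be sketched with toy channel envelopes (illustrative values, not data from the paper):

```python
import math

def pearson(x, y):
    """Pearson correlation between two channel envelopes."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented toy envelopes: channels 0 and 1 are co-modulated (one source);
# channel 2 is anti-correlated with them (a different stream).
env = [
    [0.9, 0.1, 0.8, 0.2, 0.9, 0.1],  # channel 0 (attended)
    [0.8, 0.2, 0.9, 0.1, 0.8, 0.2],  # channel 1
    [0.1, 0.9, 0.2, 0.8, 0.1, 0.9],  # channel 2
]
attended = 0
stream = [ch for ch in range(len(env)) if pearson(env[attended], env[ch]) > 0.5]
print(stream)  # -> [0, 1]
```

    The 0.5 threshold is an arbitrary illustrative cutoff; the FPGA system derives its mask from correlations with an attention signal in the same spirit.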

  2. Sound stream segregation: a neuromorphic approach to solve the "cocktail party problem" in real-time.

    PubMed

    Thakur, Chetan Singh; Wang, Runchun M; Afshar, Saeed; Hamilton, Tara J; Tapson, Jonathan C; Shamma, Shihab A; van Schaik, André

    2015-01-01

    The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the "cocktail party effect." It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). 
This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for sound segregation and speech recognition.

  3. 49 CFR 325.39 - Measurement procedure; highway operations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... shall be made of the sound level generated by a motor vehicle operating through the measurement area on..., acceleration or deceleration. (b) The sound level generated by the motor vehicle is the highest reading observed on the sound level measurement system as the vehicle passes through the measurement area...

  4. 49 CFR 325.39 - Measurement procedure; highway operations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... shall be made of the sound level generated by a motor vehicle operating through the measurement area on..., acceleration or deceleration. (b) The sound level generated by the motor vehicle is the highest reading observed on the sound level measurement system as the vehicle passes through the measurement area...

  5. Left Lateralized Enhancement of Orofacial Somatosensory Processing Due to Speech Sounds

    ERIC Educational Resources Information Center

    Ito, Takayuki; Johns, Alexis R.; Ostry, David J.

    2013-01-01

    Purpose: Somatosensory information associated with speech articulatory movements affects the perception of speech sounds and vice versa, suggesting an intimate linkage between speech production and perception systems. However, it is unclear which cortical processes are involved in the interaction between speech sounds and orofacial somatosensory…

  6. Suppression of sound radiation to far field of near-field acoustic communication system using evanescent sound field

    NASA Astrophysics Data System (ADS)

    Fujii, Ayaka; Wakatsuki, Naoto; Mizutani, Koichi

    2016-01-01

    A method of suppressing sound radiation to the far field of a near-field acoustic communication system using an evanescent sound field is proposed. The amplitude of the evanescent sound field generated from an infinite vibrating plate attenuates exponentially with increasing distance from the surface of the vibrating plate. However, a discontinuity of the sound field exists at the edge of the finite vibrating plate in practice, which broadens the wavenumber spectrum. A sound wave radiates over the evanescent sound field because of this broadening of the wavenumber spectrum. Therefore, we calculated the optimum distribution of the particle velocity on the vibrating plate to reduce the broadening of the wavenumber spectrum. We focused on a window function that is utilized in the field of signal analysis for reducing the broadening of the frequency spectrum. The optimization calculation is necessary for the design of a window function suitable for suppressing sound radiation and securing a spatial area for data communication. In addition, a wide frequency bandwidth is required to increase the data transmission speed. Therefore, we investigated a suitable method for calculating the sound pressure level at the far field to confirm the variation of the distribution of sound pressure level determined on the basis of the window shape and frequency. The distribution of the sound pressure level at a finite distance was in good agreement with that obtained at an infinite far field under the condition generating the evanescent sound field. Consequently, the window function was optimized by the method used to calculate the distribution of the sound pressure level at an infinite far field using the wavenumber spectrum on the vibrating plate. 
According to the result of comparing the distributions of the sound pressure level in the cases with and without the window function, it was confirmed that the area whose sound pressure level was reduced from the maximum level to -50 dB was extended. Additionally, we designed a sound insulator so as to realize a similar distribution of the particle velocity to that obtained using the optimized window function. Sound radiation was suppressed using a sound insulator put above the vibrating surface in the simulation using the three-dimensional finite element method. On the basis of this finding, it was suggested that near-field acoustic communication which suppressed sound radiation can be realized by applying the optimized window function to the particle velocity field.
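
    The role of the window function can be illustrated in the mathematically analogous frequency-domain setting: windowing a finite aperture suppresses spectral sidelobes (the "broadening" leakage) at the cost of a wider main lobe. This is a generic sketch comparing a rectangular and a periodic Hann window, not the paper's optimized window:

```python
import cmath
import math

def dft_mag(x, n_fft=256):
    """Magnitude spectrum via a naive zero-padded DFT (8x oversampled here)."""
    x = list(x) + [0.0] * (n_fft - len(x))
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_fft)
                    for n in range(n_fft))) for k in range(n_fft)]

def peak_sidelobe_db(mag, first_bin=16):
    """Highest sidelobe relative to the peak, in dB. first_bin=16 starts the
    search beyond the wider (Hann) main lobe so that both windows are
    compared over the same band."""
    side = max(mag[first_bin:len(mag) // 2])
    return 20 * math.log10(side / max(mag))

N = 32
rect = [1.0] * N
hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / N) for n in range(N)]  # periodic Hann

# The Hann window leaks far less energy away from the main lobe, the
# spectral analogue of suppressing radiation beyond the evanescent field.
print(round(peak_sidelobe_db(dft_mag(rect)), 1))
print(round(peak_sidelobe_db(dft_mag(hann)), 1))
```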

  7. 40 CFR 63.1178 - For cupolas, what standards must I meet?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Begin within one hour after the alarm on a bag leak detection system sounds, and complete in a timely... § 63.1187 of this subpart. (2) When the alarm on a bag leak detection system sounds for more than five...

  8. Research on fiber Bragg grating heart sound sensing and wavelength demodulation method

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng; Miao, Chang-Yun; Gao, Hua; Gan, Jing-Meng; Li, Hong-Qiang

    2010-11-01

    Heart sounds carry a great deal of physiological and pathological information about the heart and blood vessels. Heart sound detection is an important method for assessing the status of the heart and has great significance for the early diagnosis of cardiopathy. In order to improve sensitivity and reduce noise, a heart sound measurement method based on a fiber Bragg grating was researched. Using the vibration principle of a plane round diaphragm, a fiber Bragg grating heart sound sensor structure was designed and a heart sound sensing mathematical model was established. A formula for heart sound sensitivity was deduced; the theoretical sensitivity of the designed sensor is 957.11 pm/kPa. Based on the matched grating method, an experimental system was built, with which the excursion of the reflected wavelength of the sensing grating was detected and the heart sound information obtained. Experiments show that the designed sensor can detect heart sounds, with a reflected-wavelength excursion range of about 70 pm. At a sampling frequency of 1 kHz, the heart sound waveform extracted using the db4 wavelet has the same characteristics as that from a standard heart sound sensor.
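
    Assuming the linear sensing model implied by the stated sensitivity (Δλ = S·ΔP), the reported ~70 pm wavelength excursion can be converted back to a pressure amplitude:

```python
SENSITIVITY_PM_PER_KPA = 957.11  # theoretical sensitivity quoted in the abstract

def pressure_from_shift(shift_pm):
    """Invert the linear sensing model delta_lambda = S * delta_P.
    Returns pressure in kPa."""
    return shift_pm / SENSITIVITY_PM_PER_KPA

# The ~70 pm reflected-wavelength excursion implies a heart-sound pressure
# amplitude of roughly 0.07 kPa (about 73 Pa) at the diaphragm.
print(round(pressure_from_shift(70.0) * 1000, 1))  # Pa
```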

  9. Continuous robust sound event classification using time-frequency features and deep learning

    PubMed Central

    Song, Yan; Xiao, Wei; Phan, Huy

    2017-01-01

    The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high-performing isolated sound classifiers to operate with continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification. PMID:28892478
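
    An energy-based event detection front end of the kind described can be sketched as frame-energy thresholding. The frame length and threshold below are illustrative, not the paper's values:

```python
def detect_events(signal, frame_len=4, threshold=0.1):
    """Energy-based event detection front end: mark frames whose mean squared
    amplitude exceeds a threshold, then merge consecutive active frames into
    (start, end) sample ranges."""
    events, start = [], None
    for i in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[i:i + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        if energy > threshold:
            if start is None:
                start = i
        elif start is not None:
            events.append((start, i))
            start = None
    if start is not None:
        events.append((start, len(signal)))
    return events

# Quiet background with one burst in the middle.
sig = [0.01] * 8 + [0.8, -0.7, 0.9, -0.6] + [0.01] * 8
print(detect_events(sig))  # -> [(8, 12)]
```

    Each detected (start, end) range would then be handed to the classifier, which is how the benchmarked isolated-sound classifiers are adapted to continuous input.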

  10. Continuous robust sound event classification using time-frequency features and deep learning.

    PubMed

    McLoughlin, Ian; Zhang, Haomin; Xie, Zhipeng; Song, Yan; Xiao, Wei; Phan, Huy

    2017-01-01

    The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high-performing isolated sound classifiers to operate with continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification.

  11. Acoustic signature recognition technique for Human-Object Interactions (HOI) in persistent surveillance systems

    NASA Astrophysics Data System (ADS)

    Alkilani, Amjad; Shirkhodaie, Amir

    2013-05-01

    Handling, manipulation, and placement of objects in the environment, hereon called Human-Object Interaction (HOI), generate sounds. Such sounds are readily identifiable by human hearing. However, in the presence of background environmental noise, recognition of minute HOI sounds is challenging, though vital for improving multi-modality sensor data fusion in Persistent Surveillance Systems (PSS). Identification of HOI sound signatures can serve as a precursor to the detection of pertinent threats that other sensor modalities might otherwise miss. In this paper, we present a robust method for detection and classification of HOI events via clustering of features extracted from training HOI acoustic sound waves. In this approach, salient sound events are first identified and segmented from the background via a sound-energy tracking method. After segmentation, the frequency spectral pattern of each sound event is modeled and its features are extracted to form a feature vector for training. To reduce the dimensionality of the training feature space, a Principal Component Analysis (PCA) technique is employed. To expedite classification of test feature vectors, kd-tree and Random Forest classifiers are trained for rapid classification of sound waves; each classifier employs a different similarity-distance matching technique. The performance of the classifiers is compared on a batch of training HOI acoustic signatures. Furthermore, to facilitate semantic annotation of acoustic sound events, a scheme based on Transducer Markup Language (TML) is proposed. The results demonstrate that the proposed approach is both reliable and effective, and can be extended to future PSS applications.
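
    The PCA-plus-nearest-neighbour stage can be sketched as follows. Power iteration stands in for a full PCA, brute-force 1-NN stands in for the kd-tree, and the feature vectors and class names are invented toy values, not the paper's data:

```python
import math

def top_component(data, iters=200):
    """First principal component via power iteration on the covariance
    matrix -- a minimal stand-in for the paper's PCA stage."""
    dim, n = len(data[0]), len(data)
    means = [sum(row[i] for row in data) / n for i in range(dim)]
    centered = [[row[i] - means[i] for i in range(dim)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centered) / n for j in range(dim)]
           for i in range(dim)]
    v = [1.0] * dim
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(dim)) for i in range(dim)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return means, v

def project(row, means, v):
    """Coordinate of a feature vector along the principal component."""
    return sum((row[i] - means[i]) * v[i] for i in range(len(v)))

# Invented toy spectral features for two HOI sound classes.
feats = [(1.0, 2.0, 0.1), (1.1, 2.1, 0.0), (5.0, 6.0, 0.2), (5.1, 6.2, 0.1)]
labels = ["door-close", "door-close", "object-drop", "object-drop"]
means, v = top_component(feats)
reduced = [project(f, means, v) for f in feats]

# Brute-force 1-NN in the reduced space stands in for the kd-tree.
q = project((5.05, 6.1, 0.15), means, v)
nearest = min(range(len(reduced)), key=lambda i: abs(reduced[i] - q))
print(labels[nearest])  # -> object-drop
```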

  12. Point vortex model for prediction of sound generated by a wing with flap interacting with a passing vortex.

    PubMed

    Manela, A; Huang, L

    2013-04-01

    Acoustic signature of a rigid wing, equipped with a movable downstream flap and interacting with a line vortex, is studied in a two-dimensional low-Mach number flow. The flap is attached to the airfoil via a torsion spring, and the coupled fluid-structure interaction problem is analyzed using thin-airfoil methodology and application of the emended Brown and Michael equation. It is found that incident vortex passage above the airfoil excites flap motion at the system natural frequency, amplified above all other frequencies contained in the forcing vortex. Far-field radiation is analyzed using Powell-Howe analogy, yielding the leading order dipole-type signature of the system. It is shown that direct flap motion has a negligible effect on total sound radiation. The characteristic acoustic signature of the system is dominated by vortex sound, consisting of relatively strong leading and trailing edge interactions of the airfoil with the incident vortex, together with late-time wake sound resulting from induced flap motion. In comparison with the counterpart rigid (non-flapped) configuration, it is found that the flap may act as sound amplifier or absorber, depending on the value of flap-fluid natural frequency. The study complements existing analyses examining sound radiation in static- and detached-flap configurations.

  13. Using Sound to Modify Fish Behavior at Power-Production and Water-Control Facilities: A Workshop December 12-13, 1995. Phase II: Final Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, Thomas J.; Popper, Arthur N.

    1997-06-01

    A workshop on "Use of Sound for Fish Protection at Power-Production and Water-Control Facilities" was held in Portland, Oregon on December 12-13, 1995. This workshop convened a 22-member panel of international experts from universities, industry, and government to share knowledge, questions, and ideas about using sound for fish guidance. Discussions covered a broad range of indigenous migratory and resident fish species and fish-protection issues in river systems, with particular focus on the Columbia River Basin. Because the use of sound behavioral barriers for fish is very much in its infancy, the workshop was designed to address the many questions being asked by fishery managers and researchers about the feasibility and potential benefits of using sound to augment physical barriers for fish protection in the Columbia River system.

  14. A novel automated detection system for swallowing sounds during eating and speech under everyday conditions.

    PubMed

    Fukuike, C; Kodama, N; Manda, Y; Hashimoto, Y; Sugimoto, K; Hirata, A; Pan, Q; Maeda, N; Minagi, S

    2015-05-01

    The wave analysis of swallowing sounds has been receiving attention because the recording process is easy and non-invasive. However, up until now, an expert has been needed to visually examine the entire recorded wave to distinguish swallowing from other sounds. The purpose of this study was to establish a methodology to automatically distinguish the sound of swallowing from sound data recorded during a meal in the presence of everyday ambient sound. Seven healthy participants (mean age: 26.7 ± 1.3 years) participated in this study. A laryngeal microphone and a condenser microphone attached to the nostril were used for simultaneous recording. Recording took place while participants were taking a meal and talking with a conversational partner. Participants were instructed to step on a foot-pedal trigger switch when they swallowed, representing self-enumeration of swallowing, and also to perform six additional noise-making tasks during the meal in a randomised manner. The automated analysis system correctly detected 342 out of the 352 self-enumerated swallowing events (sensitivity: 97.2%) and 479 out of the 503 semblable wave periods of swallowing (specificity: 95.2%). In this study, the automated detection system for swallowing sounds using a nostril microphone was able to detect the swallowing event with high sensitivity and specificity even under the conditions of daily life, thus showing potential utility in the diagnosis or screening of dysphagic patients in future studies. © 2014 John Wiley & Sons Ltd.
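
    The detection figures reported in the abstract reduce to the standard sensitivity and specificity ratios, which can be reproduced directly:

```python
def sensitivity(true_pos, actual_pos):
    """Fraction of actual events correctly detected."""
    return true_pos / actual_pos

def specificity(true_neg, actual_neg):
    """Fraction of non-events correctly rejected."""
    return true_neg / actual_neg

# Abstract figures: 342 of 352 swallowing events detected, and 479 of 503
# swallow-like wave periods correctly handled.
print(round(100 * sensitivity(342, 352), 1))  # -> 97.2
print(round(100 * specificity(479, 503), 1))  # -> 95.2
```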

  15. Design of laser monitoring and sound localization system

    NASA Astrophysics Data System (ADS)

    Liu, Yu-long; Xu, Xi-ping; Dai, Yu-ming; Qiao, Yang

    2013-08-01

    In this paper, a novel design of laser monitoring and sound localization system is proposed. It utilizes laser to monitor and locate the position of the indoor conversation. In China most of the laser monitors no matter used in labor in an instrument uses photodiode or phototransistor as a detector at present. At the laser receivers of those facilities, light beams are adjusted to ensure that only part of the window in photodiodes or phototransistors received the beams. The reflection would deviate from its original path because of the vibration of the detected window, which would cause the changing of imaging spots in photodiode or phototransistor. However, such method is limited not only because it could bring in much stray light in receivers but also merely single output of photocurrent could be obtained. Therefore a new method based on quadrant detector is proposed. It utilizes the relation of the optical integral among quadrants to locate the position of imaging spots. This method could eliminate background disturbance and acquired two-dimensional spots vibrating data pacifically. The principle of this whole system could be described as follows. Collimated laser beams are reflected from vibrate-window caused by the vibration of sound source. Therefore reflected beams are modulated by vibration source. Such optical signals are collected by quadrant detectors and then are processed by photoelectric converters and corresponding circuits. Speech signals are eventually reconstructed. In addition, sound source localization is implemented by the means of detecting three different reflected light sources simultaneously. Indoor mathematical models based on the principle of Time Difference Of Arrival (TDOA) are established to calculate the twodimensional coordinate of sound source. 
Experiments showed that this system is able to monitor the indoor sound source beyond 15 meters with a high quality of speech reconstruction and to locate the sound source position accurately.
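    The TDOA localization step can be sketched numerically. The following is an illustrative reconstruction, not the authors' implementation: the planar sensor layout, speed of sound, and grid-search solver are all assumptions for the sketch.

```python
import numpy as np

C = 343.0  # nominal speed of sound in air (m/s)

# Hypothetical 2D positions of three detection points (m); the paper's
# actual geometry is not given.
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

def tdoas(src, sensors, c=C):
    """Time differences of arrival at sensors 1..n-1 relative to sensor 0."""
    d = np.linalg.norm(sensors - src, axis=1)
    return (d[1:] - d[0]) / c

def locate(measured, sensors, lo=-2.0, hi=3.0, n=101):
    """Brute-force grid search minimizing the squared TDOA mismatch."""
    grid = np.linspace(lo, hi, n)
    best, best_err = None, np.inf
    for x in grid:
        for y in grid:
            err = np.sum((tdoas(np.array([x, y]), sensors) - measured) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    return np.array(best)

true_src = np.array([1.5, 2.0])
est = locate(tdoas(true_src, sensors), sensors)
```

    In practice one would replace the grid search with a closed-form or least-squares hyperbolic solver, but the grid makes the TDOA geometry explicit.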

  16. Some aspects of coupling-induced sound absorption in enclosures.

    PubMed

    Sum, K S; Pan, J

    2003-08-01

    It is known that the coupling between a modally reactive boundary structure of an enclosure and the enclosed sound field induces absorption in the sound field. However, the effect of this absorption on the sound-field response can vary significantly, even when material properties of the structure and dimensions of the coupled system are not changed. Although there have been numerous investigations of coupling between a structure and an enclosed sound field, little work has been done in the area of sound absorption induced by the coupling. Therefore, characteristics of the absorption are not well understood and the extent of its influence on the behavior of the sound-field response is not clearly known. In this paper, the coupling of a boundary structure and an enclosed sound field in frequency bands above the low-frequency range is considered. Three aspects of the coupling-induced sound absorption are studied, namely the effects of exciting either the structure or the sound field directly, of damping in the uncoupled sound field, and of damping in the uncoupled structure. The results provide an understanding of some features of the coupling-induced absorption and its significance to the sound-field response.

  17. Auditory mechanics in a bush-cricket: direct evidence of dual sound inputs in the pressure difference receiver

    PubMed Central

    Montealegre-Z, Fernando; Soulsbury, Carl D.; Robson Brown, Kate A.; Robert, Daniel

    2016-01-01

    The ear of the bush-cricket, Copiphora gorgonensis, consists of a system of paired eardrums (tympana) on each foreleg. In these insects, the ear is backed by an air-filled tube, the acoustic trachea (AT), which transfers sound from the prothoracic acoustic spiracle to the internal side of the eardrums. Both surfaces of the eardrums of this auditory system are exposed to sound, making it a directionally sensitive pressure difference receiver. A key feature of the AT is its capacity to reduce the velocity of sound propagation and alter the acoustic driving forces at the tympanum. The mechanism responsible for reduction in sound velocity in the AT remains elusive, yet it is deemed to depend on adiabatic or isothermal conditions. To investigate the biophysics of such multiple input ears, we used micro-scanning laser Doppler vibrometry and micro-computed X-ray tomography. We measured the velocity of sound propagation in the AT, the transmission gains across auditory frequencies and the time-resolved mechanical dynamics of the tympanal membranes in C. gorgonensis. Tracheal sound transmission generates a gain of approximately 15 dB SPL, and a propagation velocity of ca. 255 m s⁻¹, an approximately 25% reduction from free-field propagation. Modelling tracheal acoustic behaviour that accounts for thermal and viscous effects, we conclude that reduction in sound velocity within the AT can be explained, among other factors, by heat exchange between the sound wave and the tracheal walls. PMID:27683000

  18. Applying cybernetic technology to diagnose human pulmonary sounds.

    PubMed

    Chen, Mei-Yung; Chou, Cheng-Han

    2014-06-01

    Chest auscultation is a crucial and efficient method for diagnosing lung disease; however, it is a subjective process that relies on physician experience and the ability to differentiate between various sound patterns. Because the physiological signals composed of heart sounds and pulmonary sounds (PSs) lie mostly below 120 Hz and the human ear is not sensitive to low frequencies, successfully making diagnostic classifications is difficult. To solve this problem, we constructed various PS recognition systems for classifying six PS classes: vesicular breath sounds, bronchial breath sounds, tracheal breath sounds, crackles, wheezes, and stridor sounds. First, we used a piezoelectric microphone and data acquisition card to acquire PS signals and perform signal preprocessing. A wavelet transform was used for feature extraction, and the PS signals were decomposed into frequency subbands. Using a statistical method, we extracted 17 features that were used as the input vectors of a neural network. We proposed a 2-stage classifier combining a back-propagation (BP) neural network and a learning vector quantization (LVQ) neural network, which improves classification accuracy compared with a single (haploid) neural network. The receiver operating characteristic (ROC) curve verifies the high performance level of the neural network. To expand traditional auscultation methods, we constructed various PS diagnostic systems that can correctly classify the six common PSs. The proposed device overcomes the lack of human sensitivity to low-frequency sounds, and various PS waveforms, characteristic values, and spectral analysis charts are provided to elucidate the design of the human-machine interface.
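    The wavelet feature-extraction stage can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it uses a hand-rolled orthonormal Haar DWT and three simple statistics per subband (the paper extracts 17 features with an unspecified wavelet), yielding a feature vector of the kind fed to a neural network.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar discrete wavelet transform."""
    x = x[: len(x) // 2 * 2]                  # truncate to even length
    a = (x[0::2] + x[1::2]) / np.sqrt(2)      # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)      # detail (high-pass)
    return a, d

def subband_features(x, levels=4):
    """Statistics per wavelet subband, usable as neural-network inputs."""
    feats, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats += [d.mean(), d.std(), np.sum(d ** 2)]   # detail-band stats
    feats += [a.mean(), a.std(), np.sum(a ** 2)]       # final approximation band
    return np.array(feats)

# toy "lung sound": two tones at an 8 kHz sampling rate (illustrative only)
fs = 8000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)
fv = subband_features(sig)
```

    Because the Haar transform is orthonormal, the per-band energies in the feature vector sum to the energy of the original signal, which is a convenient sanity check.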

  19. Earth Observing System (EOS)/Advanced Microwave Sounding Unit-A (AMSU-A): Instrument logic diagrams

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This report contains all of the block diagrams and internal logic diagrams for the Earth Observing System Advanced Microwave Sounding Unit-A (AMSU-A). These diagrams show the signal inputs, outputs, and internal signal flow for the AMSU-A.

  20. The Sound-to-Speech Translations Utilizing Graphics Mediation Interface for Students with Severe Handicaps. Final Report.

    ERIC Educational Resources Information Center

    Brown, Carrie; And Others

    This final report describes activities and outcomes of a research project on a sound-to-speech translation system utilizing a graphic mediation interface for students with severe disabilities. The STS/Graphics system is a voice recognition, computer-based system designed to allow individuals with mental retardation and/or severe physical…

  1. Intelligent Systems Approaches to Product Sound Quality Analysis

    NASA Astrophysics Data System (ADS)

    Pietila, Glenn M.

    As a product market becomes more competitive, consumers become more discriminating in the way in which they differentiate between engineered products. The consumer often makes a purchasing decision based on the sound emitted from the product during operation by using the sound to judge quality or annoyance. Therefore, in recent years, many sound quality analysis tools have been developed to evaluate the consumer preference as it relates to a product sound and to quantify this preference based on objective measurements. This understanding can be used to direct a product design process in order to help differentiate the product from competitive products or to establish an impression on consumers regarding a product's quality or robustness. The sound quality process is typically a statistical tool that is used to model subjective preference, or merit score, based on objective measurements, or metrics. In this way, new product developments can be evaluated in an objective manner without the laborious process of gathering a sample population of consumers for subjective studies each time. The most common model used today is the Multiple Linear Regression (MLR), although recently non-linear Artificial Neural Network (ANN) approaches are gaining popularity. This dissertation will review publicly available published literature and present additional intelligent systems approaches that can be used to improve on the current sound quality process. The focus of this work is to address shortcomings in the current paired comparison approach to sound quality analysis. This research will propose a framework for an adaptive jury analysis approach as an alternative to the current Bradley-Terry model. The adaptive jury framework uses statistical hypothesis testing to focus on sound pairings that are most interesting and is expected to address some of the restrictions required by the Bradley-Terry model. 
It will also provide a more amicable framework for an intelligent systems approach. Next, an unsupervised jury clustering algorithm is used to identify and classify subgroups within a jury who have conflicting preferences. In addition, a nested Artificial Neural Network (ANN) architecture is developed to predict subjective preference based on objective sound quality metrics, in the presence of non-linear preferences. Finally, statistical decomposition and correlation algorithms are reviewed that can help an analyst establish a clear understanding of the variability of the product sounds used as inputs into the jury study and to identify correlations between preference scores and sound quality metrics in the presence of non-linearities.
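    The Bradley-Terry model that the adaptive jury framework aims to replace can be fitted with a simple iterative (Zermelo/MM) update. A minimal sketch with hypothetical jury tallies; the data and iteration count are illustrative assumptions, not from the dissertation:

```python
import numpy as np

def bradley_terry(wins, iters=200):
    """Fit Bradley-Terry preference strengths by the Zermelo/MM iteration.
    wins[i, j] = number of jurors preferring sound i over sound j."""
    comps = wins + wins.T                    # total comparisons per pair
    w = wins.sum(axis=1)                     # total wins per item
    p = np.ones(wins.shape[0])
    for _ in range(iters):
        denom = (comps / (p[:, None] + p[None, :])).sum(axis=1)
        p = w / denom
        p /= p.sum()                         # fix the arbitrary scale
    return p

# Hypothetical paired-comparison tallies for three product sounds A, B, C.
wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]], dtype=float)
strengths = bradley_terry(wins)
```

    The fitted strengths give the modeled probability that sound i is preferred over sound j as strengths[i] / (strengths[i] + strengths[j]).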

  2. Monaural Sound Localization Based on Structure-Induced Acoustic Resonance

    PubMed Central

    Kim, Keonwook; Kim, Youngwoong

    2015-01-01

    A physical structure such as a cylindrical pipe controls the propagated sound spectrum in a predictable way that can be used to localize the sound source. This paper designs a monaural sound localization system based on multiple pyramidal horns around a single microphone. The acoustic resonance within the horn provides a periodicity in the spectral domain known as the fundamental frequency which is inversely proportional to the radial horn length. Once the system accurately estimates the fundamental frequency, the horn length and corresponding angle can be derived by the relationship. The modified Cepstrum algorithm is employed to evaluate the fundamental frequency. In an anechoic chamber, localization experiments over azimuthal configuration show that up to 61% of the proper signal is recognized correctly with 30% misfire. With a speculated detection threshold, the system estimates direction 52% in positive-to-positive and 34% in negative-to-positive decision rate, on average. PMID:25668214
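    The cepstral fundamental-frequency step can be illustrated as below. This is not the authors' modified Cepstrum algorithm, just the textbook real-cepstrum estimator; the test signal is synthetic, built so its log-magnitude spectrum ripples with a 500 Hz period, mimicking the spectral periodicity a horn resonance would imprint.

```python
import numpy as np

def cepstral_f0(x, fs, fmin=100.0, fmax=2000.0):
    """Estimate spectral periodicity (fundamental) via the real cepstrum."""
    spec = np.abs(np.fft.rfft(x))
    ceps = np.fft.irfft(np.log(spec + 1e-12))
    qmin, qmax = int(fs / fmax), int(fs / fmin)   # quefrency search range
    q = qmin + np.argmax(ceps[qmin:qmax])
    return fs / q

fs, n = 16000, 4096
f0 = 500.0
f = np.fft.rfftfreq(n, 1 / fs)                    # bin frequencies
mag = np.exp(0.5 * np.cos(2 * np.pi * f / f0))    # log-periodic magnitude spectrum
x = np.fft.irfft(mag)                             # zero-phase signal with that spectrum
est = cepstral_f0(x, fs)
```

    The peak quefrency corresponds to fs/f0 samples; inverting it recovers the 500 Hz spectral period, from which the horn length (and hence angle) would be derived.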

  3. Psychophysics and Neuronal Bases of Sound Localization in Humans

    PubMed Central

    Ahveninen, Jyrki; Kopco, Norbert; Jääskeläinen, Iiro P.

    2013-01-01

    Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question how different acoustic features supporting spatial hearing are represented in the central nervous system is still open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory “where” pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization, and some of the competing models of representation of auditory space in humans. PMID:23886698

  4. New Research on MEMS Acoustic Vector Sensors Used in Pipeline Ground Markers

    PubMed Central

    Song, Xiaopeng; Jian, Zeming; Zhang, Guojun; Liu, Mengran; Guo, Nan; Zhang, Wendong

    2015-01-01

    According to the demands of current pipeline detection systems, the above-ground marker (AGM) system based on sound detection principle has been a major development trend in pipeline technology. A novel MEMS acoustic vector sensor for AGM systems which has advantages of high sensitivity, high signal-to-noise ratio (SNR), and good low frequency performance has been put forward. Firstly, it is presented that the frequency of the detected sound signal is concentrated in a lower frequency range, and the sound attenuation is relatively low in soil. Secondly, the MEMS acoustic vector sensor structure and basic principles are introduced. Finally, experimental tests are conducted and the results show that in the range of 0°∼90°, when r = 5 m, the proposed MEMS acoustic vector sensor can effectively detect sound signals in soil. The measurement errors of all angles are less than 5°. PMID:25609046

  5. Wearable Eating Habit Sensing System Using Internal Body Sound

    NASA Astrophysics Data System (ADS)

    Shuzo, Masaki; Komori, Shintaro; Takashima, Tomoko; Lopez, Guillaume; Tatsuta, Seiji; Yanagimoto, Shintaro; Warisawa, Shin'ichi; Delaunay, Jean-Jacques; Yamada, Ichiro

    Continuous monitoring of eating habits could be useful in preventing lifestyle diseases such as metabolic syndrome. Conventional methods consist of self-reporting and calculating mastication frequency based on the myoelectric potential of the masseter muscle. Both these methods are significant burdens for the user. We developed a non-invasive, wearable sensing system that can record eating habits over a long period of time in daily life. Our sensing system is composed of two bone conduction microphones placed in the ears that send internal body sound data to a portable IC recorder. Applying frequency spectrum analysis on the collected sound data, we could not only count the number of mastications during eating, but also accurately differentiate between eating, drinking, and speaking activities. This information can be used to evaluate the regularity of meals. Moreover, we were able to analyze sound features to classify the types of foods eaten by food texture.

  6. Development of a directivity controlled piezoelectric transducer for sound reproduction

    NASA Astrophysics Data System (ADS)

    Bédard, Magella; Berry, Alain

    2005-04-01

    One of the inherent limitations of loudspeaker systems in audio reproduction is their inability to reproduce the possibly complex acoustic directivity patterns of real sound sources. For music reproduction, for example, it may be desirable to separate diffuse-field and direct sound components and project them with different directivity patterns. Because of their properties, poly(vinylidene fluoride) (PVDF) films offer many advantages for the development of electroacoustic transducers. A system of piezoelectric transducers made with PVDF that shows a controllable directivity was developed. A cylindrical omnidirectional piezoelectric transducer is used to produce an ambient field, and a piezoelectric transducer system, consisting of a series of curved sources placed around a cylinder frame, is used to produce a sound field with a given directivity. To develop the system, a numerical model was generated with ANSYS Multiphysics 8.1 and used to calculate the mechanical response of the piezoelectric transducer. The acoustic radiation of the driver was then computed using the Kirchhoff-Helmholtz theorem. Numerical and experimental results of the mechanical and acoustical response of the system will be shown.

  7. The Sound-Amplified Environment and Reading Achievement in Elementary Students

    ERIC Educational Resources Information Center

    Betebenner, Elizabeth Whytlaw

    2011-01-01

    This study was designed to address the results of using sound enhancement technology in classrooms as a method of enhancing the auditory experience for students seated in the rear sections of classrooms. Previous research demonstrated the efficacy of using sound distribution systems (SDS) in real time to enhance speech perception (Anderson &…

  8. Standing Sound Waves in Air with DataStudio

    ERIC Educational Resources Information Center

    Kraftmakher, Yaakov

    2010-01-01

    Two experiments related to standing sound waves in air are adapted for using the ScienceWorkshop data-acquisition system with the DataStudio software from PASCO scientific. First, the standing waves are created by reflection from a plane reflector. The distribution of the sound pressure along the standing wave is measured. Second, the resonance…

  9. Distributed Processing and Cortical Specialization for Speech and Environmental Sounds in Human Temporal Cortex

    ERIC Educational Resources Information Center

    Leech, Robert; Saygin, Ayse Pinar

    2011-01-01

    Using functional MRI, we investigated whether auditory processing of both speech and meaningful non-linguistic environmental sounds in superior and middle temporal cortex relies on a complex and spatially distributed neural system. We found that evidence for spatially distributed processing of speech and environmental sounds in a substantial…

  10. An FPGA-Based Rapid Wheezing Detection System

    PubMed Central

    Lin, Bor-Shing; Yen, Tian-Shiue

    2014-01-01

    Wheezing is often treated as a crucial indicator in the diagnosis of obstructive pulmonary diseases. A rapid wheezing detection system may help physicians to monitor patients over the long-term. In this study, a portable wheezing detection system based on a field-programmable gate array (FPGA) is proposed. This system accelerates wheezing detection, and can be used as either a single-process system, or as an integrated part of another biomedical signal detection system. The system segments sound signals into 2-second units. A short-time Fourier transform was used to determine the relationship between the time and frequency components of wheezing sound data. A spectrogram was processed using 2D bilateral filtering, edge detection, multithreshold image segmentation, morphological image processing, and image labeling, to extract wheezing features according to computerized respiratory sound analysis (CORSA) standards. These features were then used to train the support vector machine (SVM) and build the classification models. The trained model was used to analyze sound data to detect wheezing. The system runs on a Xilinx Virtex-6 FPGA ML605 platform. The experimental results revealed that the system offered excellent wheezing recognition performance (0.912). The detection process can be used with a clock frequency of 51.97 MHz, and is able to perform rapid wheezing classification. PMID:24481034
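    The first processing stage, a short-time Fourier transform producing the spectrogram that the image-processing pipeline operates on, can be sketched in a few lines. Window and hop sizes here are illustrative assumptions, not the FPGA system's parameters:

```python
import numpy as np

def spectrogram(x, win=256, hop=128):
    """Magnitude spectrogram: Hann-windowed short-time Fourier transform."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T    # (freq bins, time frames)

fs = 4000
t = np.arange(2 * fs) / fs                 # one 2-second unit, as in the paper
x = np.sin(2 * np.pi * 400 * t)            # steady 400 Hz tone as a stand-in signal
S = spectrogram(x)
peak_hz = S.mean(axis=1).argmax() * fs / 256
```

    A wheeze appears in such a spectrogram as a sustained horizontal ridge, which is what the 2D filtering, edge detection, and labeling stages then extract.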

  11. Photoacoustics and speed-of-sound dual mode imaging with a long depth-of-field by using annular ultrasound array.

    PubMed

    Ding, Qiuning; Tao, Chao; Liu, Xiaojun

    2017-03-20

    Speed-of-sound and optical absorption reflect the structure and function of tissues from different aspects. A dual-mode microscopy system based on a concentric annular ultrasound array is proposed to simultaneously acquire long depth-of-field images of speed-of-sound and optical absorption of inhomogeneous samples. First, speed-of-sound is decoded from the signal delay between each element of the annular array. The measured speed-of-sound can not only be used as an image contrast, but also improve the resolution and the accuracy of spatial location of the photoacoustic image in inhomogeneous acoustic media. Second, benefitting from the dynamic focusing of the annular array and the measured speed-of-sound, advanced acoustic-resolution photoacoustic microscopy with precise positioning and a long depth-of-field is achieved. The performance of the dual-mode imaging system has been experimentally examined by using a custom-made annular array. The proposed dual-mode microscopy may prove significant for monitoring biological physiological and pathological processes.

  12. Development of An Empirical Water Quality Model for Stormwater Based on Watershed Land Use in Puget Sound

    DTIC Science & Technology

    2007-03-29

    Development of An Empirical Water Quality Model for Stormwater Based on Watershed Land Use in Puget Sound Valerie I. Cullinan, Christopher W. May...Systems Center, Bremerton, WA) Introduction The Sinclair and Dyes Inlet watershed is located on the west side of Puget Sound in Kitsap County...Washington, U.S.A. (Figure 1). The Puget Sound Naval Shipyard (PSNS), U.S Environmental Protection Agency (USEPA), the Washington State Department of

  13. Decoding sound level in the marmoset primary auditory cortex.

    PubMed

    Sun, Wensheng; Marongelli, Ellisha N; Watkins, Paul V; Barbour, Dennis L

    2017-10-01

    Neurons that respond favorably to a particular sound level have been observed throughout the central auditory system, becoming steadily more common at higher processing areas. One theory about the role of these level-tuned or nonmonotonic neurons is the level-invariant encoding of sounds. To investigate this theory, we simulated various subpopulations of neurons by drawing from real primary auditory cortex (A1) neuron responses and surveyed their performance in forming different sound level representations. Pure nonmonotonic subpopulations did not provide the best level-invariant decoding; instead, mixtures of monotonic and nonmonotonic neurons provided the most accurate decoding. For level-fidelity decoding, the inclusion of nonmonotonic neurons slightly improved or did not change decoding accuracy until they constituted a high proportion. These results indicate that nonmonotonic neurons fill an encoding role complementary to, rather than alternate to, monotonic neurons. NEW & NOTEWORTHY Neurons with nonmonotonic rate-level functions are unique to the central auditory system. These level-tuned neurons have been proposed to account for invariant sound perception across sound levels. Through systematic simulations based on real neuron responses, this study shows that neuron populations perform sound encoding optimally when containing both monotonic and nonmonotonic neurons. The results indicate that instead of working independently, nonmonotonic neurons complement the function of monotonic neurons in different sound-encoding contexts. Copyright © 2017 the American Physiological Society.

  14. Marine/Ferries Component 1995 Update of the Metropolitan Transportation Plan for the Central Puget Sound Region

    DOT National Transportation Integrated Search

    1994-05-01

    The ferry system functions as a set of marine highway links in the metropolitan transportation system. Since bridge alternatives have been virtually eliminated from consideration for cross Sound travel due to cost and public dissent, the ferries are ...

  15. Effect of ultrasonic, sonic and rotating-oscillating powered toothbrushing systems on surface roughness and wear of white spot lesions and sound enamel: An in vitro study.

    PubMed

    Hernandé-Gatón, Patrícia; Palma-Dibb, Regina Guenka; Silva, Léa Assed Bezerra da; Faraoni, Juliana Jendiroba; de Queiroz, Alexandra Mussolino; Lucisano, Marília Pacífico; Silva, Raquel Assed Bezerra da; Nelson Filho, Paulo

    2018-04-01

    To evaluate the effect of ultrasonic, sonic and rotating-oscillating powered toothbrushing systems on surface roughness and wear of white spot lesions and sound enamel. 40 tooth segments obtained from third molar crowns had the enamel surface divided into thirds, one of which was not subjected to toothbrushing. In the other two thirds, sound enamel and enamel with artificially induced white spot lesions were randomly assigned to four groups (n=10): UT: ultrasonic toothbrush (Emmi-dental); ST1: sonic toothbrush (Colgate ProClinical Omron); ST2: sonic toothbrush (Sonicare Philips); and ROT: rotating-oscillating toothbrush (control) (Oral-B Professional Care Triumph 5000 with SmartGuide). The specimens were analyzed by confocal laser microscopy for surface roughness and wear. Data were analyzed statistically by paired t-tests, Kruskal-Wallis, two-way ANOVA and Tukey's post-test (α = 0.05). The different powered toothbrushing systems did not cause a significant increase in the surface roughness of sound enamel (P > 0.05). In the ROT group, the roughness of the white spot lesion surface increased significantly after toothbrushing and differed from the UT group (P < 0.05). In the ROT group, brushing promoted significantly greater wear of the white spot lesion compared with sound enamel, and this group differed significantly from the ST1 group (P < 0.05). None of the powered toothbrushing systems (ultrasonic, sonic and rotating-oscillating) tested caused significant alterations on sound dental enamel. However, conventional rotating-oscillating toothbrushing on enamel with white spot lesions increased surface roughness and wear. Copyright © American Journal of Dentistry.

  16. Diversity of acoustic tracheal system and its role for directional hearing in crickets

    PubMed Central

    2013-01-01

    Background: Sound localization in small insects can be a challenging task due to physical constraints in deriving sufficiently large interaural intensity differences (IIDs) between both ears. In crickets, sound source localization is achieved by a complex type of pressure difference receiver consisting of four potential sound inputs. Sound acts on the external side of two tympana but additionally reaches the internal tympanal surface via two external sound entrances. Conduction of internal sound is realized by the anatomical arrangement of connecting trachea. A key structure is a trachea coupling both ears, which is characterized by an enlarged part in its midline (i.e., the acoustic vesicle) accompanied by a thin membrane (septum). This facilitates directional sensitivity despite an unfavorable relationship between wavelength of sound and body size. Here we studied the morphological differences of the acoustic tracheal system in 40 cricket species (Gryllidae, Mogoplistidae) and species of outgroup taxa (Gryllotalpidae, Rhaphidophoridae, Gryllacrididae) of the suborder Ensifera comprising hearing and non-hearing species. Results: We found a surprisingly high variation of acoustic tracheal systems, and almost all investigated species using intraspecific acoustic communication were characterized by an acoustic vesicle associated with a medial septum. The relative size of the acoustic vesicle - a structure most crucial for deriving high IIDs - implies an important role for sound localization. Most remarkable in this respect was the size difference of the acoustic vesicle between species; those with a more unfavorable ratio of body size to sound wavelength tend to exhibit a larger acoustic vesicle. On the other hand, secondary loss of acoustic signaling was nearly exclusively associated with the absence of both acoustic vesicle and septum. 
Conclusion: The high diversity of acoustic tracheal morphology observed between species might reflect different steps in the evolution of the pressure difference receiver, with a precursor structure already present in ancestral non-hearing species. In addition, morphological transitions of the acoustic vesicle suggest a possible adaptive role for the generation of binaural directional cues. PMID:24131512

  17. Mechanisms underlying the temporal precision of sound coding at the inner hair cell ribbon synapse

    PubMed Central

    Moser, Tobias; Neef, Andreas; Khimich, Darina

    2006-01-01

    Our auditory system is capable of perceiving the azimuthal location of a low frequency sound source with a precision of a few degrees. This requires the auditory system to detect time differences in sound arrival between the two ears down to tens of microseconds. The detection of these interaural time differences relies on network computation by auditory brainstem neurons sharpening the temporal precision of the afferent signals. Nevertheless, the system requires the hair cell synapse to encode sound with the highest possible temporal acuity. In mammals, each auditory nerve fibre receives input from only one inner hair cell (IHC) synapse. Hence, this single synapse determines the temporal precision of the fibre. As if this was not enough of a challenge, the auditory system is also capable of maintaining such high temporal fidelity with acoustic signals that vary greatly in their intensity. Recent research has started to uncover the cellular basis of sound coding. Functional and structural descriptions of synaptic vesicle pools and estimates for the number of Ca2+ channels at the ribbon synapse have been obtained, as have insights into how the receptor potential couples to the release of synaptic vesicles. Here, we review current concepts about the mechanisms that control the timing of transmitter release in inner hair cells of the cochlea. PMID:16901948

  18. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.
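    The core operation, filtering a source through a head-related impulse response (HRIR) pair, one filter per ear, can be sketched as below. This static sketch omits the Convolvotron's time-varying, motion-compensated filtering, and the HRIR pair is a toy stand-in for measured responses, not real data.

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono source for headphones by convolving it with a
    head-related impulse response pair, one filter per ear."""
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)])

# Toy HRIR pair: the right ear hears the source slightly later and
# quieter, roughly as for a source located off to the left.
fs = 8000
hrir_l = np.zeros(64); hrir_l[0] = 1.0
hrir_r = np.zeros(64); hrir_r[5] = 0.6     # 5 samples ~ 0.6 ms interaural delay
mono = np.random.default_rng(0).standard_normal(fs)   # 1 s of noise
stereo = spatialize(mono, hrir_l, hrir_r)
```

    Measured HRIRs additionally encode spectral cues from the pinna and head, which is what makes the perceived location three-dimensional rather than merely lateralized.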

  19. The hearing threshold of a harbor porpoise (Phocoena phocoena) for impulsive sounds (L).

    PubMed

    Kastelein, Ronald A; Gransier, Robin; Hoek, Lean; de Jong, Christ A F

    2012-08-01

    The distance at which harbor porpoises can hear underwater detonation sounds is unknown, but depends, among other factors, on the hearing threshold of the species for impulsive sounds. Therefore, the underwater hearing threshold of a young harbor porpoise for an impulsive sound, designed to mimic a detonation pulse, was quantified by using a psychophysical technique. The synthetic exponential pulse with a 5 ms time constant was produced and transmitted by an underwater projector in a pool. The resulting underwater sound, though modified by the response of the projection system and by the pool, exhibited the characteristic features of detonation sounds: a zero-to-peak sound pressure level at least 30 dB (re 1 s⁻¹) higher than the sound exposure level, and a short duration (34 ms). The animal's 50% detection threshold for this impulsive sound occurred at a received unweighted broadband sound exposure level of 60 dB re 1 μPa²·s. It is shown that the porpoise's audiogram for short-duration tonal signals [Kastelein et al., J. Acoust. Soc. Am. 128, 3211-3222 (2010)] can be used to estimate its hearing threshold for impulsive sounds.
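    The relation between zero-to-peak SPL and SEL can be checked directly for an idealized exponential pulse. For a pure exponential with time constant τ, the difference is −10·log₁₀(τ/2) ≈ 26 dB re 1 s⁻¹ at τ = 5 ms; the ≥30 dB reported above reflects the measured pulse shape rather than this ideal. The peak pressure below is an arbitrary illustrative value, not from the study.

```python
import numpy as np

fs = 100_000                      # sampling rate (Hz)
tau = 0.005                       # 5 ms time constant, as in the study
p_peak = 1000.0                   # peak pressure in Pa (illustrative value only)
t = np.arange(0, 0.2, 1 / fs)
p = p_peak * np.exp(-t / tau)     # idealized exponential detonation-like pulse

# zero-to-peak SPL (dB re 1 uPa) and sound exposure level (dB re 1 uPa^2 s)
spl_pk = 20 * np.log10(p_peak / 1e-6)
sel = 10 * np.log10(np.sum(p ** 2) / fs / 1e-12)
diff = spl_pk - sel               # dB re 1 s^-1; ~ -10*log10(tau/2) for this shape
```

    The difference is independent of the peak pressure chosen, which is why it can be stated as a property of the pulse shape alone.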

  20. Constructing Noise-Invariant Representations of Sound in the Auditory Pathway

    PubMed Central

    Rabinowitz, Neil C.; Willmore, Ben D. B.; King, Andrew J.; Schnupp, Jan W. H.

    2013-01-01

    Identifying behaviorally relevant sounds in the presence of background noise is one of the most important and poorly understood challenges faced by the auditory system. An elegant solution to this problem would be for the auditory system to represent sounds in a noise-invariant fashion. Since a major effect of background noise is to alter the statistics of the sounds reaching the ear, noise-invariant representations could be promoted by neurons adapting to stimulus statistics. Here we investigated the extent of neuronal adaptation to the mean and contrast of auditory stimulation as one ascends the auditory pathway. We measured these forms of adaptation by presenting complex synthetic and natural sounds, recording neuronal responses in the inferior colliculus and primary fields of the auditory cortex of anaesthetized ferrets, and comparing these responses with a sophisticated model of the auditory nerve. We find that the strength of both forms of adaptation increases as one ascends the auditory pathway. To investigate whether this adaptation to stimulus statistics contributes to the construction of noise-invariant sound representations, we also presented complex, natural sounds embedded in stationary noise, and used a decoding approach to assess the noise tolerance of the neuronal population code. We find that the code for complex sounds in the periphery is affected more by the addition of noise than the cortical code. We also find that noise tolerance is correlated with adaptation to stimulus statistics, so that populations that show the strongest adaptation to stimulus statistics are also the most noise-tolerant. This suggests that the increase in adaptation to sound statistics from auditory nerve to midbrain to cortex is an important stage in the construction of noise-invariant sound representations in the higher auditory brain. PMID:24265596

  1. Selective Listening Point Audio Based on Blind Signal Separation and Stereophonic Technology

    NASA Astrophysics Data System (ADS)

    Niwa, Kenta; Nishino, Takanori; Takeda, Kazuya

A sound field reproduction method is proposed that uses blind source separation and a head-related transfer function. In the proposed system, multichannel acoustic signals captured at distant microphones are decomposed into a set of location/signal pairs of virtual sound sources based on frequency-domain independent component analysis. After estimating the locations and the signals of the virtual sources by convolving the controlled acoustic transfer functions with each signal, the spatial sound is constructed at the selected point. In experiments, a sound field made by six sound sources is captured using 48 distant microphones and decomposed into sets of virtual sound sources. Since subjective evaluation shows no significant difference between natural and reconstructed sound when six virtual sources are used, the effectiveness of the decomposing algorithm as well as the virtual source representation is confirmed.
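The record describes convolutive, frequency-domain ICA; a full implementation is beyond an abstract, but the fixed-point (FastICA) iteration at its core can be illustrated on a toy instantaneous two-channel mixture. The mixing matrix, sources, and iteration counts below are synthetic assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
# Two independent, non-Gaussian (uniform) sources.
s = rng.uniform(-1.0, 1.0, size=(2, n))
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])          # mixing matrix, unknown in practice
x = A @ s                           # observed microphone mixtures

# Whiten the observations.
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(x @ x.T / n)
z = (E @ np.diag(d ** -0.5) @ E.T) @ x

# FastICA fixed-point iteration with a cubic nonlinearity, using
# deflation to extract the two components one at a time.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        w_new = (z * (w @ z) ** 3).mean(axis=1) - 3.0 * w
        w_new -= W[:i].T @ (W[:i] @ w_new)   # decorrelate from found rows
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < 1e-12:
            w = w_new
            break
        w = w_new
    W[i] = w
recovered = W @ z                   # estimated sources (up to sign/order)
```

Each recovered row should correlate strongly with one of the original sources; extending this per frequency bin, with permutation alignment across bins, yields the convolutive separation the paper relies on.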

2. Earth Observing System/Advanced Microwave Sounding Unit-A (EOS/AMSU-A): Acquisition activities plan

    NASA Technical Reports Server (NTRS)

    Schwantje, Robert

    1994-01-01

This is the acquisition activities plan for the software to be used in the Earth Observing System (EOS) Advanced Microwave Sounding Unit-A (AMSU-A) system. This document is submitted in response to Contract NAS5-32314 as CDRL 508. The procurement activities required to acquire software for the EOS/AMSU-A program are defined.

  3. How Should Children with Speech Sound Disorders be Classified? A Review and Critical Evaluation of Current Classification Systems

    ERIC Educational Resources Information Center

    Waring, R.; Knight, R.

    2013-01-01

    Background: Children with speech sound disorders (SSD) form a heterogeneous group who differ in terms of the severity of their condition, underlying cause, speech errors, involvement of other aspects of the linguistic system and treatment response. To date there is no universal and agreed-upon classification system. Instead, a number of…

  4. Method for Determination of the Wind Velocity and Direction

    NASA Technical Reports Server (NTRS)

    Dahlin, Goesta Johan

    1988-01-01

Accurate determination of the position of, for example, an artillery piece using sound measurement systems that measure the muzzle noise requires access to wind data representative of the portion of the air through which the sound wave propagates up to the microphone base of the system. The invention provides a system for determining such representative wind data.

  5. Development of the low-cost multi-channel analyzer system for γ-ray spectroscopy with a PC sound card

    NASA Astrophysics Data System (ADS)

    Sugihara, Kenkoh; Nakamura, Satoshi N.; Chiga, Nobuyuki; Fujii, Yuu; Tamura, Hirokazu

    2013-10-01

A low-cost multi-channel analyzer (MCA) system was developed using a custom-built interface circuit and a PC sound card. The performance of the system was studied using γ-ray spectroscopy measurements with a NaI(Tl) scintillation detector. Our system successfully measured the energy of γ-rays at a rate of 1000 counts per second (cps).
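The core of a sound-card MCA of this kind is pulse-height analysis: digitize the shaped detector pulses, find each pulse's peak amplitude, and accumulate the peaks into a histogram whose channels map to γ-ray energy. A minimal sketch, where the threshold and channel count are illustrative assumptions rather than values from the paper:

```python
import numpy as np

def pulse_height_histogram(samples, threshold=0.1, n_channels=1024):
    """Accumulate pulse peak heights into an MCA-style spectrum.

    samples: 1-D array of normalized ADC samples in [0, 1].
    Returns (spectrum, list of peak heights).
    """
    above = samples > threshold
    # Rising/falling edges delimit individual pulses.
    edges = np.flatnonzero(np.diff(above.astype(int)))
    starts, ends = edges[::2] + 1, edges[1::2] + 1
    peaks = [samples[s:e].max() for s, e in zip(starts, ends)]
    # Map each peak height to a channel number and histogram it.
    channels = (np.asarray(peaks) * (n_channels - 1)).astype(int)
    spectrum = np.bincount(channels, minlength=n_channels)
    return spectrum, peaks

# Two synthetic shaped pulses with peak heights 0.5 and 0.8.
samples = np.zeros(100)
samples[10:15] = [0.2, 0.5, 0.4, 0.3, 0.2]
samples[50:55] = [0.3, 0.8, 0.6, 0.4, 0.2]
spectrum, peaks = pulse_height_histogram(samples)
```

In the actual instrument the sample stream comes from the sound card's ADC; the channel axis is then calibrated against known γ-ray lines.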

  6. Top-down modulation of auditory processing: effects of sound context, musical expertise and attentional focus.

    PubMed

    Tervaniemi, M; Kruck, S; De Baene, W; Schröger, E; Alter, K; Friederici, A D

    2009-10-01

    By recording auditory electrical brain potentials, we investigated whether the basic sound parameters (frequency, duration and intensity) are differentially encoded among speech vs. music sounds by musicians and non-musicians during different attentional demands. To this end, a pseudoword and an instrumental sound of comparable frequency and duration were presented. The accuracy of neural discrimination was tested by manipulations of frequency, duration and intensity. Additionally, the subjects' attentional focus was manipulated by instructions to ignore the sounds while watching a silent movie or to attentively discriminate the different sounds. In both musicians and non-musicians, the pre-attentively evoked mismatch negativity (MMN) component was larger to slight changes in music than in speech sounds. The MMN was also larger to intensity changes in music sounds and to duration changes in speech sounds. During attentional listening, all subjects more readily discriminated changes among speech sounds than among music sounds as indexed by the N2b response strength. Furthermore, during attentional listening, musicians displayed larger MMN and N2b than non-musicians for both music and speech sounds. Taken together, the data indicate that the discriminative abilities in human audition differ between music and speech sounds as a function of the sound-change context and the subjective familiarity of the sound parameters. These findings provide clear evidence for top-down modulatory effects in audition. In other words, the processing of sounds is realized by a dynamically adapting network considering type of sound, expertise and attentional demands, rather than by a strictly modularly organized stimulus-driven system.

  7. Handbook of Super 8 Production.

    ERIC Educational Resources Information Center

    Telzer, Ronnie, Ed.

    This handbook is designed for anyone interested in producing super 8 films at any level of complexity and cost. Separate chapters present detailed discussions of the following topics: super 8 production systems and super 8 shooting and editing systems; budgeting; cinematography and sound recording; preparing to edit; editing; mixing sound tracks;…

  8. Developing a Weighted Measure of Speech Sound Accuracy

    ERIC Educational Resources Information Center

    Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.

    2011-01-01

    Purpose: To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound…

  9. A Logical Letter-Sound System in Five Phonic Generalizations

    ERIC Educational Resources Information Center

    Gates, Louis; Yale, Ian

    2011-01-01

    In five phonic generalizations, this article introduces a logical system of letter-sound relationships. Ranging from 91% to 99% phonic transparency, these statements generalize a study of 16,928 words in children's literature. The r-controlled vowels aside, the analysis shows 54 basic transparent letters and letter combinations, 39 transparent…

  10. Digital PIV Measurements of Acoustic Particle Displacements in a Normal Incidence Impedance Tube

    NASA Technical Reports Server (NTRS)

    Humphreys, William M., Jr.; Bartram, Scott M.; Parrott, Tony L.; Jones, Michael G.

    1998-01-01

Acoustic particle displacements and velocities inside a normal incidence impedance tube have been successfully measured for a variety of pure tone sound fields using Digital Particle Image Velocimetry (DPIV). The DPIV system utilized two 600-mJ Nd:YAG lasers to generate a double-pulsed light sheet synchronized with the sound field and used to illuminate a portion of the oscillatory flow inside the tube. A high resolution (1320 x 1035 pixel), 8-bit camera was used to capture double-exposed images of 2.7-micron hollow silicon dioxide tracer particles inside the tube. Classical spatial autocorrelation analysis techniques were used to ascertain the acoustic particle displacements and associated velocities for various sound field intensities and frequencies. The results show that particle displacements spanning a range of 1-60 microns can be measured for incident sound pressure levels of 100-130 dB and for frequencies spanning 500-1000 Hz. The ability to resolve 1 micron particle displacements at sound pressure levels in the 100 dB range allows the use of DPIV systems for measurement of sound fields at much lower sound pressure levels than had been previously possible. Representative impedance tube data as well as an uncertainty analysis for the measurements are presented.
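In double-exposure PIV, the displacement appears as a secondary peak in the spatial autocorrelation of the image, offset from the origin by the particle shift between laser pulses. A 1-D sketch using a synthetic double-exposed signal rather than real image data:

```python
import numpy as np

def displacement_from_autocorrelation(signal):
    """Return the offset (in samples) of the strongest non-central
    autocorrelation peak, i.e. the estimated particle displacement."""
    ac = np.correlate(signal, signal, mode="full")
    zero_lag = len(signal) - 1
    positive_lags = ac[zero_lag + 1:]
    return int(np.argmax(positive_lags)) + 1

# Synthetic 1-D "double exposure": a particle pattern plus the same
# pattern shifted by 7 samples.
pattern = np.zeros(200)
pattern[[5, 30, 62, 101, 140]] = 1.0
double_exposure = pattern + np.roll(pattern, 7)
# displacement_from_autocorrelation(double_exposure) -> 7
```

Multiplying the recovered pixel shift by the magnification and dividing by the pulse separation gives the acoustic particle velocity.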

  11. Techniques and applications for binaural sound manipulation in human-machine interfaces

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1990-01-01

    The implementation of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.
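The core operation behind such displays is convolving a monaural source with a pair of head-related impulse responses (HRIRs), one per ear. A toy sketch using an idealized delay-and-attenuate pair in place of measured HRIRs; the delay and gain values are illustrative assumptions:

```python
import numpy as np

def spatialize(mono, itd_samples, ild_gain):
    """Crude binaural rendering: delay and attenuate the far ear.

    Real systems instead convolve the source with measured
    head-related impulse responses (HRIRs) for each ear.
    """
    left = mono.copy()
    right = np.zeros_like(mono)
    right[itd_samples:] = ild_gain * mono[:len(mono) - itd_samples]
    return left, right

fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
# A ~0.5 ms ITD and 6 dB ILD place the source toward the listener's left.
left, right = spatialize(tone, itd_samples=4, ild_gain=0.5)
```

Swapping the delay/gain pair for measured HRIR filters (and interpolating between measurement directions) yields the full three-dimensional display described in the record.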

  12. Feasibility of making sound power measurements in the NASA Langley V/STOL tunnel test section

    NASA Technical Reports Server (NTRS)

    Brooks, T. F.; Scheiman, J.; Silcox, R. J.

    1976-01-01

    Based on exploratory acoustic measurements in Langley's V/STOL wind tunnel, recommendations are made on the methodology for making sound power measurements of aircraft components in the closed tunnel test section. During airflow, tunnel self-noise and microphone flow-induced noise place restrictions on the amplitude and spectrum of the sound source to be measured. Models of aircraft components with high sound level sources, such as thrust engines and powered lift systems, seem likely candidates for acoustic testing.

  13. Techniques and applications for binaural sound manipulation in human-machine interfaces

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1992-01-01

    The implementation of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.

  14. NOAA/NESDIS Operational Sounding Processing Systems using the hyperspectral and microwaves sounders data from CrIS/ATMS, IASI/AMSU, and ATOVS

    NASA Astrophysics Data System (ADS)

    Sharma, A. K.

    2016-12-01

The operational polar sounding systems running at the National Oceanic and Atmospheric Administration (NOAA) National Environmental Satellite Data and Information Service (NESDIS) process sounder data from the Cross-track Infrared Sounder (CrIS) onboard the Suomi National Polar-orbiting Partnership (SNPP) satellite under the Joint Polar Satellite System (JPSS) program; the Infrared Atmospheric Sounding Interferometer (IASI) onboard the Metop-1 and Metop-2 satellites under the program managed by the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT); and the Advanced TIROS (Television and Infrared Observation Satellite) Operational Vertical Sounding (ATOVS) instruments onboard NOAA-19, in the NOAA series of Polar Orbiting Environmental Satellites (POES), and Metop-1 and Metop-2. As advanced operational sounders, CrIS and IASI provide more accurate, detailed temperature and humidity profiles; trace gases such as ozone, nitrous oxide, carbon dioxide, and methane; outgoing longwave radiation; and cloud-cleared radiances (CCR) on a global scale, and these products are available to the operational user community. This presentation highlights the tools developed for the NOAA Unique Combined Atmospheric Processing System (NUCAPS) and discusses the Environmental Satellites Processing Center (ESPC) system architecture for sounding data processing and distribution of CrIS, IASI, and ATOVS sounding products. Discussion also includes the improvements made in data quality measurements, granule processing and distribution, and user timeliness requirements envisioned for the next generation of JPSS and GOES-R satellites. There have been significant changes in the operational system due to system upgrades, algorithm updates, and value-added data products and services.
Innovative tools to better monitor performance and quality assurance of the operational sounder and imager products from the CrIS/ATMS, IASI and ATOVS have been developed and deployed at the Office of Satellite and Product Operations (OSPO). The incorporation of these tools in the OSPO operation has facilitated the diagnosis and resolution of problems when detected in the operational environment.

  15. A Hearing-Based, Frequency Domain Sound Quality Model for Combined Aerodynamic and Power Transmission Response with Application to Rotorcraft Interior Noise

    NASA Astrophysics Data System (ADS)

    Sondkar, Pravin B.

The severity of combined aerodynamic and power transmission response in high-speed, high-power-density systems such as a rotorcraft is still a major cause of annoyance in spite of recent advancements in passive, semi-active and active control. With further increases in the capacity and power of this class of machinery, the acoustic noise levels are expected to increase even more. To achieve further improvements in sound quality, a more refined understanding of the factors and attributes controlling human perception is needed. In the case of rotorcraft systems, the perceived quality of the interior sound field is a major determining factor of passenger comfort. Traditionally, this sound quality factor is determined by measuring the response of a chosen set of juries who are asked to compare their qualitative reactions to two or more sounds based on their subjective impressions. This type of testing is very time-consuming, costly, often inconsistent, and not useful for practical design purposes. Furthermore, there is no known universal model for sound quality. The primary aim of this research is to achieve significant improvements in quantifying the sound quality of combined aerodynamic and power transmission response in high-speed, high-power-density machinery systems such as a rotorcraft by applying relevant objective measures related to the spectral characteristics of the sound field. Two models have been proposed in this dissertation research. First, a classical multivariate regression analysis model based on currently known sound quality metrics as well as some new metrics derived in this study is presented. Even though the analysis resulted in the best possible multivariate model as a measure of the acoustic noise quality, it lacks incorporation of the human judgment mechanism. The regression model can change depending on the specific application, the nature of the sounds, and the types of juries used in the study.
Also, it predicts only the averaged preference scores and does not explain why two jury members differ in their judgment. To address the above shortcoming of applying regression analysis, a new human judgment model is proposed to further improve the ability to predict the degree of subjective annoyance. The human judgment model involves extraction of subjective attributes and their values using a proposed artificial jury processor. In this approach, a set of ear transfer functions are employed to compute the characteristics of sound pressure waves as perceived subjectively by human. The resulting basilar membrane displacement data from this proposed model is then applied to analyze the attribute values. Using this proposed human judgment model, the human judgment mechanism, which is highly sophisticated, will be examined. Since the human judgment model is essentially based on jury attributes that are not expected to change significantly with application or nature of the sound field, it gives a more common basis to evaluate sound quality. This model also attempts to explain the inter-juror differences in opinion, which is critical in understanding the variability in human response.
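The first model described above is a classical multivariate regression of jury-averaged annoyance scores on objective sound quality metrics. A minimal sketch of that kind of fit; the metric names and all numbers are hypothetical, not data from the dissertation:

```python
import numpy as np

# Hypothetical jury data: each row holds two objective metrics
# (e.g. loudness in sone, sharpness in acum); the target is the
# jury-averaged annoyance score.  All values are illustrative.
metrics = np.array([
    [12.0, 1.1],
    [18.5, 1.4],
    [25.0, 1.9],
    [31.2, 2.3],
    [40.1, 2.8],
])
annoyance = np.array([2.1, 3.0, 4.2, 5.1, 6.4])

# Least-squares fit: annoyance ~ b0 + b1*loudness + b2*sharpness.
X = np.column_stack([np.ones(len(metrics)), metrics])
coef, *_ = np.linalg.lstsq(X, annoyance, rcond=None)
predicted = X @ coef
```

As the abstract notes, such a model predicts only averaged preferences; the proposed human judgment model instead derives attributes from simulated basilar-membrane responses to capture inter-juror differences.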

  16. An open real-time tele-stethoscopy system.

    PubMed

    Foche-Perez, Ignacio; Ramirez-Payba, Rodolfo; Hirigoyen-Emparanza, German; Balducci-Gonzalez, Fernando; Simo-Reigadas, Francisco-Javier; Seoane-Pascual, Joaquin; Corral-Peñafiel, Jaime; Martinez-Fernandez, Andres

    2012-08-23

Acute respiratory infections are the leading cause of childhood mortality. The lack of physicians in rural areas of developing countries makes their correct diagnosis and treatment difficult. The staff of rural health facilities (health-care technicians) may not be qualified to distinguish respiratory diseases by auscultation. For this reason, the goal of this project is the development of a tele-stethoscopy system that allows a physician to receive real-time cardio-respiratory sounds from a remote auscultation, as well as video images showing where the technician is placing the stethoscope on the patient's body. A real-time wireless stethoscopy system was designed. The initial requirements were: 1) The system must send audio and video synchronously over IP networks, not requiring an Internet connection; 2) It must preserve the quality of cardiorespiratory sounds, allowing the binaural pieces and the chestpiece of standard stethoscopes to be adapted; and 3) Cardiorespiratory sounds should be recordable at both sides of the communication. In order to verify the diagnostic capacity of the system, a clinical validation with eight specialists has been designed. In a preliminary test, twelve patients were auscultated by all the physicians using the tele-stethoscopy system, versus a local auscultation using a traditional stethoscope. The system must allow listening to cardiac (systolic and diastolic murmurs, gallop sound, arrhythmias) and respiratory (rhonchi, rales and crepitations, wheeze, diminished and bronchial breath sounds, pleural friction rub) sounds. The design, development and initial validation of the real-time wireless tele-stethoscopy system are described in detail. The system was conceived from scratch as open-source, low-cost and designed in such a way that many universities and small local companies in developing countries may manufacture it. 
Only free open-source software has been used in order to minimize manufacturing costs and look for alliances to support its improvement and adaptation. The microcontroller firmware code, the computer software code and the PCB schematics are available for free download in a subversion repository hosted in SourceForge. It has been shown that real-time tele-stethoscopy, together with a videoconference system that allows a remote specialist to oversee the auscultation, may be a very helpful tool in rural areas of developing countries.

  17. Lung and Heart Sounds Analysis: State-of-the-Art and Future Trends.

    PubMed

    Padilla-Ortiz, Ana L; Ibarra, David

    2018-01-01

Lung sounds, which include all sounds that are produced during the mechanism of respiration, may be classified into normal breath sounds and adventitious sounds. Normal breath sounds occur when no respiratory problems exist, whereas adventitious lung sounds (wheeze, rhonchi, crackle, etc.) are usually associated with certain pulmonary pathologies. Heart and lung sounds that are heard using a stethoscope are the result of mechanical interactions that indicate operation of the cardiac and respiratory systems, respectively. In this article, we review the research conducted during the last six years on lung and heart sounds, instrumentation and data sources (sensors and databases), technological advances, and perspectives in processing and data analysis. Our review suggests that chronic obstructive pulmonary disease (COPD) and asthma are the most common respiratory diseases reported on in the literature; related diseases that are less analyzed include chronic bronchitis, idiopathic pulmonary fibrosis, congestive heart failure, and parenchymal pathology. Some new findings regarding methodologies associated with advances in the electronic stethoscope have been presented for auscultatory heart sound signal processing, including analysis and classification of the resulting sounds to create a diagnosis based on a quantifiable medical assessment. The availability of automatic, high-precision interpretation of heart and lung sounds opens interesting possibilities for cardiovascular diagnosis as well as potential for intelligent diagnosis of heart and lung diseases.

  18. 46 CFR 28.400 - Radar and depth sounding devices.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 1 2011-10-01 2011-10-01 false Radar and depth sounding devices. 28.400 Section 28.400... Operate With More Than 16 Individuals on Board § 28.400 Radar and depth sounding devices. (a) Each vessel must be fitted with a general marine radar system for surface navigation with a radar screen mounted at...

  19. 46 CFR 28.400 - Radar and depth sounding devices.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Radar and depth sounding devices. 28.400 Section 28.400... Operate With More Than 16 Individuals on Board § 28.400 Radar and depth sounding devices. (a) Each vessel must be fitted with a general marine radar system for surface navigation with a radar screen mounted at...

  20. 46 CFR 28.875 - Radar, depth sounding, and auto-pilot.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 1 2011-10-01 2011-10-01 false Radar, depth sounding, and auto-pilot. 28.875 Section 28... COMMERCIAL FISHING INDUSTRY VESSELS Aleutian Trade Act Vessels § 28.875 Radar, depth sounding, and auto-pilot. (a) Each vessel must be fitted with a general marine radar system for surface navigation with a radar...

  1. 46 CFR 28.400 - Radar and depth sounding devices.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 1 2012-10-01 2012-10-01 false Radar and depth sounding devices. 28.400 Section 28.400... Operate With More Than 16 Individuals on Board § 28.400 Radar and depth sounding devices. (a) Each vessel must be fitted with a general marine radar system for surface navigation with a radar screen mounted at...

  2. 46 CFR 28.400 - Radar and depth sounding devices.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 1 2014-10-01 2014-10-01 false Radar and depth sounding devices. 28.400 Section 28.400... Operate With More Than 16 Individuals on Board § 28.400 Radar and depth sounding devices. (a) Each vessel must be fitted with a general marine radar system for surface navigation with a radar screen mounted at...

  3. 46 CFR 28.400 - Radar and depth sounding devices.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 1 2013-10-01 2013-10-01 false Radar and depth sounding devices. 28.400 Section 28.400... Operate With More Than 16 Individuals on Board § 28.400 Radar and depth sounding devices. (a) Each vessel must be fitted with a general marine radar system for surface navigation with a radar screen mounted at...

  4. The influence of underwater data transmission sounds on the displacement behaviour of captive harbour seals (Phoca vitulina).

    PubMed

    Kastelein, Ronald A; van der Heul, Sander; Verboom, Willem C; Triesscheijn, Rob J V; Jennings, Nancy V

    2006-02-01

    To prevent grounding of ships and collisions between ships in shallow coastal waters, an underwater data collection and communication network (ACME) using underwater sounds to encode and transmit data is currently under development. Marine mammals might be affected by ACME sounds since they may use sound of a similar frequency (around 12 kHz) for communication, orientation, and prey location. If marine mammals tend to avoid the vicinity of the acoustic transmitters, they may be kept away from ecologically important areas by ACME sounds. One marine mammal species that may be affected in the North Sea is the harbour seal (Phoca vitulina). No information is available on the effects of ACME-like sounds on harbour seals, so this study was carried out as part of an environmental impact assessment program. Nine captive harbour seals were subjected to four sound types, three of which may be used in the underwater acoustic data communication network. The effect of each sound was judged by comparing the animals' location in a pool during test periods to that during baseline periods, during which no sound was produced. Each of the four sounds could be made into a deterrent by increasing its amplitude. The seals reacted by swimming away from the sound source. The sound pressure level (SPL) at the acoustic discomfort threshold was established for each of the four sounds. The acoustic discomfort threshold is defined as the boundary between the areas that the animals generally occupied during the transmission of the sounds and the areas that they generally did not enter during transmission. The SPLs at the acoustic discomfort thresholds were similar for each of the sounds (107 dB re 1 microPa). Based on this discomfort threshold SPL, discomfort zones at sea for several source levels (130-180 dB re 1 microPa) of the sounds were calculated, using a guideline sound propagation model for shallow water. 
The discomfort zone is defined as the area around a sound source that harbour seals are expected to avoid. The definition of the discomfort zone is based on behavioural discomfort, and does not necessarily coincide with the physical discomfort zone. Based on these results, source levels can be selected that have an acceptable effect on harbour seals in particular areas. The discomfort zone of a communication sound depends on the sound, the source level, and the propagation characteristics of the area in which the sound system is operational. The source level of the communication system should be adapted to each area (taking into account the width of a sea arm, the local sound propagation, and the importance of an area to the affected species). The discomfort zone should not coincide with ecologically important areas (for instance resting, breeding, suckling, and feeding areas), or routes between these areas.
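The discomfort zones described above follow from combining a source level with the measured 107 dB re 1 µPa discomfort threshold through a propagation model. The study's guideline shallow-water model is not reproduced here; a simple log-distance spreading law illustrates the calculation, with spherical spreading as an assumed stand-in:

```python
DISCOMFORT_SPL = 107.0  # dB re 1 uPa, the threshold reported above

def discomfort_radius(source_level_db, spreading=20.0):
    """Radius (m) at which the received level falls to the discomfort
    threshold under simple log-distance spreading:
        SPL(r) = SL - spreading * log10(r)
    The study itself used a guideline shallow-water propagation model;
    spherical spreading (spreading=20) is only an illustration.
    """
    return 10.0 ** ((source_level_db - DISCOMFORT_SPL) / spreading)

# A 147 dB re 1 uPa source under spherical spreading:
radius = discomfort_radius(147.0)   # -> 100.0 m
```

In practice the spreading term (and any absorption correction) must be fitted to the local sea arm, which is why the study recommends adapting the source level per area.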

  5. Azimuthal sound localization in the European starling (Sturnus vulgaris): I. Physical binaural cues.

    PubMed

    Klump, G M; Larsen, O N

    1992-02-01

    The physical measurements reported here test whether the European starling (Sturnus vulgaris) evaluates the azimuth direction of a sound source with a peripheral auditory system composed of two acoustically coupled pressure-difference receivers (1) or of two decoupled pressure receivers (2). A directional pattern of sound intensity in the free-field was measured at the entrance of the auditory meatus using a probe microphone, and at the tympanum using laser vibrometry. The maximum differences in the sound-pressure level measured with the microphone between various speaker positions and the frontal speaker position were 2.4 dB at 1 and 2 kHz, 7.3 dB at 4 kHz, 9.2 dB at 6 kHz, and 10.9 dB at 8 kHz. The directional amplitude pattern measured by laser vibrometry did not differ from that measured with the microphone. Neither did the directional pattern of travel times to the ear. Measurements of the amplitude and phase transfer function of the starling's interaural pathway using a closed sound system were in accord with the results of the free-field measurements. In conclusion, although some sound transmission via the interaural canal occurred, the present experiments support the hypothesis 2 above that the starling's peripheral auditory system is best described as consisting of two functionally decoupled pressure receivers.

  6. Spacecraft Internal Acoustic Environment Modeling

    NASA Technical Reports Server (NTRS)

Chu, Shao-sheng R.; Allen, Christopher S.

    2009-01-01

Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. In FY09, the physical mockup developed in FY08, with interior geometric shape similar to the Orion CM (Crew Module) IML (Internal Mold Line), was used to validate SEA (Statistical Energy Analysis) acoustic model development with realistic ventilation fan sources. The sound power levels of these sources were unknown a priori, as opposed to previous studies in which an RSS (Reference Sound Source) with known sound power level was used. The modeling results were evaluated based on comparisons to measurements of sound pressure levels over a wide frequency range, including the frequency range where SEA gives good results. Sound intensity measurement was performed over a rectangular-shaped grid system enclosing the ventilation fan source. Sound intensities were measured at the top, front, back, right, and left surfaces of the grid system. Sound intensity at the bottom surface was not measured, but sound blocking material was placed under the bottom surface to reflect most of the incident sound energy back to the remaining measured surfaces. Integrating measured sound intensities over the measured surfaces renders the estimated sound power of the source. The reverberation time T60 of the mockup interior had been modified to match reverberation levels of the ISS US Lab interior for speech frequency bands, i.e., 0.5k, 1k, 2k, 4 kHz, by attaching appropriately sized Thinsulate sound absorption material to the interior wall of the mockup. Sound absorption of Thinsulate was modeled with three methods: Sabine equation with measured mockup interior reverberation time T60, layup model based on past impedance tube testing, and layup model plus air absorption correction. 
The evaluation/validation was carried out by acquiring octave band microphone data simultaneously at ten fixed locations throughout the mockup. SPLs (Sound Pressure Levels) predicted by our SEA model match well with measurements for our CM mockup, despite its more complicated shape. Additionally in FY09, background NC (Noise Criterion) noise simulation and MRT (Modified Rhyme Test) procedures were developed and performed in the mockup to determine the maximum noise level in the CM habitable volume for fair crew voice communications. Numerous demonstrations of the simulated noise environment in the mockup and the associated SIL (Speech Interference Level) via MRT were performed for various communities, including members from NASA and Orion prime-/sub-contractors. Also, a new HSIR (Human-Systems Integration Requirement) for limiting pre- and post-landing SIL was proposed.
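The source power estimate described above is the integral of measured normal sound intensity over the faces of the measurement box. A short sketch of that bookkeeping; the areas and intensity values are illustrative, not data from the report:

```python
import math

def sound_power(intensities, areas):
    """Integrate measured normal sound intensities (W/m^2) over the
    measurement surfaces (m^2) to estimate radiated sound power (W)."""
    return sum(i * a for i, a in zip(intensities, areas))

def sound_power_level(power_watts, ref_watts=1e-12):
    """Sound power level in dB re 1 pW."""
    return 10.0 * math.log10(power_watts / ref_watts)

# Five measured faces (top, front, back, left, right) of the box
# around the fan; the unmeasured bottom face is blocked so its
# energy reflects back through the measured faces.
areas = [0.25, 0.20, 0.20, 0.15, 0.15]        # m^2
intensities = [4e-6, 3e-6, 3e-6, 2e-6, 2e-6]  # W/m^2
P = sound_power(intensities, areas)
Lw = sound_power_level(P)
```

The resulting sound power feeds the SEA model as the fan source term, replacing the calibrated reference source used in earlier studies.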

  7. Interactive Sonification of Spontaneous Movement of Children—Cross-Modal Mapping and the Perception of Body Movement Qualities through Sound

    PubMed Central

    Frid, Emma; Bresin, Roberto; Alborno, Paolo; Elblaus, Ludvig

    2016-01-01

In this paper we present three studies focusing on the effect of different sound models in interactive sonification of bodily movement. We hypothesized that a sound model characterized by continuous smooth sounds would be associated with other movement characteristics than a model characterized by abrupt variation in amplitude and that these associations could be reflected in spontaneous movement characteristics. Three subsequent studies were conducted to investigate the relationship between properties of bodily movement and sound: (1) a motion capture experiment involving interactive sonification of a group of children spontaneously moving in a room, (2) an experiment involving perceptual ratings of sonified movement data and (3) an experiment involving matching between sonified movements and their visualizations in the form of abstract drawings. In (1) we used a system consisting of 17 IR cameras tracking passive reflective markers. The head positions in the horizontal plane of 3–4 children were simultaneously tracked and sonified, producing 3–4 sound sources spatially displayed through an 8-channel loudspeaker system. We analyzed children's spontaneous movement in terms of energy-, smoothness- and directness-index. Despite large inter-participant variability and group-specific effects caused by interaction among children when engaging in the spontaneous movement task, we found a small but significant effect of sound model. Results from (2) indicate that different sound models can be rated differently on a set of motion-related perceptual scales (e.g., expressivity and fluidity). Also, results imply that audio-only stimuli can evoke stronger perceived properties of movement (e.g., energetic, impulsive) than stimuli involving both audio and video representations. Findings in (3) suggest that sounds portraying bodily movement can be represented using abstract drawings in a meaningful way. 
We argue that the results from these studies support the existence of a cross-modal mapping of body motion qualities from bodily movement to sounds. Sound can be translated and understood from bodily motion, conveyed through sound visualizations in the shape of drawings and translated back from sound visualizations to audio. The work underlines the potential of using interactive sonification to communicate high-level features of human movement data. PMID:27891074

  8. Interactive Sonification of Spontaneous Movement of Children-Cross-Modal Mapping and the Perception of Body Movement Qualities through Sound.

    PubMed

    Frid, Emma; Bresin, Roberto; Alborno, Paolo; Elblaus, Ludvig

    2016-01-01

    In this paper we present three studies focusing on the effect of different sound models in interactive sonification of bodily movement. We hypothesized that a sound model characterized by continuous smooth sounds would be associated with different movement characteristics than a model characterized by abrupt variation in amplitude, and that these associations could be reflected in spontaneous movement characteristics. Three subsequent studies were conducted to investigate the relationship between properties of bodily movement and sound: (1) a motion capture experiment involving interactive sonification of a group of children spontaneously moving in a room, (2) an experiment involving perceptual ratings of sonified movement data, and (3) an experiment involving matching between sonified movements and their visualizations in the form of abstract drawings. In (1) we used a system consisting of 17 IR cameras tracking passive reflective markers. The head positions in the horizontal plane of 3-4 children were simultaneously tracked and sonified, producing 3-4 sound sources spatially displayed through an 8-channel loudspeaker system. We analyzed children's spontaneous movement in terms of energy, smoothness, and directness indices. Despite large inter-participant variability and group-specific effects caused by interaction among children when engaging in the spontaneous movement task, we found a small but significant effect of sound model. Results from (2) indicate that different sound models can be rated differently on a set of motion-related perceptual scales (e.g., expressivity and fluidity). Also, results imply that audio-only stimuli can evoke stronger perceived properties of movement (e.g., energetic, impulsive) than stimuli involving both audio and video representations. Findings in (3) suggest that sounds portraying bodily movement can be represented using abstract drawings in a meaningful way.
We argue that the results from these studies support the existence of a cross-modal mapping of body motion qualities from bodily movement to sounds. Body motion can be translated into and understood through sound, conveyed through sound visualizations in the form of drawings, and translated back from such visualizations to audio. The work underlines the potential of using interactive sonification to communicate high-level features of human movement data.

  9. Re-Sonification of Objects, Events, and Environments

    NASA Astrophysics Data System (ADS)

    Fink, Alex M.

    Digital sound synthesis allows the creation of a great variety of sounds. Focusing on interesting or ecologically valid sounds for music, simulation, aesthetics, or other purposes limits the otherwise vast digital audio palette. Tools for creating such sounds vary from arbitrary methods of altering recordings to precise simulations of vibrating objects. In this work, methods of sound synthesis by re-sonification are considered. Re-sonification, herein, refers to the general process of analyzing, possibly transforming, and resynthesizing or reusing recorded sounds in meaningful ways, to convey information. Applied to soundscapes, re-sonification is presented as a means of conveying activity within an environment. Applied to the sounds of objects, this work examines modeling the perception of objects as well as their physical properties and the ability to simulate interactive events with such objects. To create soundscapes to re-sonify geographic environments, a method of automated soundscape design is presented. Using recorded sounds that are classified based on acoustic, social, semantic, and geographic information, this method produces stochastically generated soundscapes to re-sonify selected geographic areas. Drawing on prior knowledge, local sounds and those deemed similar comprise a locale's soundscape. In the context of re-sonifying events, this work examines processes for modeling and estimating the excitations of sounding objects. These include plucking, striking, rubbing, and any interaction that imparts energy into a system, affecting the resultant sound. A method of estimating a linear system's input, constrained to a signal-subspace, is presented and applied toward improving the estimation of percussive excitations for re-sonification. To work toward robust recording-based modeling and re-sonification of objects, new implementations of banded waveguide (BWG) models are proposed for object modeling and sound synthesis. 
Previous implementations of BWGs use arbitrary model parameters and may produce a range of simulations that do not match digital waveguide or modal models of the same design. Subject to linear excitations, some models proposed here behave identically to other equivalently designed physical models. Under nonlinear interactions, such as bowing, many of the proposed implementations exhibit improvements in the attack characteristics of synthesized sounds.

  10. Sound attenuation of fiberglass lined ventilation ducts

    NASA Astrophysics Data System (ADS)

    Albright, Jacob

    Sound attenuation is a crucial part of designing any HVAC system. Most ventilation systems are designed to be in areas occupied by one or more persons. If these systems do not adequately attenuate the sound of the supply fan, compressor, or any other source of sound, the affected area could be subject to an array of problems ranging from an annoying hum to a deafening howl. The goals of this project are to quantify the sound attenuation properties of fiberglass duct liner and to perform a regression analysis to develop equations to predict insertion loss values for both rectangular and round duct liners. The first goal was accomplished via insertion loss testing. The tests performed conformed to the ASTM E477 standard. Using the insertion loss test data, regression equations were developed to predict insertion loss values for rectangular ducts ranging in size from 12-in x 18-in to 48-in x 48-in in lengths ranging from 3 ft to 30 ft. Regression equations were also developed to predict insertion loss values for round ducts ranging in diameter from 12-in to 48-in in lengths ranging from 3 ft to 30 ft.
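    The regression step described above can be sketched with ordinary least squares. The data, variables, and coefficients below are synthetic stand-ins for illustration, not the study's published equations.

```python
import numpy as np

# Synthetic stand-in data spanning the tested ranges (3-30 ft, 12-48 in).
rng = np.random.default_rng(0)
length_ft = rng.uniform(3, 30, 50)
width_in = rng.uniform(12, 48, 50)
# Hypothetical "measured" insertion loss: grows with lined length,
# shrinks with duct size, plus measurement noise.
il_db = 1.2 * length_ft - 0.15 * width_in + 5.0 + rng.normal(0, 0.1, 50)

# Least-squares fit of IL = b1*length + b2*width + b0.
X = np.column_stack([length_ft, width_in, np.ones(len(il_db))])
coef, *_ = np.linalg.lstsq(X, il_db, rcond=None)
print(coef)  # slopes/intercept recovered, roughly [1.2, -0.15, 5.0]
```

    In practice one such equation would be fitted per octave band, since lined-duct attenuation is strongly frequency dependent.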

  11. Sound absorption of microperforated panels inside compact acoustic enclosures

    NASA Astrophysics Data System (ADS)

    Yang, Cheng; Cheng, Li

    2016-01-01

    This paper investigates the sound absorption effect of microperforated panels (MPPs) in small-scale enclosures, an effort stemming from the recent interest in using MPPs for noise control in compact mechanical systems. Two typical MPP backing cavity configurations (an empty backing cavity and a honeycomb backing structure) are studied. Although both configurations provide basically the same sound absorption curves from standard impedance tube measurements, their in situ sound absorption properties, when placed inside a small enclosure, are drastically different. This phenomenon is explained using a simple system model based on modal analyses. It is shown that the accurate prediction of the in situ sound absorption of the MPPs inside compact acoustic enclosures requires meticulous consideration of the configuration of the backing cavity and its coupling with the enclosure in front. The MPP structure should be treated as part of the entire system, rather than as an absorption boundary characterized by the surface impedance, calculated or measured in a simple acoustic environment. Considering the spatial matching between the acoustic fields across the MPP, the possibility of attenuating particular enclosure resonances by partially covering the enclosure wall with a properly designed MPP structure is also demonstrated.

  12. Amplitude modulation detection by human listeners in sound fields.

    PubMed

    Zahorik, Pavel; Kim, Duck O; Kuwada, Shigeyuki; Anderson, Paul W; Brandewie, Eugene; Srinivasan, Nirmal

    2011-10-01

    The temporal modulation transfer function (TMTF) approach allows techniques from linear systems analysis to be used to predict how the auditory system will respond to arbitrary patterns of amplitude modulation (AM). Although this approach forms the basis for a standard method of predicting speech intelligibility based on estimates of the acoustical modulation transfer function (MTF) between source and receiver, human sensitivity to AM as characterized by the TMTF has not been extensively studied under realistic listening conditions, such as in reverberant sound fields. Here, TMTFs (octave bands from 2 to 512 Hz) were obtained in three listening conditions simulated using virtual auditory space techniques: diotic, anechoic sound field, and reverberant room sound field. TMTFs were then related to acoustical MTFs estimated using two different methods in each of the listening conditions. Both diotic and anechoic data were found to be in good agreement with classic results, but AM thresholds in the reverberant room were lower than predictions based on acoustical MTFs. This result suggests that simple linear systems techniques may not be appropriate for predicting TMTFs from acoustical MTFs in reverberant sound fields, and may be suggestive of mechanisms that functionally enhance modulation during reverberant listening.
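    For an idealized exponentially decaying reverberant field, the acoustical MTF that such predictions rest on has a well-known closed form (Schroeder's expression). The sketch below uses an arbitrary example reverberation time, not a value from the paper.

```python
import math

def schroeder_mtf(f_mod_hz, t60_s):
    """Modulation transfer function of an ideal exponential reverberant
    decay: reverberation low-pass filters the signal envelope, so the
    retained modulation depth falls as modulation frequency or T60 grows."""
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * f_mod_hz * t60_s / 13.8) ** 2)

# Example: fraction of modulation depth retained in a room with T60 = 0.6 s.
for f in (2, 8, 32, 128, 512):
    print(f, round(schroeder_mtf(f, 0.6), 3))
```

    The paper's finding is that AM thresholds measured in a reverberant room beat what this kind of acoustical MTF alone would predict.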

  13. Prediction of break-out sound from a rectangular cavity via an elastically mounted panel.

    PubMed

    Wang, Gang; Li, Wen L; Du, Jingtao; Li, Wanyou

    2016-02-01

    The break-out sound from a cavity via an elastically mounted panel is predicted in this paper. The vibroacoustic system model is derived based on the so-called spectro-geometric method, in which the solution over each sub-domain is invariably expressed as a modified Fourier series expansion. Unlike the traditional modal superposition methods, the continuity of the normal velocities is faithfully enforced on the interfaces between the flexible panel and the (interior and exterior) acoustic media. A fully coupled vibro-acoustic system is obtained by taking into account the strong coupling between the vibration of the elastic panel and the sound fields on both sides. The typically time-consuming calculations of quadruple integrals encountered in determining the sound power radiated from a panel have been effectively avoided by reducing them, via discrete cosine transform, into a number of single integrals which are subsequently calculated analytically in closed form. Several numerical examples are presented to validate the system model, understand the effects of panel mounting conditions on sound transmission, and demonstrate the dependence of the "measured" transmission loss on the size of the source room.

  14. Ion sound instability driven by the ion flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koshkarov, O., E-mail: koshkarov.alexandr@usask.ca; Smolyakov, A. I.; National Research Centre

    2015-05-15

    Ion sound instabilities driven by the ion flow in a system of finite length are considered by analytical and numerical methods. The ion sound waves are modified by the presence of the stationary ion flow, resulting in negative- and positive-energy modes. The instability develops due to coupling of the negative- and positive-energy modes, mediated by reflections from the boundary. It is shown that the wave dispersion due to deviation from quasineutrality is crucial for the stability. In a finite-length system, the dispersion is characterized by the length of the system measured in units of the Debye length. The instability is studied analytically, and the results are compared with direct initial-value numerical simulations.

  15. Hydrographic surveys of rivers and lakes using a multibeam echosounder mapping system

    USGS Publications Warehouse

    Huizinga, Richard J.; Heimann, David C.

    2018-06-12

    A multibeam echosounder is a type of sound navigation and ranging device that uses sound waves to “see” through even murky waters. Unlike a single beam echosounder (also known as a depth sounder or fathometer) that releases a single sound pulse in a single, narrow beam and “listens” for the return echo, a multibeam system emits a multidirectional radial beam to obtain information within a fan-shaped swath. The timing and direction of the returning sound waves provide detailed information on the depth of water and the shape of the river channel, lake bottom, or any underwater features of interest. This information has been used by the U.S. Geological Survey to efficiently generate high-resolution maps of river and lake bottoms.
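    The depth computation from echo timing can be sketched as follows. The sound speed and flat-ray geometry are textbook simplifications (no ray bending, vessel-motion, or tide corrections), not the USGS processing chain.

```python
import math

def beam_depth(travel_time_s, beam_angle_deg, sound_speed_ms=1480.0):
    """Depth and across-track distance for one beam of a multibeam
    echosounder. Half the round-trip travel time times the speed of sound
    in water gives the slant range; projecting along the beam angle
    (measured from vertical) splits it into depth and lateral offset."""
    slant_range = sound_speed_ms * travel_time_s / 2.0
    theta = math.radians(beam_angle_deg)
    return slant_range * math.cos(theta), slant_range * math.sin(theta)

# A 20 ms echo on a beam steered 30 degrees off vertical:
depth_m, across_m = beam_depth(0.02, 30.0)
print(round(depth_m, 2), round(across_m, 2))  # 12.82 7.4
```

    Repeating this across the whole fan of beams, ping after ping, is what builds the swath of soundings behind the high-resolution bottom maps.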

  16. A reduced-order integral formulation to account for the finite size effect of isotropic square panels using the transfer matrix method.

    PubMed

    Bonfiglio, Paolo; Pompoli, Francesco; Lionti, Riccardo

    2016-04-01

    The transfer matrix method is a well-established prediction tool for the simulation of sound transmission loss and the sound absorption coefficient of flat multilayer systems. Much research has been dedicated to enhancing the accuracy of the method by introducing a finite size effect of the structure to be simulated. The aim of this paper is to present a reduced-order integral formulation to predict radiation efficiency and radiation impedance for a panel with equal lateral dimensions. The results are presented and discussed for different materials in terms of radiation efficiency, sound transmission loss, and the sound absorption coefficient. Finally, the application of the proposed methodology for rectangular multilayer systems is also investigated and validated against experimental data.

  17. Frequency-independent radiation modes of interior sound radiation: An analytical study

    NASA Astrophysics Data System (ADS)

    Hesse, C.; Vivar Perez, J. M.; Sinapius, M.

    2017-03-01

    Global active control methods of sound radiation into acoustic cavities necessitate the formulation of the interior sound field in terms of the surrounding structural velocity. This paper proposes an efficient approach to do this by presenting an analytical method to describe the radiation modes of interior sound radiation. The method requires no knowledge of the structural modal properties, which are often difficult to obtain in control applications. The procedure is exemplified for two generic systems of fluid-structure interaction, namely a rectangular plate coupled to a cuboid cavity and a hollow cylinder with the fluid in its enclosed cavity. The radiation modes are described as a subset of the acoustic eigenvectors on the structural-acoustic interface. For the two studied systems, they are therefore independent of frequency.

  18. Simulation and testing of a multichannel system for 3D sound localization

    NASA Astrophysics Data System (ADS)

    Matthews, Edward Albert

    Three-dimensional (3D) audio involves the ability to localize sound anywhere in a three-dimensional space. 3D audio can be used to provide the listener with the perception of moving sounds and can provide a realistic listening experience for applications such as gaming, video conferencing, movies, and concerts. The purpose of this research is to simulate and test 3D audio by incorporating auditory localization techniques in a multi-channel speaker system. The objective is to develop an algorithm that can place an audio event in a desired location by calculating and controlling the gain factors of each speaker. A MATLAB simulation displays the location of the speakers and perceived sound, which is verified through experimentation. The scenario in which the listener is not equidistant from each of the speakers is also investigated and simulated. This research is envisioned to lead to a better understanding of human localization of sound, and will contribute to a more realistic listening experience.
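    One common textbook way to compute gain factors for a loudspeaker pair is the stereophonic tangent law; the sketch below is that generic approach, not necessarily the algorithm developed in this research.

```python
import math

def stereo_pan_gains(azimuth_deg, speaker_angle_deg=30.0):
    """Gain factors placing a phantom source at azimuth_deg between two
    loudspeakers at +/- speaker_angle_deg (tangent law), normalized so
    g_l**2 + g_r**2 = 1 (constant perceived power). Valid for
    |azimuth_deg| <= speaker_angle_deg."""
    ratio = math.tan(math.radians(azimuth_deg)) / math.tan(math.radians(speaker_angle_deg))
    g_l, g_r = 1.0 - ratio, 1.0 + ratio
    norm = math.hypot(g_l, g_r)
    return g_l / norm, g_r / norm

print(stereo_pan_gains(0.0))   # equal gains: source perceived at center
print(stereo_pan_gains(30.0))  # (0.0, 1.0): source at the right speaker
```

    Multichannel systems generalize this idea by panning between the pair (or triplet) of speakers nearest the desired direction; non-equidistant listeners additionally require per-speaker delay and gain compensation.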

  19. The Integrated Sounding System: Description and Preliminary Observations from TOGA COARE.

    NASA Astrophysics Data System (ADS)

    Parsons, David; Dabberdt, Walter; Cole, Harold; Hock, Terrence; Martin, Charles; Barrett, Anne-Leslie; Miller, Erik; Spowart, Michael; Howard, Michael; Ecklund, Warner; Carter, David; Gage, Kenneth; Wilson, John

    1994-04-01

    An Integrated Sounding System (ISS) that combines state-of-the-art remote and in situ sensors into a single transportable facility has been developed jointly by the National Center for Atmospheric Research (NCAR) and the Aeronomy Laboratory of the National Oceanic and Atmospheric Administration (NOAA/AL). The instrumentation for each ISS includes a 915-MHz wind profiler, a Radio Acoustic Sounding System (RASS), an Omega-based NAVAID sounding system, and an enhanced surface meteorological station. The general philosophy behind the ISS is that the integration of various measurement systems overcomes each system's respective limitations while taking advantage of its positive attributes. The individual observing systems within the ISS provide high-level data products to a central workstation that manages and integrates these measurements. The ISS software package performs a wide range of functions: real-time data acquisition, database support, and graphical displays; data archival and communications; and operational and post-time analysis. The first deployment of the ISS consists of six sites in the western tropical Pacific: four land-based deployments and two ship-based deployments. The sites serve the Coupled Ocean-Atmosphere Response Experiment (COARE) of the Tropical Ocean and Global Atmosphere (TOGA) program and TOGA's enhanced atmospheric monitoring effort. Examples of ISS data taken during this deployment are shown in order to demonstrate the capabilities of this new sounding system and the performance of these in situ and remote sensing instruments in a moist tropical environment. In particular, a strong convective outflow with a pronounced impact on the atmospheric boundary layer and heat fluxes from the ocean surface was examined with a shipboard ISS. If such strong outflows commonly occur, they may prove to be an important component of the surface energy budget of the western tropical Pacific.

  20. Defense Acquisitions: Addressing Incentives is Key to Further Reform Efforts

    DTIC Science & Technology

    2014-04-30

    championed sound management practices, such as realistic cost estimating, prototyping, and systems engineering. While some progress has been made ... other reforms have championed sound management practices, such as realistic cost estimating, prototyping, and systems engineering. DOD’s declining ... principles from disciplines such as systems engineering, as well as lessons learned and past reforms. The body of work we have done on benchmarking ...

  1. Steerable sound transport in a 3D acoustic network

    NASA Astrophysics Data System (ADS)

    Xia, Bai-Zhan; Jiao, Jun-Rui; Dai, Hong-Qing; Yin, Sheng-Wen; Zheng, Sheng-Jie; Liu, Ting-Ting; Chen, Ning; Yu, De-Jie

    2017-10-01

    Quasi-lossless and asymmetric sound transport, which is exceedingly desirable in various modern physical systems, is almost always based on nonlinear or angular-momentum-biasing effects requiring extremely high power levels and complex modulation schemes. A practical route to steerable sound transport along an arbitrary acoustic pathway, especially in a three-dimensional (3D) acoustic network, could revolutionize sound power propagation and sound communication. Here, we design an acoustic device containing a regular-tetrahedral cavity with four cylindrical waveguides. A smaller regular-tetrahedral solid in this cavity is eccentrically placed to break the spatial symmetry of the acoustic device. The numerical and experimental results show that sound power can propagate unimpeded between two waveguides away from the eccentric solid within a wide frequency range. Based on the quasi-lossless and asymmetric transport characteristic of the single acoustic device, we construct a 3D acoustic network in which sound power can flexibly propagate along arbitrary pathways defined by our acoustic devices with eccentrically placed regular-tetrahedral solids.

  2. Aquatic Acoustic Metrics Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-12-18

    Fishes and marine mammals may suffer a range of potential effects from exposure to intense underwater sound generated by anthropogenic activities such as pile driving, shipping, sonars, and underwater blasting. Several underwater sound recording (USR) devices have been built to acquire samples of the underwater sound generated by anthropogenic activities. Software is indispensable for processing and analyzing the audio files recorded by these USRs. The new Aquatic Acoustic Metrics Interface Utility Software (AAMI) is specifically designed for analysis of underwater sound recordings, providing data in metrics that facilitate evaluation of the potential impacts of the sound on aquatic animals. In addition to basic functions, such as loading and editing audio files recorded by USRs and batch processing of sound files, the software utilizes recording-system calibration data to compute important parameters in physical units. The software also facilitates comparison of the recorded sound metrics with biological measures such as audiograms of the sensitivity of aquatic animals to the sound, integrating the various components into a single analytical frame.
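    Converting raw recorder counts to a parameter in physical units via a calibration factor can be illustrated as below; the function name and calibration constant are hypothetical, not part of the AAMI software.

```python
import math

def spl_db(samples, counts_per_pascal, p_ref_pa=1e-6):
    """Root-mean-square sound pressure level in dB re 1 uPa (the usual
    underwater reference), converting raw ADC counts to pascals using a
    recorder/hydrophone calibration factor (hypothetical value here)."""
    rms_counts = math.sqrt(sum(x * x for x in samples) / len(samples))
    return 20.0 * math.log10((rms_counts / counts_per_pascal) / p_ref_pa)

# A steady 1 Pa signal recorded at 100 counts/Pa reads 120 dB re 1 uPa.
print(round(spl_db([100.0] * 1000, counts_per_pascal=100.0), 6))  # 120.0
```

    A level in physical units like this is what can then be compared directly against an animal's audiogram.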

  3. The Specificity of Sound Symbolic Correspondences in Spoken Language.

    PubMed

    Tzeng, Christina Y; Nygaard, Lynne C; Namy, Laura L

    2017-11-01

    Although language has long been regarded as a primarily arbitrary system, sound symbolism, or non-arbitrary correspondences between the sound of a word and its meaning, also exists in natural language. Previous research suggests that listeners are sensitive to sound symbolism. However, little is known about the specificity of these mappings. This study investigated whether sound symbolic properties correspond to specific meanings, or whether these properties generalize across semantic dimensions. In three experiments, native English-speaking adults heard sound symbolic foreign words for dimensional adjective pairs (big/small, round/pointy, fast/slow, moving/still) and for each foreign word, selected a translation among English antonyms that either matched or mismatched with the correct meaning dimension. Listeners agreed more reliably on the English translation for matched relative to mismatched dimensions, though reliable cross-dimensional mappings did occur. These findings suggest that although sound symbolic properties generalize to meanings that may share overlapping semantic features, sound symbolic mappings offer semantic specificity. Copyright © 2016 Cognitive Science Society, Inc.

  4. Memory for product sounds: the effect of sound and label type.

    PubMed

    Ozcan, Elif; van Egmond, René

    2007-11-01

    The (mnemonic) interactions between auditory, visual, and the semantic systems have been investigated using structurally complex auditory stimuli (i.e., product sounds). Six types of product sounds (air, alarm, cyclic, impact, liquid, mechanical) that vary in spectral-temporal structure were presented in four label type conditions: self-generated text, text, image, and pictogram. A memory paradigm that incorporated free recall, recognition, and matching tasks was employed. The results for the sound type suggest that the amount of spectral-temporal structure in a sound can be indicative for memory performance. Findings related to label type suggest that 'self' creates a strong bias for the retrieval and the recognition of sounds that were self-labeled; the density and the complexity of the visual information (i.e., pictograms) hinders the memory performance ('visual' overshadowing effect); and image labeling has an additive effect on the recall and matching tasks (dual coding). Thus, the findings suggest that the memory performances for product sounds are task-dependent.

  5. Pitch features of environmental sounds

    NASA Astrophysics Data System (ADS)

    Yang, Ming; Kang, Jian

    2016-07-01

    A number of soundscape studies have suggested the need for suitable parameters for soundscape measurement, in addition to the conventional acoustic parameters. This paper explores the applicability of pitch features that are often used in music analysis and their algorithms to environmental sounds. Based on the existing alternative pitch algorithms for simulating the perception of the auditory system and simplified algorithms for practical applications in the areas of music and speech, the applicable algorithms have been determined, considering common types of sound in everyday soundscapes. Considering a number of pitch parameters, including pitch value, pitch strength, and percentage of audible pitches over time, different pitch characteristics of various environmental sounds have been shown. Among the four sound categories, i.e. water, wind, birdsongs, and urban sounds, generally speaking, both water and wind sounds have low pitch values and pitch strengths; birdsongs have high pitch values and pitch strengths; and urban sounds have low pitch values and a relatively wide range of pitch strengths.
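    The pitch value and pitch strength parameters discussed above can be approximated with a plain normalized autocorrelation; this is a simplified stand-in for the auditory-model algorithms the paper evaluates, not the paper's own method.

```python
import math

def pitch_autocorr(signal, sample_rate, fmin=50.0, fmax=2000.0):
    """Estimate pitch (Hz) and pitch strength (0..1) as the location and
    height of the largest normalized autocorrelation peak within the lag
    range corresponding to [fmin, fmax]."""
    n = len(signal)
    energy = sum(x * x for x in signal)
    best_lag, best_r = 1, 0.0
    for lag in range(int(sample_rate / fmax), int(sample_rate / fmin) + 1):
        r = sum(signal[i] * signal[i - lag] for i in range(lag, n)) / energy
        if r > best_r:
            best_lag, best_r = lag, r
    return sample_rate / best_lag, best_r

# A 200 Hz tone sampled at 8 kHz: strongly pitched, so the peak is high.
tone = [math.sin(2 * math.pi * 200 * i / 8000) for i in range(800)]
print(pitch_autocorr(tone, 8000))
```

    For broadband sounds such as wind or traffic, the autocorrelation peak stays low, which is exactly the low pitch strength the paper reports for those categories; the "percentage of audible pitches over time" follows by thresholding the strength frame by frame.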

  6. [Perception and selectivity of sound duration in the central auditory midbrain].

    PubMed

    Wang, Xin; Li, An-An; Wu, Fei-Jian

    2010-08-25

    Sound duration plays an important role in acoustic communication. Information in acoustic signals is mainly encoded in the amplitude and frequency spectra of different durations. Duration-selective neurons exist in the central auditory system, including the inferior colliculus (IC) of frogs, bats, mice and chinchillas, and they are important in signal recognition and feature detection. Two generally accepted models, the "coincidence detector model" and the "anti-coincidence detector model", have been proposed to explain the mechanism of neural selective responses to sound durations, based on the study of IC neurons in bats. Although they differ in details, both emphasize the importance of synaptic integration of excitatory and inhibitory inputs, and both are able to explain the responses of most duration-selective neurons. However, both hypotheses need to be refined, since other sound parameters, such as spectral pattern, amplitude and repetition rate, can affect the duration selectivity of the neurons. The dynamic changes of sound parameters are believed to enable the animal to effectively recognize behavior-related acoustic signals. Under free-field sound stimulation, we analyzed the neural responses in the IC and auditory cortex of mouse and bat to sounds with different duration, frequency and amplitude, using intracellular or extracellular recording techniques. Based on our work and previous studies, this article reviews the properties of duration selectivity in the central auditory system and discusses the mechanisms of duration selectivity and the effect of other sound parameters on the duration coding of auditory neurons.

  7. Situational Lightning Climatologies for Central Florida: Phase III

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III

    2008-01-01

    This report describes work done by the Applied Meteorology Unit (AMU) to add composite soundings to the Advanced Weather Interactive Processing System (AWIPS). This allows National Weather Service (NWS) forecasters to compare the current atmospheric state with climatology. In a previous phase, the AMU created composite soundings for four rawinsonde observation stations in Florida, for each of eight flow regimes. The composite soundings were delivered to the NWS Melbourne (MLB) office for display using the NSHARP software program. NWS MLB requested that the AMU make the composite soundings available for display in AWIPS. The AMU first created a procedure to customize AWIPS so composite soundings could be displayed. A unique four-character identifier was created for each of the 32 composite soundings. The AMU wrote a Tool Command Language/Toolkit (Tcl/Tk) software program to convert the composite soundings from NSHARP to Network Common Data Form (NetCDF) format. The NetCDF files could then be displayed by AWIPS.
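    The conversion program itself was written in Tcl/Tk; as a format-agnostic illustration, the sketch below parses a sounding laid out in whitespace-delimited columns into named per-level records, the intermediate step before writing NetCDF variables. The column layout and variable names here are hypothetical, not the actual NSHARP file format.

```python
# Hypothetical sounding text: pressure, height, temperature, dewpoint,
# wind direction, wind speed -- one level per line.
RAW = """\
1000.0    112.0   24.6   21.2   150.0    5.0
 925.0    766.0   20.1   17.8   170.0    8.0
 850.0   1460.0   16.3   14.0   190.0   12.0
"""

def parse_levels(text):
    """Parse whitespace-delimited sounding levels into per-level dicts,
    keyed by (assumed) variable name, ready to be written out as NetCDF
    variables along a 'level' dimension."""
    names = ("pres_hpa", "hght_m", "temp_c", "dwpt_c", "wdir_deg", "wspd_ms")
    return [dict(zip(names, map(float, line.split())))
            for line in text.strip().splitlines()]

levels = parse_levels(RAW)
print(len(levels), levels[0]["pres_hpa"], levels[2]["wspd_ms"])  # 3 1000.0 12.0
```

    Each parsed column would then become one NetCDF variable, with the four-character sounding identifier carried as file metadata so AWIPS can select among the 32 composites.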

  8. Stridulatory sound-production and its function in females of the cicada Subpsaltria yangi.

    PubMed

    Luo, Changqing; Wei, Cong

    2015-01-01

    Acoustic behavior plays a crucial role in many aspects of cicada biology, such as reproduction and intrasexual competition. Although female sound production has been reported in some cicada species, the acoustic behavior of female cicadas has received little attention. In the cicada Subpsaltria yangi, females possess a pair of unusually well-developed stridulatory organs. Here, sound production and its function in females of this remarkable species were investigated. We revealed that the females can produce sounds by a stridulatory mechanism during pair formation, and that these sounds elicit both acoustic and phonotactic responses from males. In addition, while the females perform stridulatory sound-producing movements, the forewings strike the body, generating impact sounds. Acoustic playback experiments indicated that the impact sounds play no role in the behavioral context of pair formation. This study provides the first experimental evidence that females of a cicada species can generate sounds by a stridulatory mechanism. We anticipate that our results will promote acoustic studies on females of other cicada species that also possess a stridulatory system.

  9. Neuro-cognitive aspects of "OM" sound/syllable perception: A functional neuroimaging study.

    PubMed

    Kumar, Uttam; Guleria, Anupam; Khetrapal, Chunni Lal

    2015-01-01

    The sound "OM" is believed to bring mental peace and calm. The cortical activation associated with listening to the sound "OM", in contrast to a similar non-meaningful sound (TOM) and to a meaningful Hindi word (AAM), has been investigated using functional magnetic resonance imaging (fMRI). A behavior-interleaved gradients technique was employed in order to avoid interference from scanner noise. The results reveal that listening to the "OM" sound in contrast to the meaningful Hindi word condition activates areas of bilateral cerebellum, left middle frontal gyrus (dorsolateral middle frontal/BA 9), right precuneus (BA 5) and right supramarginal gyrus (SMG). Listening to the "OM" sound in contrast to the non-meaningful sound condition leads to cortical activation in bilateral middle frontal (BA 9), right middle temporal (BA 37), right angular gyrus (BA 40), right SMG and right superior middle frontal gyrus (BA 8). The conjunction analysis reveals that the common neural regions activated by the "OM" sound in both conditions are the middle frontal (left dorsolateral middle frontal cortex) and right SMG. The results correspond to the fact that listening to the "OM" sound recruits neural systems implicated in emotional empathy.

  10. Production Accuracy in a Young Cochlear Implant Recipient

    ERIC Educational Resources Information Center

    Warner-Czyz, Andrea D.; Davis, Barbara L.; Morrison, Helen M.

    2005-01-01

    The availability of cochlear implants in younger children has provided the opportunity to evaluate the relative impact of the production system, or the sounds young children can say, and the auditory system, or the sounds children can hear, on early vocal communication. Limited access to the acoustic properties of speech results in differences in…

  11. Educational Support System for Experiments Involving Construction of Sound Processing Circuits

    ERIC Educational Resources Information Center

    Takemura, Atsushi

    2012-01-01

    This paper proposes a novel educational support system for technical experiments involving the production of practical electronic circuits for sound processing. To support circuit design and production, each student uses a computer during the experiments, and can learn circuit design, virtual circuit making, and real circuit making. In the…

  12. 49 CFR 325.57 - Location and operation of sound level measurement systems; stationary test.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 5 2014-10-01 2014-10-01 false Location and operation of sound level measurement systems; stationary test. 325.57 Section 325.57 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL MOTOR CARRIER SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION GENERAL REGULATIONS COMPLIANCE WITH INTERSTATE MOTOR...

  13. Methods of recording and analysing cough sounds.

    PubMed

    Subburaj, S; Parvez, L; Rajagopalan, T G

    1996-01-01

    Efforts have been directed toward developing a computerized system for the acquisition and multi-dimensional analysis of cough sounds. The system consists of a PC-AT 486 computer with an ADC board having 12-bit resolution. The audio cough sound is acquired with a sensitive miniature microphone at a sampling rate of 8 kHz and simultaneously recorded in real time on a digital audio tape recorder, which also serves as a backup. Analysis of the cough sound is performed in the time and frequency domains using the digitized data, which provide numerical values for key parameters such as cough counts, bouts, their intensity, and latency. In addition, the duration of each event and the cough patterns provide a unique tool for the objective evaluation of antitussive and expectorant drugs. Both on-line and off-line checks ensure error-free performance over long periods of time. The entire system has been evaluated for sensitivity, accuracy, precision, and reliability. Its successful use in clinical studies has established what is perhaps the first integrated approach to the objective evaluation of cough.

  14. Development of the Low-cost Analog-to-Digital Converter (for nuclear physics experiments) with PC sound card

    NASA Astrophysics Data System (ADS)

    Sugihara, Kenkoh

    2009-10-01

    A low-cost ADC (analogue-to-digital converter) with embedded shaping, intended for the undergraduate physics laboratory, has been developed using a homemade circuit and a PC sound card. Although an ADC is an essential part of an experimental setup, commercially available units are very expensive and are scarce in undergraduate laboratory experiments. The system developed in the present work is designed for a gamma-ray spectroscopy laboratory with NaI(Tl) counters, but is not limited to that application. For this purpose, the system performance is set to a sampling rate of 1 kHz with 10-bit resolution, using a typical PC sound card (44.1-kHz or higher sampling rate, 16-bit resolution ADC) with the addition of a shaping circuit. Details of the system and the status of its development will be presented.
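
    The end result of such a shaping-plus-sound-card setup is a pulse-height spectrum. The sketch below is a hypothetical illustration (not the author's circuit-side code): it scans a digitized waveform for excursions above a threshold and histograms their peak amplitudes, which is the core of spectrum building from sound-card samples.

```python
import numpy as np

def pulse_height_spectrum(samples, threshold=0.05, n_bins=256):
    """Histogram the peak heights of shaped detector pulses in a
    digitized waveform (e.g. one captured via a PC sound card).

    Each contiguous excursion above `threshold` is treated as one
    shaped pulse; its maximum amplitude is the pulse height."""
    above = samples > threshold
    # indices where the signal crosses the threshold (rise/fall pairs)
    edges = np.flatnonzero(np.diff(above.astype(int)))
    heights = [samples[start:stop + 1].max()
               for start, stop in zip(edges[::2], edges[1::2])]
    hist, _ = np.histogram(heights, bins=n_bins, range=(0.0, 1.0))
    return hist
```

    With NaI(Tl) data, photopeaks would appear as clusters of counts in a few adjacent bins of the returned histogram.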

  15. Source and listener directivity for interactive wave-based sound propagation.

    PubMed

    Mehra, Ravish; Antani, Lakulish; Kim, Sujeong; Manocha, Dinesh

    2014-04-01

    We present an approach to model dynamic, data-driven source and listener directivity for interactive wave-based sound propagation in virtual environments and computer games. Our directional source representation is expressed as a linear combination of elementary spherical harmonic (SH) sources. In the preprocessing stage, we precompute and encode the propagated sound fields due to each SH source. At runtime, we perform the SH decomposition of the varying source directivity interactively and compute the total sound field at the listener position as a weighted sum of precomputed SH sound fields. We propose a novel plane-wave decomposition approach based on higher-order derivatives of the sound field that enables dynamic HRTF-based listener directivity at runtime. We provide a generic framework to incorporate our source and listener directivity in any offline or online frequency-domain wave-based sound propagation algorithm. We have integrated our sound propagation system in Valve's Source game engine and use it to demonstrate realistic acoustic effects such as sound amplification, diffraction low-passing, scattering, localization, externalization, and spatial sound, generated by wave-based propagation of directional sources and listener in complex scenarios. We also present results from our preliminary user study.
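
    The runtime stage described above, where the total field is a weighted sum of precomputed per-SH-source fields, reduces to a single matrix product. A minimal sketch, with hypothetical array layouts (not the paper's actual data structures):

```python
import numpy as np

def total_field(sh_fields, sh_weights):
    """Runtime combination of precomputed sound fields.

    sh_fields  : (n_sh, n_points) complex array, the propagated
                 field of each elementary SH source (precomputed
                 offline, one row per SH basis function).
    sh_weights : (n_sh,) complex array, the SH decomposition of
                 the current source directivity.
    Returns the total field at each listener point as a weighted
    sum of the precomputed SH fields."""
    return sh_weights @ sh_fields
```

    Because only the weight vector changes when the source directivity varies, the per-frame cost is a small matrix-vector product rather than a new wave simulation.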

  16. Neural Correlates of Phonological Processing in Speech Sound Disorder: A Functional Magnetic Resonance Imaging Study

    ERIC Educational Resources Information Center

    Tkach, Jean A.; Chen, Xu; Freebairn, Lisa A.; Schmithorst, Vincent J.; Holland, Scott K.; Lewis, Barbara A.

    2011-01-01

    Speech sound disorders (SSD) are the largest group of communication disorders observed in children. One explanation for these disorders is that children with SSD fail to form stable phonological representations when acquiring the speech sound system of their language due to poor phonological memory (PM). The goal of this study was to examine PM in…

  17. THE SOUNDS OF ENGLISH AND ITALIAN, A SYSTEMATIC ANALYSIS OF THE CONTRASTS BETWEEN THE SOUND SYSTEMS. CONTRASTIVE STRUCTURE SERIES.

    ERIC Educational Resources Information Center

    AGARD, FREDERICK B.; DI PIETRO, ROBERT J.

    DESIGNED AS A SOURCE OF INFORMATION FOR PROFESSIONALS PREPARING INSTRUCTIONAL MATERIALS, PLANNING COURSES, OR DEVELOPING CLASSROOM TECHNIQUES FOR FOREIGN LANGUAGE PROGRAMS, A SERIES OF STUDIES HAS BEEN PREPARED THAT CONTRASTS, IN TWO VOLUMES FOR EACH OF THE FIVE MOST COMMONLY TAUGHT FOREIGN LANGUAGES IN THE UNITED STATES, THE SOUND AND GRAMMATICAL…

  18. Estimating surface acoustic impedance with the inverse method.

    PubMed

    Piechowicz, Janusz

    2011-01-01

    Sound field parameters are predicted with numerical methods in sound control systems, in the acoustic design of buildings, and in sound field simulations. Those methods use the acoustic properties of surfaces, such as sound absorption coefficients or acoustic impedance, to determine boundary conditions. Several in situ measurement techniques have been developed; one of them uses two microphones to measure direct and reflected sound over a planar test surface. Another approach is used in the inverse boundary element method, in which estimating the acoustic impedance of a surface is expressed as an inverse boundary problem: the boundary values can be found from multipoint sound pressure measurements in the interior of a room. This method can be applied to arbitrarily shaped surfaces. This investigation is part of a research programme on the use of inverse methods in industrial room acoustics.

  19. Whale contribution to long time series of low-frequency oceanic ambient sound

    NASA Astrophysics Data System (ADS)

    Andrew, Rex K.; Howe, Bruce M.; Mercer, James A.

    2002-05-01

    It has long been known that baleen (mainly blue and fin) whale vocalizations are a component of oceanic ambient sound. Urick reports that the famous ``20-cycle pulses'' were observed even on the first Navy hydrophone installations in the early 1950s. As part of the Acoustic Thermometry of Ocean Climate (ATOC) and North Pacific Acoustic Laboratory (NPAL) programs, more than 6 years of nearly continuous ambient sound data have been collected from Sound Surveillance System (SOSUS) sites in the northeast Pacific. These records now show that the average level of ambient sound has risen by as much as 10 dB since the 1960s. Although much of this increase is probably attributable to manmade sources, the whale call component is still prominent. The data also show that the whale signal is clearly seasonal: in coherent averages of year-long records, the whale call signal is the only feature that stands out, making strong and repeatable patterns as the whale population migrates past the hydrophone systems. This prominent and sometimes dominant component of ambient sound has perhaps not been fully appreciated in current ambient noise models. [Work supported by ONR.]

  20. GPS Sounding Rocket Developments

    NASA Technical Reports Server (NTRS)

    Bull, Barton

    1999-01-01

    Sounding rockets are suborbital launch vehicles capable of carrying scientific payloads several hundred miles in altitude. These missions return a variety of scientific data, including the chemical makeup and physical processes taking place in the atmosphere, the natural radiation surrounding the Earth, and data on the Sun, stars, galaxies, and many other phenomena. In addition, sounding rockets provide a reasonably economical means of conducting engineering tests for instruments and devices used on satellites and other spacecraft prior to their use in more expensive activities. The NASA Sounding Rocket Program is managed by personnel from the Goddard Space Flight Center Wallops Flight Facility (GSFC/WFF) in Virginia. Typically around thirty of these rockets are launched each year, either from established ranges at Wallops Island, Virginia; Poker Flat Research Range, Alaska; White Sands Missile Range, New Mexico; or from Canada, Norway, and Sweden. Launches are often conducted from temporary ranges in remote parts of the world, requiring considerable expense to transport and operate tracking radars. An inverse differential GPS system has been developed for sounding rockets. This paper reviews the NASA Wallops Island experience with GPS on sounding rockets since 1994 and the development of a highly accurate and useful system.

  1. Impacts of distinct observations during the 2009 Prince William Sound field experiment: A data assimilation study

    NASA Astrophysics Data System (ADS)

    Li, Zhijin; Chao, Yi; Farrara, John D.; McWilliams, James C.

    2013-07-01

    A set of data assimilation experiments, known as Observing System Experiments (OSEs), is performed to assess the relative impacts of the different types of observations acquired during the 2009 Prince William Sound Field Experiment. The assimilated observations consist primarily of two types: high-frequency (HF) radar surface velocities and vertical profiles of temperature and salinity (T/S) measured by ships, moorings, an autonomous underwater vehicle, and a glider. The impacts of all the observations together, of the HF radar surface velocities, and of the T/S profiles are assessed. Without data assimilation, a frequently occurring cyclonic eddy in the central Sound is overly persistent and intense. Assimilating the HF radar velocities effectively reduces these biases and improves the representation of the velocities as well as of the T/S fields in the Sound. Assimilating the T/S profiles improves the large-scale representation of temperature and salinity and also of the velocity field in the central Sound. The combination of HF radar surface velocities and sparse T/S profiles results in an observing system capable of representing the circulation in the Sound reliably and thus of producing analyses and forecasts with useful skill.

  2. Material sound source localization through headphones

    NASA Astrophysics Data System (ADS)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two sounds produced by striking different materials (wood and a bongo drum) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with a view to the future use of acoustic sounds with better localization features in navigation aid systems or in training audio-games suited for blind people. The wood and bongo sounds are recorded after hitting two objects made of these materials, and are then analysed and processed. The Delta sound (click) is generated using the Adobe Audition software at a sampling frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual head-related transfer functions, both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment: subjects are asked to localize the source position of the sound heard through the headphones using a graphical user interface. The analyses of the recorded data reveal no significant differences, either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96%, and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63%, and Delta sound 90.91%. According to these data, we can conclude that even when the reverberation effect is considered, the localization accuracy does not significantly increase.
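
    The stimulus-preparation step described above, convolving a mono sound with a measured non-individual HRTF, can be sketched in a few lines. The function and array names here are hypothetical, not the authors' processing chain:

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Convolve a mono sound with left/right head-related impulse
    responses (HRIRs) to produce a two-channel headphone signal.
    Both HRIRs must have the same length so the channels align."""
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)])
```

    Playing the two returned channels over headphones imposes the interaural time and level cues of the direction at which the HRIRs were measured.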

  3. Cell type-specific suppression of mechanosensitive genes by audible sound stimulation.

    PubMed

    Kumeta, Masahiro; Takahashi, Daiji; Takeyasu, Kunio; Yoshimura, Shige H

    2018-01-01

    Audible sound is a ubiquitous environmental factor in nature that transmits oscillatory compressional pressure through substances. To investigate the properties of sound as a mechanical stimulus for cells, an experimental system was set up using 94.0 dB sound, which transmits approximately 10 mPa of pressure to the cultured cells. Building on research on mechanotransduction and ultrasound effects on cells, gene responses to audible sound stimulation were analyzed while varying several sound parameters: frequency, waveform, composition, and exposure time. Real-time quantitative PCR analyses revealed a distinct suppressive effect for several mechanosensitive and ultrasound-sensitive genes triggered by sound. The effect was clearly waveform- and pressure-level-specific, rather than frequency-specific, and persisted for several hours. At least two mechanisms are likely involved in this sound response: transcriptional control and RNA degradation. ST2 stromal cells and C2C12 myoblasts exhibited a robust response, whereas NIH3T3 cells were partially insensitive and NB2a neuroblastoma cells completely insensitive, suggesting a cell type-specific response to sound. These findings reveal a cell-level systematic response to audible sound and uncover novel relationships between life and sound.

  4. Cell type-specific suppression of mechanosensitive genes by audible sound stimulation

    PubMed Central

    Takahashi, Daiji; Takeyasu, Kunio; Yoshimura, Shige H.

    2018-01-01

    Audible sound is a ubiquitous environmental factor in nature that transmits oscillatory compressional pressure through substances. To investigate the properties of sound as a mechanical stimulus for cells, an experimental system was set up using 94.0 dB sound, which transmits approximately 10 mPa of pressure to the cultured cells. Building on research on mechanotransduction and ultrasound effects on cells, gene responses to audible sound stimulation were analyzed while varying several sound parameters: frequency, waveform, composition, and exposure time. Real-time quantitative PCR analyses revealed a distinct suppressive effect for several mechanosensitive and ultrasound-sensitive genes triggered by sound. The effect was clearly waveform- and pressure-level-specific, rather than frequency-specific, and persisted for several hours. At least two mechanisms are likely involved in this sound response: transcriptional control and RNA degradation. ST2 stromal cells and C2C12 myoblasts exhibited a robust response, whereas NIH3T3 cells were partially insensitive and NB2a neuroblastoma cells completely insensitive, suggesting a cell type-specific response to sound. These findings reveal a cell-level systematic response to audible sound and uncover novel relationships between life and sound. PMID:29385174

  5. SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization

    PubMed Central

    Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah

    2014-01-01

    Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of an array of 52 microelectromechanical-systems (MEMS) microphones, an inertial measurement unit, and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass's hardware and firmware design, together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25-m² anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000-m² open field. PMID:24463431

  6. Recognition of Frequency Modulated Whistle-Like Sounds by a Bottlenose Dolphin (Tursiops truncatus) and Humans with Transformations in Amplitude, Duration and Frequency.

    PubMed

    Branstetter, Brian K; DeLong, Caroline M; Dziedzic, Brandon; Black, Amy; Bakhtiari, Kimberly

    2016-01-01

    Bottlenose dolphins (Tursiops truncatus) use the frequency contour of whistles produced by conspecifics for individual recognition. Here we tested a bottlenose dolphin's (Tursiops truncatus) ability to recognize frequency modulated whistle-like sounds using a three alternative matching-to-sample paradigm. The dolphin was first trained to select a specific object (object A) in response to a specific sound (sound A) for a total of three object-sound associations. The sounds were then transformed by amplitude, duration, or frequency transposition while still preserving the frequency contour of each sound. For comparison purposes, 30 human participants completed an identical task with the same sounds, objects, and training procedure. The dolphin's ability to correctly match objects to sounds was robust to changes in amplitude with only a minor decrement in performance for short durations. The dolphin failed to recognize sounds that were frequency transposed by plus or minus ½ octaves. Human participants demonstrated robust recognition with all acoustic transformations. The results indicate that this dolphin's acoustic recognition of whistle-like sounds was constrained by absolute pitch. Unlike human speech, which varies considerably in average frequency, signature whistles are relatively stable in frequency, which may have selected for a whistle recognition system invariant to frequency transposition.

  7. Recognition of Frequency Modulated Whistle-Like Sounds by a Bottlenose Dolphin (Tursiops truncatus) and Humans with Transformations in Amplitude, Duration and Frequency

    PubMed Central

    Branstetter, Brian K.; DeLong, Caroline M.; Dziedzic, Brandon; Black, Amy; Bakhtiari, Kimberly

    2016-01-01

    Bottlenose dolphins (Tursiops truncatus) use the frequency contour of whistles produced by conspecifics for individual recognition. Here we tested a bottlenose dolphin’s (Tursiops truncatus) ability to recognize frequency modulated whistle-like sounds using a three alternative matching-to-sample paradigm. The dolphin was first trained to select a specific object (object A) in response to a specific sound (sound A) for a total of three object-sound associations. The sounds were then transformed by amplitude, duration, or frequency transposition while still preserving the frequency contour of each sound. For comparison purposes, 30 human participants completed an identical task with the same sounds, objects, and training procedure. The dolphin’s ability to correctly match objects to sounds was robust to changes in amplitude with only a minor decrement in performance for short durations. The dolphin failed to recognize sounds that were frequency transposed by plus or minus ½ octaves. Human participants demonstrated robust recognition with all acoustic transformations. The results indicate that this dolphin’s acoustic recognition of whistle-like sounds was constrained by absolute pitch. Unlike human speech, which varies considerably in average frequency, signature whistles are relatively stable in frequency, which may have selected for a whistle recognition system invariant to frequency transposition. PMID:26863519

  8. 75 FR 11862 - Endangered Species; File No. 14759

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-12

    ... Fear) and estuaries (Albemarle Sound) using non-lethal sampling methods combining hydroacoustic surveys..., Neuse, Cape Fear river systems and Albemarle Sound, and up to 20 shortnose sturgeon from the Roanoke...

  9. Auditory object salience: human cortical processing of non-biological action sounds and their acoustic signal attributes

    PubMed Central

    Lewis, James W.; Talkington, William J.; Tallaksen, Katherine C.; Frum, Chris A.

    2012-01-01

    Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though it relies on a rather different set of low-level signal attributes, meaningful real-world acoustic events and “auditory objects” can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remains poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more “object-like,” independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds—a quantitative measure of change in entropy of the acoustic signals over time—and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages of the auditory system appear to process or extract a number of quantifiable low-order signal attributes characteristic of action events perceived as object-like, representing stages that may begin to dissociate different perceptual dimensions and categories of everyday, real-world action sounds. PMID:22582038

  10. Speech perception benefits of FM and infrared devices to children with hearing aids in a typical classroom.

    PubMed

    Anderson, Karen L; Goldstein, Howard

    2004-04-01

    Children typically learn in classroom environments with background noise and reverberation that interfere with accurate speech perception. Amplification technology can enhance the speech perception of students who are hard of hearing. This study used a single-subject alternating-treatments design to compare the speech recognition abilities of children who are hard of hearing when they were using hearing aids with each of three frequency-modulated (FM) or infrared devices. Eight 9-12-year-olds with mild to severe hearing loss repeated Hearing in Noise Test (HINT) sentence lists under controlled conditions in a typical kindergarten classroom with a background noise level giving a +10 dB signal-to-noise (S/N) ratio and a 1.1 s reverberation time. Participants listened to HINT lists using hearing aids alone and hearing aids in combination with three types of S/N-enhancing devices currently used in mainstream classrooms: (a) FM systems linked to personal hearing aids, (b) infrared sound field systems with speakers placed throughout the classroom, and (c) desktop personal sound field FM systems. The infrared ceiling sound field system did not provide benefit beyond that provided by hearing aids alone. Desktop and personal FM systems in combination with personal hearing aids provided substantial improvements in speech recognition. This information can assist in making S/N-enhancing device decisions for students using hearing aids. In a reverberant and noisy classroom setting, classroom sound field devices are not beneficial to speech perception for students with hearing aids, whereas either personal FM or desktop sound field systems provide listening benefits.

  11. Decreased sound tolerance: hyperacusis, misophonia, diplacousis, and polyacousis.

    PubMed

    Jastreboff, Pawel J; Jastreboff, Margaret M

    2015-01-01

    Definitions, potential mechanisms, and treatments for decreased sound tolerance, hyperacusis, misophonia, and diplacousis are presented with an emphasis on the associated physiologic and neurophysiological processes and principles. A distinction is made between subjects who experience these conditions versus patients who suffer from them. The role of the limbic and autonomic nervous systems and other brain systems involved in cases of bothersome decreased sound tolerance is stressed. The neurophysiological model of tinnitus is outlined with respect to how it may contribute to our understanding of these phenomena and their treatment. © 2015 Elsevier B.V. All rights reserved.

  12. Operating a Geiger Müller tube using a PC sound card

    NASA Astrophysics Data System (ADS)

    Azooz, A. A.

    2009-01-01

    In this paper, a simple MATLAB-based PC program is described that enables the computer to function as a replacement for the electronic scaler-counter system associated with a Geiger-Müller (GM) tube. The program utilizes the ability of MATLAB to acquire data directly from the computer sound card. The signal from the GM tube is applied to the sound card via the line-in port. All standard GM experiments, pulse shape analyses, and statistical analysis experiments can be carried out using this system. A new visual demonstration of dead-time effects is also presented.
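
    As a rough illustration of what such a program does with the line-in samples, the sketch below (a hypothetical Python analogue, not the paper's MATLAB code) counts threshold crossings while ignoring crossings closer together than a nominal dead time:

```python
import numpy as np

def count_gm_pulses(audio, fs, threshold=0.2, dead_time_s=2e-4):
    """Count GM-tube pulses in a sound-card recording.

    A count is registered at each upward threshold crossing;
    crossings within `dead_time_s` of the previous accepted count
    are ignored, mimicking the counter's dead time."""
    # indices where the signal rises through the threshold
    crossings = np.flatnonzero(
        (audio[1:] >= threshold) & (audio[:-1] < threshold))
    counts, last = 0, -np.inf
    min_gap = dead_time_s * fs
    for idx in crossings:
        if idx - last >= min_gap:
            counts += 1
            last = idx
    return counts
```

    Histogramming the intervals between accepted crossings reproduces the exponential interarrival-time distribution expected from Poisson-distributed decays.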

  13. Acoustic analysis of trill sounds.

    PubMed

    Dhananjaya, N; Yegnanarayana, B; Bhaskararao, Peri

    2012-04-01

    In this paper, the acoustic-phonetic characteristics of steady apical trills--trill sounds produced by the periodic vibration of the apex of the tongue--are studied. Signal processing methods, namely, zero-frequency filtering and zero-time liftering of speech signals, are used to analyze the excitation source and the resonance characteristics of the vocal tract system, respectively. Although it is natural to expect the effect of trilling on the resonances of the vocal tract system, it is interesting to note that trilling influences the glottal source of excitation as well. The excitation characteristics derived using zero-frequency filtering of speech signals are glottal epochs, strength of impulses at the glottal epochs, and instantaneous fundamental frequency of the glottal vibration. Analysis based on zero-time liftering of speech signals is used to study the dynamic resonance characteristics of vocal tract system during the production of trill sounds. Qualitative analysis of trill sounds in different vowel contexts, and the acoustic cues that may help spotting trills in continuous speech are discussed.

  14. Acoustic positioning for space processing experiments

    NASA Technical Reports Server (NTRS)

    Whymark, R. R.

    1974-01-01

    An acoustic positioning system is described that is adaptable to a range of processing chambers and furnace systems. Operation at temperatures exceeding 1000 C is demonstrated in experiments involving the levitation of liquid and solid glass materials up to several ounces in weight. The system consists of a single sound source beamed at a reflecting surface placed some distance away. Stable levitation is achieved at a succession of discrete energy minima distributed throughout the volume between the reflector and the sound source. Several specimens can be handled at one time. Metal discs up to 3 inches in diameter and solid spheres of dense material up to 0.75 inch in diameter can be levitated, and liquids can be freely suspended in 1 g in the form of near-spherical droplets up to 0.25 inch in diameter or flattened liquid discs up to 0.6 inch in diameter. Larger specimens may be handled by increasing the size of the sound source or by reducing the sound frequency.

  15. NCAR Integrated Sounding System Observations during the SOAS / SAS Field Campaign

    NASA Astrophysics Data System (ADS)

    Brown, W. O.; Moore, J.

    2013-12-01

    The National Center for Atmospheric Research (NCAR) Earth Observing Laboratory (EOL) deployed an Integrated Sounding System (ISS) for the SOAS (Southern Oxidant and Aerosol Study) field campaign in Alabama in the summer of 2013. The ISS was split between two sites: a former NWS site approximately 1 km from the main SOAS chemistry ground site near Centerville, AL, and, about 20 km to the south, the Alabama fish hatchery site approximately 1 km from the flux tower site near Marion, AL. At the former NWS site we launched 106 radiosondes and operated a 915 MHz boundary-layer radar wind profiler with RASS (Radio Acoustic Sounding System), a ceilometer, and various surface meteorological sensors. At the AABC site we operated a Leosphere Windcube 200S Doppler lidar and a Metek mini-Doppler sodar; other NCAR facilities there included a 45-m instrumented flux tower. This poster presents a sampling of observations made by these instruments, including examples of boundary-layer evolution and structure, and summarizes the performance of the instrumentation.

  16. Optimum sensor placement for microphone arrays

    NASA Astrophysics Data System (ADS)

    Rabinkin, Daniel V.

    Microphone arrays can be used for high-quality sound pickup in reverberant and noisy environments. Sound capture using conventional single microphone methods suffers severe degradation under these conditions. The beamforming capabilities of microphone array systems allow highly directional sound capture, providing enhanced signal-to-noise ratio (SNR) when compared to single microphone performance. The overall performance of an array system is governed by its ability to locate and track sound sources and its ability to capture sound from desired spatial volumes. These abilities are strongly affected by the spatial placement of microphone sensors. A method is needed to optimize placement for a specified number of sensors in a given acoustical environment. The objective of the optimization is to obtain the greatest average system SNR for sound capture in the region of interest. A two-step sound source location method is presented. In the first step, time delay of arrival (TDOA) estimates for select microphone pairs are determined using a modified version of the Omologo-Svaizer cross-power spectrum phase expression. In the second step, the TDOA estimates are used in a least-mean-squares gradient descent search algorithm to obtain a location estimate. Statistics for TDOA estimate error as a function of microphone pair/sound source geometry and acoustic environment are gathered from a set of experiments. These statistics are used to model position estimation accuracy for a given array geometry. The effectiveness of sound source capture is also dependent on array geometry and the acoustical environment. Simple beamforming and time delay compensation (TDC) methods provide spatial selectivity but suffer performance degradation in reverberant environments. Matched filter array (MFA) processing can mitigate the effects of reverberation. 
The shape and gain advantage of the capture region for these techniques are described and shown to be highly influenced by the placement of array sensors. A procedure is developed to evaluate a given array configuration based on the above-mentioned metrics. Constrained placement optimizations are performed that maximize SNR for both TDC and MFA capture methods. Results are compared for various acoustic environments and various enclosure sizes. General guidelines are presented for placement strategy and bandwidth dependence, as they relate to reverberation levels, ambient noise, and enclosure geometry. An overall performance function is described based on these metrics. Performance of the microphone array system is also constrained by the design limitations of the supporting hardware. Two newly developed hardware architectures are presented that support the described algorithms. A low-cost 8-channel system with off-the-shelf componentry was designed and its performance evaluated. A massively parallel 512-channel custom-built system is in development; its capabilities and the rationale for its design are described.
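The cross-power spectrum phase (CSP) step of the two-step location method above can be sketched compactly. The following is a minimal, generic illustration of the standard CSP/phase-transform TDOA estimator, not Rabinkin's modified Omologo-Svaizer expression (whose details are not given in the abstract); the function names and the plain O(N²) DFT are mine, chosen to keep the sketch self-contained.

```python
import cmath

def dft(x):
    """Plain DFT; a real FFT would be used in practice."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def csp_tdoa(x1, x2):
    """Estimate the delay (in samples) of x2 relative to x1 using the
    cross-power spectrum phase (phase transform): whiten the cross
    spectrum to unit magnitude, then pick the peak of its inverse DFT."""
    X1, X2 = dft(x1), dft(x2)
    G = [b * a.conjugate() for a, b in zip(X1, X2)]          # cross spectrum
    G = [g / abs(g) if abs(g) > 1e-12 else 0.0 for g in G]   # keep phase only
    r = [c.real for c in idft(G)]                            # GCC-PHAT correlation
    N = len(r)
    lag = max(range(N), key=lambda i: r[i])
    return lag if lag <= N // 2 else lag - N                 # wrap to signed lag
```

With a signal and a copy of it circularly shifted by three samples, `csp_tdoa` recovers a lag of +3 (and -3 with the arguments swapped), illustrating why the phase transform yields a sharp correlation peak even for broadband signals.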

  17. Understanding the neurophysiological basis of auditory abilities for social communication: a perspective on the value of ethological paradigms.

    PubMed

    Bennur, Sharath; Tsunada, Joji; Cohen, Yale E; Liu, Robert C

    2013-11-01

    Acoustic communication between animals requires them to detect, discriminate, and categorize conspecific or heterospecific vocalizations in their natural environment. Laboratory studies of the auditory-processing abilities that facilitate these tasks have typically employed a broad range of acoustic stimuli, ranging from natural sounds like vocalizations to "artificial" sounds like pure tones and noise bursts. However, even when using vocalizations, laboratory studies often test abilities like categorization in relatively artificial contexts. Consequently, it is not clear whether neural and behavioral correlates of these tasks (1) reflect extensive operant training, which drives plastic changes in auditory pathways, or (2) the innate capacity of the animal and its auditory system. Here, we review a number of recent studies which suggest that adopting more ethological paradigms utilizing natural communication contexts is scientifically important for elucidating how the auditory system normally processes and learns communication sounds. Additionally, since learning the meaning of communication sounds generally involves social interactions that engage neuromodulatory systems differently than laboratory-based conditioning paradigms, we argue that scientists need to pursue more ethological approaches to more fully inform our understanding of how the auditory system is engaged during acoustic communication. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  18. A study of low-cost, robust assistive listening system (ALS) based on digital wireless technology.

    PubMed

    Israsena, P; Dubsok, P; Pan-Ngum, S

    2008-11-01

    We have developed a simple, low-cost digital wireless broadcasting system prototype, intended for a classroom of hearing impaired students. The system is designed to be a low-cost alternative to an existing FM system. The system implemented is for short-range communication, with a one-transmitter, multiple-receiver configuration, which is typical for these classrooms. The data is source-coded for voice-band quality, FSK modulated, and broadcast via a 915 MHz radio frequency. DES encryption can optionally be added for better information security. Test results show that the system operating range is approximately ten metres, and the sound quality is close to telephone quality as intended. We also discuss performance issues such as sound, power and size, as well as transmission protocols. The test results provide proof of concept that the prototype is a viable alternative to an existing FM system. Improvements can be made to the system's sound quality via techniques such as channel coding, which is also discussed.

  19. Texture-dependent effects of pseudo-chewing sound on perceived food texture and evoked feelings in response to nursing care foods.

    PubMed

    Endo, Hiroshi; Ino, Shuichi; Fujisaki, Waka

    2017-09-01

    Because chewing sounds influence perceived food textures, unpleasant textures of texture-modified diets might be improved by chewing sound modulation. Additionally, since inhomogeneous food properties increase perceived sensory intensity, the effects of chewing sound modulation might depend on inhomogeneity. This study examined the influences of texture inhomogeneity on the effects of chewing sound modulation. Three kinds of nursing care foods in two food process types (minced-/puréed-like foods for inhomogeneous/homogeneous texture, respectively) were used as sample foods. A pseudo-chewing sound presentation system, using electromyogram signals, was used to modulate chewing sounds. Thirty healthy elderly adults participated in the experiment. In two conditions, with and without the pseudo-chewing sound, participants rated the taste, texture, and evoked feelings in response to sample foods. The results showed that inhomogeneity strongly influenced the perception of food texture. Regarding the effects of the pseudo-chewing sound, taste was less influenced, the perceived food texture tended to change in the minced-like foods, and evoked feelings changed in both food process types. Though there were some food-dependent differences in the effects of the pseudo-chewing sound, the presentation of the pseudo-chewing sounds was more effective in foods with an inhomogeneous texture. In addition, it was shown that the pseudo-chewing sound might have positively influenced feelings. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Navajo-English Dictionary.

    ERIC Educational Resources Information Center

    Wall, Leon; Morgan, William

    A brief summary of the sound system of the Navajo language introduces this Navajo-English dictionary. Diacritical markings and an English definition are given for each Navajo word. Words are listed alphabetically by Navajo sound. (VM)

  1. Ground and space flight experiments of the effects of light, sound and/or temperature on animals

    NASA Technical Reports Server (NTRS)

    Holley, Daniel C.; Du, Vince; Erikson, Jill; Gott, Jack; Hinchcliffe, Heather; Mele, Gary; Moeller, Karen; Nguyen, Tam; Okumura, Sarah; Robbins, Mark

    1994-01-01

    Papers on the following topics are presented: (1) rat long term habitability and breeding under low light intensity (5 lux); (2) effects of low light intensity on the rat circadian system; (3) effects of sound/noise on the circadian system of rats; (4) temperature related problems involving the animal enclosure modules (AEM) lighting system; and (5) NASA AEM filter test 92/93 (Rats).

  2. Dynamic Analysis of Sounding Rocket Pneumatic System Revision

    NASA Technical Reports Server (NTRS)

    Armen, Jerald

    2010-01-01

    The recent fusion of decades of advancements in mathematical models, numerical algorithms, and curve fitting techniques marked the beginning of a new era in the science of simulation. It is becoming indispensable to the study of rockets and aerospace analysis. In the pneumatic system, which is the main focus of this paper, particular emphasis will be placed on the effects of compressible flow in the Attitude Control System of a sounding rocket.

  3. A training system of orientation and mobility for blind people using acoustic virtual reality.

    PubMed

    Seki, Yoshikazu; Sato, Tetsuji

    2011-02-01

    A new auditory orientation training system was developed for blind people using acoustic virtual reality (VR) based on a head-related transfer function (HRTF) simulation. The present training system can reproduce a virtual training environment for orientation and mobility (O&M) instruction, and the trainee can walk through the virtual training environment safely by listening to sounds such as vehicles, stores, ambient noise, etc., three-dimensionally through headphones. The system can reproduce not only sound sources but also sound reflection and insulation, so that the trainee can learn both sound location and obstacle perception skills. The virtual training environment is described in extensible markup language (XML), and the O&M instructor can edit it easily according to the training curriculum. Evaluation experiments were conducted to test the efficiency of some features of the system. Thirty subjects who had not acquired O&M skills attended the experiments. The subjects were separated into three groups: a no-training group, a virtual-training group using the present system, and a real-training group in real environments. The results suggested that virtual-training can reduce "veering" more than real-training and also can reduce stress as much as real training. The subjective technical and anxiety scores also improved.

  4. Sounds of silence: How to animate virtual worlds with sound

    NASA Technical Reports Server (NTRS)

    Astheimer, Peter

    1993-01-01

    Sounds are an integral and sometimes annoying part of our daily life. Virtual worlds which imitate natural environments gain a lot of authenticity from fast, high quality visualization combined with sound effects. Sounds help to increase the degree of immersion for human dwellers in imaginary worlds significantly. The virtual reality toolkit of IGD (Institute for Computer Graphics) features a broad range of standard visual and advanced real-time audio components which interpret an object-oriented definition of the scene. The virtual reality system 'Virtual Design' realized with the toolkit enables the designer of virtual worlds to create a true audiovisual environment. Several examples on video demonstrate the usage of the audio features in Virtual Design.

  5. Intracortical circuits amplify sound-evoked activity in primary auditory cortex following systemic injection of salicylate in the rat

    PubMed Central

    Chrostowski, Michael; Salvi, Richard J.; Allman, Brian L.

    2012-01-01

    A high dose of sodium salicylate temporarily induces tinnitus, mild hearing loss, and possibly hyperacusis in humans and other animals. Salicylate has well-established effects on cochlear function, primarily resulting in the moderate reduction of auditory input to the brain. Despite decreased peripheral sensitivity and output, salicylate induces a paradoxical enhancement of the sound-evoked field potential at the level of the primary auditory cortex (A1). Previous electrophysiologic studies have begun to characterize changes in thalamorecipient layers of A1; however, A1 is a complex neural circuit with recurrent intracortical connections. To describe the effects of acute systemic salicylate treatment on both thalamic and intracortical sound-driven activity across layers of A1, we applied current-source density (CSD) analysis to field potentials sampled across cortical layers in the anesthetized rat. CSD maps were normally characterized by a large, short-latency, monosynaptic, thalamically driven sink in granular layers followed by a lower amplitude, longer latency, polysynaptic, intracortically driven sink in supragranular layers. Following systemic administration of salicylate, there was a near doubling of both granular and supragranular sink amplitudes at higher sound levels. The supragranular sink amplitude input/output function changed from becoming asymptotic at approximately 50 dB to sharply nonasymptotic, often dominating the granular sink amplitude at higher sound levels. The supragranular sink also exhibited a significant decrease in peak latency, reflecting an acceleration of intracortical processing of the sound-evoked response. Additionally, multiunit (MU) activity was altered by salicylate; the normally onset/sustained MU response type was transformed into a primarily onset response type in granular and infragranular layers. 
The results from CSD analysis indicate that salicylate significantly enhances sound-driven response via intracortical circuits. PMID:22496535

  6. Pocomoke Sound Sedimentary and Ecosystem History

    USGS Publications Warehouse

    Cronin, Thomas M.

    2004-01-01

    Summary of Results: Pocomoke Sound Sediment and Sediment Processes Transport of sediment from coastal marshes. Analyses of pollen and foraminifera from surface sediments in Pocomoke Sound suggest that neither the upstream forested wetlands nor coastal marshes bordering the sound have contributed appreciably to particulate matter in the 10- to 1000-micron size range that is currently being deposited in the sound. Sediment processes derived from short-lived isotopes. Analyses of beryllium-7, cesium-137, and lead-210 and redox-sensitive elements from Pocomoke sediments showed that there has been a significant increase in anthropogenic elements since the late 1940's, when the Delmarva Peninsula became more accessible from the Baltimore-Washington region. Cesium-137 was found to be a useful tool to determine changes in sedimentation within the system. Three major stages of sedimentation occurred. Before 1950, the system was in equilibrium with agricultural activity in the watershed, whereas urbanization and agricultural activity changes during and immediately preceding World War II resulted in increased sediment flux. Around 1970, the sediment flux diminished and there was an apparent increase in bank erosion sediment to the deeper parts of the system. Rates of sediment deposition. Radiocarbon, lead-210, and pollen dating of sediment cores from Pocomoke Sound indicate relatively continuous deposition of fine-grained sediments in the main Pocomoke channel at > ~7 m water depths. Mean sediment accumulation rates during the past few centuries were relatively high (>1 cm yr⁻¹). The ages of coarser-grained sediments (sands) blanketing the shallow (4.0 cm yr⁻¹) at most sites throughout the Sound in post-Colonial time. These results confirm those from other regions of the bay that land-clearance increased the flux of river-borne sediment to certain r

  7. Ultra-thin metamaterial for perfect and quasi-omnidirectional sound absorption

    NASA Astrophysics Data System (ADS)

    Jiménez, N.; Huang, W.; Romero-García, V.; Pagneux, V.; Groby, J.-P.

    2016-09-01

    Using the concepts of slow sound and critical coupling, an ultra-thin acoustic metamaterial panel for perfect and quasi-omnidirectional absorption is theoretically and experimentally conceived in this work. The system is made of a rigid panel with a periodic distribution of thin closed slits, the upper wall of which is loaded by Helmholtz resonators (HRs). The presence of the resonators produces slow sound propagation, shifting the resonance frequency of the slit to the deep sub-wavelength regime (λ/88). By controlling the geometry of the slit and the HRs, the intrinsic visco-thermal losses can be tuned to exactly compensate the energy leakage of the system and fulfill the critical coupling condition, creating perfect absorption of sound over a large range of incidence angles due to the deep subwavelength behavior.
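The critical-coupling condition invoked here can be summarized with a standard one-port temporal coupled-mode-theory sketch (the notation below is generic, not taken from the paper):

```latex
% Reflection coefficient of a lossy resonator coupled to one channel:
% \omega_0 is the resonance frequency, 1/\tau_\mathrm{loss} the intrinsic
% (visco-thermal) decay rate, 1/\tau_\mathrm{leak} the leakage rate back
% into the incident channel.
R(\omega) = \frac{i(\omega - \omega_0) + 1/\tau_\mathrm{loss} - 1/\tau_\mathrm{leak}}
                 {i(\omega - \omega_0) + 1/\tau_\mathrm{loss} + 1/\tau_\mathrm{leak}}
% Perfect absorption, R(\omega_0) = 0, requires the critical-coupling
% condition 1/\tau_\mathrm{loss} = 1/\tau_\mathrm{leak}: the tunable
% visco-thermal losses exactly balance the energy leakage.
```

This is why tuning the slit and HR geometry, which sets the intrinsic loss rate, allows the leakage to be exactly compensated at resonance.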

  8. Hydrodynamic ion sound instability in systems of a finite length

    NASA Astrophysics Data System (ADS)

    Koshkarov, O.; Chapurin, O.; Smolyakov, A.; Kaganovich, I.; Ilgisonis, V.

    2016-09-01

    A plasma permeated by an energetic ion beam is prone to the kinetic ion-sound instability that occurs as a result of the inverse Landau damping for ion velocity. It is shown here that in a finite length system there exists another type of the ion sound instability which occurs for v02

  9. Electronic filters, hearing aids and methods

    NASA Technical Reports Server (NTRS)

    Engebretson, A. Maynard (Inventor)

    1995-01-01

    An electronic filter for an electroacoustic system. The system has a microphone for generating an electrical output from external sounds and an electrically driven transducer for emitting sound. Some of the sound emitted by the transducer returns to the microphone means to add a feedback contribution to its electrical output. The electronic filter includes a first circuit for electronic processing of the electrical output of the microphone to produce a first signal. An adaptive filter, interconnected with the first circuit, performs electronic processing of the first signal to produce an adaptive output to the first circuit to substantially offset the feedback contribution in the electrical output of the microphone, and the adaptive filter includes means for adapting only in response to polarities of signals supplied to and from the first circuit. Other electronic filters for hearing aids, public address systems and other electroacoustic systems, as well as such systems and methods of operating them are also disclosed.

  10. Electronic filters, hearing aids and methods

    NASA Technical Reports Server (NTRS)

    Engebretson, A. Maynard (Inventor); O'Connell, Michael P. (Inventor); Zheng, Baohua (Inventor)

    1991-01-01

    An electronic filter for an electroacoustic system. The system has a microphone for generating an electrical output from external sounds and an electrically driven transducer for emitting sound. Some of the sound emitted by the transducer returns to the microphone means to add a feedback contribution to its electrical output. The electronic filter includes a first circuit for electronic processing of the electrical output of the microphone to produce a filtered signal. An adaptive filter, interconnected with the first circuit, performs electronic processing of the filtered signal to produce an adaptive output to the first circuit to substantially offset the feedback contribution in the electrical output of the microphone, and the adaptive filter includes means for adapting only in response to polarities of signals supplied to and from the first circuit. Other electronic filters for hearing aids, public address systems and other electroacoustic systems, as well as such systems, and methods of operating them are also disclosed.
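Adapting "only in response to polarities of signals", as both patents describe, is in the spirit of the sign-sign LMS family of adaptive filters: each weight update uses only the signs of the error and the input, which is cheap to implement in hardware. The sketch below is a generic illustration of that family, identifying an unknown feedback path; the function names, tap count, and step size are illustrative, not taken from the patents.

```python
def sign(x):
    """Polarity of a signal sample: +1, -1, or 0."""
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

def sign_sign_lms(x, d, n_taps=4, mu=0.005):
    """Sign-sign LMS: adapt FIR weights w so that w * x tracks d,
    stepping each weight by +/- mu based only on polarities of the
    error and of the corresponding input sample."""
    w = [0.0] * n_taps          # adaptive weights
    buf = [0.0] * n_taps        # most recent input samples, newest first
    errors = []
    for xn, dn in zip(x, d):
        buf = [xn] + buf[:-1]
        y = sum(wi * bi for wi, bi in zip(w, buf))   # filter output
        e = dn - y                                   # residual (unoffset feedback)
        w = [wi + mu * sign(e) * sign(bi)            # polarity-only update
             for wi, bi in zip(w, buf)]
        errors.append(e)
    return w, errors
```

Driven with a broadband input and a desired signal produced by a short FIR "feedback path", the weights converge to that path's coefficients, after which subtracting the filter output substantially offsets the feedback contribution.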

  11. 40 CFR 63.11527 - What are the monitoring requirements for new and existing sources?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... alarm that will sound when an increase in relative PM loadings is detected over the alarm set point... operating a bag leak detection system, if an alarm sounds, conduct visual monitoring of the monovent or... maintain a continuous parameter monitoring system (CPMS) to measure and record the 3-hour average pressure...

  12. Conceptual Sound System Design for Clifford Odets' "GOLDEN BOY"

    NASA Astrophysics Data System (ADS)

    Yang, Yen Chun

    There are two different aspects in the process of sound design, "Arts" and "Science". In my opinion, the sound design should engage both aspects strongly and in interaction with each other. I started the process of designing the sound for GOLDEN BOY by building the city soundscape of New York City in 1937. The scenic design for this piece is in the round, putting the audience all around the stage; this gave me a great opportunity to use surround and spatialization techniques to transform the space into a different sonic world. My spatialization design is composed of two subsystems -- one is the four (4) speakers center cluster diffusing towards the four (4) sections of audience, and the other is the four (4) speakers on the four (4) corners of the theatre. The outside ring provides rich sound source localization and the inside ring provides more support for control of the spatialization details. In my design four (4) lavalier microphones are hung under the center iron cage from the four (4) corners of the stage. Each microphone is ten (10) feet above the stage. The signal for each microphone is sent to the two (2) center speakers in the cluster diagonally opposite the microphone. With the appropriate level adjustment of the microphones, the audience will not notice the amplification of the voices; however, through my spatialization system, the presence and location of the voices of all actors are clearly preserved for the entire audience. With such vocal reinforcement provided by the microphones, I no longer need to worry about the underscoring overwhelming the dialogue on stage. A successful sound system design should not only provide a functional system, but also take the responsibility of bringing actors' voices to the audience and engaging the audience with the world that we create on stage. 
By designing a system which reinforces the actors' voices while at the same time providing control over localization and movement of sound effects, I was able not only to make the text present and clear for the audience, but also to support the storyline strongly through my composed music, environmental soundscapes, and underscoring.

  13. Radiometric sounding system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whiteman, C.D.; Anderson, G.A.; Alzheimer, J.M.

    1995-04-01

    Vertical profiles of solar and terrestrial radiative fluxes are key research needs for global climate change research. These fluxes are expected to change as radiatively active trace gases are emitted to the earth's atmosphere as a consequence of energy production and industrial and other human activities. Models suggest that changes in the concentration of such gases will lead to radiative flux divergences that will produce global warming of the earth's atmosphere. Direct measurements of the vertical variation of solar and terrestrial radiative fluxes that lead to these flux divergences have been largely unavailable because of the expense of making such measurements from airplanes. These measurements are needed to improve existing atmospheric radiative transfer models, especially under the cloudy conditions where the models have not been adequately tested. A tethered-balloon-borne Radiometric Sounding System has been developed at Pacific Northwest Laboratory to provide an inexpensive means of making routine vertical soundings of radiative fluxes in the earth's atmospheric boundary layer to altitudes up to 1500 m above ground level. Such vertical soundings would supplement measurements being made from aircraft and towers. The key technical challenge in the design of the Radiometric Sounding System is to develop a means of keeping the radiometers horizontal while the balloon ascends and descends in a turbulent atmospheric environment. This problem has been addressed by stabilizing a triangular radiometer-carrying platform that is carried on the tetherline of a balloon sounding system. The platform, carried 30 m or more below the balloon to reduce the balloon's effect on the radiometric measurements, is leveled by two automatic control loops that activate motors, gears and pulleys when the platform is off-level. The sensitivity of the automatic control loops to oscillatory motions of various frequencies and amplitudes can be adjusted using filters.

  14. Identification and tracking of particular speaker in noisy environment

    NASA Astrophysics Data System (ADS)

    Sawada, Hideyuki; Ohkado, Minoru

    2004-10-01

    Humans are able to exchange information smoothly using voice under varied conditions, such as in a noisy crowd or with plural speakers present. We are able to detect the position of a sound source in 3D space, extract a particular sound from mixed sounds, and recognize who is talking. By realizing this mechanism with a computer, new applications become possible: recording sound with high quality by reducing noise, presenting a clarified sound, and realizing microphone-free speech recognition by extracting a particular sound. This paper introduces real-time detection and identification of a particular speaker in a noisy environment using a microphone array, based on the location of the speaker and the individual voice characteristics. The study will be applied to develop an adaptive auditory system for a mobile robot that collaborates with a factory worker.

  15. Cherenkov sound on a surface of a topological insulator

    NASA Astrophysics Data System (ADS)

    Smirnov, Sergey

    2013-11-01

    Topological insulators are currently of considerable interest due to peculiar electronic properties originating from helical states on their surfaces. Here we demonstrate that the sound excited by helical particles on surfaces of topological insulators has several exotic properties fundamentally different from sound propagating in nonhelical or even isotropic helical systems. Specifically, the sound may have strictly forward propagation, which is absent for isotropic helical states. Its dependence on the anisotropy of the realistic surface states shows distinctive behavior that may be used as an alternative experimental tool to measure the anisotropy strength. Backward, or anomalous, Cherenkov sound, fascinating from the fundamental point of view, is excited above the critical angle π/2 when the anisotropy exceeds a critical value. Strikingly, at strong anisotropy the sound localizes into a few forward and backward beams propagating along specific directions.

  16. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments †

    PubMed Central

    Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G.

    2017-01-01

    In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators. PMID:29099790

  17. Changes in cochlear blood flow in mice due to loud sound exposure measured with Doppler optical microangiography and laser Doppler flowmetry.

    PubMed

    Reif, Roberto; Zhi, Zhongwei; Dziennis, Suzan; Nuttall, Alfred L; Wang, Ruikang K

    2013-10-01

    In this work we determined the contributions of loud sound exposure (LSE) on cochlear blood flow (CoBF) in an in vivo anesthetized mouse model. A broadband noise system (20 kHz bandwidth) with an intensity of 119 dB SPL was used for a period of one hour to produce a loud sound stimulus. Two techniques were used to study the changes in blood flow: a Doppler optical microangiography (DOMAG) system, which can measure the blood flow within individual cochlear vessels, and a laser Doppler flowmetry (LDF) system, which averages the blood flow within a volume (a hemisphere of ~1.5 mm radius) of tissue. Both systems determined that the blood flow within the cochlea is reduced due to the LSE stimulation.

  18. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments.

    PubMed

    Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Kumon, Makoto; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G

    2017-11-03

    In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators.

  19. Mind the Gap: Two Dissociable Mechanisms of Temporal Processing in the Auditory System

    PubMed Central

    Anderson, Lucy A.

    2016-01-01

    High temporal acuity of auditory processing underlies perception of speech and other rapidly varying sounds. A common measure of auditory temporal acuity in humans is the threshold for detection of brief gaps in noise. Gap-detection deficits, observed in developmental disorders, are considered evidence for “sluggish” auditory processing. Here we show, in a mouse model of gap-detection deficits, that auditory brain sensitivity to brief gaps in noise can be impaired even without a general loss of central auditory temporal acuity. Extracellular recordings in three different subdivisions of the auditory thalamus in anesthetized mice revealed a stimulus-specific, subdivision-specific deficit in thalamic sensitivity to brief gaps in noise in experimental animals relative to controls. Neural responses to brief gaps in noise were reduced, but responses to other rapidly changing stimuli unaffected, in lemniscal and nonlemniscal (but not polysensory) subdivisions of the medial geniculate body. Through experiments and modeling, we demonstrate that the observed deficits in thalamic sensitivity to brief gaps in noise arise from reduced neural population activity following noise offsets, but not onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive channels underlying auditory temporal processing, and suggest that gap-detection deficits can arise from specific impairment of the sound-offset-sensitive channel. SIGNIFICANCE STATEMENT The experimental and modeling results reported here suggest a new hypothesis regarding the mechanisms of temporal processing in the auditory system. Using a mouse model of auditory temporal processing deficits, we demonstrate the existence of specific abnormalities in auditory thalamic activity following sound offsets, but not sound onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive mechanisms underlying auditory processing of temporally varying sounds. 
Furthermore, the findings suggest that auditory temporal processing deficits, such as impairments in gap-in-noise detection, could arise from reduced brain sensitivity to sound offsets alone. PMID:26865621
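
    A gap-in-noise stimulus of the kind used in these experiments is easy to sketch. The duration, gap position, and sampling rate below are illustrative choices, not the study's actual parameters.

```python
import random

def gap_in_noise(duration_s=0.5, gap_ms=5.0, gap_start_s=0.25, fs=44100, seed=0):
    """Broadband noise with a brief silent gap (illustrative parameters)."""
    rng = random.Random(seed)
    n = int(duration_s * fs)
    samples = [rng.gauss(0.0, 1.0) for _ in range(n)]
    g0 = int(gap_start_s * fs)
    g1 = g0 + int(gap_ms / 1000.0 * fs)
    for i in range(g0, min(g1, n)):
        samples[i] = 0.0  # the gap: a noise offset followed by a noise onset
    return samples

stim = gap_in_noise()
```

    The gap is bounded by a noise offset and a noise onset; the study's central finding is that the deficit tracks the offset side of that pair.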

  20. Displaying Composite and Archived Soundings in the Advanced Weather Interactive Processing System

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Volkmer, Matthew R.; Blottman, Peter F.; Sharp, David W.

    2008-01-01

    In a previous task, the Applied Meteorology Unit (AMU) developed spatial and temporal climatologies of lightning occurrence based on eight atmospheric flow regimes. The AMU created climatological, or composite, soundings of wind speed and direction, temperature, and dew point temperature at four rawinsonde observation stations at Jacksonville, Tampa, Miami, and Cape Canaveral Air Force Station, for each of the eight flow regimes. The composite soundings were delivered to the National Weather Service (NWS) Melbourne (MLB) office for display using the National version of the Skew-T Hodograph Analysis and Research Program (NSHARP) software. The NWS MLB requested that the AMU make the composite soundings available for display in the Advanced Weather Interactive Processing System (AWIPS), so they could be overlaid on current observed soundings, allowing forecasters to compare the current state of the atmosphere with climatology. This presentation describes how the AMU converted the composite soundings from NSHARP Archive format to Network Common Data Form (NetCDF) format so that the soundings could be displayed in AWIPS. NetCDF is a set of data formats, programming interfaces, and software libraries used to read and write scientific data files. In AWIPS, each meteorological data type, such as soundings or surface observations, has a unique NetCDF format, described by a NetCDF template file. Although NetCDF files are in binary format, they can be converted to a text format called network Common data form Description Language (CDL). The ncgen utility creates a NetCDF file from a CDL file, while the ncdump utility creates a CDL file from a NetCDF file. AWIPS receives soundings in Binary Universal Form for the Representation of Meteorological data (BUFR) format (http://dss.ucar.edu/docs/formats/bufr/) and decodes them into NetCDF format. Only two sounding files are generated in AWIPS per day.
    One file contains all of the soundings received worldwide between 0000 UTC and 1200 UTC, and the other includes all soundings between 1200 UTC and 0000 UTC. In order to add the composite soundings into AWIPS, a procedure was created to configure, or localize, AWIPS, which involved modifying and creating several configuration text files. A unique four-character site identifier was created for each of the 32 soundings so each could be viewed separately. The first three characters were based on the site identifier of the observed sounding, while the last character was based on the flow regime. While researching the localization process for soundings, the AMU discovered a method of archiving soundings so old soundings would not get purged automatically by AWIPS. This method could provide an alternative way of localizing AWIPS for composite soundings. In addition, this would allow forecasters to use archived soundings in AWIPS for case studies. A test sounding file in NetCDF format was written in order to verify the correct format for soundings in AWIPS. After the file was viewed successfully in AWIPS, the AMU wrote a program in the Tool Command Language/Tool Kit (Tcl/Tk) to convert the 32 composite soundings from NSHARP Archive to CDL format. The ncgen utility was then used to convert the CDL file to a NetCDF file, which could then be read and displayed in AWIPS.
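
    The CDL-to-NetCDF step can be illustrated with a short sketch that emits a minimal CDL description of one sounding as text. The dimension and variable names here are hypothetical; a real AWIPS sounding file must match the site's NetCDF template exactly.

```python
def sounding_cdl(site_id, levels):
    """Emit a minimal CDL description of one sounding as text.
    Dimension/variable names are hypothetical; a real AWIPS sounding
    file must follow the site's NetCDF template exactly."""
    pres = ", ".join(f"{p:.1f}" for p, _ in levels)
    temp = ", ".join(f"{t:.1f}" for _, t in levels)
    return (
        f"netcdf {site_id} {{\n"
        "dimensions:\n"
        f"    level = {len(levels)} ;\n"
        "variables:\n"
        "    float pressure(level) ;\n"
        "    float temperature(level) ;\n"
        "data:\n"
        f"    pressure = {pres} ;\n"
        f"    temperature = {temp} ;\n"
        "}\n"
    )

# Three hypothetical levels: (pressure hPa, temperature degC)
cdl = sounding_cdl("XMRA", [(1000.0, 25.3), (850.0, 16.1), (700.0, 8.4)])
# ncgen -o XMRA.nc XMRA.cdl would then produce the binary NetCDF file
```

    This text form is exactly what ncgen consumes and ncdump produces in the workflow described above.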

  1. Investigation of spherical loudspeaker arrays for local active control of sound.

    PubMed

    Peleg, Tomer; Rafaely, Boaz

    2011-10-01

    Active control of sound can be employed globally to reduce noise levels in an entire enclosure, or locally around a listener's head. Recently, spherical loudspeaker arrays have been studied as multiple-channel sources for local active control of sound, presenting the fundamental theory and several active control configurations. In this paper, important aspects of using a spherical loudspeaker array for local active control of sound are further investigated. First, the feasibility of creating sphere-shaped quiet zones away from the source is studied both theoretically and numerically, showing that these quiet zones are associated with sound amplification and poor system robustness. To mitigate the latter, the design of shell-shaped quiet zones around the source is investigated. A combination of two spherical sources is then studied with the aim of enlarging the quiet zone. The two sources are employed to generate quiet zones that surround a rigid sphere, investigating the application of active control around a listener's head. A significant improvement in performance is demonstrated in this case over a conventional headrest-type system that uses two monopole secondary sources. Finally, several simulations are presented to support the theoretical work and to demonstrate the performance and limitations of the system. © 2011 Acoustical Society of America

  2. Puget Sound Operational Forecast System - A Real-time Predictive Tool for Marine Resource Management and Emergency Responses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Zhaoqing; Khangaonkar, Tarang; Chase, Jared M.

    2009-12-01

    To support marine ecological resource management and emergency response and to enhance scientific understanding of physical and biogeochemical processes in Puget Sound, a real-time Puget Sound Operational Forecast System (PS-OFS) was developed by the Coastal Ocean Dynamics & Ecosystem Modeling (CODEM) group of Pacific Northwest National Laboratory (PNNL). PS-OFS employs a state-of-the-art three-dimensional coastal ocean model and closely follows the standards and procedures established by the National Oceanic and Atmospheric Administration (NOAA) National Ocean Service (NOS). PS-OFS consists of four key components supporting the Puget Sound Circulation and Transport Model (PS-CTM): data acquisition, model execution and product archive, model skill assessment, and model results dissemination. This paper provides an overview of PS-OFS and its ability to provide vital real-time oceanographic information to the Puget Sound community. PS-OFS supports the Pacific Northwest region's growing need for a predictive tool to assist water quality management, fish stock recovery efforts, maritime emergency response, nearshore land-use planning, and responses to climate change and sea-level rise impacts. The structure of PS-OFS and examples of the system inputs, outputs, and forecast results are presented in detail.

  3. A closed-loop automatic control system for high-intensity acoustic test systems.

    NASA Technical Reports Server (NTRS)

    Slusser, R. A.

    1973-01-01

    Sound at sound pressure levels in the range from 130 to 160 dB is used in the investigation. Random noise is passed through a series of parallel filters, generally 1/3-octave wide. A basic automatic system is investigated because of preadjustment inaccuracies and high costs found in a study of a typical manually controlled acoustic testing system. The unit described has been successfully used in automatic acoustic tests in connection with the spacecraft tests for the Mariner 1971 program.
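
    The 1/3-octave filter centre frequencies implied above follow a simple geometric series; a minimal sketch of the base-10 form standardized in IEC 61260 (an assumption here, since the report predates that standard) is:

```python
def third_octave_centers(f_low=100.0, f_high=10000.0):
    """One-third-octave band centre frequencies, base-10 form (IEC 61260):
    fc(n) = 1000 * 10**(n/10), i.e. G**(n/3) with octave ratio G = 10**0.3."""
    freqs = [1000.0 * 10 ** (n / 10.0) for n in range(-30, 31)]
    # small tolerance so the end bands survive floating-point rounding
    return [f for f in freqs if f_low - 1e-9 <= f <= f_high + 1e-9]

bands = third_octave_centers()
```

    Each band's edges lie a sixth of an octave either side of its centre, so the parallel filters tile the spectrum without gaps.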

  4. Exterior sound level measurements of snowcoaches at Yellowstone National Park

    DOT National Transportation Integrated Search

    2010-04-01

    Sounds associated with oversnow vehicles, such as snowmobiles and snowcoaches, are an important management concern at Yellowstone and Grand Teton National Parks. The John A. Volpe National Transportation Systems Centers Environmental Measurement a...

  5. Active room compensation for sound reinforcement using sound field separation techniques.

    PubMed

    Heuchel, Franz M; Fernandez-Grande, Efren; Agerkvist, Finn T; Shabalina, Elena

    2018-03-01

    This work investigates how the sound field created by a sound reinforcement system can be controlled at low frequencies. An indoor control method is proposed which actively absorbs the sound incident on a reflecting boundary using an array of secondary sources. The sound field is separated into incident and reflected components by a microphone array close to the secondary sources, enabling the minimization of reflected components by means of optimal signals for the secondary sources. The method is purely feed-forward and assumes constant room conditions. Three different sound field separation techniques for the modeling of the reflections are investigated based on plane wave decomposition, equivalent sources, and the Spatial Fourier transform. Simulations and an experimental validation are presented, showing that the control method performs similarly well at enhancing low frequency responses with the three sound separation techniques. Resonances in the entire room are reduced, although the microphone array and secondary sources are confined to a small region close to the reflecting wall. Unlike previous control methods based on the creation of a plane wave sound field, the investigated method works in arbitrary room geometries and primary source positions.
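
    The incident/reflected decomposition at the heart of the method can be illustrated in one dimension: at a single frequency, two microphone pressures determine the two plane-wave amplitudes. This is a toy sketch of the idea (the paper works in 3-D with a microphone array); the positions, frequency, and amplitudes are made up.

```python
import cmath, math

def separate_plane_waves(p1, p2, x1, x2, k):
    """Recover incident (A) and reflected (B) plane-wave amplitudes at one
    frequency from two microphone pressures, for the 1-D field
    p(x) = A*exp(-1j*k*x) + B*exp(+1j*k*x)."""
    a11, a12 = cmath.exp(-1j * k * x1), cmath.exp(1j * k * x1)
    a21, a22 = cmath.exp(-1j * k * x2), cmath.exp(1j * k * x2)
    det = a11 * a22 - a12 * a21      # = 2j*sin(k*(x2-x1)), nonzero if x1 != x2
    A = (p1 * a22 - p2 * a12) / det  # Cramer's rule on the 2x2 system
    B = (a11 * p2 - a21 * p1) / det
    return A, B

# Synthetic field with known amplitudes A=1, B=0.5 (50 Hz, c = 343 m/s)
k = 2 * math.pi * 50.0 / 343.0
p = lambda x: cmath.exp(-1j * k * x) + 0.5 * cmath.exp(1j * k * x)
A, B = separate_plane_waves(p(0.1), p(0.3), 0.1, 0.3, k)
```

    Minimizing |B| by choice of the secondary-source signals is then the control objective: absorb what is reflected from the boundary.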

  6. Aquatic Acoustic Metrics Interface Utility for Underwater Sound Monitoring and Analysis

    PubMed Central

    Ren, Huiying; Halvorsen, Michele B.; Deng, Zhiqun Daniel; Carlson, Thomas J.

    2012-01-01

    Fishes and marine mammals may suffer a range of potential effects from exposure to intense underwater sound generated by anthropogenic activities such as pile driving, shipping, sonars, and underwater blasting. Several underwater sound recording (USR) devices have been built to acquire samples of the underwater sound generated by anthropogenic activities. Software becomes indispensable for processing and analyzing the audio files recorded by these USRs. In this paper, we provide a detailed description of a new software package, the Aquatic Acoustic Metrics Interface (AAMI), specifically designed for analysis of underwater sound recordings to provide data in metrics that facilitate evaluation of the potential impacts of the sound on aquatic animals. In addition to the basic functions, such as loading and editing audio files recorded by USRs and batch processing of sound files, the software utilizes recording system calibration data to compute important parameters in physical units. The software also facilitates comparison of the noise sound sample metrics with biological measures such as audiograms of the sensitivity of aquatic animals to the sound, integrating various components into a single analytical frame. The features of the AAMI software are discussed, and several case studies are presented to illustrate its functionality. PMID:22969353
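
    The conversion from calibrated samples to a physical-unit metric is straightforward; the sketch below computes a sound pressure level in dB re 1 µPa (the underwater convention) from recorder voltages and a hydrophone sensitivity. The function name and interface are illustrative, not the AAMI software's actual API.

```python
import math

def spl_db(volts, sensitivity_v_per_pa, p_ref=1e-6):
    """SPL in dB re 1 uPa (underwater reference) from calibrated voltages."""
    pressures = [v / sensitivity_v_per_pa for v in volts]   # volts -> pascals
    rms = math.sqrt(sum(p * p for p in pressures) / len(pressures))
    return 20.0 * math.log10(rms / p_ref)

# 0.1 Pa amplitude sine recorded at 1 V/Pa, over ten full cycles
tone = [0.1 * math.sin(2 * math.pi * i / 100) for i in range(1000)]
level = spl_db(tone, 1.0)
```

    Metrics like this can then be compared directly against audiograms expressed in the same units.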

  7. An open real-time tele-stethoscopy system

    PubMed Central

    2012-01-01

    Background Acute respiratory infections are the leading cause of childhood mortality. The lack of physicians in rural areas of developing countries makes their correct diagnosis and treatment difficult. The staff of rural health facilities (health-care technicians) may not be qualified to distinguish respiratory diseases by auscultation. For this reason, the goal of this project is the development of a tele-stethoscopy system that allows a physician to receive real-time cardiorespiratory sounds from a remote auscultation, as well as video images showing where the technician is placing the stethoscope on the patient's body. Methods A real-time wireless stethoscopy system was designed. The initial requirements were: 1) The system must send audio and video synchronously over IP networks, not requiring an Internet connection; 2) It must preserve the quality of cardiorespiratory sounds, allowing the binaural pieces and the chestpiece of standard stethoscopes to be adapted; and 3) Cardiorespiratory sounds should be recordable at both ends of the communication. In order to verify the diagnostic capacity of the system, a clinical validation with eight specialists has been designed. In a preliminary test, twelve patients were auscultated by all the physicians using the tele-stethoscopy system, versus a local auscultation using a traditional stethoscope. The system must allow listening to cardiac sounds (systolic and diastolic murmurs, gallop sound, arrhythmias) and respiratory sounds (rhonchi, rales and crepitations, wheeze, diminished and bronchial breath sounds, pleural friction rub). Results The design, development and initial validation of the real-time wireless tele-stethoscopy system are described in detail. The system was conceived from scratch as open-source, low-cost and designed in such a way that many universities and small local companies in developing countries may manufacture it.
Only free open-source software has been used in order to minimize manufacturing costs and look for alliances to support its improvement and adaptation. The microcontroller firmware code, the computer software code and the PCB schematics are available for free download in a subversion repository hosted in SourceForge. Conclusions It has been shown that real-time tele-stethoscopy, together with a videoconference system that allows a remote specialist to oversee the auscultation, may be a very helpful tool in rural areas of developing countries. PMID:22917062

  8. Approaches to the study of neural coding of sound source location and sound envelope in real environments

    PubMed Central

    Kuwada, Shigeyuki; Bishop, Brian; Kim, Duck O.

    2012-01-01

    The major functions of the auditory system are recognition (what is the sound) and localization (where is the sound). Although each of these has received considerable attention, rarely are they studied in combination. Furthermore, the stimuli used in the bulk of studies did not represent sound location in real environments and ignored the effects of reverberation. Another ignored dimension is the distance of a sound source. Finally, there is a scarcity of studies conducted in unanesthetized animals. We illustrate a set of efficient methods that overcome these shortcomings. We use the virtual auditory space (VAS) method to efficiently present sounds at different azimuths, different distances, and in different environments. Additionally, this method allows for efficient switching between binaural and monaural stimulation and alteration of acoustic cues singly or in combination to elucidate the neural mechanisms underlying localization and recognition. Such procedures cannot be performed with real sound-field stimulation. Our research is designed to address the following questions: Are IC neurons specialized to process "what" and "where" auditory information? How do reverberation and the distance of the sound source affect this processing? How do IC neurons represent sound source distance? Are the neural mechanisms underlying envelope processing binaural or monaural? PMID:22754505

  9. Neuromimetic Sound Representation for Percept Detection and Manipulation

    NASA Astrophysics Data System (ADS)

    Zotkin, Dmitry N.; Chi, Taishih; Shamma, Shihab A.; Duraiswami, Ramani

    2005-12-01

    The acoustic wave received at the ears is processed by the human auditory system to separate different sounds along the intensity, pitch, and timbre dimensions. Conventional Fourier-based signal processing, while endowed with fast algorithms, is unable to easily represent a signal along these attributes. In this paper, we discuss the creation of maximally separable sounds in auditory user interfaces and use a recently proposed cortical sound representation, which performs a biomimetic decomposition of an acoustic signal, to represent and manipulate sound for this purpose. We briefly overview algorithms for obtaining, manipulating, and inverting a cortical representation of a sound and describe algorithms for manipulating signal pitch and timbre separately. The algorithms are also used to create the sound of an instrument between a "guitar" and a "trumpet." Excellent sound quality can be achieved if processing time is not a concern, and intelligible signals can be reconstructed in reasonable processing time (about ten seconds of computational time for a one-second signal). Work on bringing the algorithms into the real-time processing domain is ongoing.

  10. What the Toadfish Ear Tells the Toadfish Brain About Sound.

    PubMed

    Edds-Walton, Peggy L

    2016-01-01

    Of the three paired otolithic endorgans in the ear of teleost fishes, the saccule is the one most often demonstrated to have a major role in encoding frequencies of biologically relevant sounds. The toadfish saccule also encodes sound level and sound source direction in the phase-locked activity conveyed via auditory afferents to nuclei of the ipsilateral octaval column in the medulla. Although paired auditory receptors are present in teleost fishes, binaural processes were long believed to be unimportant due to the speed of sound in water and the acoustic transparency of fish tissues in water. In contrast, behavioral and anatomical data support binaural processing in fishes. Studies in the toadfish combined anatomical tract-tracing and physiological recordings from identified sites along the ascending auditory pathway to document response characteristics at each level. Binaural computations in the medulla and midbrain sharpen the directional information provided by the saccule. Furthermore, physiological studies in the central nervous system indicated that encoding of frequency, sound level, temporal pattern, and sound source direction are important components of what the toadfish ear tells the toadfish brain about sound.

  11. Red Sea Outflow Experiment (REDSOX): DLD2 RAFOS Float Data Report February 2001 - March 2003

    DTIC Science & Technology

    2005-01-01

    Contents include a description of the DLD2 float and dual-release system, the sound sources, and the data processing, which are described in detail. The DLD2 is a second-generation RAFOS (Ranging And Fixing Of Sound) float with several improvements over the traditional RAFOS float (see Rossby et al., 1986, for a complete description of the RAFOS system).

  12. Human-inspired sound environment recognition system for assistive vehicles

    NASA Astrophysics Data System (ADS)

    González Vidal, Eduardo; Fredes Zarricueta, Ernesto; Auat Cheein, Fernando

    2015-02-01

    Objective. The human auditory system acquires environmental information under sound stimuli faster than the visual or touch systems, which in turn allows for faster human responses to such stimuli. It also complements senses such as sight, where a direct line of view is necessary to identify objects, in the environment recognition process. This work focuses on implementing human reaction to sound stimuli and environment recognition on assistive robotic devices, such as robotic wheelchairs or robotized cars. These vehicles need environment information to ensure safe navigation. Approach. In the field of environment recognition, range sensors (such as LiDAR and ultrasonic systems) and artificial vision devices are widely used; however, these sensors depend on environment constraints (such as lighting variability or the color of objects), and sound can provide important information for the characterization of an environment. In this work, we propose a sound-based approach to enhance the environment recognition process, mainly for cases that compromise human integrity, according to the International Classification of Functioning (ICF). Our proposal is based on a neural network implementation that is able to classify up to 15 different environments, each selected according to the ICF considerations on environmental factors in the community-based physical activities of people with disabilities. Main results. The accuracy rates in environment classification range from 84% to 93%. This classification is later used to constrain assistive vehicle navigation in order to protect the user during daily activities. This work also includes real-time outdoor experimentation (performed on an assistive vehicle) by seven volunteers with different disabilities (but without cognitive impairment and experienced in the use of wheelchairs), statistical validation, comparison with previously published work, and a discussion section where the pros and cons of our system are evaluated.
Significance. The proposed sound-based system is very efficient at providing general descriptions of the environment. Such descriptions are focused on vulnerable situations described by the ICF. The volunteers answered a questionnaire regarding the importance of constraining the vehicle velocities in risky environments, showing that all the volunteers felt comfortable with the system and its performance.
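
    The recognition pipeline described above (acoustic features in, one of several environment labels out) can be sketched with a much simpler stand-in for the paper's neural network: a nearest-centroid classifier over hypothetical two-dimensional features. The labels and feature values are invented for illustration.

```python
import math

def train_centroids(samples):
    """samples: {label: [feature vectors]} -> per-label mean feature vector."""
    return {label: [sum(col) / len(vecs) for col in zip(*vecs)]
            for label, vecs in samples.items()}

def classify(centroids, x):
    """Assign x to the label whose centroid is nearest (Euclidean)."""
    return min(centroids, key=lambda lab: math.dist(centroids[lab], x))

training = {
    "street": [[0.9, 0.1], [0.8, 0.2]],   # e.g. high traffic-band energy
    "park":   [[0.1, 0.9], [0.2, 0.8]],   # e.g. high birdsong-band energy
}
centroids = train_centroids(training)
label = classify(centroids, [0.85, 0.15])
```

    A real system would use richer spectral features and a trained network, but the downstream use is the same: the predicted label gates the vehicle's allowed velocities.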

  13. Human-inspired sound environment recognition system for assistive vehicles.

    PubMed

    Vidal, Eduardo González; Zarricueta, Ernesto Fredes; Cheein, Fernando Auat

    2015-02-01

    The human auditory system acquires environmental information under sound stimuli faster than the visual or touch systems, which in turn allows for faster human responses to such stimuli. It also complements senses such as sight, where a direct line of view is necessary to identify objects, in the environment recognition process. This work focuses on implementing human reaction to sound stimuli and environment recognition on assistive robotic devices, such as robotic wheelchairs or robotized cars. These vehicles need environment information to ensure safe navigation. In the field of environment recognition, range sensors (such as LiDAR and ultrasonic systems) and artificial vision devices are widely used; however, these sensors depend on environment constraints (such as lighting variability or the color of objects), and sound can provide important information for the characterization of an environment. In this work, we propose a sound-based approach to enhance the environment recognition process, mainly for cases that compromise human integrity, according to the International Classification of Functioning (ICF). Our proposal is based on a neural network implementation that is able to classify up to 15 different environments, each selected according to the ICF considerations on environmental factors in the community-based physical activities of people with disabilities. The accuracy rates in environment classification range from 84% to 93%. This classification is later used to constrain assistive vehicle navigation in order to protect the user during daily activities. This work also includes real-time outdoor experimentation (performed on an assistive vehicle) by seven volunteers with different disabilities (but without cognitive impairment and experienced in the use of wheelchairs), statistical validation, comparison with previously published work, and a discussion section where the pros and cons of our system are evaluated.
The proposed sound-based system is very efficient at providing general descriptions of the environment. Such descriptions are focused on vulnerable situations described by the ICF. The volunteers answered a questionnaire regarding the importance of constraining the vehicle velocities in risky environments, showing that all the volunteers felt comfortable with the system and its performance.

  14. Oak Ridge Reservation Public Warning Siren System Annual Test Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. F. Gee

    2000-10-01

    The full operational test of the Oak Ridge Reservation (ORR) Public Warning Siren System (PWSS) was successfully conducted on September 27, 2000. The annual test is a full-scale sounding of the individual siren systems around each of the three Department of Energy (DOE) sites in Oak Ridge, Tennessee. The purpose of the annual test is to demonstrate and validate the siren systems' ability to alert personnel outdoors in the Immediate Notification Zones (INZ) (approximately two miles) around each site. The success of this test is based on two critical functions of the siren system. The first function is system operability. The system is considered operable if 90% of the sirens are operational. System diagnostics and direct field observations were used to validate the operability of the siren systems. Based on the diagnostic results and field observations, greater than 90% of the sirens were considered operational. The second function is system audibility. The system is considered audible if the siren could be heard in the immediate notification zones around each of the three sites. Direct field observations, along with sound level measurements, were used to validate the audibility of the siren system. Based on the direct field observations and sound level measurements, the siren system was considered audible. The combination of field observations, system diagnostic status reports, and sound level measurements provided a high level of confidence that the system met and would meet operational requirements upon demand. As part of the overall system test, the Tennessee Emergency Management Agency (TEMA) activated the Emergency Alerting System (EAS), which utilized area radio stations to make announcements regarding the test and to remind residents of what to do in the event of an actual emergency.
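
    The report's first pass/fail criterion is simple arithmetic; a minimal sketch (the 90% threshold comes from the report, the siren data are made up):

```python
def system_operable(siren_status, threshold=0.90):
    """Operability criterion from the report: pass if at least 90% of the
    individual sirens are operational. `siren_status` is one boolean per
    siren (True = operational)."""
    frac = sum(siren_status) / len(siren_status)
    return frac >= threshold, frac

# Hypothetical field survey: 95 of 100 sirens sounded
ok, frac = system_operable([True] * 95 + [False] * 5)
```

    Audibility, the second criterion, is judged from field observations and sound level measurements rather than a simple count.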

  15. How learning to abstract shapes neural sound representations

    PubMed Central

    Ley, Anke; Vroomen, Jean; Formisano, Elia

    2014-01-01

    The transformation of acoustic signals into abstract perceptual representations is the essence of the efficient and goal-directed neural processing of sounds in complex natural environments. While the human and animal auditory system is perfectly equipped to process the spectrotemporal sound features, adequate sound identification and categorization require neural sound representations that are invariant to irrelevant stimulus parameters. Crucially, what is relevant and irrelevant is not necessarily intrinsic to the physical stimulus structure but needs to be learned over time, often through integration of information from other senses. This review discusses the main principles underlying categorical sound perception with a special focus on the role of learning and neural plasticity. We examine the role of different neural structures along the auditory processing pathway in the formation of abstract sound representations with respect to hierarchical as well as dynamic and distributed processing models. Whereas most fMRI studies on categorical sound processing employed speech sounds, the emphasis of the current review lies on the contribution of empirical studies using natural or artificial sounds that enable separating acoustic and perceptual processing levels and avoid interference with existing category representations. Finally, we discuss the opportunities of modern analyses techniques such as multivariate pattern analysis (MVPA) in studying categorical sound representations. With their increased sensitivity to distributed activation changes—even in absence of changes in overall signal level—these analyses techniques provide a promising tool to reveal the neural underpinnings of perceptually invariant sound representations. PMID:24917783

  16. Bronchial intubation could be detected by the visual stethoscope techniques in pediatric patients.

    PubMed

    Kimura, Tetsuro; Suzuki, Akira; Mimuro, Soichiro; Makino, Hiroshi; Sato, Shigehito

    2012-12-01

    We created a system that allows the visualization of breath sounds (visual stethoscope). We compared the visual stethoscope technique with auscultation for the detection of bronchial intubation in pediatric patients. In the auscultation group, an anesthesiologist advanced the tracheal tube while another anesthesiologist auscultated bilateral breath sounds to detect the change and/or disappearance of unilateral breath sounds. In the visualization group, the stethoscope was used to detect changes in breath sounds and/or disappearance of unilateral breath sounds. The distance from the edge of the mouth to the carina was measured using a fiberoptic bronchoscope. Forty pediatric patients were enrolled in the study. At the point at which irregular breath sounds were auscultated, the tracheal tube was located 0.5 ± 0.8 cm on the bronchial side of the carina. When a detectable change in the shape of the visualized breath sound was observed, the tracheal tube was located 0.1 ± 1.2 cm on the bronchial side (not significant). At the point at which unilateral breath sounds were auscultated or a unilateral shape of the visualized breath sound was observed, the tracheal tube was 1.5 ± 0.8 or 1.2 ± 1.0 cm on the bronchial side, respectively (not significant). The visual stethoscope displayed the left and right lung sounds simultaneously and detected changes in breath sounds and unilateral breath sounds as the tracheal tube was advanced. © 2012 Blackwell Publishing Ltd.

  17. Coupled Modeling of Hydrodynamics and Sound in Coastal Ocean for Renewable Ocean Energy Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Wen; Jung, Ki Won; Yang, Zhaoqing

    An underwater sound model was developed to simulate sound propagation from marine and hydrokinetic energy (MHK) devices or offshore wind (OSW) energy platforms. Finite difference methods were developed to solve the 3D Helmholtz equation for sound propagation in the coastal environment. A 3D sparse matrix solver with complex coefficients was formed for solving the resulting acoustic pressure field. The Complex Shifted Laplacian Preconditioner (CSLP) method was applied to solve the matrix system iteratively with MPI parallelization using a high performance cluster. The sound model was then coupled with the Finite Volume Community Ocean Model (FVCOM) for simulating sound propagation generated by human activities, such as construction of OSW turbines or tidal stream turbine operations, in a range-dependent setting. As a proof of concept, initial validation of the solver is presented for two coastal wedge problems. This sound model can be useful for evaluating impacts on marine mammals due to deployment of MHK devices and OSW energy platforms.
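
    The core numerical step, a finite-difference discretization of the Helmholtz equation solved as a sparse linear system, can be illustrated in one dimension, where the matrix is tridiagonal and a direct Thomas solve stands in for the CSLP-preconditioned iterative solver needed in 3-D. The wavenumber and forcing below form a manufactured test, not data from the model.

```python
import math

def helmholtz_1d(k, f, n):
    """Solve u'' + k^2 u = f on (0,1) with u(0)=u(1)=0 by second-order
    finite differences and a (complex-capable) Thomas tridiagonal solve."""
    h = 1.0 / (n + 1)
    off = 1.0 / h ** 2                       # sub/super-diagonal entries
    diag = [complex(-2.0 / h ** 2 + k ** 2) for _ in range(n)]
    rhs = [complex(f((i + 1) * h)) for i in range(n)]
    for i in range(1, n):                    # forward elimination
        w = off / diag[i - 1]
        diag[i] -= w * off
        rhs[i] -= w * rhs[i - 1]
    u = [0j] * n                             # back substitution
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (rhs[i] - off * u[i + 1]) / diag[i]
    return u

# Manufactured solution u(x) = sin(pi*x) for k = 5: f = (k^2 - pi^2) sin(pi*x)
k = 5.0
u = helmholtz_1d(k, lambda x: (k ** 2 - math.pi ** 2) * math.sin(math.pi * x), 199)
err = max(abs(u[i] - math.sin(math.pi * (i + 1) / 200)) for i in range(199))
```

    In 3-D the same discretization yields a large sparse complex system, which is where the CSLP preconditioner and MPI parallelization become necessary.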

  18. The effect of brain lesions on sound localization in complex acoustic environments.

    PubMed

    Zündorf, Ida C; Karnath, Hans-Otto; Lewald, Jörg

    2014-05-01

    Localizing sound sources of interest in cluttered acoustic environments--as in the 'cocktail-party' situation--is one of the most demanding challenges to the human auditory system in everyday life. In this study, stroke patients' ability to localize acoustic targets in a single-source and in a multi-source setup in the free sound field were directly compared. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources rather than localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.

  19. Active control of counter-rotating open rotor interior noise in a Dornier 728 experimental aircraft

    NASA Astrophysics Data System (ADS)

    Haase, Thomas; Unruh, Oliver; Algermissen, Stephan; Pohl, Martin

    2016-08-01

    The fuel consumption of future civil aircraft needs to be reduced because of the CO2 restrictions declared by the European Union. Consistent lightweight design and a new engine concept called the counter-rotating open rotor are seen as key technologies in the attempt to reach these ambitious goals. Bearing in mind that counter-rotating open rotor engines emit very high sound pressures at low frequencies and that lightweight structures have poor transmission loss in the lower frequency range, these key technologies raise new questions in regard to acoustic passenger comfort. One of the promising solutions for the reduction of sound pressure levels inside the aircraft cabin is active sound and vibration systems. So far, active concepts have rarely been investigated for a counter-rotating open rotor pressure excitation on complex airframe structures. Hence, the state of the art is augmented by the preliminary study presented in this paper. The study shows how an active vibration control system can influence the sound transmission of counter-rotating open rotor noise through a complex airframe structure into the cabin. Furthermore, open questions on the way towards the realisation of an active control system are addressed. In this phase, an active feedforward control system is investigated in a fully equipped Dornier 728 experimental prototype aircraft. In particular, the sound transmission through the airframe, the coupling of classical actuators (inertial and piezoelectric patch actuators) into the structure, and the performance of the active vibration control system with different error sensors are investigated. It is shown that the active control system achieves a reduction of up to 5 dB at several counter-rotating open rotor frequencies, and that better performance could be achieved through further optimisations.

  20. Active Noise Control Experiments using Sound Energy Flux

    NASA Astrophysics Data System (ADS)

    Krause, Uli

    2015-03-01

    This paper reports on the latest results concerning the active noise control approach using net flow of acoustic energy. The test set-up consists of two loudspeakers simulating the engine noise and two smaller loudspeakers which belong to the active noise system. The system is completed by two acceleration sensors and one microphone per loudspeaker. The microphones are located in the near sound field of the loudspeakers. The control algorithm including the update equation of the feed-forward controller is introduced. Numerical simulations are performed with a comparison to a state of the art method minimising the radiated sound power. The proposed approach is experimentally validated.
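The feed-forward update equation itself is not given in the abstract; the classic filtered-x LMS (FxLMS) loop below is a stand-in that shows the general shape of such a controller. All paths, the tone frequency, and the step size are hypothetical, and the paper's method minimises net acoustic energy flow rather than a single microphone error:

```python
import numpy as np

# FxLMS sketch: adapt an FIR controller so that its output, passed
# through the secondary path S, cancels the disturbance d at the
# error microphone. All paths and parameters are toy values.
fs, n = 2000, 4000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 100 * t)            # reference: 100 Hz "engine" tone
P = np.array([0.0, 0.9, 0.4])              # primary path (hypothetical)
S = np.array([0.8, 0.3])                   # secondary path (hypothetical)
d = np.convolve(x, P)[:n]                  # disturbance at the error mic
xf = np.convolve(x, S)[:n]                 # reference filtered through S

L, mu = 16, 0.01                           # controller taps, step size
w = np.zeros(L)
xbuf, fbuf, ybuf = np.zeros(L), np.zeros(L), np.zeros(len(S))
e = np.empty(n)
for i in range(n):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[i]
    y = w @ xbuf                           # controller output (anti-noise)
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e[i] = d[i] + S @ ybuf                 # residual at the error microphone
    fbuf = np.roll(fbuf, 1); fbuf[0] = xf[i]
    w -= mu * e[i] * fbuf                  # FxLMS weight update
print(np.mean(e[:200] ** 2), np.mean(e[-200:] ** 2))
```

The residual power at the end of the run should be far below its initial value, which is the basic behaviour any such feed-forward controller is expected to show for a tonal disturbance.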

  1. Sound Source Localization through 8 MEMS Microphones Array Using a Sand-Scorpion-Inspired Spiking Neural Network.

    PubMed

    Beck, Christoph; Garreau, Guillaume; Georgiou, Julius

    2016-01-01

    Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to detect amplitudes the size of an atom and locate acoustic stimuli with an accuracy of within 13° based on their neuronal anatomy. We present here a prototype sound source localization system inspired by this impressive performance. The system utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows a smaller localization error than that observed in nature.
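The record does not spell out the spiking network's computation; as a rough intuition for amplitude-based localization with eight sensors on a circle, a simple population-vector estimate (an assumption for illustration, not the paper's model) already recovers the bearing in a noise-free toy setup:

```python
import numpy as np

# Eight microphones evenly spaced on a circle; estimate the source
# bearing as the angle of the level-weighted sum of the microphone
# direction vectors (population-vector readout, assumed here).
angles = np.deg2rad(np.arange(8) * 45.0)

def bearing_estimate(levels):
    vx = np.sum(levels * np.cos(angles))
    vy = np.sum(levels * np.sin(angles))
    return np.rad2deg(np.arctan2(vy, vx)) % 360.0

# Toy pickup model: cardioid-like gain toward a source at 70 degrees.
src = np.deg2rad(70.0)
levels = 1.0 + np.cos(angles - src)
est = bearing_estimate(levels)
print(round(est, 1))
```

For this idealized cardioid pickup the weighted vector sum points exactly at the source; real arrays add noise, reflections, and non-ideal directivity, which is where a trained network earns its keep.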

  2. Auditory perception modulated by word reading.

    PubMed

    Cao, Liyu; Klepp, Anne; Schnitzler, Alfons; Gross, Joachim; Biermann-Ruben, Katja

    2016-10-01

    Theories of embodied cognition positing that sensorimotor areas are indispensable during language comprehension are supported by neuroimaging and behavioural studies. Among others, the auditory system has been suggested to be important for understanding sound-related words (visually presented) and the motor system for action-related words. In this behavioural study, using a sound detection task embedded in a lexical decision task, we show that in participants with high lexical decision performance sound verbs improve auditory perception. The amount of modulation was correlated with lexical decision performance. Our study provides convergent behavioural evidence of auditory cortex involvement in word processing, supporting the view of embodied language comprehension concerning the auditory domain.

  3. Status of bottomland forests in the Albemarle Sound of North Carolina and Virginia, 1984-2012

    Treesearch

    Jean H. Lorber; Anita K. Rose

    2015-01-01

    The Albemarle Sound, a 6-million-acre watershed, contains some of the largest areas of bottomland hardwood habitat in the Eastern United States. Using close to 30 years of data from the Forest Inventory and Analysis Program, a study of the current status and trends in the Albemarle Sound’s bottomland forest system was conducted. In 2012, bottomlands totaled...

  4. Guidelines for the Sound Insulation of Residences Exposed to Aircraft Operations

    DTIC Science & Technology

    1992-10-01

    ...scale the incident sound. The values of sound absorption coefficients usually range from... ...discriminates against the lower frequencies below 1000 hertz... ...Regulations, ...achieve them, must take into account the sometimes conflicting needs of the parties... ...establishing a single system of noise measurement... fasteners to the studs to prevent sagging. 5. Cut new gypsumboard so that it fits tightly against walls, floor, and ceiling. 6. Apply acoustical

  5. A system to simulate and reproduce audio-visual environments for spatial hearing research.

    PubMed

    Seeber, Bernhard U; Kerber, Stefan; Hafter, Ervin R

    2010-02-01

    The article reports the experience gained from two implementations of the "Simulated Open-Field Environment" (SOFE), a setup that allows sounds to be played at calibrated levels over a wide frequency range from multiple loudspeakers in an anechoic chamber. Playing sounds from loudspeakers in the free-field has the advantage that each participant listens with their own ears, and individual characteristics of the ears are captured in the sound they hear. This makes an easy and accurate comparison between various listeners with and without hearing devices possible. The SOFE uses custom calibration software to assure individual equalization of each loudspeaker. Room simulation software creates the spatio-temporal reflection pattern of sound sources in rooms which is played via the SOFE loudspeakers. The sound playback system is complemented by a video projection facility which can be used to collect or give feedback or to study auditory-visual interaction. The article discusses acoustical and technical requirements for accurate sound playback against the specific needs in hearing research. An introduction to software concepts is given which allow easy, high-level control of the setup and thus fast experimental development, turning the SOFE into a "Swiss army knife" tool for auditory, spatial hearing and audio-visual research. Crown Copyright 2009. Published by Elsevier B.V. All rights reserved.
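The abstract mentions per-loudspeaker equalization without detail; one standard way to build such an equalizer is regularized frequency-domain inversion of a measured impulse response. The response, FFT size, and regularization constant below are hypothetical, not SOFE calibration data:

```python
import numpy as np

# Regularized inversion of a measured loudspeaker impulse response:
# G(f) = conj(H(f)) / (|H(f)|^2 + beta). Toy values throughout.
h = np.array([1.0, 0.6, 0.2, -0.1])        # toy loudspeaker impulse response
nfft, beta = 256, 1e-3                     # FFT size, regularization constant
H = np.fft.rfft(h, nfft)
G = np.conj(H) / (np.abs(H) ** 2 + beta)   # regularized inverse spectrum
g = np.fft.irfft(G, nfft)                  # equalizer FIR taps

eq = np.convolve(h, g)[:nfft]              # equalized response
print(eq[:4].round(3))                     # close to a unit impulse
```

The regularization term keeps the inverse bounded at frequencies where the loudspeaker response is weak, trading perfect flatness for a stable, bounded-gain filter.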

  6. Superior analgesic effect of an active distraction versus pleasant unfamiliar sounds and music: the influence of emotion and cognitive style.

    PubMed

    Villarreal, Eduardo A Garza; Brattico, Elvira; Vase, Lene; Østergaard, Leif; Vuust, Peter

    2012-01-01

    Listening to music has been found to reduce acute and chronic pain. The underlying mechanisms are poorly understood; however, emotion and cognitive mechanisms have been suggested to influence the analgesic effect of music. In this study we investigated the influence of familiarity, emotional and cognitive features, and cognitive style on music-induced analgesia. Forty-eight healthy participants were divided into three groups (empathizers, systemizers and balanced) and received acute pain induced by heat while listening to different sounds. Participants listened to unfamiliar Mozart music rated with high valence and low arousal, unfamiliar environmental sounds with similar valence and arousal as the music, an active distraction task (mental arithmetic) and a control, and rated the pain. Data showed that the active distraction led to significantly less pain than did the music or sounds. Both unfamiliar music and sounds reduced pain significantly when compared to the control condition; however, music was no more effective than sound to reduce pain. Furthermore, we found correlations between pain and emotion ratings. Finally, systemizers reported less pain during the mental arithmetic compared with the other two groups. These findings suggest that familiarity may be key in the influence of the cognitive and emotional mechanisms of music-induced analgesia, and that cognitive styles may influence pain perception.

  7. A System to Simulate and Reproduce Audio-Visual Environments for Spatial Hearing Research

    PubMed Central

    Seeber, Bernhard U.; Kerber, Stefan; Hafter, Ervin R.

    2009-01-01

    The article reports the experience gained from two implementations of the “Simulated Open-Field Environment” (SOFE), a setup that allows sounds to be played at calibrated levels over a wide frequency range from multiple loudspeakers in an anechoic chamber. Playing sounds from loudspeakers in the free-field has the advantage that each participant listens with their own ears, and individual characteristics of the ears are captured in the sound they hear. This makes an easy and accurate comparison between various listeners with and without hearing devices possible. The SOFE uses custom calibration software to assure individual equalization of each loudspeaker. Room simulation software creates the spatio-temporal reflection pattern of sound sources in rooms which is played via the SOFE loudspeakers. The sound playback system is complemented by a video projection facility which can be used to collect or give feedback or to study auditory-visual interaction. The article discusses acoustical and technical requirements for accurate sound playback against the specific needs in hearing research. An introduction to software concepts is given which allow easy, high-level control of the setup and thus fast experimental development, turning the SOFE into a “Swiss army knife” tool for auditory, spatial hearing and audio-visual research. PMID:19909802

  8. The NASA Sounding Rocket Program and space sciences.

    PubMed

    Gurkin, L W

    1992-10-01

    High altitude suborbital rockets (sounding rockets) have been extensively used for space science research in the post-World War II period; the NASA Sounding Rocket Program has been ongoing since the inception of the Agency and supports all space science disciplines. In recent years, sounding rockets have been utilized to provide a low gravity environment for materials processing research, particularly in the commercial sector. Sounding rockets offer unique features as a low gravity flight platform. Quick response and low cost combine to provide more frequent spaceflight opportunities. Suborbital spacecraft design practice has achieved a high level of sophistication which optimizes the limited available flight times. High data-rate telemetry, real-time ground up-link command and down-link video data are routinely used in sounding rocket payloads. Standard, off-the-shelf, active control systems are available which limit payload body rates such that the gravitational environment remains less than 10⁻⁴ g during the control period. Operational launch vehicles are available which can provide up to 7 minutes of experiment time for experiment weights up to 270 kg. Standard payload recovery systems allow soft impact retrieval of payloads. When launched from White Sands Missile Range, New Mexico, payloads can be retrieved and returned to the launch site within hours.

  9. The NASA Sounding Rocket Program and space sciences

    NASA Technical Reports Server (NTRS)

    Gurkin, L. W.

    1992-01-01

    High altitude suborbital rockets (sounding rockets) have been extensively used for space science research in the post-World War II period; the NASA Sounding Rocket Program has been ongoing since the inception of the Agency and supports all space science disciplines. In recent years, sounding rockets have been utilized to provide a low gravity environment for materials processing research, particularly in the commercial sector. Sounding rockets offer unique features as a low gravity flight platform. Quick response and low cost combine to provide more frequent spaceflight opportunities. Suborbital spacecraft design practice has achieved a high level of sophistication which optimizes the limited available flight times. High data-rate telemetry, real-time ground up-link command and down-link video data are routinely used in sounding rocket payloads. Standard, off-the-shelf, active control systems are available which limit payload body rates such that the gravitational environment remains less than 10⁻⁴ g during the control period. Operational launch vehicles are available which can provide up to 7 minutes of experiment time for experiment weights up to 270 kg. Standard payload recovery systems allow soft impact retrieval of payloads. When launched from White Sands Missile Range, New Mexico, payloads can be retrieved and returned to the launch site within hours.

  10. Robust Feedback Control of Flow Induced Structural Radiation of Sound

    NASA Technical Reports Server (NTRS)

    Heatwole, Craig M.; Bernhard, Robert J.; Franchek, Matthew A.

    1997-01-01

    A significant component of the interior noise of aircraft and automobiles is a result of turbulent boundary layer excitation of the vehicular structure. In this work, active robust feedback control of the noise due to this non-predictable excitation is investigated. Both an analytical model and experimental investigations are used to determine the characteristics of the flow induced structural sound radiation problem. The problem is shown to be broadband in nature with large system uncertainties associated with the various operating conditions. Furthermore, the delay associated with sound propagation is shown to restrict the use of microphone feedback. The state-of-the-art control methodologies, μ-synthesis and adaptive feedback control, are evaluated and shown to have limited success for solving this problem. A robust frequency domain controller design methodology is developed for the problem of sound radiated from turbulent flow driven plates. The control design methodology uses frequency domain sequential loop shaping techniques. System uncertainty, sound pressure level reduction performance, and actuator constraints are included in the design process. Using this design method, phase lag was added using non-minimum phase zeros such that the beneficial plant dynamics could be used. This general control approach has application to lightly damped vibration and sound radiation problems where there are high bandwidth control objectives requiring a low controller DC gain and controller order.

  11. The Development of a Finite Volume Method for Modeling Sound in Coastal Ocean Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Wen; Yang, Zhaoqing; Copping, Andrea E.

    With the rapid growth of marine renewable energy and offshore wind energy, there have been concerns that the noise generated from construction and operation of the devices may interfere with marine animals' communication. In this research, an underwater sound model is developed to simulate sound propagation generated by marine-hydrokinetic energy (MHK) devices or offshore wind (OSW) energy platforms. Finite volume and finite difference methods are developed to solve the 3D Helmholtz equation of sound propagation in the coastal environment. For the finite volume method, the grid system consists of triangular grids in the horizontal plane and sigma layers in the vertical dimension. A 3D sparse matrix solver with complex coefficients is formed for solving the resulting acoustic pressure field. The Complex Shifted Laplacian Preconditioner (CSLP) method is applied to efficiently solve the matrix system iteratively with MPI parallelization using a high-performance cluster. The sound model is then coupled with the Finite Volume Community Ocean Model (FVCOM) for simulating sound propagation generated by human activities in a range-dependent setting, such as offshore wind energy platform constructions and tidal stream turbines. As a proof of concept, initial validation of the finite difference solver is presented for two coastal wedge problems. Validation of the finite volume method will be reported separately.

  12. Statistical signal processing technique for identification of different infected sites of the diseased lungs.

    PubMed

    Abbas, Ali

    2012-06-01

    Accurate diagnosis of lung disease depends on understanding the sounds emanating from the lungs and their locations. Lung sounds are significant because they supply precise and important information on the health of the respiratory system. In addition, correct interpretation of breath sounds depends on a systematic approach to auscultation; it also requires the ability to describe the location of an abnormal finding in relation to bony structures and anatomic landmark lines. The lungs consist of a number of lobes; each lobe is further subdivided into smaller segments, which are attached to each other. Knowledge of the position of the lung segments is useful and important during auscultation and diagnosis of lung diseases. Medical doctors usually describe the location of an infection with reference to its segmental position. Breath sounds are auscultated over the anterior chest wall surface, the lateral chest wall surfaces, and the posterior chest wall surface, and adventitious sounds from different locations can be detected. It is common to seek confirmation of sound detection and location using invasive and potentially harmful imaging techniques such as X-rays. To overcome this limitation, and to provide fast, reliable, accurate, and inexpensive diagnosis, a technique is developed in this research for identifying the location of infection through a computerized auscultation system.

  13. Programmed Approach vs. Conventional Approach Using highly Consistent Sound-Symbol System of Reading in Three Primary Grades.

    ERIC Educational Resources Information Center

    Shore, Robert Eugene

    The effects of two primary reading programs using a programed format (with and without audio-supplement) and a conventional format (the program format deprogramed) in a highly consistent sound-symbol system of reading at three primary grade levels were compared, using a pretest, post-test control group design. The degree of suitability of…

  14. [Computer-aided Diagnosis and New Electronic Stethoscope].

    PubMed

    Huang, Mei; Liu, Hongying; Pi, Xitian; Ao, Yilu; Wang, Zi

    2017-05-30

    Auscultation is an important method in the early diagnosis of cardiovascular and respiratory system disease. This paper presents a new electronic auscultation system for computer-aided diagnosis. An electronic stethoscope based on a condenser microphone was developed, together with intelligent analysis software. Combining Bluetooth, OLED, and SD-card storage technologies, the system implements real-time heart and lung sound auscultation in three modes, recording and playback, auscultation volume control, and wireless transmission. The analysis software runs on a PC, is written in C#, and uses SQL Server as the background database. It plays the auscultation sound and displays its waveform. By calculating the heart rate and extracting the characteristic parameters T1, T2, T12, and T11, it can analyze whether the heart sound is normal and then generate a diagnosis report. Finally, the auscultation sound and diagnosis report can be sent to the mailboxes of other doctors for remote diagnosis. The system is fully functional, highly portable, and offers a good user experience; it is beneficial in promoting the use of electronic stethoscopes in hospitals, and can also be applied to auscultation teaching and other settings.
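As an illustration of the heart-rate step, the sketch below detects S1-like bursts in a synthetic phonocardiogram via its Hilbert envelope and derives the rate from peak spacing. The signal model and thresholds are assumptions for illustration, not the system's actual algorithm:

```python
import numpy as np
from scipy.signal import find_peaks, hilbert

# Synthetic 10 s phonocardiogram: one Gaussian-windowed 50 Hz burst
# (an S1 stand-in) per beat at a true rate of 72 bpm.
fs = 1000
t = np.arange(0, 10, 1 / fs)
hr_true = 72.0
pcg = np.zeros_like(t)
for bt in np.arange(0, 10, 60.0 / hr_true):
    pcg += np.exp(-((t - bt) ** 2) / (2 * 0.01 ** 2)) * np.sin(2 * np.pi * 50 * t)

env = np.abs(hilbert(pcg))                 # amplitude envelope
# One peak per beat: threshold on height, enforce minimum spacing.
peaks, _ = find_peaks(env, height=0.3 * env.max(), distance=int(0.4 * fs))
hr_est = 60.0 * fs / np.median(np.diff(peaks))
print(round(hr_est, 1))
```

The median inter-peak interval makes the estimate robust to an occasional missed or spurious peak, which matters once real lung noise and S2 sounds enter the recording.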

  15. GPS Sounding Rocket Developments

    NASA Technical Reports Server (NTRS)

    Bull, Barton

    1999-01-01

    Sounding rockets are suborbital launch vehicles capable of carrying scientific payloads several hundred miles in altitude. These missions return a variety of scientific data, including the chemical makeup and physical processes taking place in the atmosphere, natural radiation surrounding the Earth, and data on the Sun, stars, galaxies, and many other phenomena. In addition, sounding rockets provide a reasonably economical means of conducting engineering tests for instruments and devices used on satellites and other spacecraft prior to their use in more expensive activities. This paper addresses NASA Wallops Island's experience with GPS on sounding rockets since 1994 and the development of a highly accurate and useful system.

  16. Sound source tracking device for telematic spatial sound field reproduction

    NASA Astrophysics Data System (ADS)

    Cardenas, Bruno

    This research describes an algorithm that localizes sound sources for use in telematic applications. The localization algorithm is based on amplitude differences between the channels of a microphone array of directional shotgun microphones. The amplitude differences will be used to locate multiple performers and to reproduce their voices, which were recorded at close distance with lavalier microphones, with spatial correction using a loudspeaker rendering system. In order to track multiple sound sources in parallel, the information gained from the lavalier microphones will be utilized to estimate the signal-to-noise ratio between each performer and the concurrent performers.

  17. A noise assessment and prediction system

    NASA Technical Reports Server (NTRS)

    Olsen, Robert O.; Noble, John M.

    1990-01-01

    A system has been designed to provide an assessment of noise levels that result from testing activities at Aberdeen Proving Ground, Md. The system receives meteorological data from surface stations and an upper air sounding system. The data from these systems are sent to a meteorological model, which provides forecasting conditions for up to three hours from the test time. The meteorological data are then used as input into an acoustic ray trace model which projects sound level contours onto a two-dimensional display of the surrounding area. This information is sent to the meteorological office for verification, as well as the range control office, and the environmental office. To evaluate the noise level predictions, a series of microphones are located off the reservation to receive the sound and transmit this information back to the central display unit. The computer models are modular allowing for a variety of models to be utilized and tested to achieve the best agreement with data. This technique of prediction and model validation will be used to improve the noise assessment system.
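The meteorological input to such a ray-trace model is usually condensed into an effective sound-speed profile; the toy sounding below (hypothetical values, not Aberdeen data) shows the quantity involved. Sound refracts toward layers of lower effective speed, so a speed that increases with height, as under a surface temperature inversion or in the downwind direction, bends rays back toward the ground and raises received levels:

```python
import numpy as np

# Effective sound speed = adiabatic speed from temperature + wind
# component along the propagation direction.
GAMMA, R_DRY = 1.4, 287.05                 # dry-air constants

def c_eff(T_kelvin, wind_along):
    return np.sqrt(GAMMA * R_DRY * T_kelvin) + wind_along

z = np.array([0.0, 100.0, 300.0, 600.0])   # height above ground, m (toy sounding)
T = np.array([283.0, 285.0, 284.0, 282.0]) # K; inversion in the lowest 100 m
u = np.array([1.0, 3.0, 5.0, 6.0])         # downwind wind component, m/s

ceff = c_eff(T, u)
for zi, ci in zip(z, ceff):
    print(f"{zi:5.0f} m  {ci:6.1f} m/s")
# ceff increases with height in the lowest layers: downward refraction,
# the condition under which test noise carries far and must be flagged.
```

A ray tracer driven by this profile then applies Snell's law layer by layer to produce the sound-level contours described above.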

  18. Application of a Musical Whistling Certificate Examination System as a Group Examination

    NASA Astrophysics Data System (ADS)

    Mori, Mikio; Ogihara, Mitsuhiro; Sugahara, Shin-Ichi; Taniguchi, Shuji; Kato, Shozo; Araki, Chikahiro

    Recently, some professional whistlers have set up music schools to teach musical whistling. However, so far, there is no licensed examination for musical whistling. In this paper, we propose an examination system for evaluating musical whistling. The system conducts the examination on a personal computer (PC) and can award four grades, from the second to the fifth, designed according to the standards adopted by the school for musical whistling established by the Japanese professional whistler Moku-San. The examination is expected to be held as a group examination in centers where other general certification examinations are held, so the influence of neighboring whistle sounds on the PC microphone normally used must be considered. For this purpose, we examined the feasibility of using a bone-conductive microphone for the musical whistling certificate examination system. This paper shows that the proposed system, which uses bone-transmitted sounds, gives good performance in a noisy environment, as demonstrated in a group examination of musical whistling. We also found that the timing of a candidate's whistling tends not to match the cue when the applause sound output from the PC is inaudible, as for some candidates older than 60 years.

  19. Design and qualification of an UHV system for operation on sounding rockets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grosse, Jens, E-mail: jens.grosse@dlr.de; Braxmaier, Claus; Seidel, Stephan Tobias

    The sounding rocket mission MAIUS-1 has the objective to create the first Bose–Einstein condensate in space; therefore, its scientific payload is a complete cold atom experiment built to be launched on a VSB-30 sounding rocket. An essential part of the setup is an ultrahigh vacuum system needed in order to sufficiently suppress interactions of the cooled atoms with the residual background gas. Contrary to vacuum systems on missions aboard satellites or the International Space Station, the required vacuum environment has to be reached within 47 s after motor burn-out. This paper contains a detailed description of the MAIUS-1 vacuum system, as well as a description of its qualification process for operation under vibrational loads of up to 8.1 g RMS (root mean square). Even though a pressure rise dependent on the level of vibration was observed, the design presented herein is capable of regaining a pressure of below 5 × 10⁻¹⁰ mbar in less than 40 s when tested at 5.4 g RMS. To the authors' best knowledge, it is the first UHV system qualified for operation on a sounding rocket.

  20. Exterior sound level measurements of over-snow vehicles at Yellowstone National Park.

    DOT National Transportation Integrated Search

    2008-09-30

    Sounds associated with oversnow vehicles, such as snowmobiles and snowcoaches, are an important management concern at Yellowstone and Grand Teton National Parks. The John A. Volpe National Transportation Systems Center's Environmental Measureme...

  1. 77 FR 19413 - Petition for Waiver of Compliance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-30

    ...-0023. UP seeks to use an automated sound measurement system (ASMS) to test locomotive horns as required in 49 CFR 229.129(b). The ASMS uses a Class 1 sound-level measuring instrument that is permanently...

  2. Hazardous Material Transportation Risks in the Puget Sound Region

    DOT National Transportation Integrated Search

    1981-09-01

    In order to contribute to workable hazardous materials accident prevention and response systems, public safety risks of transporting hazardous materials in the Central Puget Sound Region of Washington State are determined. Risk spectrums are obtained...

  3. Cardiovascular Sound and the Stethoscope, 1816 to 2016

    PubMed Central

    Segall, Harold N.

    1963-01-01

    Cardiovascular sound escaped attention until Laennec invented and demonstrated the usefulness of the stethoscope. Accuracy of diagnosis using cardiovascular sounds as clues increased with improvement in knowledge of the physiology of circulation. Nearly all currently acceptable clinicopathological correlations were established by physicians who used the simplest of stethoscopes or listened with the bare ear. Certain refinements followed the use of modern methods which afford greater precision in timing cardiovascular sounds. These methods contribute to educating the human ear, so that those advantages may be applied which accrue from auscultation, plus the method of writing quantitative symbols to describe what is heard, by focusing the sense of hearing on each segment of the cardiac cycle in turn. By the year 2016, electronic systems of collecting and analyzing data about the cardiovascular system may render the stethoscope obsolete. PMID:13987676

  4. Eocene evolution of whale hearing.

    PubMed

    Nummela, Sirpa; Thewissen, J G M; Bajpai, Sunil; Hussain, S Taseer; Kumar, Kishor

    2004-08-12

    The origin of whales (order Cetacea) is one of the best-documented examples of macroevolutionary change in vertebrates. As the earliest whales became obligately marine, all of their organ systems adapted to the new environment. The fossil record indicates that this evolutionary transition took less than 15 million years, and that different organ systems followed different evolutionary trajectories. Here we document the evolutionary changes that took place in the sound transmission mechanism of the outer and middle ear in early whales. Sound transmission mechanisms change early on in whale evolution and pass through a stage (in pakicetids) in which hearing in both air and water is unsophisticated. This intermediate stage is soon abandoned and is replaced (in remingtonocetids and protocetids) by a sound transmission mechanism similar to that in modern toothed whales. The mechanism of these fossil whales lacks sophistication, and still retains some of the key elements that land mammals use to hear airborne sound.

  5. Aging Affects Adaptation to Sound-Level Statistics in Human Auditory Cortex.

    PubMed

    Herrmann, Björn; Maess, Burkhard; Johnsrude, Ingrid S

    2018-02-21

    Optimal perception requires efficient and adaptive neural processing of sensory input. Neurons in nonhuman mammals adapt to the statistical properties of acoustic feature distributions such that they become sensitive to sounds that are most likely to occur in the environment. However, whether human auditory responses adapt to stimulus statistical distributions and how aging affects adaptation to stimulus statistics is unknown. We used MEG to study how exposure to different distributions of sound levels affects adaptation in auditory cortex of younger (mean: 25 years; n = 19) and older (mean: 64 years; n = 20) adults (male and female). Participants passively listened to two sound-level distributions with different modes (either 15 or 45 dB sensation level). In a control block with long interstimulus intervals, allowing neural populations to recover from adaptation, neural response magnitudes were similar between younger and older adults. Critically, both age groups demonstrated adaptation to sound-level stimulus statistics, but adaptation was altered for older compared with younger people: in the older group, neural responses continued to be sensitive to sound level under conditions in which responses were fully adapted in the younger group. The lack of full adaptation to the statistics of the sensory environment may be a physiological mechanism underlying the known difficulty that older adults have with filtering out irrelevant sensory information. SIGNIFICANCE STATEMENT Behavior requires efficient processing of acoustic stimulation. Animal work suggests that neurons accomplish efficient processing by adjusting their response sensitivity depending on statistical properties of the acoustic environment. Little is known about the extent to which this adaptation to stimulus statistics generalizes to humans, particularly to older humans. We used MEG to investigate how aging influences adaptation to sound-level statistics. 
Listeners were presented with sounds drawn from sound-level distributions with different modes (15 vs 45 dB). Auditory cortex neurons adapted to sound-level statistics in younger and older adults, but adaptation was incomplete in older people. The data suggest that the aging auditory system does not fully capitalize on the statistics available in sound environments to tune the perceptual system dynamically. Copyright © 2018 the authors 0270-6474/18/381989-11$15.00/0.

  6. The Last Seat in the House: The Story of Hanley Sound

    NASA Astrophysics Data System (ADS)

    Kane, John

    Prior to the rush of live outdoor sound during the 1950s, a young, audio-savvy Bill Hanley recognized certain inadequacies within the widely used public address system marketplace. Hanley's techniques allowed him to construct systems of sound that changed what the audience heard during outdoor events. Through my research, I reveal a new insight into how Hanley and those who worked at his business (Hanley Sound) had a direct, innovative influence on specific sound applications, which are now widely used and often taken for granted. Hanley's innovations shifted an existing public address, oral-based sound industry into a new area of technology rich with clarity and intelligibility. What makes his story so unique is that, because his relationship with sound was so intimate, it superseded his immediate economic, safety, and political concerns. He acted selflessly and with extreme focus. As Hanley's reputation grew, so did audience and performer demand for clear, audible concert sound. Over time, he would provide sound for some of the largest antiwar peace rallies and concerts in American history. Hanley worked in the thickness of extreme civil unrest, not typical for the average soundman of the day. Conveniently, Hanley's passion for clarity in sound also happened to occur when popular music transitioned into an important conveyor of political message through festivals. Since May 2011 I have been exploring the life of Bill Hanley, an innovative leader in sound. I use interdisciplinary approaches to uncover cultural, historical, social, political, and psychological occasions in Hanley's life that were imperative to his ongoing development. Filmed action sequences, such as talking head interviews (friends, family members, and professional colleagues) and historical archival 8 mm footage, and family photos and music ephemera, help uncover this qualitative ethnographic analysis of not only Bill's development but also the world around him. 
Reflective, intimate interviews with Hanley reveal his charismatic, innovative leadership style, which had a direct influence on those who worked for and around him. Finally, his story contains additional conflicts that I felt obligated to address. For one, the lack of financial reward that Hanley Sound faced is intriguing to me. The recognition of being one of the true pioneers in the business and not reaping the financial benefits of such efforts needed to be examined. As the industry he influenced grew around him, those who borrowed his ideas have moved forward, creating the infrastructure of contemporary sound reinforcement as we know it today. Hanley's pioneering efforts not only created the foundation of this unknown industry but also gave true definition to the term sound engineer.

  7. Clinical Validation of a Sound Processor Upgrade in Direct Acoustic Cochlear Implant Subjects

    PubMed Central

    Kludt, Eugen; D’hondt, Christiane; Lenarz, Thomas; Maier, Hannes

    2017-01-01

Objective: The objectives of the investigation were to evaluate the effect of a sound processor upgrade on the speech reception threshold in noise and to collect long-term safety and efficacy data after 2½ to 5 years of device use by direct acoustic cochlear implant (DACI) recipients. Study Design: The study was designed as a monocentric, prospective clinical trial. Setting: Tertiary referral center. Patients: Fifteen patients implanted with a direct acoustic cochlear implant. Intervention: Upgrade to a newer generation of sound processor. Main Outcome Measures: Speech recognition tests in quiet and in noise, pure tone thresholds, and subject-reported outcome measures. Results: Speech recognition in quiet and in noise is superior after the sound processor upgrade and stable after long-term use of the direct acoustic cochlear implant. The bone conduction thresholds did not decrease significantly after long-term high-level stimulation. Conclusions: The new sound processor for the DACI system provides significant benefits for DACI users in speech recognition in both quiet and noise. In particular, the noise program using directional microphones (Zoom) gives DACI patients much less difficulty in conversations in noisy environments. Furthermore, the study confirms that the benefits of the sound processor upgrade are available to DACI recipients even after several years of experience with a legacy sound processor. Finally, our study demonstrates that the DACI system is a safe and effective long-term therapy. PMID:28406848

  8. Sensor system for heart sound biomonitor

    NASA Astrophysics Data System (ADS)

    Maple, Jarrad L.; Hall, Leonard T.; Agzarian, John; Abbott, Derek

    1999-09-01

Heart sounds can be utilized more efficiently by medical doctors when they are displayed visually, rather than heard through a conventional stethoscope. A system whereby a digital stethoscope interfaces directly to a PC is described, along with the signal processing algorithms adopted. The sensor is based on a noise-cancellation microphone with a 450 Hz bandwidth and is sampled at 2250 samples/sec with 12-bit resolution. For comparison, we also discuss a piezo-based sensor with a 1 kHz bandwidth. A major problem is that the recording of the heart sound with these devices is subject to unwanted background noise, which can override the heart sound and result in a poor visual representation. This noise originates from various sources such as skin contact with the stethoscope diaphragm, lung sounds, and other surrounding sounds such as speech. We demonstrate a solution using 'wavelet denoising'. The wavelet transform is used because of the similarity between the shape of wavelets and the time-domain shape of a heartbeat sound. Thus, coding of the waveform into the wavelet domain is achieved with relatively few wavelet coefficients, in contrast to the many Fourier components that would result from conventional decomposition. We show that the background noise can be dramatically reduced by a thresholding operation in the wavelet domain. The principle is that the background noise codes into many small broadband wavelet coefficients that can be removed without significant degradation of the signal of interest.
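The thresholding scheme described above can be sketched with PyWavelets; the wavelet choice (db4), decomposition depth, and universal-threshold estimate are assumptions for illustration, not the authors' exact parameters.

```python
import numpy as np
import pywt

def denoise_heart_sound(signal, wavelet="db4", level=5):
    """Soft-threshold wavelet denoising (a sketch of the approach described)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise level estimated from the finest-scale coefficients (assumed heuristic)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Synthetic test: a decaying low-frequency "heart sound" burst plus broadband noise,
# sampled at the 2250 samples/sec rate used in the paper
fs = 2250
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 60 * t) * np.exp(-5 * t)
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(fs)
out = denoise_heart_sound(noisy)
```

Soft thresholding shrinks the many small broadband noise coefficients to zero while the few large coefficients carrying the heartbeat survive, so the reconstruction error drops relative to the noisy input.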

  9. An investigation of the usability of sound recognition for source separation of packaging wastes in reverse vending machines.

    PubMed

    Korucu, M Kemal; Kaplan, Özgür; Büyük, Osman; Güllü, M Kemal

    2016-10-01

In this study, we investigate the usability of sound recognition for source separation of packaging wastes in reverse vending machines (RVMs). For this purpose, an experimental setup equipped with a sound recording mechanism was prepared. Packaging waste sounds generated by three physical impacts, namely free falling, pneumatic hitting, and hydraulic crushing, were separately recorded using two different microphones. To classify the waste types and sizes based on sound features of the wastes, sound classification systems based on a support vector machine (SVM) and a hidden Markov model (HMM) were developed. In the basic experimental setup, in which only the free-falling impact type was considered, the SVM and HMM systems provided 100% classification accuracy for both microphones. In the expanded experimental setup, which includes all three impact types, material type classification accuracies were 96.5% for the dynamic microphone and 97.7% for the condenser microphone. When both the material type and the size of the wastes were classified, the accuracy was 88.6% for both microphones. The modeling studies indicated that hydraulic crushing recordings were too noisy for an effective sound recognition application. In the detailed analysis of the recognition errors, it was observed that most of the errors occurred in the hitting impact type. According to the experimental results, it can be said that the proposed novel approach for the separation of packaging wastes could provide a high classification performance for RVMs. Copyright © 2016 Elsevier Ltd. All rights reserved.
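The SVM stage can be illustrated with scikit-learn on synthetic two-dimensional features; the feature names and class means below are hypothetical, chosen only to mimic well-separated impact-sound classes, and are not the paper's measured features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical 2-D features (e.g., spectral centroid in kHz, decay time in s)
# per material class; the means are illustrative, not measured values.
means = {"glass": (3.0, 0.2), "metal": (2.0, 0.8), "plastic": (1.0, 0.5)}
X, y = [], []
for label, mu in means.items():
    X.append(rng.normal(mu, 0.15, size=(100, 2)))
    y += [label] * 100
X = np.vstack(X)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)   # RBF-kernel SVM classifier
accuracy = clf.score(X_te, y_te)
```

With cleanly separated clusters the classifier approaches the near-perfect accuracy reported for the free-falling case; in practice, overlapping features from noisy crushing impacts would pull this number down.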

  10. Evaluating the performance of active noise control systems in commercial and industrial applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Depies, C.; Deneen, S.; Lowe, M.

    1995-06-01

Active sound cancellation technology is increasingly being used to quiet commercial and industrial air-moving devices. Engineers and designers are implementing active or combination active/passive technology to control sound quality in the workplace and the acoustical environment in residential areas near industrial facilities. Sound level measurements made before and after the installation of active systems have proved that significant improvements in sound quality can be obtained even if there is little or no change in the NC/RC or dBA numbers. Noise produced by centrifugal and vane-axial fans, pumps, and blowers, commonly used for ventilation and material movement in industry, is frequently dominated by high-amplitude, tonal noise at low frequencies. In contrast, the low-frequency noise produced by commercial air handlers often has less tonal and more broadband characteristics, resulting in audible duct rumble noise and objectionable room spectra. Because the A-weighting network, which is commonly used for industrial noise measurements, de-emphasizes low frequencies, its single-number rating can be misleading in terms of judging the overall subjective sound quality in impacted areas and assessing the effectiveness of noise control measures. Similarly, NC values, traditionally used for commercial HVAC acoustical design criteria, can be governed by noise at any frequency and cannot accurately depict human judgment of the aural comfort level. Analyses of frequency spectrum characteristics provide the most effective means of assessing sound quality and determining mitigative measures for achieving suitable background sound levels.

  11. Temporal and identity prediction in visual-auditory events: Electrophysiological evidence from stimulus omissions.

    PubMed

    van Laarhoven, Thijs; Stekelenburg, Jeroen J; Vroomen, Jean

    2017-04-15

    A rare omission of a sound that is predictable by anticipatory visual information induces an early negative omission response (oN1) in the EEG during the period of silence where the sound was expected. It was previously suggested that the oN1 was primarily driven by the identity of the anticipated sound. Here, we examined the role of temporal prediction in conjunction with identity prediction of the anticipated sound in the evocation of the auditory oN1. With incongruent audiovisual stimuli (a video of a handclap that is consistently combined with the sound of a car horn) we demonstrate in Experiment 1 that a natural match in identity between the visual and auditory stimulus is not required for inducing the oN1, and that the perceptual system can adapt predictions to unnatural stimulus events. In Experiment 2 we varied either the auditory onset (relative to the visual onset) or the identity of the sound across trials in order to hamper temporal and identity predictions. Relative to the natural stimulus with correct auditory timing and matching audiovisual identity, the oN1 was abolished when either the timing or the identity of the sound could not be predicted reliably from the video. Our study demonstrates the flexibility of the perceptual system in predictive processing (Experiment 1) and also shows that precise predictions of timing and content are both essential elements for inducing an oN1 (Experiment 2). Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Impacts of distinct observations during the 2009 Prince William Sound field experiment: A data assimilation study

    NASA Astrophysics Data System (ADS)

    Li, Z.; Chao, Y.; Farrara, J.; McWilliams, J. C.

    2012-12-01

A set of data assimilation experiments, known as Observing System Experiments (OSEs), is performed to assess the relative impacts of different types of observations acquired during the 2009 Prince William Sound Field Experiment. The observations assimilated consist primarily of three types: High Frequency (HF) radar surface velocities; vertical profiles of temperature/salinity (T/S) measured by ships, moorings, Autonomous Underwater Vehicles, and gliders; and satellite sea surface temperatures (SSTs). The impact of all the observations, of the HF radar surface velocities, and of the T/S profiles is assessed. Without data assimilation, a frequently occurring cyclonic eddy in the central Sound is overly persistent and intense. The assimilation of the HF radar velocities effectively reduces these biases and improves the representation of the velocities as well as the T/S fields in the Sound. The assimilation of the T/S profiles improves the large-scale representation of the temperature/salinity and also the velocity field in the central Sound. The combination of the HF radar surface velocities and sparse T/S profiles results in an observing system capable of representing the circulation in the Sound reliably and thus producing analyses and forecasts with useful skill. It is suggested that a potentially promising observing network could be based on satellite sea surface heights (SSHs) and SSTs along with sparse T/S profiles, and future satellite SSHs with wide-swath coverage and higher resolution may offer excellent data for predicting the circulation in the Sound.

  13. Sensing of Particular Speakers for the Construction of Voice Interface Utilized in Noisy Environment

    NASA Astrophysics Data System (ADS)

    Sawada, Hideyuki; Ohkado, Minoru

Humans are able to exchange information smoothly using voice under different conditions, such as noisy environments, crowds, and the presence of multiple speakers. We are able to detect the position of a sound source in 3D space, extract a particular sound from mixed sounds, and recognize who is talking. By realizing this mechanism with a computer, new applications can be presented for recording sound with high quality by reducing noise, presenting a clarified sound, and realizing microphone-free speech recognition by extracting a particular sound. This paper introduces real-time detection and identification of a particular speaker in a noisy environment using a microphone array, based on the location of the speaker and individual voice characteristics. The study will be applied to develop an adaptive auditory system for a mobile robot that collaborates with a factory worker.

  14. Plane-wave decomposition by spherical-convolution microphone array

    NASA Astrophysics Data System (ADS)

    Rafaely, Boaz; Park, Munhum

    2004-05-01

    Reverberant sound fields are widely studied, as they have a significant influence on the acoustic performance of enclosures in a variety of applications. For example, the intelligibility of speech in lecture rooms, the quality of music in auditoria, the noise level in offices, and the production of 3D sound in living rooms are all affected by the enclosed sound field. These sound fields are typically studied through frequency response measurements or statistical measures such as reverberation time, which do not provide detailed spatial information. The aim of the work presented in this seminar is the detailed analysis of reverberant sound fields. A measurement and analysis system based on acoustic theory and signal processing, designed around a spherical microphone array, is presented. Detailed analysis is achieved by decomposition of the sound field into waves, using spherical Fourier transform and spherical convolution. The presentation will include theoretical review, simulation studies, and initial experimental results.

  15. Sound reduction by metamaterial-based acoustic enclosure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Shanshan; Li, Pei; Zhou, Xiaoming

In many practical systems, acoustic radiation control of noise sources contained within a finite volume by an acoustic enclosure is of great importance, but difficult to accomplish at low frequencies due to the enhanced acoustic-structure interaction. In this work, we propose to use acoustic metamaterials as the enclosure to efficiently reduce sound radiation at their negative-mass frequencies. Based on a circularly-shaped metamaterial model, sound radiation properties of either central or eccentric sources are analyzed by numerical simulations for structured metamaterials. The parametric analyses demonstrate that the barrier thickness, the cavity size, the source type, and the eccentricity of the source have a profound effect on the sound reduction. It is found that increasing the thickness of the metamaterial barrier is an efficient approach to achieve large sound reduction over the negative-mass frequencies. These results are helpful in designing highly efficient acoustic enclosures for blockage of sound at low frequencies.

  16. Apparatus and method for processing Korotkov sounds. [for blood pressure measurement

    NASA Technical Reports Server (NTRS)

    Golden, D. P., Jr.; Hoffler, G. W.; Wolthuis, R. A. (Inventor)

    1974-01-01

A Korotkov sound processor, used in a noninvasive automatic blood pressure measuring system in which the brachial artery is occluded by an inflatable cuff, is disclosed. The Korotkov sound associated with the systolic event is detected when the ratio of the absolute value of a voltage signal representing Korotkov sounds in the range of 18 to 26 Hz to the maximum absolute peak value of the unfiltered signals first equals or exceeds 0.45. The Korotkov sound associated with the diastolic event is detected when the ratio of the voltage signal of the Korotkov sounds in the range of 40 to 60 Hz to the absolute peak value of such signals within a single measurement cycle first falls below 0.17. The processor signals the occurrence of the systolic and diastolic events, and these signals can be used to control a recorder to record pressure values for these events.
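A minimal digital sketch of the ratio logic, assuming one band-filtered peak amplitude per heartbeat over a cuff-deflation cycle (the original device was an analog processor; the array layout here is an illustration):

```python
import numpy as np

SYSTOLIC_RATIO = 0.45   # threshold for the 18-26 Hz band (from the disclosure)
DIASTOLIC_RATIO = 0.17  # threshold for the 40-60 Hz band (from the disclosure)

def detect_events(low_band, high_band, unfiltered):
    """Return (systolic_index, diastolic_index) over a measurement cycle.

    Each argument holds one peak amplitude per heartbeat; this is a sketch
    of the ratio logic, not the original hardware implementation.
    """
    low_ratio = np.abs(low_band) / np.max(np.abs(unfiltered))
    high_ratio = np.abs(high_band) / np.max(np.abs(high_band))
    # Systole: first beat whose low-band ratio reaches 0.45
    systolic = int(np.argmax(low_ratio >= SYSTOLIC_RATIO))
    # Diastole: first beat after systole whose high-band ratio falls below 0.17
    after = np.where(high_ratio[systolic:] < DIASTOLIC_RATIO)[0]
    diastolic = systolic + int(after[0]) if after.size else len(high_band) - 1
    return systolic, diastolic

# Synthetic amplitude envelope rising then fading as the cuff deflates
beats = np.array([0.1, 0.3, 0.6, 0.9, 1.0, 0.8, 0.5, 0.2, 0.05, 0.01])
sys_i, dia_i = detect_events(beats, beats, beats)
```

On this envelope the systolic event fires at the first beat reaching 0.6 of the peak, and the diastolic event at the first later beat dropping under 0.17.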

  17. Physiological phenotyping of dementias using emotional sounds.

    PubMed

    Fletcher, Phillip D; Nicholas, Jennifer M; Shakespeare, Timothy J; Downey, Laura E; Golden, Hannah L; Agustus, Jennifer L; Clark, Camilla N; Mummery, Catherine J; Schott, Jonathan M; Crutch, Sebastian J; Warren, Jason D

    2015-06-01

    Emotional behavioral disturbances are hallmarks of many dementias but their pathophysiology is poorly understood. Here we addressed this issue using the paradigm of emotionally salient sounds. Pupil responses and affective valence ratings for nonverbal sounds of varying emotional salience were assessed in patients with behavioral variant frontotemporal dementia (bvFTD) (n = 14), semantic dementia (SD) (n = 10), progressive nonfluent aphasia (PNFA) (n = 12), and AD (n = 10) versus healthy age-matched individuals (n = 26). Referenced to healthy individuals, overall autonomic reactivity to sound was normal in Alzheimer's disease (AD) but reduced in other syndromes. Patients with bvFTD, SD, and AD showed altered coupling between pupillary and affective behavioral responses to emotionally salient sounds. Emotional sounds are a useful model system for analyzing how dementias affect the processing of salient environmental signals, with implications for defining pathophysiological mechanisms and novel biomarker development.

  18. Effects of environmental sounds on the guessability of animated graphic symbols.

    PubMed

    Harmon, Ashley C; Schlosser, Ralf W; Gygi, Brian; Shane, Howard C; Kong, Ying-Yee; Book, Lorraine; Macduff, Kelly; Hearn, Emilia

    2014-12-01

    Graphic symbols are a necessity for pre-literate children who use aided augmentative and alternative communication (AAC) systems (including non-electronic communication boards and speech generating devices), as well as for mobile technologies using AAC applications. Recently, developers of the Autism Language Program (ALP) Animated Graphics Set have added environmental sounds to animated symbols representing verbs in an attempt to enhance their iconicity. The purpose of this study was to examine the effects of environmental sounds (added to animated graphic symbols representing verbs) in terms of naming. Participants included 46 children with typical development between the ages of 3;0 to 3;11 (years;months). The participants were randomly allocated to a condition of symbols with environmental sounds or a condition without environmental sounds. Results indicated that environmental sounds significantly enhanced the naming accuracy of animated symbols for verbs. Implications in terms of symbol selection, symbol refinement, and future symbol development will be discussed.

  19. Wuhan Ionospheric Oblique Backscattering Sounding System and Its Applications—A Review

    PubMed Central

    Shi, Shuzhu; Yang, Guobin; Jiang, Chunhua; Zhang, Yuannong; Zhao, Zhengyu

    2017-01-01

    For decades, high-frequency (HF) radar has played an important role in sensing the Earth’s environment. Advances in radar technology are providing opportunities to significantly improve the performance of HF radar, and to introduce more applications. This paper presents a low-power, small-size, and multifunctional HF radar developed by the Ionospheric Laboratory of Wuhan University, referred to as the Wuhan Ionospheric Oblique Backscattering Sounding System (WIOBSS). Progress in the development of this radar is described in detail, including the basic principles of operation, the system configuration, the sounding waveforms, and the signal and data processing methods. Furthermore, its various remote sensing applications are briefly reviewed to show the good performance of this radar. Finally, some suggested solutions are given for further improvement of its performance. PMID:28629157

  20. Sound transmission through triple-panel structures lined with poroelastic materials

    NASA Astrophysics Data System (ADS)

    Liu, Yu

    2015-03-01

In this paper, previous theories on the prediction of sound transmission loss for a double-panel structure lined with poroelastic materials are extended to address the problem of a triple-panel structure. Six typical configurations are considered for a triple-panel structure, based on the method of coupling the porous layers to the facing panels, which critically determines the sound insulation performance of the system. The transfer matrix method is employed to solve the system by applying appropriate boundary conditions for these configurations. The transmission loss of the triple-panel structures in a diffuse sound field is calculated as a function of frequency and compared with that of corresponding double-panel structures. Generally, the triple-panel structure with poroelastic linings has superior acoustic performance to the double-panel counterpart, markedly in the mid-to-high frequency range and possibly at low frequencies, given an appropriate configuration; those with two air gaps in the structure exhibit the best overall performance over the entire frequency range. The poroelastic lining significantly lowers the cut-on frequency above which the triple-panel structure exhibits noticeably higher transmission loss. Compared with a double-panel structure, the wider range of system parameters for a triple-panel structure due to the additional partition provides more design space for tuning the sound insulation performance. Despite the increased structural complexity, the triple-panel structure lined with poroelastic materials has clear advantages in sound transmission loss without penalties in weight and volume, and is hence a promising replacement for the widely used double-panel sandwich structure.
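The transfer matrix method mentioned above can be sketched at normal incidence for limp panels and air gaps (no poroelastic layers, no diffuse-field averaging); the panel masses and gap depths below are arbitrary illustrative values, not the paper's configurations.

```python
import numpy as np

RHO, C = 1.21, 343.0          # air density (kg/m^3) and speed of sound (m/s)
Z0 = RHO * C                  # characteristic impedance of air

def air_layer(d, f):
    """2x2 transfer matrix of an air gap of depth d (m)."""
    k = 2 * np.pi * f / C
    return np.array([[np.cos(k * d), 1j * Z0 * np.sin(k * d)],
                     [1j * np.sin(k * d) / Z0, np.cos(k * d)]])

def limp_panel(m, f):
    """2x2 transfer matrix of an impervious limp panel, surface mass m (kg/m^2)."""
    return np.array([[1.0, 1j * 2 * np.pi * f * m], [0.0, 1.0]])

def transmission_loss(layers, f):
    """Normal-incidence TL (dB) of a chain of layer factories."""
    T = np.eye(2, dtype=complex)
    for layer in layers:
        T = T @ layer(f)
    denom = T[0, 0] + T[0, 1] / Z0 + T[1, 0] * Z0 + T[1, 1]
    return 20 * np.log10(abs(denom) / 2)

# Triple panel (3 x 2 kg/m^2, two 30 mm gaps) vs an equal-mass double panel
f = 1000.0
triple = [lambda f: limp_panel(2, f), lambda f: air_layer(0.03, f),
          lambda f: limp_panel(2, f), lambda f: air_layer(0.03, f),
          lambda f: limp_panel(2, f)]
double = [lambda f: limp_panel(3, f), lambda f: air_layer(0.06, f),
          lambda f: limp_panel(3, f)]
tl_triple = transmission_loss(triple, f)
tl_double = transmission_loss(double, f)
```

Even this bare air-gap model reproduces the qualitative finding: well above the mass-air-mass resonances, the triple panel outperforms the equal-mass double panel.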

  1. 3-D Sound for Virtual Reality and Multimedia

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Trejo, Leonard J. (Technical Monitor)

    2000-01-01

    Technology and applications for the rendering of virtual acoustic spaces are reviewed. Chapter 1 deals with acoustics and psychoacoustics. Chapters 2 and 3 cover cues to spatial hearing and review psychoacoustic literature. Chapter 4 covers signal processing and systems overviews of 3-D sound systems. Chapter 5 covers applications to computer workstations, communication systems, aeronautics and space, and sonic arts. Chapter 6 lists resources. This TM is a reprint of the 1994 book from Academic Press.

  2. Earth Observing System (EOS)/Advanced Microwave Sounding Unit-A (AMSU-A): Calibration management plan

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is the Calibration Management Plan for the Earth Observing System/Advanced Microwave Sounding Unit-A (AMSU-A). The plan defines calibration requirements, calibration equipment, and calibration methods for the AMSU-A, a 15 channel passive microwave radiometer that will be used for measuring global atmospheric temperature profiles from the EOS polar orbiting observatory. The AMSU-A system will also provide data to verify and augment that of the Atmospheric Infrared Sounder.

  3. Acoustic-tactile rendering of visual information

    NASA Astrophysics Data System (ADS)

    Silva, Pubudu Madhawa; Pappas, Thrasyvoulos N.; Atkins, Joshua; West, James E.; Hartmann, William M.

    2012-03-01

    In previous work, we have proposed a dynamic, interactive system for conveying visual information via hearing and touch. The system is implemented with a touch screen that allows the user to interrogate a two-dimensional (2-D) object layout by active finger scanning while listening to spatialized auditory feedback. Sound is used as the primary source of information for object localization and identification, while touch is used both for pointing and for kinesthetic feedback. Our previous work considered shape and size perception of simple objects via hearing and touch. The focus of this paper is on the perception of a 2-D layout of simple objects with identical size and shape. We consider the selection and rendition of sounds for object identification and localization. We rely on the head-related transfer function for rendering sound directionality, and consider variations of sound intensity and tempo as two alternative approaches for rendering proximity. Subjective experiments with visually-blocked subjects are used to evaluate the effectiveness of the proposed approaches. Our results indicate that intensity outperforms tempo as a proximity cue, and that the overall system for conveying a 2-D layout is quite promising.
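The two proximity-rendering alternatives compared in the study can be sketched as simple mappings from finger-to-object distance to a rendering parameter; the reference distance, tempo range, and exact function shapes are assumptions for illustration.

```python
def proximity_gain(distance_m, ref_m=0.1):
    """Intensity cue: amplitude gain from an inverse-distance law.

    ref_m is a hypothetical reference distance at which the gain is 1.
    """
    return ref_m / max(distance_m, ref_m)

def proximity_tempo(distance_m, base_hz=2.0, max_hz=10.0, range_m=0.5):
    """Tempo cue: sound repetition rate rises as the finger nears the object."""
    closeness = 1.0 - min(distance_m, range_m) / range_m
    return base_hz + (max_hz - base_hz) * closeness

gain_far, gain_near = proximity_gain(0.4), proximity_gain(0.1)
tempo_far, tempo_near = proximity_tempo(0.5), proximity_tempo(0.0)
```

Either mapping is monotone in distance; the study's subjective experiments found the intensity mapping to be the more effective proximity cue.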

  4. An alternative respiratory sounds classification system utilizing artificial neural networks.

    PubMed

    Oweis, Rami J; Abdulhay, Enas W; Khayal, Amer; Awad, Areen

    2015-01-01

Computerized lung sound analysis involves recording lung sound via an electronic device, followed by computer analysis and classification based on specific signal characteristics such as non-linearity and non-stationarity caused by air turbulence. An automatic analysis is necessary to avoid dependence on expert skills. This work revolves around exploiting autocorrelation in the feature extraction stage. All process stages were implemented in MATLAB. The classification process was performed comparatively using both the artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) toolboxes. The methods were applied to 10 different respiratory sounds for classification. The ANN was superior to the ANFIS system and returned superior performance parameters. Its accuracy, specificity, and sensitivity were 98.6%, 100%, and 97.8%, respectively. The obtained parameters showed superiority to many recent approaches. The proposed method is an efficient, fast tool for the intended purpose, as manifested in the performance parameters, specifically accuracy, specificity, and sensitivity. Furthermore, utilizing the autocorrelation function in feature extraction for such applications results in enhanced performance and avoids undesired computational complexity compared to other techniques.
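The autocorrelation-based feature extraction can be sketched in Python (the study used MATLAB); the lag count and normalization below are assumptions, not the authors' exact settings.

```python
import numpy as np

def autocorr_features(x, n_lags=20):
    """Normalized autocorrelation at the first n_lags lags as a feature vector.

    A sketch of the feature-extraction idea, not the authors' exact pipeline.
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    full = np.correlate(x, x, mode="full")
    ac = full[full.size // 2:]            # keep non-negative lags only
    return ac[1:n_lags + 1] / ac[0]       # normalize by zero-lag energy

# A periodic "wheeze-like" tone keeps high autocorrelation at its period
# (50 Hz at fs = 1000 -> period of 20 samples); white noise decays at once.
fs = 1000
t = np.arange(fs) / fs
tone_feats = autocorr_features(np.sin(2 * np.pi * 50 * t))
noise_feats = autocorr_features(np.random.default_rng(1).standard_normal(fs))
```

The resulting 20-element vectors separate periodic from turbulent sounds cheaply, which is the property the classifier stage exploits.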

  5. Full Spatial Resolution Infrared Sounding Application in the Preconvection Environment

    NASA Astrophysics Data System (ADS)

    Liu, C.; Liu, G.; Lin, T.

    2013-12-01

Advanced infrared (IR) sounders such as the Atmospheric Infrared Sounder (AIRS) and the Infrared Atmospheric Sounding Interferometer (IASI) provide atmospheric temperature and moisture profiles with high vertical resolution and high accuracy in preconvection environments. The derived atmospheric stability indices, such as convective available potential energy (CAPE) and lifted index (LI), from advanced IR soundings can provide critical information 1-6 h before the development of severe convective storms. Three convective storms are selected to evaluate the application of AIRS full spatial resolution soundings and the derived products to providing warning information in the preconvection environments. In the first case, the AIRS full spatial resolution soundings revealed local extremely high atmospheric instability 3 h ahead of the convection on the leading edge of a frontal system, while the second case demonstrates that the extremely high atmospheric instability is associated with the local development of a severe thunderstorm in the following hours. The third case is a local severe storm that occurred on 7-8 August 2010 in Zhou Qu, China, which caused more than 1400 deaths and left another 300 or more people missing. The AIRS full spatial resolution LI product shows the atmospheric instability 3.5 h before the storm genesis. The CAPE and LI from AIRS full spatial resolution and operational AIRS/AMSU soundings, along with Geostationary Operational Environmental Satellite (GOES) Sounder derived product image (DPI) products, were analyzed and compared. Case studies show that full spatial resolution AIRS retrievals provide more useful warning information in the preconvection environments for determining favorable locations for convective initiation (CI) than do the coarser spatial resolution operational soundings and lower spectral resolution GOES Sounder retrievals. 
The retrieved soundings are also tested in a regional WRF 3D-Var data assimilation system to evaluate their potential to assist the NWP model.
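The two stability indices can be illustrated with simplified formulas; the temperatures and layer thickness below are illustrative values, and a real computation lifts the parcel along dry/moist adiabats from the retrieved sounding rather than taking its 500 hPa temperature as given.

```python
def lifted_index(t_env_500c, t_parcel_500c):
    """LI = environmental minus lifted-parcel temperature at 500 hPa (deg C).

    Negative values indicate instability favorable for convection.
    """
    return t_env_500c - t_parcel_500c

def cape_estimate(tv_parcel, tv_env, dz=500.0, g=9.81):
    """Crude CAPE (J/kg) from parcel/environment virtual temperatures (K)
    on equal-height layers of thickness dz (m), summing only positively
    buoyant layers. A sketch, not an operational retrieval algorithm.
    """
    return sum(g * dz * (tp - te) / te
               for tp, te in zip(tv_parcel, tv_env) if tp > te)

# Illustrative numbers (not from the study): a parcel warmer than its
# environment at 500 hPa gives a negative, unstable LI.
li = lifted_index(-14.0, -8.0)
cape = cape_estimate([300.0, 302.0], [300.0, 300.0])
```

Large negative LI and large CAPE several hours ahead of storm genesis are exactly the signals the AIRS full spatial resolution products supplied in the three cases above.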

  6. Effects of Aircraft Noise and Sonic Booms on Domestic Animals and Wildlife: A Literature Synthesis

    DTIC Science & Technology

    1988-06-01

the digestive response of yearling wethers to the same sound types and intensities used in the two studies above: white noise and music presented...metabolizable energy of the ration and improved the apparent nutrient digestibilities. Sound intensity did not affect apparent digestibility coefficients. The...high digestibility coefficients for lambs exposed to intermittent sounds suggest that those types of auditory stimuli influenced the digestive system

  7. Inverse problem of radiofrequency sounding of ionosphere

    NASA Astrophysics Data System (ADS)

Velichko, E. N.; Grishentsev, A. Yu.; Korobeynikov, A. G.

    2016-01-01

    An algorithm for the solution of the inverse problem of vertical ionosphere sounding and a mathematical model of noise filtering are presented. An automated system for processing and analysis of spectrograms of vertical ionosphere sounding based on our algorithm is described. It is shown that the algorithm we suggest has a rather high efficiency. This is supported by the data obtained at the ionospheric stations of the so-called “AIS-M” type.

  8. IFLA General Conference, 1984. Management and Technology Division. Section on Information Technology and Joint Meeting of the Round Table Audiovisual Media, the International Association for Sound Archives, and the International Association for Music Libraries. Papers.

    ERIC Educational Resources Information Center

    International Federation of Library Associations, The Hague (Netherlands).

    Six papers on information technology, the development of information systems for Third World countries, handling of sound recordings, and library automation were presented at the 1984 IFLA conference. They include: (1) "Handling, Storage and Preservation of Sound Recordings under Tropical and Subtropical Climatic Conditions" (Dietrich…

  9. Challenges to the successful implementation of 3-D sound

    NASA Astrophysics Data System (ADS)

    Begault, Durand R.

    1991-11-01

    The major challenges for the successful implementation of 3-D audio systems involve minimizing reversals, intracranially heard sound, and localization error for listeners. Designers of 3-D audio systems are faced with additional challenges in data reduction and low-frequency response characteristics. The relationship of the head-related transfer function (HRTF) to these challenges is shown, along with some preliminary psychoacoustic results gathered at NASA-Ames.

  10. True-Triaxial Experimental Study of the Evolutionary Features of the Acoustic Emissions and Sounds of Rockburst Processes

    NASA Astrophysics Data System (ADS)

    Su, Guoshao; Shi, Yanjiong; Feng, Xiating; Jiang, Jianqing; Zhang, Jie; Jiang, Quan

    2018-02-01

    Rockbursts are markedly characterized by the ejection of rock fragments from host rocks at certain speeds. The rockburst process is always accompanied by acoustic signals that include acoustic emissions (AE) and sounds. A deep insight into the evolutionary features of AE and sound signals is important to improve the accuracy of rockburst prediction. To investigate the evolutionary features of AE and sound signals, rockburst tests on granite rock specimens under true-triaxial loading conditions were performed using an improved rockburst testing system, and the AE and sounds during rockburst development were recorded and analyzed. The results show that the evolutionary features of the AE and sound signals were obvious and similar. On the eve of a rockburst, a `quiescent period' could be observed in both the evolutionary process of the AE hits and the sound waveform. Furthermore, the time-dependent fractal dimensions of the AE hits and sound amplitude both showed a tendency to continuously decrease on the eve of the rockbursts. In addition, on the eve of the rockbursts, the main frequency of the AE and sound signals both showed decreasing trends, and the frequency spectrum distributions were both characterized by low amplitudes, wide frequency bands and multiple peak shapes. Thus, the evolutionary features of sound signals on the eve of rockbursts, as well as that of AE signals, can be used as beneficial information for rockburst prediction.

  11. A New Mechanism of Sound Generation in Songbirds

    NASA Astrophysics Data System (ADS)

    Goller, Franz; Larsen, Ole N.

    1997-12-01

    Our current understanding of the sound-generating mechanism in the songbird vocal organ, the syrinx, is based on indirect evidence and theoretical treatments. The classical avian model of sound production postulates that the medial tympaniform membranes (MTM) are the principal sound generators. We tested the role of the MTM in sound generation and studied the songbird syrinx more directly by filming it endoscopically. After we surgically incapacitated the MTM as a vibratory source, zebra finches and cardinals were not only able to vocalize, but sang nearly normal song. This result shows clearly that the MTM are not the principal sound source. The endoscopic images of the intact songbird syrinx during spontaneous and brain stimulation-induced vocalizations illustrate the dynamics of syringeal reconfiguration before phonation and suggest a different model for sound production. Phonation is initiated by rostrad movement and stretching of the syrinx. At the same time, the syrinx is closed through movement of two soft tissue masses, the medial and lateral labia, into the bronchial lumen. Sound production always is accompanied by vibratory motions of both labia, indicating that these vibrations may be the sound source. However, because of the low temporal resolution of the imaging system, the frequency and phase of labial vibrations could not be assessed in relation to that of the generated sound. Nevertheless, in contrast to the previous model, these observations show that both labia contribute to aperture control and strongly suggest that they play an important role as principal sound generators.

  12. Optimization of Sound Absorbers Number and Placement in an Enclosed Room by Finite Element Simulation

    NASA Astrophysics Data System (ADS)

    Lau, S. F.; Zainulabidin, M. H.; Yahya, M. N.; Zaman, I.; Azmir, N. A.; Madlan, M. A.; Ismon, M.; Kasron, M. Z.; Ismail, A. E.

    2017-10-01

    Giving a room proper acoustic treatment is both art and science. Acoustic design brings comfort to the built environment and reduces noise levels through the use of sound absorbers. Rooms need acoustic treatment with absorbers to reduce reverberant sound, but absorbers are expensive to purchase and install, and there is no established method for determining the optimum number and placement of them. A room treated with too many absorbers wastes money, while one treated with too few remains improperly controlled. This study aims to determine the number of sound absorbers needed and their optimum placement to reduce the overall sound pressure level in a specified room, using ANSYS APDL software. The required absorber area is found to be 11 m² using the Sabine equation, and different sets of absorbers, each with the same total area, are applied to the walls to investigate the best configuration. All three sets (a single absorber, 11 absorbers, and 44 absorbers) successfully treated the room by reducing the overall sound pressure level. The greatest reduction, 24.2 dB, was achieved with 44 absorbers evenly distributed around the walls; the least effective configuration was the single absorber, which reduced the overall sound pressure level by 18.4 dB.
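
    The absorber area in this abstract comes from the Sabine relation between room volume, total absorption, and reverberation time. A minimal sketch of that calculation follows; the room volume, target RT60, and absorption coefficient below are hypothetical placeholders, not values from the paper:

```python
def sabine_rt60(volume_m3, absorption_sabins_m2):
    """Metric Sabine equation: RT60 = 0.161 * V / A."""
    return 0.161 * volume_m3 / absorption_sabins_m2

def required_absorber_area(volume_m3, target_rt60_s, alpha=1.0):
    """Absorber area needed to reach a target RT60, for a material with
    absorption coefficient alpha (pre-existing room absorption neglected)."""
    return 0.161 * volume_m3 / (target_rt60_s * alpha)

# Hypothetical 100 m^3 room, target RT60 of 0.8 s, absorber with alpha = 0.9:
area = required_absorber_area(100.0, 0.8, alpha=0.9)
print(round(area, 1))   # m^2 of absorber material
```

    The paper's finite element simulation then addresses what Sabine's statistical formula cannot: where on the walls that area is best distributed.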

  13. NASA Glenn's Acoustical Testing Laboratory Awarded Accreditation by the National Voluntary Laboratory Accreditation Program

    NASA Technical Reports Server (NTRS)

    Akers, James C.; Cooper, Beth A.

    2004-01-01

    NASA Glenn Research Center's Acoustical Testing Laboratory (ATL) provides a comprehensive array of acoustical testing services, including sound pressure level, sound intensity level, and sound-power-level testing per International Organization for Standardization (ISO) 3744. Since its establishment in September 2000, the ATL has provided acoustic emission testing and noise control services for a variety of customers, particularly microgravity space flight hardware that must meet International Space Station acoustic emission requirements. The ATL consists of a 23- by 27- by 20-ft (height) convertible hemi/anechoic test chamber and a separate sound-attenuating test support enclosure. The ATL employs a personal-computer-based data acquisition system that provides up to 26 channels of simultaneous data acquisition with real-time analysis (ref. 4). Specialized diagnostic tools, including a scanning sound-intensity system, allow the ATL's technical staff to support its clients' aggressive low-noise design efforts to meet the space station's acoustic emission requirement. From its inception, the ATL has pursued the goal of developing a comprehensive ISO 17025-compliant quality program that would incorporate Glenn's existing ISO 9000 quality system policies as well as ATL-specific technical policies and procedures. In March 2003, the ATL quality program was awarded accreditation by the National Voluntary Laboratory Accreditation Program (NVLAP) for sound-power-level testing in accordance with ISO 3744. The NVLAP program is administered by the National Institute of Standards and Technology (NIST) of the U.S. Department of Commerce and provides third-party accreditation for testing and calibration laboratories. There are currently 24 NVLAP-accredited acoustical testing laboratories in the United States.
NVLAP accreditation covering one or more specific testing procedures conducted in accordance with established test standards is awarded upon successful completion of an intensive onsite assessment that includes proficiency testing and documentation review. The ATL NVLAP accreditation currently applies specifically to its ISO 3744 sound-power-level determination procedure (see the photograph) and supporting ISO 17025 quality system, although all ATL operations are conducted in accordance with its quality system. The ATL staff is currently developing additional procedures to adapt this quality system to the testing of space flight hardware in accordance with International Space Station acoustic emission requirements.

  14. The influence of flooring on environmental stressors: a study of three flooring materials in a hospital.

    PubMed

    Harris, Debra D

    2015-01-01

    Three flooring materials, terrazzo, rubber, and carpet tile, in patient unit corridors were compared for sound absorption, comfort, light reflectance, employee perceptions and preferences, and patient satisfaction. Environmental stressors, such as noise and ergonomic factors, affect healthcare workers and patients, contributing to increased fatigue, anxiety, and stress, decreased productivity, and reduced patient safety and satisfaction. A longitudinal comparative cohort study of the three flooring types assessed sound levels, healthcare worker responses, and patient Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) ratings over 42 weeks. A linear mixed model analysis was conducted to determine significant differences between the means of participant responses and objective sound meter data during all three phases of the study. A significant difference in equivalent continuous sound levels was found between flooring types. Carpet tile performed best for sound attenuation by absorption, reducing sound levels by 3.14 dBA. Preferences for flooring materials changed over the course of the study. The HCAHPS ratings aligned with the sound meter data, showing that patients perceived noise levels to be lower with carpet tile, improving patient satisfaction ratings. Perceptions of healthcare staff and patients were aligned with the sound meter data. Carpet tile provides sound absorption that lowers sound levels and influences occupants' perceptions of environmental factors contributing to the quality of the indoor environment. Flooring that provides comfort underfoot, easy cleanability, and sound absorption influences healthcare worker job satisfaction and patients' satisfaction with their experience. © The Author(s) 2015.

  15. L-type calcium channels refine the neural population code of sound level

    PubMed Central

    Grimsley, Calum Alex; Green, David Brian

    2016-01-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. PMID:27605536

  16. Difference in precedence effect between children and adults signifies development of sound localization abilities in complex listening tasks

    PubMed Central

    Litovsky, Ruth Y.; Godar, Shelly P.

    2010-01-01

    The precedence effect refers to the fact that humans are able to localize sound in reverberant environments, because the auditory system assigns greater weight to the direct sound (lead) than the later-arriving sound (lag). In this study, absolute sound localization was studied for single source stimuli and for dual source lead-lag stimuli in 4–5 year old children and adults. Lead-lag delays ranged from 5–100 ms. Testing was conducted in free field, with pink noise bursts emitted from loudspeakers positioned on a horizontal arc in the frontal field. Listeners indicated how many sounds were heard and the perceived location of the first- and second-heard sounds. Results suggest that at short delays (up to 10 ms), the lead dominates sound localization strongly at both ages, and localization errors are similar to those with single-source stimuli. At longer delays errors can be large, stemming from over-integration of the lead and lag, interchanging of perceived locations of the first-heard and second-heard sounds due to temporal order confusion, and dominance of the lead over the lag. The errors are greater for children than adults. Results are discussed in the context of maturation of auditory and non-auditory factors. PMID:20968369

  17. Acoustic performance of dual-electrode electrostatic sound generators based on CVD graphene on polyimide film.

    PubMed

    Lee, Kyoung-Ryul; Jang, Sung Hwan; Jung, Inhwa

    2018-08-10

    We investigated the acoustic performance of electrostatic sound-generating devices consisting of bi-layer graphene on polyimide film. The total sound pressure level (SPL) of the sound generated by the devices was measured as a function of source frequency by sweeping, and frequency spectra were measured at 1/3 octave band frequencies. The relationship between various operating conditions and total SPL was determined. In addition, the effects of changing the voltage level, adding a DC offset, and using two pairs of electrodes were evaluated. Notably, operation with two pairs of electrodes improved sound generation by about 10 dB over the entire frequency range compared with conventional operation. As for sound-generating capability, the total SPL was 70 dBA at 4 kHz when an AC voltage of 100 Vpp was applied with a DC offset of 100 V. The acoustic characteristics differed from those of other types of graphene-based sound generators, such as graphene thermoacoustic devices and graphene polyvinylidene fluoride devices. The effects of electrode diameter and the distance between electrodes were also studied, and we found that diameter greatly influenced the frequency response. We anticipate that the design information provided in this paper, in addition to describing key parameters of electrostatic sound-generating devices, will facilitate the commercial development of electrostatic sound-generating systems.
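
    When band levels are measured at 1/3-octave frequencies as described above, the total SPL is the energetic (logarithmic) sum of the bands. A small sketch of that standard summation; the band levels below are made-up illustration values, not measurements from the paper:

```python
import math

def total_spl(band_levels_db):
    """Energetic sum of band sound pressure levels:
    L_total = 10 * log10(sum of 10^(L_i / 10))."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in band_levels_db))

# Two equal 70 dB bands combine to +3 dB, not +70:
print(round(total_spl([70.0, 70.0]), 1))   # ~73.0
```

    This is why the quietest bands contribute almost nothing to the total: a band 10 dB below the loudest adds well under half a decibel.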

  18. The Incongruency Advantage for Environmental Sounds Presented in Natural Auditory Scenes

    PubMed Central

    Gygi, Brian; Shafiro, Valeriy

    2011-01-01

    The effect of context on the identification of common environmental sounds (e.g., dogs barking or cars honking) was tested by embedding them in familiar auditory background scenes (street ambience, restaurants). Initial results with subjects trained on both the scenes and the sounds to be identified showed a significant advantage of about 5 percentage points better accuracy for sounds that were contextually incongruous with the background scene (e.g., a rooster crowing in a hospital). Further studies with naïve (untrained) listeners showed that this Incongruency Advantage (IA) is level-dependent: there is no advantage for incongruent sounds lower than a Sound/Scene ratio (So/Sc) of −7.5 dB, but there is about 5 percentage points better accuracy for sounds with greater So/Sc. Testing a new group of trained listeners on a larger corpus of sounds and scenes showed that the effect is robust and not confined to specific stimulus set. Modeling using spectral-temporal measures showed that neither analyses based on acoustic features, nor semantic assessments of sound-scene congruency can account for this difference, indicating the Incongruency Advantage is a complex effect, possibly arising from the sensitivity of the auditory system to new and unexpected events, under particular listening conditions. PMID:21355664

  19. Recording and Analysis of Bowel Sounds.

    PubMed

    Zaborski, Daniel; Halczak, Miroslaw; Grzesiak, Wilhelm; Modrzejewski, Andrzej

    2015-01-01

    The aim of this study was to construct an electronic bowel sound recording system and determine its usefulness for the diagnosis of appendicitis, mechanical ileus, and diffuse peritonitis. A group of 67 subjects aged 17 to 88 years, including 15 controls, was examined. Bowel sounds were recorded using an electret microphone placed on the right side of the hypogastrium and connected to a laptop computer. The method of adjustable grids (converted into binary matrices) was used for bowel sound analysis. Significantly fewer (p ≤ 0.05) sounds were found in the mechanical ileus (1004.4) and diffuse peritonitis (466.3) groups than in the controls (2179.3). After superimposing adjustable binary matrices on combined sounds (interval between sounds <0.01 s), significant relationships (p ≤ 0.05) were found between particular positions in the matrices (row-column) and the patient groups. These included the A1_T1 and A1_T2 positions and mechanical ileus, as well as the A1_T2 and A1_T4 positions and appendicitis. For diffuse peritonitis, significant positions were A5_T4 and A1_T4. Differences were noted in the number of sounds and binary matrices in the groups of patients with acute abdominal diseases. Certain features of bowel sounds characteristic of individual abdominal diseases were indicated. BS: bowel sound; APP: appendicitis; IL: mechanical ileus; PE: diffuse peritonitis; CG: control group; NSI: number of sound impulses; NCI: number of combined sound impulses; MBS: mean bit-similarity; TMIN: minimum time between impulses; TMAX: maximum time between impulses; TMEAN: mean time between impulses. Zaborski D, Halczak M, Grzesiak W, Modrzejewski A. Recording and Analysis of Bowel Sounds. Euroasian J Hepato-Gastroenterol 2015;5(2):67-73.

  20. Design Definition Study Report. Full Crew Interaction Simulator-Laboratory Model (FCIS-LM) (Device X17B7). Volume I. Problem Analysis.

    DTIC Science & Technology

    1978-06-01

    and Sound Levels. Tank sound characteristics can be categorized by four areas of tank operation. These are: engine starting and running, mobility or... the use of the ballistic computer system. The indirect sighting and fire control system consists of the elevation quadrant M13A3, a control light source... in low ambient temperatures. No controls or indicators are provided for the engine air intake system. The exhaust system has four engine

  1. Instrumentation for detailed bridge-scour measurements

    USGS Publications Warehouse

    Landers, Mark N.; Mueller, David S.; Trent, Roy E.; ,

    1993-01-01

    A portable instrumentation system is being developed to obtain channel bathymetry during floods for detailed bridge-scour measurements. Portable scour measuring systems have four components: sounding instrument, horizontal positioning instrument, deployment mechanisms, and data storage device. The sounding instrument will be a digital fathometer. Horizontal position will be measured using a range-azimuth based hydrographic survey system. The deployment mechanism designed for this system is a remote-controlled boat using a small waterplane area, twin-hull design. An on-board computer and radio will monitor the vessel instrumentation, record measured data, and telemeter data to shore.

  2. Active noise control for infant incubators.

    PubMed

    Yu, Xun; Gujjula, Shruthi; Kuo, Sen M

    2009-01-01

    This paper presents an active noise control (ANC) system for infant incubators. Experimental results show that global noise reduction can be achieved for infant incubator ANC systems. An audio-integration algorithm is presented to introduce a healthy audio (intrauterine) sound into the ANC system to mask the residual noise and soothe the infant. A carbon nanotube-based transparent thin-film speaker is also introduced as the actuator that generates the destructive secondary sound; it significantly saves space in the congested incubator without blocking the view of doctors and nurses.

  3. Flip-Flop Recovery System for sounding rocket payloads

    NASA Technical Reports Server (NTRS)

    Flores, A., Jr.

    1986-01-01

    The design, development, and testing of the Flip-Flop Recovery System, which protects sensitive forward-mounted instruments from ground impact during sounding rocket payload recovery operations, are discussed. The system was originally developed to reduce the impact damage to the expensive gold-plated forward-mounted spectrometers in two existing Taurus-Orion rocket payloads. The concept of the recovery system is simple: the payload is flipped over end-for-end at a predetermined time just after parachute deployment, thus minimizing the risk of damage to the sensitive forward portion of the payload from ground impact.

  4. A combined analytical and numerical analysis of the flow-acoustic coupling in a cavity-pipe system

    NASA Astrophysics Data System (ADS)

    Langthjem, Mikael A.; Nakano, Masami

    2018-05-01

    The generation of sound by flow through a closed, cylindrical cavity (expansion chamber) accommodated with a long tailpipe is investigated analytically and numerically. The sound generation is due to self-sustained flow oscillations in the cavity. These oscillations may, in turn, generate standing (resonant) acoustic waves in the tailpipe. The main interest of the paper is in the interaction between these two sound sources. An analytical, approximate solution of the acoustic part of the problem is obtained via the method of matched asymptotic expansions. The sound-generating flow is represented by a discrete vortex method, based on axisymmetric vortex rings. It is demonstrated through numerical examples that inclusion of acoustic feedback from the tailpipe is essential for a good representation of the sound characteristics.

  5. OPO lidar sounding of trace atmospheric gases in the 3-4 μm spectral range

    NASA Astrophysics Data System (ADS)

    Romanovskii, Oleg A.; Sadovnikov, Sergey A.; Kharchenko, Olga V.; Yakovlev, Semen V.

    2018-04-01

    The applicability of a KTA crystal-based laser system with optical parametric oscillator (OPO) generation to lidar sounding of the atmosphere in the 3-4 μm spectral range is studied in this work. A technique developed for lidar sounding of trace atmospheric gases (TAGs) is based on the differential absorption lidar (DIAL) method and differential optical absorption spectroscopy (DOAS). The DIAL-DOAS technique is tested to estimate its efficiency for lidar sounding of atmospheric trace gases. Numerical simulation shows that a KTA-based OPO laser is a promising radiation source for remote DIAL-DOAS sounding of the TAGs under study along surface tropospheric paths. The possibility of using a PD38-03-PR photodiode for DIAL gas analysis of the atmosphere is also shown.
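
    The DIAL method mentioned above retrieves a gas concentration from the ratio of backscatter returns at an "on-line" wavelength (absorbed by the target gas) and a nearby "off-line" wavelength. A sketch of the standard two-wavelength DIAL equation; the cross-section, range gate, and power values below are synthetic illustration numbers, not from this work:

```python
import math

def dial_number_density(p_on_r1, p_on_r2, p_off_r1, p_off_r2,
                        delta_sigma_m2, delta_r_m):
    """Mean number density (m^-3) in the range cell [r1, r2]:
    N = ln[(P_on(r1) * P_off(r2)) / (P_on(r2) * P_off(r1))] / (2 * dsigma * dr)."""
    return math.log((p_on_r1 * p_off_r2) / (p_on_r2 * p_off_r1)) / \
        (2.0 * delta_sigma_m2 * delta_r_m)

# Synthetic check: build returns from a known density, then invert them.
n_true = 5.0e21                  # m^-3, assumed gas density
dsigma = 1.0e-24                 # m^2, differential absorption cross-section
dr = 100.0                       # m, range-cell length
p_on_r2 = math.exp(-2.0 * n_true * dsigma * dr)  # extra on-line extinction
print(dial_number_density(1.0, p_on_r2, 1.0, 1.0, dsigma, dr))
```

    Because the geometric and backscatter terms are common to both wavelengths, they cancel in the ratio, which is what makes the method self-calibrating.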

  6. A low-cost, portable, high-throughput wireless sensor system for phonocardiography applications.

    PubMed

    Sa-Ngasoongsong, Akkarapol; Kunthong, Jakkrit; Sarangan, Venkatesh; Cai, Xinwei; Bukkapatnam, Satish T S

    2012-01-01

    This paper presents the design and testing of a wireless sensor system developed using a Microchip PICDEM developer kit to acquire and monitor human heart sounds for phonocardiography applications. This system can serve as a cost-effective option to the recent developments in wireless phonocardiography sensors that have primarily focused on Bluetooth technology. This wireless sensor system has been designed and developed in-house using off-the-shelf components and open source software for remote and mobile applications. The small form factor (3.75 cm × 5 cm × 1 cm), high throughput (6,000 Hz data streaming rate), and low cost ($13 per unit for a 1,000 unit batch) of this wireless sensor system make it particularly attractive for phonocardiography and other sensing applications. The experimental results of sensor signal analysis using several signal characterization techniques suggest that this wireless sensor system can capture both fundamental heart sounds (S1 and S2), and is also capable of capturing abnormal heart sounds (S3 and S4) and heart murmurs without aliasing. The results of a denoising application using Wavelet Transform show that the undesirable noises of sensor signals in the surrounding environment can be reduced dramatically. The exercising experiment results also show that this proposed wireless PCG system can capture heart sounds over different heart conditions simulated by varying heart rates of six subjects over a range of 60-180 Hz through exercise testing.

  8. Sound localization by echolocating bats

    NASA Astrophysics Data System (ADS)

    Aytekin, Murat

    Echolocating bats emit ultrasonic vocalizations and listen to echoes reflected back from objects in the path of the sound beam to build a spatial representation of their surroundings. Important to understanding the representation of space through echolocation are detailed studies of the cues used for localization, the sonar emission patterns and how this information is assembled. This thesis includes three studies, one on the directional properties of the sonar receiver, one on the directional properties of the sonar transmitter, and a model that demonstrates the role of action in building a representation of auditory space. The general importance of this work to a broader understanding of spatial localization is discussed. Investigations of the directional properties of the sonar receiver reveal that interaural level difference and monaural spectral notch cues are both dependent on sound source azimuth and elevation. This redundancy allows flexibility that an echolocating bat may need when coping with complex computational demands for sound localization. Using a novel method to measure bat sonar emission patterns from freely behaving bats, I show that the sonar beam shape varies between vocalizations. Consequently, the auditory system of a bat may need to adapt its computations to accurately localize objects using changing acoustic inputs. Extra-auditory signals that carry information about pinna position and beam shape are required for auditory localization of sound sources. The auditory system must learn associations between extra-auditory signals and acoustic spatial cues. Furthermore, the auditory system must adapt to changes in acoustic input that occur with changes in pinna position and vocalization parameters. These demands on the nervous system suggest that sound localization is achieved through the interaction of behavioral control and acoustic inputs. 
A sensorimotor model demonstrates how an organism can learn space through auditory-motor contingencies. The model also reveals how different aspects of sound localization, such as experience-dependent acquisition, adaptation, and extra-auditory influences, can be brought together under a comprehensive framework. This thesis presents a foundation for understanding the representation of auditory space that builds upon acoustic cues, motor control, and learning dynamic associations between action and auditory inputs.

  9. Tinnitus retraining therapy: a different view on tinnitus.

    PubMed

    Jastreboff, Pawel J; Jastreboff, Margaret M

    2006-01-01

    Tinnitus retraining therapy (TRT) is a method for treating tinnitus and decreased sound tolerance, based on the neurophysiological model of tinnitus. This model postulates involvement of the limbic and autonomic nervous systems in all cases of clinically significant tinnitus and points out the importance of both conscious and subconscious connections, which are governed by principles of conditioned reflexes. The treatments for tinnitus and misophonia are based on the concept of extinction of these reflexes, labeled as habituation. TRT aims at inducing changes in the mechanisms responsible for transferring signal (i.e., tinnitus, or external sound in the case of misophonia) from the auditory system to the limbic and autonomic nervous systems, and through this, remove signal-induced reactions without attempting to directly attenuate the tinnitus source or tinnitus/misophonia-evoked reactions. As such, TRT is effective for any type of tinnitus regardless of its etiology. TRT consists of: (1) counseling based on the neurophysiological model of tinnitus, and (2) sound therapy (with or without instrumentation). The main role of counseling is to reclassify tinnitus into the category of neutral stimuli. The role of sound therapy is to decrease the strength of the tinnitus signal. It is crucial to assess and treat tinnitus, decreased sound tolerance, and hearing loss simultaneously. Results from various groups have shown that TRT can be an effective method of treatment. Copyright (c) 2006 S. Karger AG, Basel.

  10. Subjective evaluation and electroacoustic theoretical validation of a new approach to audio upmixing

    NASA Astrophysics Data System (ADS)

    Usher, John S.

    Audio signal processing systems for converting two-channel (stereo) recordings to four or five channels are increasingly relevant. These audio upmixers can be used with conventional stereo sound recordings and reproduced with multichannel home theatre or automotive loudspeaker audio systems to create a more engaging and natural-sounding listening experience. This dissertation discusses existing approaches to audio upmixing for recordings of musical performances and presents specific design criteria for a system to enhance spatial sound quality. A new upmixing system is proposed and evaluated according to these criteria and a theoretical model for its behavior is validated using empirical measurements. The new system removes short-term correlated components from two electronic audio signals using a pair of adaptive filters, updated according to a frequency-domain implementation of the normalized least-mean-square algorithm. The major difference between the new system and all extant audio upmixers is that unsupervised time-alignment of the input signals (typically, by up to +/-10 ms) as a function of frequency (typically, using a 1024-band equalizer) is accomplished by the non-minimum-phase adaptive filter. Two new signals are created from the weighted difference of the inputs, and are then radiated with two loudspeakers behind the listener. According to the consensus in the literature on the effect of interaural correlation on auditory image formation, the self-orthogonalizing properties of the algorithm ensure minimal distortion of the frontal source imagery and natural-sounding, enveloping reverberance (ambiance) imagery. Performance evaluation of the new upmix system was accomplished in two ways: Firstly, using empirical electroacoustic measurements which validate a theoretical model of the system; and secondly, with formal listening tests which investigated auditory spatial imagery with a graphical mapping tool and a preference experiment.
Both electroacoustic and subjective methods investigated system performance with a variety of test stimuli for solo musical performances reproduced using a loudspeaker in an orchestral concert-hall and recorded using different microphone techniques. The objective and subjective evaluations combined with a comparative study with two commercial systems demonstrate that the proposed system provides a new, computationally practical, high sound quality solution to upmixing.
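
    The adaptive decorrelation step described above can be sketched compactly. The following is a minimal time-domain NLMS illustration (the dissertation uses a frequency-domain, non-minimum-phase implementation, which this toy version does not attempt); signal lengths, tap counts, and the delayed-copy test data are arbitrary choices:

```python
import numpy as np

def nlms_decorrelate(x, d, n_taps=32, mu=0.5, eps=1e-8):
    """Adaptive filter: predict d from x with NLMS; the residual e = d - y
    is the component of d uncorrelated with x (the 'ambience' estimate)."""
    w = np.zeros(n_taps)
    e = np.zeros(len(d))
    for n in range(n_taps, len(d)):
        u = x[n - n_taps:n][::-1]           # most recent samples first
        y = w @ u                           # filter output (correlated part)
        e[n] = d[n] - y                     # residual (decorrelated part)
        w += mu * e[n] * u / (u @ u + eps)  # normalized LMS update
    return e, w

# toy check: d is a delayed copy of x plus independent noise
rng = np.random.default_rng(0)
x = rng.standard_normal(20000)
d = np.roll(x, 3) + 0.1 * rng.standard_normal(20000)
e, w = nlms_decorrelate(x, d)
# after convergence the residual power approaches the added-noise power
print(np.mean(e[-5000:] ** 2))
```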

  11. A classification of marked hijaiyah letters' pronunciation using hidden Markov model

    NASA Astrophysics Data System (ADS)

    Wisesty, Untari N.; Mubarok, M. Syahrul; Adiwijaya

    2017-08-01

    Hijaiyah letters are the 28 letters that form the words of Al Qur'an; they symbolize the consonant sounds, while the vowel sounds are symbolized by harakat (marks). A speech recognition system processes a sound signal into data that a computer can recognize. Building such a system requires several stages, i.e., feature extraction and classification. In this research, LPC and MFCC feature extraction, K-means vector quantization, and hidden Markov model classification are used. The data consist of the 28 letters and 6 harakat, giving 168 classes in total. After several tests, it can be concluded that the system recognizes the pronunciation patterns of marked hijaiyah letters very well on the training data, with a highest accuracy of 96.1% using LPC features and 94% using MFCC. When the test set is used, however, the accuracy drops to 41%.
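
    The vector quantization stage of the extraction-quantization-classification pipeline above can be sketched with a plain k-means codebook in NumPy; the 13-dimensional features and two-cluster demo data below are placeholders, not the paper's LPC/MFCC features:

```python
import numpy as np

def train_codebook(features, k=8, n_iter=20, seed=0):
    """Plain k-means: learn k codewords from feature vectors (rows)."""
    rng = np.random.default_rng(seed)
    codebook = features[rng.choice(len(features), k, replace=False)]
    for _ in range(n_iter):
        # assign each vector to its nearest codeword (Euclidean)
        dist = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = features[labels == j].mean(axis=0)
    return codebook

def quantize(features, codebook):
    """Map each feature vector to its nearest codeword index, producing
    the discrete observation sequence a discrete HMM consumes."""
    dist = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return dist.argmin(axis=1)

# toy demo: two well-separated clusters quantize to distinct symbols
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 0.1, (50, 13)), rng.normal(5, 0.1, (50, 13))])
cb = train_codebook(feats, k=2)
symbols = quantize(feats, cb)
print(symbols[0], symbols[-1])
```

    In the full system the symbol sequence would then be scored against one discrete HMM per class (168 here), choosing the class with the highest likelihood.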

  12. General analytical approach for sound transmission loss analysis through a thick metamaterial plate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oudich, Mourad; Zhou, Xiaoming; Badreddine Assouar, M., E-mail: Badreddine.Assouar@univ-lorraine.fr

    We report theoretically and numerically on the sound transmission loss performance through a thick plate-type acoustic metamaterial made of spring-mass resonators attached to the surface of a homogeneous elastic plate. Two general analytical approaches based on plane wave expansion were developed to calculate both the sound transmission loss through the metamaterial plate (thick and thin) and its band structure. The first one can be applied to thick plate systems to study the sound transmission for any normal or oblique incident sound pressure. The second approach gives the metamaterial dispersion behavior to describe the vibrational motions of the plate, which helps to understand the physics behind sound radiation through air by the structure. Computed results show that high sound transmission loss up to 72 dB at 2 kHz is reached with a thick metamaterial plate while only 23 dB can be obtained for a simple homogeneous plate with the same thickness. Such plate-type acoustic metamaterial can be a very effective solution for high performance sound insulation and structural vibration shielding in the very low-frequency range.
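
    The homogeneous-plate baseline the abstract compares against can be estimated from the classical normal-incidence mass law (a textbook limp-panel formula, not the paper's plane-wave-expansion model); the surface density used below is an illustrative assumption, not the plate from the paper:

```python
import math

def mass_law_tl(f_hz, surface_density, rho0=1.21, c0=343.0):
    """Normal-incidence mass law for a limp homogeneous panel:
    TL ~= 20*log10(pi * f * m'' / (rho0 * c0)) dB,
    with m'' the surface density (kg/m^2) and rho0*c0 the air impedance."""
    return 20 * math.log10(math.pi * f_hz * surface_density / (rho0 * c0))

# e.g. a hypothetical 40 kg/m^2 plate at 2 kHz
print(round(mass_law_tl(2000, 40.0), 1))  # → 55.6
```

    The mass law doubles roughly 6 dB per octave or per doubling of mass; resonant metamaterial plates beat it only in the band around the resonators' tuning frequency.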

  13. Evaluation strategy : Puget Sound regional fare card : FY01 earmark evaluation

    DOT National Transportation Integrated Search

    2003-06-24

    King County Metro Transit is the lead agency responsible for implementing the Central Puget Sound Regional Fare Coordination Project (RFC Project). The project features a smart card technology that will support and link the fare collection systems of...

  14. Evaluation of Diesel Engine Performance with Intake and Exhaust System Throttling : Volume 2. Appendix 1.

    DOT National Transportation Integrated Search

    1975-11-01

    The appendix to the preceding volume presents the data for the subject diesel engine noise study, including an engine sound power level analysis and sound spectra showing the effect of intake and exhaust restrictions.

  15. Causal feedforward control of a stochastically excited fuselage structure with active sidewall panel.

    PubMed

    Misol, Malte; Haase, Thomas; Monner, Hans Peter; Sinapius, Michael

    2014-10-01

    This paper provides experimental results of an aircraft-relevant double panel structure mounted in a sound transmission loss facility. The primary structure of the double panel system is excited either by a stochastic point force or by a diffuse sound field synthesized in the reverberation room of the transmission loss facility. The secondary structure, which is connected to the frames of the primary structure, is augmented by actuators and sensors implementing an active feedforward control system. Special emphasis is placed on the causality of the active feedforward control system and its implications on the disturbance rejection at the error sensors. The coherence of the sensor signals is analyzed for the two different disturbance excitations. Experimental results are presented regarding the causality, coherence, and disturbance rejection of the active feedforward control system. Furthermore, the sound transmission loss of the double panel system is evaluated for different configurations of the active system. A principal result of this work is the evidence that it is possible to strongly influence the transmission of stochastic disturbance sources through double panel configurations by means of an active feedforward control system.

  16. Digital servo control of random sound test excitation. [in reverberant acoustic chamber

    NASA Technical Reports Server (NTRS)

    Nakich, R. B. (Inventor)

    1974-01-01

    A digital servocontrol system for random noise excitation of a test object in a reverberant acoustic chamber employs a plurality of sensors spaced in the sound field to produce signals in separate channels which are decorrelated and averaged. The average signal is divided into a plurality of adjacent frequency bands cyclically sampled by a time division multiplex system, converted into digital form, and compared to a predetermined spectrum value stored in digital form. The results of the comparisons are used to control a time-shared up-down counter to develop gain control signals for the respective frequency bands in the spectrum of random sound energy picked up by the microphones.
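
    The per-band up-down counter logic described above can be sketched as a toy control step; the band levels, tolerance, and step size below are invented for illustration:

```python
def servo_step(measured_db, target_db, counters, step=1, tol=0.5):
    """One multiplex cycle: for each frequency band, compare the averaged
    measured level with the stored reference spectrum and bump an up-down
    counter that sets that band's drive gain."""
    for i, (m, t) in enumerate(zip(measured_db, target_db)):
        if m < t - tol:
            counters[i] += step   # band too quiet: raise gain
        elif m > t + tol:
            counters[i] -= step   # band too loud: lower gain
    return counters

counters = [0, 0, 0]
servo_step([90.0, 95.0, 100.2], [100.0, 100.0, 100.0], counters)
print(counters)  # → [1, 1, 0]
```

    Repeating the cycle drives every band of the averaged microphone signal toward the stored spectrum, which is the essence of the time-shared counter in the patent.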

  17. Aircraft laser sensing of sound velocity in water - Brillouin scattering

    NASA Technical Reports Server (NTRS)

    Hickman, G. D.; Harding, John M.; Carnes, Michael; Pressman, AL; Kattawar, George W.; Fry, Edward S.

    1991-01-01

    A real-time data source for sound speed in the upper 100 m has been proposed for exploratory development. This data source is planned to be generated via a ship- or aircraft-mounted optical pulsed laser using the spontaneous Brillouin scattering technique. The system should be capable (from a single 10 ns 500 mJ pulse) of yielding range resolved sound speed profiles in water to depths of 75-100 m to an accuracy of 1 m/s. The 100 m profiles will provide the capability of rapidly monitoring the upper-ocean vertical structure. They will also provide an extensive, subsurface-data source for existing real-time, operational ocean nowcast/forecast systems.
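
    The underlying measurement rests on the standard Brillouin backscatter relation between frequency shift and sound speed; the shift, wavelength, and refractive index below are illustrative assumptions, not values from the article:

```python
def sound_speed_from_brillouin(shift_hz, wavelength_m, n_water=1.33):
    """Backscatter (theta = 180 deg) Brillouin relation:
    nu_B = 2 * n * v / lambda  =>  v = nu_B * lambda / (2 * n)."""
    return shift_hz * wavelength_m / (2.0 * n_water)

# a ~7.5 GHz shift at 532 nm maps to a typical seawater sound speed
v = sound_speed_from_brillouin(7.5e9, 532e-9)
print(round(v, 1))  # → 1500.0
```

    Resolving the shift to a few MHz is what yields the quoted ~1 m/s accuracy in sound speed.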

  18. Propagation and Signal Modeling

    NASA Astrophysics Data System (ADS)

    Jensen, Finn B.

    The use of sound in the sea is ubiquitous: Apart from the military aspect of trying to detect an adversary’s mines and submarines, ship-mounted sonars measure water depth, ship speed, and the presence of fish shoals. Side-scan systems are used for mapping the bottom topography, sub-bottom profilers for getting information about the deeper layering, and other sonar systems for locating pipelines and cables on the seafloor. Sound is also used for navigating submerged vehicles, for underwater communications and for tracking marine mammals. Finally, in the realm of ‘acoustical oceanography’ and ‘ocean acoustic tomography,’ sound is used for measuring physical parameters of the ocean environment and for monitoring oceanic processes [1-6].

  19. Ares I Scale Model Acoustic Test Above Deck Water Sound Suppression Results

    NASA Technical Reports Server (NTRS)

    Counter, Douglas D.; Houston, Janice D.

    2011-01-01

    The Ares I Scale Model Acoustic Test (ASMAT) program test matrix was designed to determine the acoustic reduction for the Liftoff acoustics (LOA) environment with an above deck water sound suppression system. The scale model test can be used to quantify the effectiveness of the water suppression system as well as optimize the systems necessary for the LOA noise reduction. Several water flow rates were tested to determine which rate provides the greatest acoustic reductions. Preliminary results are presented.

  20. Electric and kinematic structure of the Oklahoma mesoscale convective system of 7 June 1989

    NASA Technical Reports Server (NTRS)

    Hunter, Steven M.; Schur, Terry J.; Marshall, Thomas C.; Rust, W. D.

    1992-01-01

    Balloon soundings of electric field in Oklahoma mesoscale convective systems (MCS) were obtained by the National Severe Storms Laboratory in the spring of 1989. This study focuses on a sounding made in the rearward edge of an MCS stratiform rain area on 7 June 1989. Data from Doppler radars, a lightning ground-strike location system, satellite, and other sources are used to relate the mesoscale attributes of the MCS to the observed electric-field profile.

  1. Biased relevance filtering in the auditory system: A test of confidence-weighted first-impressions.

    PubMed

    Mullens, D; Winkler, I; Damaso, K; Heathcote, A; Whitson, L; Provost, A; Todd, J

    2016-03-01

    Although first-impressions are known to impact decision-making and to have prolonged effects on reasoning, it is less well known that the same type of rapidly formed assumptions can explain biases in automatic relevance filtering outside of deliberate behavior. This paper features two studies in which participants have been asked to ignore sequences of sound while focusing attention on a silent movie. The sequences consisted of blocks, each with a high-probability repetition interrupted by rare acoustic deviations (i.e., a sound of different pitch or duration). The probabilities of the two different sounds alternated across the concatenated blocks within the sequence (i.e., short-to-long and long-to-short). The sound probabilities are rapidly and automatically learned for each block and a perceptual inference is formed predicting the most likely characteristics of the upcoming sound. Deviations elicit a prediction-error signal known as mismatch negativity (MMN). Computational models of MMN generally assume that its elicitation is governed by transition statistics that define what sound attributes are most likely to follow the current sound. MMN amplitude reflects prediction confidence, which is derived from the stability of the current transition statistics. However, our prior research showed that MMN amplitude is modulated by a strong first-impression bias that outweighs transition statistics. Here we test the hypothesis that this bias can be attributed to assumptions about predictable vs. unpredictable nature of each tone within the first encountered context, which is weighted by the stability of that context. The results of Study 1 show that this bias is initially prevented if there is no 1:1 mapping between sound attributes and probability, but it returns once the auditory system determines which properties provide the highest predictive value. 
The results of Study 2 show that confidence in the first-impression bias drops if assumptions about the temporal stability of the transition-statistics are violated. Both studies provide compelling evidence that the auditory system extrapolates patterns on multiple timescales to adjust its response to prediction-errors, while profoundly distorting the effects of transition-statistics by the assumptions formed on the basis of first-impressions. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Sound level-dependent growth of N1m amplitude with low and high-frequency tones.

    PubMed

    Soeta, Yoshiharu; Nakagawa, Seiji

    2009-04-22

    The aim of this study was to determine whether the amplitude and/or latency of the N1m deflection of auditory-evoked magnetic fields are influenced by the level and frequency of sound. The results indicated that the amplitude of the N1m increased with sound level. The growth in amplitude with increasing sound level was almost constant with low frequencies (250-1000 Hz); however, this growth decreased with high frequencies (>2000 Hz). The behavior of the amplitude may reflect a difference in the increase in the activation of the peripheral and/or central auditory systems.

  3. Digital data-acquisition system for measuring the free decay of acoustical standing waves in a resonant tube

    NASA Technical Reports Server (NTRS)

    Meredith, R. W.; Zuckerwar, A. J.

    1984-01-01

    A low-cost digital system based on an 8-bit Apple II microcomputer has been designed to provide on-line control, data acquisition, and evaluation of sound absorption measurements in gases. The measurements are conducted in a resonant tube, in which an acoustical standing wave is excited, the excitation removed, and the sound absorption evaluated from the free decay envelope. The free decay is initiated from the computer keyboard after the standing wave is established, and the microphone response signal is the source of the analog signal for the A/D converter. The acquisition software is written in ASSEMBLY language and the evaluation software in BASIC. This paper describes the acoustical measurement, hardware, software, and system performance and presents measurements of sound absorption in air as an example.
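
    The free-decay evaluation step can be sketched numerically: fit a line to the log of the decay envelope and read off the decay constant, from which absorption follows. The synthetic signal below (1 kHz tone, 50 ms decay constant) is an assumption for illustration, not data from the paper:

```python
import numpy as np

fs = 50000.0
t = np.arange(0, 0.5, 1 / fs)
tau = 0.05                                   # true decay time constant (s)
x = np.exp(-t / tau) * np.sin(2 * np.pi * 1000 * t)

# block RMS envelope, then a linear fit to its logarithm:
# ln(env) ~ -t/tau + const, so the slope gives -1/tau
block = 500
env = np.sqrt(np.mean(x[: len(x) // block * block].reshape(-1, block) ** 2, axis=1))
tb = (np.arange(len(env)) + 0.5) * block / fs
slope, _ = np.polyfit(tb, np.log(env), 1)
print(round(-1 / slope, 3))  # → 0.05
```

    With the decay time recovered, a spatial absorption estimate is, e.g., 1/(c*tau) for sound speed c, which is the quantity the resonant-tube measurement targets.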

  4. Corollary discharge provides the sensory content of inner speech.

    PubMed

    Scott, Mark

    2013-09-01

    Inner speech is one of the most common, but least investigated, mental activities humans perform. It is an internal copy of one's external voice and so is similar to a well-established component of motor control: corollary discharge. Corollary discharge is a prediction of the sound of one's voice generated by the motor system. This prediction is normally used to filter self-caused sounds from perception, which segregates them from externally caused sounds and prevents the sensory confusion that would otherwise result. The similarity between inner speech and corollary discharge motivates the theory, tested here, that corollary discharge provides the sensory content of inner speech. The results reported here show that inner speech attenuates the impact of external sounds. This attenuation was measured using a context effect (an influence of contextual speech sounds on the perception of subsequent speech sounds), which weakens in the presence of speech imagery that matches the context sound. Results from a control experiment demonstrated this weakening in external speech as well. Such sensory attenuation is a hallmark of corollary discharge.

  5. Cognitive load of navigating without vision when guided by virtual sound versus spatial language.

    PubMed

    Klatzky, Roberta L; Marston, James R; Giudice, Nicholas A; Golledge, Reginald G; Loomis, Jack M

    2006-12-01

    A vibrotactile N-back task was used to generate cognitive load while participants were guided along virtual paths without vision. As participants stepped in place, they moved along a virtual path of linear segments. Information was provided en route about the direction of the next turning point, by spatial language ("left," "right," or "straight") or virtual sound (i.e., the perceived azimuth of the sound indicated the target direction). The authors hypothesized that virtual sound, being processed at direct perceptual levels, would have lower load than even simple language commands, which require cognitive mediation. As predicted, whereas the guidance modes did not differ significantly in the no-load condition, participants showed shorter distance traveled and less time to complete a path when performing the N-back task while navigating with virtual sound as guidance. Virtual sound also produced better N-back performance than spatial language. By indicating the superiority of virtual sound for guidance when cognitive load is present, as is characteristic of everyday navigation, these results have implications for guidance systems for the visually impaired and others.

  6. NPSNET: Aural cues for virtual world immersion

    NASA Astrophysics Data System (ADS)

    Dahl, Leif A.

    1992-09-01

    NPSNET is a low-cost visual and aural simulation system designed and implemented at the Naval Postgraduate School. NPSNET is an example of a virtual world simulation environment that incorporates real-time aural cues through software-hardware interaction. In the current implementation of NPSNET, a graphics workstation functions in the sound server role, which involves sending and receiving networked sound message packets across a Local Area Network composed of multiple graphics workstations. The network messages contain sound file identification information that is transmitted from the sound server across an RS-422 protocol communication line to a serial-to-Musical Instrument Digital Interface (MIDI) converter. The MIDI converter, in turn, relays the sound byte to a sampler, an electronic recording and playback device. The sampler correlates the hexadecimal input to a specific note or stored sound and sends it as an audio signal to speakers via an amplifier. The realism of a simulation is improved by involving multiple participant senses and removing external distractions. This thesis describes the incorporation of sound as aural cues, and the enhancement they provide in the virtual simulation environment of NPSNET.

  7. Acoustic contrast, planarity and robustness of sound zone methods using a circular loudspeaker array.

    PubMed

    Coleman, Philip; Jackson, Philip J B; Olik, Marek; Møller, Martin; Olsen, Martin; Abildgaard Pedersen, Jan

    2014-04-01

    Since the mid 1990s, acoustics research has been undertaken relating to the sound zone problem (using loudspeakers to deliver a region of high sound pressure while simultaneously creating an area where the sound is suppressed) in order to facilitate independent listening within the same acoustic enclosure. The published solutions to the sound zone problem are derived from areas such as wave field synthesis and beamforming. However, the properties of such methods differ and performance tends to be compared against similar approaches. In this study, the suitability of energy focusing, energy cancelation, and synthesis approaches for sound zone reproduction is investigated. Anechoic simulations based on two zones surrounded by a circular array show each of the methods to have a characteristic performance, quantified in terms of acoustic contrast, array control effort and target sound field planarity. Regularization is shown to have a significant effect on the array effort and achieved acoustic contrast, particularly when mismatched conditions are considered between calculation of the source weights and their application to the system.
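
    The acoustic contrast metric used above has a standard definition (bright-zone over dark-zone mean squared pressure, in dB); a minimal sketch with made-up pressure samples:

```python
import numpy as np

def acoustic_contrast_db(p_bright, p_dark):
    """Acoustic contrast: ratio of spatially averaged squared pressure in
    the bright (listening) zone to that in the dark (quiet) zone, in dB."""
    return 10 * np.log10(np.mean(np.abs(p_bright) ** 2) /
                         np.mean(np.abs(p_dark) ** 2))

# toy example: bright-zone pressure 10x the dark-zone pressure
pb = np.full(8, 1.0 + 0j)
pd = np.full(8, 0.1 + 0j)
print(round(acoustic_contrast_db(pb, pd), 1))  # → 20.0
```

    In practice the pressures are complex field values at microphone positions in each zone, evaluated per frequency.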

  8. Passive Environmental ASW Prediction System (PEAPS)

    DTIC Science & Technology

    1975-03-01

    Because the Frye and Pugh equation [1] for sound speed is dominated by temperature terms and requires relatively few program steps compared with...other speed of sound equations, it was used in the sound speed profile sub-program. The equation was modified to use the approximation ΔSS ≈ (∂SS/∂P)ΔP + (∂SS/∂Z)ΔZ...in ppt (parts per thousand). The SSP sub-program converts the input data to MKS units for use in the above equation and then converts the resultant

  9. Operational Risk Management is Ineffective at Addressing Nonlinear Problems

    DTIC Science & Technology

    2009-02-20

    brains are not linear: even though the sound of an oboe and the sound of a string section may be independent when they enter your ear, the emotional...impact of both sounds together may be very much greater than either one alone. (This is what keeps symphony orchestras in business .) Nor is the...involving people. “In nonlinear systems... chaos theory tells you that the slightest uncertainty in your knowledge of the initial conditions will often

  10. The Shock and Vibration Digest. Volume 15, Number 7

    DTIC Science & Technology

    1983-07-01

    ...systems noise -- for example, from a specific metal, chain driven, con-... ...tant analytical tool, the statistical energy analysis method, has been the subject... "Experimental Determination of Vibration Parameters Required in the Statistical Energy Analysis Meth-... 31. Dubowsky, S. and Morris, T.L., "An... "Coupling Loss Factors for Statistical Energy Analysis of Sound Trans-... 55. Upton, R., "Sound Intensity - A Powerful New Measurement Tool," S/V, Sound

  11. Superior Analgesic Effect of an Active Distraction versus Pleasant Unfamiliar Sounds and Music: The Influence of Emotion and Cognitive Style

    PubMed Central

    Garza Villarreal, Eduardo A.; Brattico, Elvira; Vase, Lene; Østergaard, Leif; Vuust, Peter

    2012-01-01

    Listening to music has been found to reduce acute and chronic pain. The underlying mechanisms are poorly understood; however, emotion and cognitive mechanisms have been suggested to influence the analgesic effect of music. In this study we investigated the influence of familiarity, emotional and cognitive features, and cognitive style on music-induced analgesia. Forty-eight healthy participants were divided into three groups (empathizers, systemizers and balanced) and received acute heat-induced pain while listening to different sounds. Participants listened to unfamiliar Mozart music rated with high valence and low arousal, unfamiliar environmental sounds with similar valence and arousal as the music, an active distraction task (mental arithmetic) and a control, and rated the pain. Data showed that the active distraction led to significantly less pain than did the music or sounds. Both unfamiliar music and sounds reduced pain significantly when compared to the control condition; however, music was no more effective than environmental sounds at reducing pain. Furthermore, we found correlations between pain and emotion ratings. Finally, systemizers reported less pain during the mental arithmetic compared with the other two groups. These findings suggest that familiarity may be key in the influence of the cognitive and emotional mechanisms of music-induced analgesia, and that cognitive styles may influence pain perception. PMID:22242169

  12. Active control of noise on the source side of a partition to increase its sound isolation

    NASA Astrophysics Data System (ADS)

    Tarabini, Marco; Roure, Alain; Pinhede, Cedric

    2009-03-01

    This paper describes a local active noise control system that virtually increases the sound isolation of a dividing wall by means of a secondary source array. With the proposed method, sound pressure on the source side of the partition is reduced using an array of loudspeakers that generates destructive interference on the wall surface, where an array of error microphones is placed. The reduction of sound pressure on the incident side of the wall is expected to decrease the sound radiated into the contiguous room. The method efficiency was experimentally verified by checking the insertion loss of the active noise control system; in order to investigate the possibility of using a large number of actuators, a decentralized FXLMS control algorithm was used. Active control performances and stability were tested with different array configurations, loudspeaker directivities and enclosure characteristics (sound source position and absorption coefficient). The influence of all these parameters was investigated with the factorial design of experiments. The main outcome of the experimental campaign was that the insertion loss produced by the secondary source array, in the 50-300 Hz frequency range, was close to 10 dB. In addition, the analysis of variance showed that the active noise control performance can be optimized with a proper choice of the directional characteristics of the secondary source and the distance between loudspeakers and error microphones.
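
    The FXLMS algorithm named above can be illustrated for a single channel. The sketch below uses invented three- and four-tap secondary and primary paths and assumes a perfect secondary-path model, so it is a minimal demonstration of the update rule rather than the paper's decentralized multi-channel controller:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 20000, 16
s = np.array([0.0, 0.8, 0.3])        # assumed secondary-path impulse response
p = np.array([0.0, 0.0, 0.9, 0.4])   # assumed primary-path impulse response
x = rng.standard_normal(N)           # reference (noise source) signal
d = np.convolve(x, p)[:N]            # disturbance at the error microphone

w = np.zeros(L)                      # adaptive control filter
xf = np.convolve(x, s)[:N]           # reference filtered by secondary-path model
y = np.zeros(N)
e = np.zeros(N)
mu = 0.002
for n in range(L, N):
    y[n] = w @ x[n - L + 1:n + 1][::-1]     # control signal
    ys = s @ y[n - 2:n + 1][::-1]           # control through secondary path
    e[n] = d[n] + ys                        # residual at error mic
    u = xf[n - L + 1:n + 1][::-1]
    w -= mu * e[n] * u                      # FXLMS gradient update
print(np.mean(e[:2000] ** 2), np.mean(e[-2000:] ** 2))
```

    Filtering the reference by the secondary-path model before the update is what distinguishes FXLMS from plain LMS and keeps the adaptation stable despite the actuator-to-sensor delay.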

  13. Modeling and analysis of secondary sources coupling for active sound field reduction in confined spaces

    NASA Astrophysics Data System (ADS)

    Montazeri, Allahyar; Taylor, C. James

    2017-10-01

    This article addresses the coupling of acoustic secondary sources in a confined space in a sound field reduction framework. By considering the coupling of sources in a rectangular enclosure, the set of coupled equations governing its acoustical behavior are solved. The model obtained in this way is used to analyze the behavior of multi-input multi-output (MIMO) active sound field control (ASC) systems, where the coupling of sources cannot be neglected. In particular, the article develops the analytical results to analyze the effect of coupling of an array of secondary sources on the sound pressure levels inside an enclosure, when an array of microphones is used to capture the acoustic characteristics of the enclosure. The results are supported by extensive numerical simulations showing how coupling of loudspeakers through acoustic modes of the enclosure will change the strength and hence the driving voltage signal applied to the secondary loudspeakers. The practical significance of this model is to provide a better insight on the performance of the sound reproduction/reduction systems in confined spaces when an array of loudspeakers and microphones are placed in a fraction of wavelength of the excitation signal to reduce/reproduce the sound field. This is of particular importance because the interaction of different sources affects their radiation impedance depending on the electromechanical properties of the loudspeakers.
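
    The enclosure modes through which the sources couple are, for a rigid-walled rectangular room, given by a standard textbook formula; a sketch (the room dimensions are arbitrary examples, not from the article):

```python
import math
import itertools

def room_modes(Lx, Ly, Lz, c=343.0, n_max=3):
    """Natural frequencies of a rigid-walled rectangular enclosure:
    f = (c/2) * sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2)."""
    modes = []
    for nx, ny, nz in itertools.product(range(n_max + 1), repeat=3):
        if nx == ny == nz == 0:
            continue
        f = (c / 2) * math.sqrt((nx / Lx) ** 2 + (ny / Ly) ** 2 + (nz / Lz) ** 2)
        modes.append((f, (nx, ny, nz)))
    return sorted(modes)

# e.g. a 5 x 4 x 3 m room: lowest (axial) mode is c / (2 * 5)
print(round(room_modes(5.0, 4.0, 3.0)[0][0], 1))  # → 34.3
```

    Near these frequencies the modal pressure field dominates the radiation impedance seen by each loudspeaker, which is why closely spaced secondary sources cannot be treated as independent.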

  14. A Wireless Electronic Esophageal Stethoscope for Continuous Monitoring of Cardiovascular and Respiratory Systems during Anaesthesia

    PubMed Central

    Parsaei, H.; Vakily, A.; Shafiei, A.M.

    2017-01-01

    Background: The basic requirements for monitoring anesthetized patients during surgery are assessing cardiac and respiratory function. Esophageal stethoscopes have been developed for this purpose, but these devices may not provide clear heart and lung sounds because of the various noises present in operating rooms. In addition, the stethoscope is not applicable for continuous monitoring, and it is unsuitable for observing inaccessible patients in some conditions, such as during a CT scan. Objective: A wireless electronic esophageal stethoscope was designed for continuous auscultation of heart and lung sounds in anesthetized patients. The system consists of a transmitter and a receiver. The former acquires, amplifies and transmits the acquired sound signals to the latter via a frequency modulation transmitter. The receiver demodulates, amplifies, and delivers the received signal to a headphone to be heard by the anesthesiologist. Results: The usability and effectiveness of the designed system were qualitatively evaluated by 5 anesthesiologists in Namazi Hospital and Shahid Chamran Hospital, Shiraz, Iran on 30 patients in several operating rooms under different conditions, e.g., when electrosurgery instruments were working. On average, the experts rated the quality of the heard heart and lung sounds as good and the user-friendliness of the instrument as very good. Conclusion: Evaluation results demonstrate that the developed system is capable of capturing and transmitting heart and lung sounds successfully. Therefore, it can be used to continuously monitor anesthetized patients' cardiac and respiratory function. Since wireless auscultation is possible with the instrument, it could be suitable for observing inaccessible patients in several conditions, such as during a CT scan. PMID:28451580

  15. Factors regulating early life history dispersal of Atlantic cod (Gadus morhua) from coastal Newfoundland.

    PubMed

    Stanley, Ryan R E; deYoung, Brad; Snelgrove, Paul V R; Gregory, Robert S

    2013-01-01

    To understand coastal dispersal dynamics of Atlantic cod (Gadus morhua), we examined spatiotemporal egg and larval abundance patterns in coastal Newfoundland. In recent decades, Smith Sound, Trinity Bay has supported the largest known overwintering spawning aggregation of Atlantic cod in the region. We estimated spawning and dispersal characteristics for the Smith Sound-Trinity Bay system by fitting ichthyoplankton abundance data to environmentally-driven, simplified box models. Results show protracted spawning, with sharply increased egg production in early July, and limited dispersal from the Sound. The model for the entire spawning season indicates egg export from Smith Sound is 13% day(-1) with a net mortality of 27% day(-1). Eggs and larvae are consistently found in western Trinity Bay with little advection from the system. These patterns mirror particle tracking models that suggest residence times of 10-20 days, and circulation models indicating local gyres in Trinity Bay that act in concert with upwelling dynamics to retain eggs and larvae. Our results are among the first quantitative dispersal estimates from Smith Sound, linking this spawning stock to the adjacent coastal waters. These results illustrate the biophysical interplay regulating dispersal and connectivity originating from inshore spawning of coastal northwest Atlantic.
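
    The box-model arithmetic above is easy to reproduce if the reported daily export and mortality percentages are treated as instantaneous rates (an assumption for this sketch):

```python
import math

def remaining_fraction(t_days, export_rate=0.13, mortality_rate=0.27):
    """Simple box model: eggs leave the Sound by advective export (E) or
    mortality (M); with both as instantaneous daily rates,
    N(t) = N0 * exp(-(E + M) * t)."""
    return math.exp(-(export_rate + mortality_rate) * t_days)

# with E = 13%/day and M = 27%/day, the stock e-folds in 1/0.4 = 2.5 days
print(round(remaining_fraction(2.5), 3))  # → 0.368
```

    A combined loss rate of 0.4 day(-1) is consistent with the 10-20 day residence times from the particle-tracking models only because export is the smaller of the two terms.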

  16. Visualization of Heart Sounds and Motion Using Multichannel Sensor

    NASA Astrophysics Data System (ADS)

    Nogata, Fumio; Yokota, Yasunari; Kawamura, Yoko

    2010-06-01

As there are various difficulties associated with auscultation techniques, we have devised a technique for visualizing heart motion in order to assist in the understanding of the heartbeat for both doctors and patients. Auscultatory sounds were first visualized using FFT and wavelet analysis. Next, to show global and simultaneous heart motions, a new visualization technique was established. The visualization system consists of a 64-channel unit (63 acceleration sensors and one ECG sensor) and a signal/image analysis unit. The acceleration sensors were arranged in a square array (8×8) with a 20-mm pitch interval and adhered to the chest surface. One cycle of heart motion was visualized at a sampling frequency of 3 kHz and a quantization of 12 bits. The visualized results showed a typical waveform motion of the strong pressure shock due to the closing of the tricuspid and mitral valves at the cardiac apex (first sound), followed by the closing of the aortic and pulmonic valves (second sound). To overcome difficulties in auscultation, the system can be applied to the detection of heart disease and to the digital database management of auscultation examinations in medical settings.
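    The FFT visualization step can be illustrated with a minimal sketch. The 3 kHz sampling rate follows the abstract; the input signal (a decaying 50 Hz burst standing in for a first heart sound) and all other values are synthetic stand-ins, not the authors' data.

```python
import numpy as np

# Sketch only: spectral analysis of a heart-sound-like signal.
# fs = 3 kHz matches the abstract's sampling rate; the signal is synthetic.
fs = 3000                       # sampling frequency, Hz
t = np.arange(0, 1.0, 1 / fs)   # one second of samples
signal = np.sin(2 * np.pi * 50 * t) * np.exp(-5 * t)  # decaying 50 Hz burst

spectrum = np.abs(np.fft.rfft(signal))          # one-sided magnitude spectrum
freqs = np.fft.rfftfreq(len(signal), 1 / fs)    # bin frequencies, 1 Hz spacing
peak_hz = freqs[np.argmax(spectrum)]            # dominant frequency of the burst
```

    In practice a short-time transform (spectrogram) would be used so the first and second heart sounds appear as separate bursts in time, but the per-frame computation is the same.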

  17. Application of the Fourier pseudospectral time-domain method in orthogonal curvilinear coordinates for near-rigid moderately curved surfaces.

    PubMed

    Hornikx, Maarten; Dragna, Didier

    2015-07-01

The Fourier pseudospectral time-domain method is an efficient wave-based method for modeling sound propagation in inhomogeneous media. One of the limitations of the method for atmospheric sound propagation purposes is its restriction to a Cartesian grid, confining it to staircase-like geometries. A transform from the physical coordinate system to a curvilinear coordinate system has been applied to handle more arbitrary geometries. For applicability of this method near the boundaries, the acoustic velocity variables are solved for their curvilinear components. The performance of the curvilinear Fourier pseudospectral method is investigated in the free field and for outdoor sound propagation over an impedance strip for various boundary shapes. Accuracy is shown to be related to the maximum grid stretching ratio and the deformation of the boundary shape, and computational efficiency is reduced relative to the smallest grid cell in the physical domain. The applicability of the curvilinear Fourier pseudospectral time-domain method is demonstrated by investigating the effect of sound propagation over a hill in a nocturnal boundary layer. With the proposed method, accurate and efficient results for sound propagation over smoothly varying ground surfaces with high impedances can be obtained.

  18. Developing a Weighted Measure of Speech Sound Accuracy

    PubMed Central

    Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.

    2010-01-01

Purpose The purpose is to develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, we describe a system for differentially weighting speech sound errors based on various levels of phonetic accuracy with a Weighted Speech Sound Accuracy (WSSA) score. We then evaluate the reliability and validity of this measure. Method Phonetic transcriptions are analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy is compared to existing measures, is used to discriminate typical and disordered speech production, and is evaluated to determine whether it is sensitive to changes in phonetic accuracy over time. Results Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as with listeners' judgments of the severity of a child's speech disorder. The measure separates children with and without speech sound disorders. WSSA scores also capture growth in phonetic accuracy in toddlers' speech over time. Conclusion Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children's speech. PMID:20699344
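    A differential-weighting scheme of the kind the WSSA uses can be sketched as follows. The error categories and weight values here are hypothetical illustrations; the actual WSSA weights are defined in the paper.

```python
# Sketch of a weighted phonetic-accuracy score. The categories and
# weights below are hypothetical stand-ins, not the published WSSA values.
WEIGHTS = {
    "correct": 1.0,       # target sound produced accurately
    "distortion": 0.75,   # near miss within the target sound class
    "substitution": 0.5,  # a different sound substituted for the target
    "omission": 0.0,      # target sound deleted entirely
}

def weighted_accuracy(transcribed_segments):
    """Mean of per-segment weights, scaled to a 0-100 score."""
    if not transcribed_segments:
        raise ValueError("no segments to score")
    total = sum(WEIGHTS[s] for s in transcribed_segments)
    return 100.0 * total / len(transcribed_segments)

score = weighted_accuracy(["correct", "correct", "distortion", "omission"])
# 100 * (1 + 1 + 0.75 + 0) / 4 = 68.75
```

    The point of such weighting is that a distortion and an omission, which a binary correct/incorrect tally would treat identically, contribute differently to the score.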

  19. Towards direct realisation of the SI unit of sound pressure in the audible hearing range based on optical free-field acoustic particle measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koukoulas, Triantafillos, E-mail: triantafillos.koukoulas@npl.co.uk; Piper, Ben

Since the introduction of the International System of Units (the SI system) in 1960, weights, measures, standardised approaches, procedures, and protocols have been introduced, adapted, and extensively used. A major international effort and activity concentrate on the definition and traceability of the seven base SI units in terms of fundamental constants, and consequently those units that are derived from the base units. In airborne acoustical metrology and for the audible range of frequencies up to 20 kHz, the SI unit of sound pressure, the pascal, is realised indirectly and without any knowledge or measurement of the sound field. Though the principle of reciprocity was originally formulated by Lord Rayleigh nearly two centuries ago, it was devised in the 1940s and eventually became a calibration standard in the 1960s; however, it can only accommodate a limited number of acoustic sensors of specific types and dimensions. International standards determine the device sensitivity either through coupler or through free-field reciprocity but rely on the continuous availability of specific acoustical artefacts. Here, we show an optical method based on gated photon correlation spectroscopy that can measure sound pressures directly and absolutely in fully anechoic conditions, remotely, and without disturbing the propagating sound field. It neither relies on the availability or performance of any measurement artefact nor makes any assumptions of the device geometry and sound field characteristics. Most importantly, the required units of sound pressure and microphone sensitivity may now be experimentally realised, thus providing direct traceability to SI base units.

  20. The Effects of Bimodal (Sound-Light) Stimulus Presentation on Selective Responding of Deaf-Blind Multihandicapped Children.

    ERIC Educational Resources Information Center

    Knight, Marcia S.; Rosenblatt, Laurence

    1983-01-01

    Fourteen severely multiply handicapped children with rubella syndrome, six to 16 years of age, were examined with the PLAYTEST system, an operant test procedure using sound and light as stimuli and reinforcers. (Author/MC)

  1. Authoritative Authoring: Software That Makes Multimedia Happen.

    ERIC Educational Resources Information Center

    Florio, Chris; Murie, Michael

    1996-01-01

    Compares seven mid- to high-end multimedia authoring software systems that combine graphics, sound, animation, video, and text for Windows and Macintosh platforms. A run-time project was created with each program using video, animation, graphics, sound, formatted text, hypertext, and buttons. (LRW)

  2. Earth Observing System (EOS) Advanced Microwave Sounding Unit-A (AMSU-A): Instrumentation interface control document

    NASA Technical Reports Server (NTRS)

    1994-01-01

This Interface Control Document (ICD) defines the specific details of the complete accommodation information between the Earth Observing System (EOS) PM Spacecraft and the Advanced Microwave Sounding Unit-A (AMSU-A) Instrument. This is the first submittal of the ICD; it will be updated periodically throughout the life of the program. The next update is planned prior to the Critical Design Review (CDR).

  3. Optoelectronic holographic otoscope for measurement of nano-displacements in tympanic membranes

    PubMed Central

    Hernández-Montes, Maria del Socorro; Furlong, Cosme; Rosowski, John J.; Hulli, Nesim; Harrington, Ellery; Cheng, Jeffrey Tao; Ravicz, Michael E.; Santoyo, Fernando Mendoza

    2009-01-01

Current methodologies for characterizing tympanic membrane (TM) motion are usually limited to either average acoustic estimates (admittance or reflectance) or single-point mobility measurements, neither of which suffices to characterize the detailed mechanical response of the TM to sound. Furthermore, while acoustic and single-point measurements may aid in diagnosing some middle-ear disorders, they are not always useful. Measurements of the motion of the entire TM surface can provide more information than these other techniques and may be superior for diagnosing pathology. This paper presents advances in our development of a new compact optoelectronic holographic otoscope (OEHO) system for full-field-of-view characterization of nanometer scale sound-induced displacements of the surface of the TM at video rates. The OEHO system consists of a fiber optic subsystem, a compact otoscope head, and a high-speed image processing computer with advanced software for recording and processing holographic images coupled to a computer-controlled sound-stimulation and recording system. A prototype OEHO system is in use in a medical-research environment to address basic-science questions regarding TM function. The prototype provides real-time observation of sound-induced TM displacement patterns over a broad-frequency range. Representative time-averaged and stroboscopic holographic interferometry results in animals and cadaveric human samples are shown, and their potential utility is discussed. PMID:19566316

  4. Optoelectronic holographic otoscope for measurement of nano-displacements in tympanic membranes

    NASA Astrophysics Data System (ADS)

    Del Socorro Hernández-Montes, Maria; Furlong, Cosme; Rosowski, John J.; Hulli, Nesim; Harrington, Ellery; Cheng, Jeffrey Tao; Ravicz, Michael E.; Santoyo, Fernando Mendoza

    2009-05-01

    Current methodologies for characterizing tympanic membrane (TM) motion are usually limited to either average acoustic estimates (admittance or reflectance) or single-point mobility measurements, neither of which suffices to characterize the detailed mechanical response of the TM to sound. Furthermore, while acoustic and single-point measurements may aid in diagnosing some middle-ear disorders, they are not always useful. Measurements of the motion of the entire TM surface can provide more information than these other techniques and may be superior for diagnosing pathology. We present advances in our development of a new compact optoelectronic holographic otoscope (OEHO) system for full field-of-view characterization of nanometer-scale sound-induced displacements of the TM surface at video rates. The OEHO system consists of a fiber optic subsystem, a compact otoscope head, and a high-speed image processing computer with advanced software for recording and processing holographic images coupled to a computer-controlled sound-stimulation and recording system. A prototype OEHO system is in use in a medical research environment to address basic science questions regarding TM function. The prototype provides real-time observation of sound-induced TM displacement patterns over a broad frequency range. Representative time-averaged and stroboscopic holographic interferometry results in animals and human cadaver samples are shown, and their potential utility is discussed.

  5. Acoustic agglomeration of fine particles based on a high intensity acoustical resonator

    NASA Astrophysics Data System (ADS)

    Zhao, Yun; Zeng, Xinwu; Tian, Zhangfu

    2015-10-01

Acoustic agglomeration (AA) is considered to be a promising method for reducing the air pollution caused by fine aerosol particles. Removal efficiency and energy consumption are the primary parameters in industrial applications and generally conflict with each other. It has been shown that removal efficiency increases with sound intensity and that an optimal frequency exists for a given polydisperse aerosol. Accordingly, a high-efficiency, low-energy-cost removal system was constructed using acoustical resonance. A high-intensity standing wave is generated in a tube system with an abrupt section change, driven by four loudspeakers. A numerical model of the tube system was built based on the finite element method, and the resonance condition and SPL increase were confirmed. Extensive tests were carried out to investigate the acoustic field in the agglomeration chamber. The removal efficiency of fine particles was tested by comparing filter-paper mass and particle size distribution at different operating conditions, including sound pressure level (SPL) and frequency. The experimental study demonstrated that agglomeration increases with sound pressure level. The sound pressure level in the agglomeration chamber is between 145 dB and 165 dB from 500 Hz to 2 kHz. The resonance frequency can be predicted with quarter-wave tube theory, and a sound pressure level gain of more than 10 dB is obtained at the resonance frequency. With the help of high-intensity sound waves, fine particles are reduced greatly, and the AA effect is enhanced at high SPL. The optimal frequency is 1.1 kHz for aerosol generated from coal ash. In the resonance tube, the higher resonance frequencies are not integer multiples of the first one; strong nonlinearity is thus avoided by this dissonant characteristic, and no shock waves were found in the test results. The mechanism and testing system can be applied effectively in industrial processes in the future.
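    The quarter-wave prediction mentioned in the abstract follows the textbook closed-open tube relation f_n = (2n - 1)c/(4L). This sketch assumes an idealized tube with a hypothetical length chosen so the fundamental lands near the reported 1.1 kHz optimum; as the abstract notes, the real resonator's higher modes deviate from these odd-harmonic ratios.

```python
# Ideal quarter-wave resonator: f_n = (2n - 1) * c / (4 * L).
# The tube length below is a hypothetical value, not from the paper;
# the real tube's abrupt section change shifts the higher modes.
def quarter_wave_modes(length_m, n_modes=3, c=343.0):
    """First n resonance frequencies (Hz) of an ideal closed-open tube."""
    return [(2 * n - 1) * c / (4.0 * length_m) for n in range(1, n_modes + 1)]

modes = quarter_wave_modes(0.078)  # a ~7.8 cm tube resonates near 1.1 kHz
```

    In the ideal tube the overtones sit at 3x, 5x, ... the fundamental; the paper's observation that its measured overtones are *not* integer multiples is what suppresses shock formation.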

  6. Acoustic Holography

    NASA Astrophysics Data System (ADS)

    Kim, Yang-Hann

One of the subtle problems that make noise control difficult for engineers is the invisibility of noise or sound. A visual image of noise often helps to determine an appropriate means for noise control. There have been many attempts to fulfill this rather challenging objective. Theoretical (or numerical) means for visualizing the sound field have been attempted, and as a result, a great deal of progress has been made. However, most of these numerical methods are not quite ready for practical applications to noise control problems. In the meantime, rapid progress with instrumentation has made it possible to use multiple microphones and fast signal-processing systems. Although these systems are not perfect, they are useful. A state-of-the-art system has recently become available, but it still has many problematic issues; for example, how can one implement the visualized noise field. The constructed noise or sound picture always consists of bias and random errors, and consequently, it is often difficult to determine the origin of the noise and the spatial distribution of the noise field. Section 26.2 of this chapter introduces a brief history, which is associated with "sound visualization," acoustic source identification methods and what has been accomplished with a line or surface array. Section 26.2.3 introduces difficulties and recent studies, including de-Dopplerization and de-reverberation methods, both essential for visualizing a moving noise source, such as occurs for cars or trains. This section also addresses what produces ambiguity in realizing real sound sources in a room or closed space. Another major issue associated with sound/noise visualization is whether or not we can distinguish between mutual dependencies of noise in space (Sect. 26.2.4); for example, we are asked to answer the question, "Can we see two birds singing or one bird with two beaks?"

  8. Assessing the accuracy of microwave radiometers and radio acoustic sounding systems for wind energy applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bianco, Laura; Friedrich, Katja; Wilczak, James M.

To assess current remote-sensing capabilities for wind energy applications, a remote-sensing system evaluation study, called XPIA (eXperimental Planetary boundary layer Instrument Assessment), was held in the spring of 2015 at NOAA's Boulder Atmospheric Observatory (BAO) facility. Several remote-sensing platforms were evaluated to determine their suitability for the verification and validation processes used to test the accuracy of numerical weather prediction models. The evaluation of these platforms was performed with respect to well-defined reference systems: the BAO's 300 m tower equipped at six levels (50, 100, 150, 200, 250, and 300 m) with 12 sonic anemometers and six temperature (T) and relative humidity (RH) sensors; and approximately 60 radiosonde launches. In this study we first employ these reference measurements to validate temperature profiles retrieved by two co-located microwave radiometers (MWRs) as well as virtual temperature (Tv) measured by co-located wind profiling radars equipped with radio acoustic sounding systems (RASSs). Results indicate a mean absolute error (MAE) in the temperature retrieved by the microwave radiometers below 1.5 K in the lowest 5 km of the atmosphere and a mean absolute error in the virtual temperature measured by the radio acoustic sounding systems below 0.8 K in the layer of the atmosphere covered by these measurements (up to approximately 1.6-2 km). We also investigated the benefit of the vertical velocity correction applied to the speed of sound before computing the virtual temperature by the radio acoustic sounding systems. We find that using this correction frequently increases the RASS error, and that it should not be routinely applied to all data. Water vapor density (WVD) profiles measured by the MWRs were also compared with similar measurements from the soundings, showing the capability of MWRs to follow the vertical profile measured by the sounding and finding a mean absolute error below 0.5 g m-3 in the lowest 5 km of the atmosphere. However, the relative humidity profiles measured by the microwave radiometer lack the high-resolution details available from radiosonde profiles. Furthermore, an encouraging and significant finding of this study was that the coefficient of determination between the lapse rate measured by the microwave radiometer and the tower measurements over the tower levels between 50 and 300 m ranged from 0.76 to 0.91, proving that these remote-sensing instruments can provide accurate information on atmospheric stability conditions in the lower boundary layer.
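    The RASS retrieval discussed above can be sketched with the standard textbook relation Tv = (c_a / 20.047)^2, where c_a is the measured acoustic propagation speed in m/s and 20.047 m s^-1 K^-1/2 is the usual dry-air constant. This is the generic relation, not this study's processing chain; the optional vertical-velocity term below is the correction the authors found can actually increase RASS error.

```python
# Sketch of the standard RASS virtual-temperature retrieval (textbook
# relation, not this paper's processing): Tv = (c_a / 20.047)**2, Tv in K.
def rass_virtual_temperature(c_measured, w=0.0):
    """Virtual temperature (K) from acoustic propagation speed (m/s).

    w is the vertical air velocity (m/s); subtracting it from the measured
    speed is the correction whose benefit the study questions.
    """
    c_acoustic = c_measured - w
    return (c_acoustic / 20.047) ** 2

tv = rass_virtual_temperature(343.0)            # no correction, ~292.7 K
tv_corr = rass_virtual_temperature(343.0, 0.5)  # with a 0.5 m/s updraft
```

    Because Tv depends on the square of the speed, even a 0.5 m/s velocity error maps to roughly a 0.8 K temperature error near the surface, which is why the handling of the vertical-velocity term matters for the MAE figures quoted above.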

  9. Assessing the accuracy of microwave radiometers and radio acoustic sounding systems for wind energy applications

    DOE PAGES

    Bianco, Laura; Friedrich, Katja; Wilczak, James M.; ...

    2017-05-09

To assess current remote-sensing capabilities for wind energy applications, a remote-sensing system evaluation study, called XPIA (eXperimental Planetary boundary layer Instrument Assessment), was held in the spring of 2015 at NOAA's Boulder Atmospheric Observatory (BAO) facility. Several remote-sensing platforms were evaluated to determine their suitability for the verification and validation processes used to test the accuracy of numerical weather prediction models. The evaluation of these platforms was performed with respect to well-defined reference systems: the BAO's 300 m tower equipped at six levels (50, 100, 150, 200, 250, and 300 m) with 12 sonic anemometers and six temperature (T) and relative humidity (RH) sensors; and approximately 60 radiosonde launches. In this study we first employ these reference measurements to validate temperature profiles retrieved by two co-located microwave radiometers (MWRs) as well as virtual temperature (Tv) measured by co-located wind profiling radars equipped with radio acoustic sounding systems (RASSs). Results indicate a mean absolute error (MAE) in the temperature retrieved by the microwave radiometers below 1.5 K in the lowest 5 km of the atmosphere and a mean absolute error in the virtual temperature measured by the radio acoustic sounding systems below 0.8 K in the layer of the atmosphere covered by these measurements (up to approximately 1.6-2 km). We also investigated the benefit of the vertical velocity correction applied to the speed of sound before computing the virtual temperature by the radio acoustic sounding systems. We find that using this correction frequently increases the RASS error, and that it should not be routinely applied to all data. Water vapor density (WVD) profiles measured by the MWRs were also compared with similar measurements from the soundings, showing the capability of MWRs to follow the vertical profile measured by the sounding and finding a mean absolute error below 0.5 g m-3 in the lowest 5 km of the atmosphere. However, the relative humidity profiles measured by the microwave radiometer lack the high-resolution details available from radiosonde profiles. Furthermore, an encouraging and significant finding of this study was that the coefficient of determination between the lapse rate measured by the microwave radiometer and the tower measurements over the tower levels between 50 and 300 m ranged from 0.76 to 0.91, proving that these remote-sensing instruments can provide accurate information on atmospheric stability conditions in the lower boundary layer.

  10. Assessing the accuracy of microwave radiometers and radio acoustic sounding systems for wind energy applications

    NASA Astrophysics Data System (ADS)

    Bianco, Laura; Friedrich, Katja; Wilczak, James M.; Hazen, Duane; Wolfe, Daniel; Delgado, Ruben; Oncley, Steven P.; Lundquist, Julie K.

    2017-05-01

To assess current remote-sensing capabilities for wind energy applications, a remote-sensing system evaluation study, called XPIA (eXperimental Planetary boundary layer Instrument Assessment), was held in the spring of 2015 at NOAA's Boulder Atmospheric Observatory (BAO) facility. Several remote-sensing platforms were evaluated to determine their suitability for the verification and validation processes used to test the accuracy of numerical weather prediction models. The evaluation of these platforms was performed with respect to well-defined reference systems: the BAO's 300 m tower equipped at six levels (50, 100, 150, 200, 250, and 300 m) with 12 sonic anemometers and six temperature (T) and relative humidity (RH) sensors; and approximately 60 radiosonde launches. In this study we first employ these reference measurements to validate temperature profiles retrieved by two co-located microwave radiometers (MWRs) as well as virtual temperature (Tv) measured by co-located wind profiling radars equipped with radio acoustic sounding systems (RASSs). Results indicate a mean absolute error (MAE) in the temperature retrieved by the microwave radiometers below 1.5 K in the lowest 5 km of the atmosphere and a mean absolute error in the virtual temperature measured by the radio acoustic sounding systems below 0.8 K in the layer of the atmosphere covered by these measurements (up to approximately 1.6-2 km). We also investigated the benefit of the vertical velocity correction applied to the speed of sound before computing the virtual temperature by the radio acoustic sounding systems. We find that using this correction frequently increases the RASS error, and that it should not be routinely applied to all data. Water vapor density (WVD) profiles measured by the MWRs were also compared with similar measurements from the soundings, showing the capability of MWRs to follow the vertical profile measured by the sounding and finding a mean absolute error below 0.5 g m-3 in the lowest 5 km of the atmosphere. However, the relative humidity profiles measured by the microwave radiometer lack the high-resolution details available from radiosonde profiles. An encouraging and significant finding of this study was that the coefficient of determination between the lapse rate measured by the microwave radiometer and the tower measurements over the tower levels between 50 and 300 m ranged from 0.76 to 0.91, proving that these remote-sensing instruments can provide accurate information on atmospheric stability conditions in the lower boundary layer.

  11. Turbine Sound May Influence the Metamorphosis Behaviour of Estuarine Crab Megalopae

    PubMed Central

    Pine, Matthew K.; Jeffs, Andrew G.; Radford, Craig A.

    2012-01-01

It is now widely accepted that a shift towards renewable energy production is needed in order to avoid further anthropogenically induced climate change. The ocean provides a largely untapped source of renewable energy. As a result, harvesting electrical power from the wind and tides has sparked immense government and commercial interest but with relatively little detailed understanding of the potential environmental impacts. This study investigated how the sound emitted from an underwater tidal turbine and an offshore wind turbine would influence the settlement and metamorphosis of the pelagic larvae of estuarine brachyuran crabs which are ubiquitous in most coastal habitats. In a laboratory experiment the median time to metamorphosis (TTM) for the megalopae of the crabs Austrohelice crassa and Hemigrapsus crenulatus was significantly increased by at least 18 h when exposed to either tidal turbine or sea-based wind turbine sound, compared to silent control treatments. Contrastingly, when either species was subjected to natural habitat sound, observed median TTM decreased by approximately 21–31% compared to silent control treatments, 38–47% compared to tidal turbine sound treatments, and 46–60% compared to wind turbine sound treatments. A lack of difference in median TTM in A. crassa between two different source levels of tidal turbine sound suggests the frequency composition of turbine sound is more relevant in explaining such responses rather than sound intensity. These results show that estuarine mudflat sound mediates natural metamorphosis behaviour in two common species of estuarine crabs, and that exposure to continuous turbine sound interferes with this natural process. These results raise concerns about the potential ecological impacts of sound generated by renewable energy generation systems placed in the nearshore environment. PMID:23240063

  12. L-type calcium channels refine the neural population code of sound level.

    PubMed

    Grimsley, Calum Alex; Green, David Brian; Sivaramakrishnan, Shobhana

    2016-12-01

The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1-1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. Copyright © 2016 the American Physiological Society.

  13. Turbine sound may influence the metamorphosis behaviour of estuarine crab megalopae.

    PubMed

    Pine, Matthew K; Jeffs, Andrew G; Radford, Craig A

    2012-01-01

It is now widely accepted that a shift towards renewable energy production is needed in order to avoid further anthropogenically induced climate change. The ocean provides a largely untapped source of renewable energy. As a result, harvesting electrical power from the wind and tides has sparked immense government and commercial interest but with relatively little detailed understanding of the potential environmental impacts. This study investigated how the sound emitted from an underwater tidal turbine and an offshore wind turbine would influence the settlement and metamorphosis of the pelagic larvae of estuarine brachyuran crabs which are ubiquitous in most coastal habitats. In a laboratory experiment the median time to metamorphosis (TTM) for the megalopae of the crabs Austrohelice crassa and Hemigrapsus crenulatus was significantly increased by at least 18 h when exposed to either tidal turbine or sea-based wind turbine sound, compared to silent control treatments. Contrastingly, when either species was subjected to natural habitat sound, observed median TTM decreased by approximately 21-31% compared to silent control treatments, 38-47% compared to tidal turbine sound treatments, and 46-60% compared to wind turbine sound treatments. A lack of difference in median TTM in A. crassa between two different source levels of tidal turbine sound suggests the frequency composition of turbine sound is more relevant in explaining such responses rather than sound intensity. These results show that estuarine mudflat sound mediates natural metamorphosis behaviour in two common species of estuarine crabs, and that exposure to continuous turbine sound interferes with this natural process. These results raise concerns about the potential ecological impacts of sound generated by renewable energy generation systems placed in the nearshore environment.

  14. A data-assimilative ocean forecasting system for the Prince William sound and an evaluation of its performance during sound Predictions 2009

    NASA Astrophysics Data System (ADS)

    Farrara, John D.; Chao, Yi; Li, Zhijin; Wang, Xiaochun; Jin, Xin; Zhang, Hongchun; Li, Peggy; Vu, Quoc; Olsson, Peter Q.; Schoch, G. Carl; Halverson, Mark; Moline, Mark A.; Ohlmann, Carter; Johnson, Mark; McWilliams, James C.; Colas, Francois A.

    2013-07-01

    The development and implementation of a three-dimensional ocean modeling system for the Prince William Sound (PWS) is described. The system consists of a regional ocean model component (ROMS) forced by output from a regional atmospheric model component (the Weather Research and Forecasting Model, WRF). The ROMS ocean model component has a horizontal resolution of 1 km within PWS and utilizes a recently-developed multi-scale 3DVAR data assimilation methodology along with freshwater runoff from land obtained via real-time execution of a digital elevation model. During the Sound Predictions Field Experiment (July 19-August 3, 2009) the system was run in real time to support operations and incorporated all available real-time streams of data. Nowcasts were produced every 6 h and a 48-h forecast was performed once a day. In addition, a sixteen-member ensemble of forecasts was executed on most days. All results were published at a web portal (http://ourocean.jpl.nasa.gov/PWS) in real time to support decision making. The performance of the system during Sound Predictions 2009 is evaluated. The ROMS results are first compared with the assimilated data as a consistency check. RMS differences of about 0.7°C were found between the ROMS temperatures and the observed vertical profiles of temperature that are assimilated. The ROMS salinities show greater discrepancies, tending to be too salty near the surface. The overall circulation patterns observed throughout the Sound are qualitatively reproduced, including the following evolution in time. During the first week of the experiment, the weather was quite stormy with strong southeasterly winds. This resulted in strong north to northwestward surface flow in much of the central PWS. 
    Both the observed drifter trajectories and the ROMS nowcasts showed strong surface inflow into the Sound through the Hinchinbrook Entrance and strong generally northward to northwestward flow in the central Sound that was exiting through the Knight Island Passage and Montague Strait entrance. During the latter part of the second week when surface winds were light and southwesterly, the mean surface flow at the Hinchinbrook Entrance reversed to weak outflow and a cyclonic eddy formed in the central Sound. Overall, RMS differences between ROMS surface currents and observed HF radar surface currents in the central Sound were generally between 5 and 10 cm/s, about 20-40% of the time mean current speeds. The ROMS reanalysis is then validated against independent observations. A comparison of the ROMS currents with observed vertical current profiles from moored ADCPs in the Hinchinbrook Entrance and Montague Strait shows good qualitative agreement and confirms the evolution of the near surface inflow/outflow at these locations described above. A comparison of the ROMS surface currents with drifter trajectories provided additional confirmation that the evolution of the surface flow described above was realistic. Forecasts of drifter locations had RMS errors of less than 10 km for up to 36 h. One and two-day forecasts of surface temperature, salinity and current fields were more skillful than persistence forecasts. In addition, ensemble mean forecasts were found to be slightly more skillful than single forecasts. Two case studies demonstrated the system's qualitative skill in predicting subsurface changes within the mixed layer measured by ships and autonomous underwater vehicles. In summary, the system is capable of producing a realistic evolution of the near-surface circulation within PWS including forecasts of up to two days of this evolution. 
Use of the products provided by the system during the experiment as part of the asset deployment decision making process demonstrated the value of accurate regional ocean forecasts in support of field experiments.
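
    The RMS skill metrics quoted above (e.g., surface-current differences of 5-10 cm/s against HF radar, about 20-40% of the time-mean speed) can be computed from co-located model and observed fields. The sketch below is illustrative only, with hypothetical values, not the authors' evaluation code:

```python
import numpy as np

def rms_difference(model, observed):
    """Root-mean-square difference between model and observed fields."""
    model = np.asarray(model, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.sqrt(np.mean((model - observed) ** 2)))

# Hypothetical surface-current speeds (cm/s) at a few HF-radar grid points.
roms = [22.0, 30.0, 18.0, 25.0]
hf_radar = [28.0, 24.0, 20.0, 31.0]

err = rms_difference(roms, hf_radar)
relative = err / np.mean(hf_radar)  # fraction of the time-mean current speed
```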

  15. Modular and Adaptive Control of Sound Processing

    NASA Astrophysics Data System (ADS)

    van Nort, Douglas

    This dissertation presents research into the creation of systems for the control of sound synthesis and processing. The focus differs from much of the work related to digital musical instrument design, which has rightly concentrated on the physicality of the instrument and interface: sensor design, choice of controller, feedback to performer and so on. Oftentimes a particular choice of sound processing is made, and the resultant parameters from the physical interface are conditioned and mapped to the available sound parameters in an exploratory fashion. The main goal of the work presented here is to demonstrate the importance of the space that lies between physical interface design and the choice of sound manipulation algorithm, and to present a new framework for instrument design that strongly considers this essential part of the design process. In particular, this research takes the viewpoint that instrument designs should be considered in a musical control context, and that both control and sound dynamics must be considered in tandem. In order to achieve this holistic approach, the work presented in this dissertation assumes complementary points of view. Instrument design is first seen as a function of musical context, focusing on electroacoustic music and leading to a view on gesture that relates perceived musical intent to the dynamics of an instrumental system. The important design concept of mapping is then discussed from a theoretical and conceptual point of view, relating perceptual, systems and mathematically-oriented ways of examining the subject. This theoretical framework gives rise to a mapping design space, functional analysis of pertinent existing literature, implementations of mapping tools, instrumental control designs and several perceptual studies that explore the influence of mapping structure. 
Each of these reflects a high-level approach in which control structures are imposed on top of a high-dimensional space of control and sound synthesis parameters. In this view, desired gestural dynamics and sonic response are achieved through modular construction of mapping layers that are themselves subject to parametric control. Complementing this view of the design process, the work concludes with an approach in which the creation of gestural control/sound dynamics is considered in the low-level of the underlying sound model. The result is an adaptive system that is specialized to noise-based transformations that are particularly relevant in an electroacoustic music context. Taken together, these different approaches to design and evaluation result in a unified framework for creation of an instrumental system. The key point is that this framework addresses the influence that mapping structure and control dynamics have on the perceived feel of the instrument. Each of the results illustrates this using either top-down or bottom-up approaches that consider musical control context, thereby pointing to the greater potential for refined sonic articulation that can be had by combining them in the design process.

  16. Effects of hydrokinetic turbine sound on the behavior of four species of fish within an experimental mesocosm

    DOE PAGES

    Schramm, Michael P.; Bevelhimer, Mark; Scherelis, Constantin

    2017-02-04

    The development of hydrokinetic energy technologies (e.g., tidal turbines) has raised concern over the potential impacts of underwater sound produced by hydrokinetic turbines on fish species likely to encounter these turbines. To assess the potential for behavioral impacts, we exposed four species of fish to varying intensities of recorded hydrokinetic turbine sound in a semi-natural environment. Although we tested freshwater species (redhorse suckers [Moxostoma spp], freshwater drum [Aplodinotus grunniens], largemouth bass [Micropterus salmoides], and rainbow trout [Oncorhynchus mykiss]), these species are also representative of the hearing physiology and sensitivity of estuarine species that would be affected at tidal energy sites. Here, we evaluated changes in fish position relative to different intensities of turbine sound as well as trends in location over time with linear mixed-effects and generalized additive mixed models. We also evaluated changes in the proportion of near-source detections relative to sound intensity and exposure time with generalized linear mixed models and generalized additive models. Models indicated that redhorse suckers may respond to sustained turbine sound by increasing distance from the sound source. Freshwater drum models suggested a mixed response to turbine sound, and largemouth bass and rainbow trout models did not indicate any likely responses to turbine sound. Lastly, these findings highlight the importance for future research of utilizing accurate localization systems, different species, and validated sound transmission distances, and of considering different types of behavioral responses to different turbine designs and to the cumulative sound of arrays of multiple turbines.

  18. A real-time biomimetic acoustic localizing system using time-shared architecture

    NASA Astrophysics Data System (ADS)

    Nourzad Karl, Marianne; Karl, Christian; Hubbard, Allyn

    2008-04-01

    In this paper a real-time sound source localizing system is proposed, which is based on previously developed mammalian auditory models. Traditionally, following the models, which use interaural time delay (ITD) estimates, the amount of parallel computation needed by a system to achieve real-time sound source localization is a limiting factor and a design challenge for hardware implementations. Therefore, a new approach using a time-shared architecture is introduced. The proposed architecture is a purely sample-driven digital system, and it closely follows the continuous-time approach described in the models. Rather than having dedicated hardware on a per-frequency-channel basis, a specialized core channel, shared across all frequency bands, is used. Because its optimized execution time is much less than the system's sample period, the proposed time-shared solution allows the same number of virtual channels to be processed as the dedicated channels in the traditional approach. Hence, the time-shared approach achieves a highly economical and flexible implementation using minimal silicon area. These aspects are particularly important in efficient hardware implementation of a real-time biomimetic sound source localization system.
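
    The ITD estimate that such hardware computes per frequency channel can be sketched in software as a single-band cross-correlation. This is a simplified illustration of the underlying principle, not the paper's channelized, time-shared architecture:

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate interaural time delay (seconds) by cross-correlation.

    A positive result means the right channel lags the left.
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    corr = np.correlate(right, left, mode="full")
    lag = int(np.argmax(corr)) - (len(left) - 1)  # delay in samples
    return lag / fs

# Synthetic example: a click reaching the right ear 5 samples later.
fs = 48000
sig = np.zeros(256)
sig[100] = 1.0
delayed = np.roll(sig, 5)
itd = estimate_itd(sig, delayed, fs)  # about 5/48000 s
```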

  19. A causal test of the motor theory of speech perception: A case of impaired speech production and spared speech perception

    PubMed Central

    Stasenko, Alena; Bonn, Cory; Teghipco, Alex; Garcea, Frank E.; Sweet, Catherine; Dombovy, Mary; McDonough, Joyce; Mahon, Bradford Z.

    2015-01-01

    In the last decade, the debate about the causal role of the motor system in speech perception has been reignited by demonstrations that motor processes are engaged during the processing of speech sounds. However, the exact role of the motor system in auditory speech processing remains elusive. Here we evaluate which aspects of auditory speech processing are affected, and which are not, in a stroke patient with dysfunction of the speech motor system. The patient’s spontaneous speech was marked by frequent phonological/articulatory errors, and those errors were caused, at least in part, by motor-level impairments of speech production. We found that the patient showed a normal phonemic categorical boundary when discriminating two nonwords that differ by a minimal pair (e.g., ADA-AGA). However, using the same stimuli, the patient was unable to identify or label the nonword stimuli (using a button-press response). A control task showed that he could identify speech sounds by speaker gender, ruling out a general labeling impairment. These data suggest that the identification (i.e., labeling) of nonword speech sounds may involve the speech motor system, but that the perception of speech sounds (i.e., discrimination) does not require the motor system. This means that motor processes are not causally involved in perception of the speech signal, and suggests that the motor system may be used when other cues (e.g., meaning, context) are not available. PMID:25951749

  20. Human knee joint sound during the Lachman test: Comparison between healthy and anterior cruciate ligament-deficient knees.

    PubMed

    Tanaka, Kazunori; Ogawa, Munehiro; Inagaki, Yusuke; Tanaka, Yasuhito; Nishikawa, Hitoshi; Hattori, Koji

    2017-05-01

    The Lachman test is clinically considered to be a reliable physical examination for anterior cruciate ligament (ACL) deficiency. However, the test involves subjective judgement of differences in tibial translation and endpoint quality. An auscultation system has been developed to allow assessment of the Lachman test. The knee joint sound during the Lachman test was analyzed using fast Fourier transformation. The purpose of the present study was to quantitatively evaluate knee joint sounds in healthy and ACL-deficient human knees. Sixty healthy volunteers and 24 patients with ACL injury were examined. The Lachman test with joint auscultation was evaluated using a microphone. Knee joint sound during the Lachman test (Lachman sound) was analyzed by fast Fourier transformation. As quantitative indices of the Lachman sound, the peak sound (Lachman peak sound) as the maximum relative amplitude (acoustic pressure) and its frequency were used. In healthy volunteers, the mean Lachman peak sound of intact knees was 100.6 Hz in frequency and -45 dB in acoustic pressure. Moreover, a sex difference was found in the frequency of the Lachman peak sound. In patients with ACL injury, the frequency of the Lachman peak sound of the ACL-deficient knees was widely dispersed. In the ACL-deficient knees, the mean Lachman peak sound was 306.8 Hz in frequency and -63.1 dB in acoustic pressure. If the reference range was set at the frequency of the healthy volunteer Lachman peak sound, the sensitivity, specificity, positive predictive value, and negative predictive value were 83.3%, 95.6%, 95.2%, and 85.2%, respectively. Knee joint auscultation during the Lachman test was capable of judging ACL deficiency on the basis of objective data. In particular, the frequency of the Lachman peak sound was able to assess ACL condition. Copyright © 2016 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.
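
    The peak-frequency analysis described above (fast Fourier transformation of the joint sound, then reading off the frequency and relative amplitude of the spectral peak) can be sketched as follows. This is an illustrative reconstruction with synthetic data, not the authors' auscultation software:

```python
import numpy as np

def lachman_peak(signal, fs):
    """Return (peak frequency in Hz, peak sine-amplitude in dB) of a record."""
    signal = np.asarray(signal, dtype=float)
    window = np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(signal * window))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    k = int(np.argmax(spectrum[1:])) + 1           # skip the DC bin
    amplitude = 2.0 * spectrum[k] / window.sum()   # amplitude of the peak sine
    return freqs[k], 20 * np.log10(amplitude)

# Synthetic "joint sound": a 100 Hz tone, the order of the healthy-knee peak.
fs = 2000
t = np.arange(fs) / fs
burst = np.sin(2 * np.pi * 100 * t)
f_peak, level_db = lachman_peak(burst, fs)  # about 100 Hz, about 0 dB
```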

  1. Human emotions track changes in the acoustic environment.

    PubMed

    Ma, Weiyi; Thompson, William Forde

    2015-11-24

    Emotional responses to biologically significant events are essential for human survival. Do human emotions lawfully track changes in the acoustic environment? Here we report that changes in acoustic attributes that are well known to interact with human emotions in speech and music also trigger systematic emotional responses when they occur in environmental sounds, including sounds of human actions, animal calls, machinery, or natural phenomena, such as wind and rain. Three changes in acoustic attributes known to signal emotional states in speech and music were imposed upon 24 environmental sounds. Evaluations of stimuli indicated that human emotions track such changes in environmental sounds just as they do for speech and music. Such changes not only influenced evaluations of the sounds themselves, they also affected the way accompanying facial expressions were interpreted emotionally. The findings illustrate that human emotions are highly attuned to changes in the acoustic environment, and reignite a discussion of Charles Darwin's hypothesis that speech and music originated from a common emotional signal system based on the imitation and modification of environmental sounds.

  2. Effects of sound level fluctuations on annoyance caused by aircraft-flyover noise

    NASA Technical Reports Server (NTRS)

    Mccurdy, D. A.

    1979-01-01

    A laboratory experiment was conducted to determine the effects of variations in the rate and magnitude of sound level fluctuations on the annoyance caused by aircraft-flyover noise. The effects of tonal content, noise duration, and sound pressure level on annoyance were also studied. An aircraft-noise synthesis system was used to synthesize 32 aircraft-flyover noise stimuli representing the factorial combinations of 2 tone conditions, 2 noise durations, 2 sound pressure levels, 2 level fluctuation rates, and 2 level fluctuation magnitudes. Thirty-two test subjects made annoyance judgements on a total of 64 stimuli in a subjective listening test facility simulating an outdoor acoustic environment. Variations in the rate and magnitude of level fluctuations were found to have little, if any, effect on annoyance. Tonal content, noise duration, sound pressure level, and the interaction of tonal content with sound pressure level were found to affect the judged annoyance significantly. The addition of tone corrections and/or duration corrections significantly improved the annoyance prediction ability of noise rating scales.

  3. Techniques and instrumentation for the measurement of transient sound energy flux

    NASA Astrophysics Data System (ADS)

    Watkinson, P. S.; Fahy, F. J.

    1983-12-01

    The evaluation of sound intensity distributions, and sound powers, of essentially continuous sources such as automotive engines, electric motors, production line machinery, furnaces, earth moving machinery and various types of process plants was studied. Although such systems are important sources of community disturbance and, to a lesser extent, of industrial health hazard, the most serious sources of hearing hazard in industry are machines operating on an impact principle, such as drop forges, hammers and punches. Controlled experiments to identify major noise source regions and mechanisms are difficult because it is normally impossible to install them in quiet, anechoic environments. The potential for sound intensity measurement to provide a means of overcoming these difficulties has given promising results, indicating the possibility of separation of directly radiated and reverberant sound fields. However, because of the complexity of transient sound fields, a fundamental investigation is necessary to establish the practicability of intensity field decomposition, which is basic to source characterization techniques.

  4. Principal cells of the brainstem's interaural sound level detector are temporal differentiators rather than integrators.

    PubMed

    Franken, Tom P; Joris, Philip X; Smith, Philip H

    2018-06-14

    The brainstem's lateral superior olive (LSO) is thought to be crucial for localizing high-frequency sounds by coding interaural sound level differences (ILD). Its neurons weigh contralateral inhibition against ipsilateral excitation, making their firing rate a function of the azimuthal position of a sound source. Since the very first in vivo recordings, LSO principal neurons have been reported to give sustained and temporally integrating 'chopper' responses to sustained sounds. Neurons with transient responses were observed but largely ignored and even considered a sign of pathology. Using the Mongolian gerbil as a model system, we have obtained the first in vivo patch clamp recordings from labeled LSO neurons and find that principal LSO neurons, the most numerous projection neurons of this nucleus, only respond at sound onset and show fast membrane features suggesting an importance for timing. These results provide a new framework to interpret previously puzzling features of this circuit. © 2018, Franken et al.

  5. The influence of formant levels on the perception of synthetic vowel sounds

    NASA Astrophysics Data System (ADS)

    Kubzdela, Henryk; Owsianny, Mariuz

    A computer model of a generator of periodic complex sounds simulating vowels was developed. The system makes possible independent regulation of the level of each of the formants and instant generation of the sound. A trapezoid approximates the curve of the spectrum within the range of the formant. Using this model, each person in a group of six listeners experimentally selected synthesis parameters for six sounds that seemed to him optimal approximations of Polish vowels. From these, another six sounds were selected that were identified by a majority of the six persons and several additional listeners as best qualified to serve as prototypes of Polish vowels. These prototypes were then used to randomly create sounds with various combinations of the levels of the second and third formants, and these were presented to seven listeners for identification. The results of the identifications are presented in tabular form in three variants and are described from the point of view of the requirements of automatic recognition of vowels in continuous speech.

  6. Dynamics of unstable sound waves in a non-equilibrium medium at the nonlinear stage

    NASA Astrophysics Data System (ADS)

    Khrapov, Sergey; Khoperskov, Alexander

    2018-03-01

    A new dispersion equation is obtained for a non-equilibrium medium described by an exponential relaxation model of a vibrationally excited gas. We have investigated how the pump source and the heat removal depend on the thermodynamic parameters of the medium. The boundaries of the stability regions of sound waves in a non-equilibrium gas have been determined. The nonlinear stage of the development of sound-wave instability in a vibrationally excited gas has been investigated with CSPH-TVD and MUSCL numerical schemes using the parallel technologies OpenMP-CUDA. The numerical simulation results agree well with linear perturbation dynamics at the initial stage of the instability-driven growth of the sound waves. At the nonlinear stage, the sound-wave amplitude reaches a maximum value, leading to the formation of a system of shock waves.

  7. Development of a Korotkov sound processor for automatic identification of auscultatory events. I - Specification of preprocessing bandpass filters

    NASA Technical Reports Server (NTRS)

    Golden, D. P., Jr.; Wolthuis, R. A.; Hoffler, G. W.; Gowen, R. J.

    1974-01-01

    Frequency bands that best discriminate the Korotkov sounds at systole and at diastole from the sounds immediately preceding these events are defined. Korotkov sound data were recorded from five normotensive subjects during orthostatic stress (lower body negative pressure) and bicycle ergometry. A spectral analysis of the seven Korotkov sounds centered about the systolic and diastolic auscultatory events revealed that a maximum increase in amplitude at the systolic transition occurred in the 18-26-Hz band, while a maximum decrease in amplitude at the diastolic transition occurred in the 40-60-Hz band. These findings were remarkably consistent across subjects and test conditions. These passbands are included in the design specifications for an automatic blood pressure measuring system used in conjunction with medical experiments during NASA's Skylab program.
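
    The 18-26-Hz preprocessing band described above can be illustrated with a simple FFT-mask band-pass. This is a sketch on synthetic data (a practical Korotkov-sound processor would use a proper analog or IIR/FIR filter, not an ideal spectral mask):

```python
import numpy as np

def bandpass_fft(signal, fs, lo, hi):
    """Ideal band-pass: zero out spectral content outside [lo, hi] Hz."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# Synthetic record: a 22 Hz component (inside the systolic band) plus a
# 60 Hz component (outside it, in the diastolic band).
fs = 500
t = np.arange(2 * fs) / fs
record = np.sin(2 * np.pi * 22 * t) + np.sin(2 * np.pi * 60 * t)
systolic = bandpass_fft(record, fs, 18.0, 26.0)  # keeps only the 22 Hz part
```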

  8. The Quest for Quality

    ERIC Educational Resources Information Center

    Chappuis, Stephen; Chappuis, Jan; Stiggins, Rick

    2009-01-01

    Instructional decisions based on quality assessments and a balanced assessment system most effectively promote student learning. To inform sound decisions, assessments need to satisfy five key standards of quality: (1) clear purpose; (2) clear learning targets; (3) sound assessment design; (4) effective communication of results; and (5) student…

  9. Assessing sound exposure from shipping in coastal waters using a single hydrophone and Automatic Identification System (AIS) data.

    PubMed

    Merchant, Nathan D; Witt, Matthew J; Blondel, Philippe; Godley, Brendan J; Smith, George H

    2012-07-01

    Underwater noise from shipping is a growing presence throughout the world's oceans, and may be subjecting marine fauna to chronic noise exposure with potentially severe long-term consequences. The coincidence of dense shipping activity and sensitive marine ecosystems in coastal environments is of particular concern, and noise assessment methodologies which describe the high temporal variability of sound exposure in these areas are needed. We present a method of characterising sound exposure from shipping using continuous passive acoustic monitoring combined with Automatic Identification System (AIS) shipping data. The method is applied to data recorded in Falmouth Bay, UK. Absolute and relative levels of intermittent ship noise contributions to the 24-h sound exposure level are determined using an adaptive threshold, and the spatial distribution of potential ship sources is then analysed using AIS data. This technique can be used to prioritize shipping noise mitigation strategies in coastal marine environments. Copyright © 2012 Elsevier Ltd. All rights reserved.
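
    A 24-h sound exposure level of the kind assessed above integrates squared acoustic pressure over the period. The sketch below shows the SEL formula and a placeholder adaptive threshold (the paper's actual threshold algorithm is not described here, so the median-plus-margin rule is an assumption for illustration):

```python
import numpy as np

def sound_exposure_level(pressure, fs, p_ref=1e-6):
    """SEL in dB re 1 uPa^2 s: 10*log10 of time-integrated squared pressure."""
    pressure = np.asarray(pressure, dtype=float)
    exposure = np.sum(pressure ** 2) / fs  # Pa^2 * s
    return 10 * np.log10(exposure / p_ref ** 2)

def ship_passage_mask(levels_db, margin_db=10.0):
    """Placeholder adaptive threshold: flag windows above median + margin."""
    levels_db = np.asarray(levels_db, dtype=float)
    return levels_db > np.median(levels_db) + margin_db

# One second of constant 1 uPa pressure has an SEL of 0 dB re 1 uPa^2 s.
calm = np.full(48000, 1e-6)
sel = sound_exposure_level(calm, fs=48000)
```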

  10. Modeling complex tone perception: grouping harmonics with combination-sensitive neurons.

    PubMed

    Medvedev, Andrei V; Chiao, Faye; Kanwal, Jagmeet S

    2002-06-01

    Perception of complex communication sounds is a major function of the auditory system. To create a coherent percept of these sounds the auditory system may instantaneously group or bind multiple harmonics within complex sounds. This perception strategy simplifies further processing of complex sounds and facilitates their meaningful integration with other sensory inputs. Based on experimental data and a realistic model, we propose that associative learning of combinations of harmonic frequencies and nonlinear facilitation of responses to those combinations, also referred to as "combination-sensitivity," are important for spectral grouping. For our model, we simulated combination sensitivity using Hebbian and associative types of synaptic plasticity in auditory neurons. We also provided a parallel tonotopic input that converges and diverges within the network. Neurons in higher-order layers of the network exhibited an emergent property of multifrequency tuning that is consistent with experimental findings. Furthermore, this network had the capacity to "recognize" the pitch or fundamental frequency of a harmonic tone complex even when the fundamental frequency itself was missing.
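
    The "missing fundamental" behavior the network exhibits can be illustrated at the signal level: the fundamental of a harmonic complex is the greatest common divisor of its component frequencies, even when that frequency is absent. This arithmetic analogy is not the authors' Hebbian network model, only a minimal sketch of the phenomenon:

```python
from functools import reduce
from math import gcd

def missing_fundamental(harmonics_hz):
    """Infer the fundamental of a harmonic complex, even if it is absent."""
    return reduce(gcd, [int(round(f)) for f in harmonics_hz])

# 600, 800 and 1000 Hz are the 3rd, 4th and 5th harmonics of 200 Hz:
f0 = missing_fundamental([600, 800, 1000])  # 200, though 200 Hz is absent
```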

  11. Inexpensive Data Acquisition with a Sound Card

    NASA Astrophysics Data System (ADS)

    Hassan, Umer; Pervaiz, Saad; Anwar, Muhammad Sabieh

    2011-12-01

    Signal generators, oscilloscopes, and data acquisition (DAQ) systems are standard components of the modern experimental physics laboratory. The sound card, a built-in component in the ubiquitous personal computer, can be utilized for all three of these tasks1,2 and offers an attractive option for labs in developing countries such as ours—Pakistan—where affordability is always of prime concern. In this paper, we describe in a recipe fashion how the sound card is used for DAQ and signal generation.
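
    The signal-generation half of a sound-card setup like this amounts to synthesizing a sample buffer and handing it to the audio output. A minimal sketch (the playback call in the comment assumes a third-party library such as sounddevice is available):

```python
import numpy as np

def tone(frequency, duration, fs=44100, amplitude=0.5):
    """Synthesize a sine buffer suitable for sound-card output."""
    t = np.arange(int(duration * fs)) / fs
    return (amplitude * np.sin(2 * np.pi * frequency * t)).astype(np.float32)

buffer = tone(440.0, 1.0)  # one second of A440 at the card's sample rate
# Playback (or full-duplex acquisition) could then use a library such as
# sounddevice, e.g.:  sounddevice.play(buffer, 44100)
```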

  12. Installation Restoration Program. Phase I. Records Search for the 5073rd Air Base Group, Shemya AFB, Alaska.

    DTIC Science & Technology

    1984-09-21

    [Garbled OCR excerpt; recoverable fragments:] ... O’Flaherty, R.W. Greiling, and B.J. Morson. ... DAVID W. ABBOTT. EDUCATION: University of Puget Sound, B.S. ... discovery of the largest aquifer system heretofore discovered in Kitsap County and perhaps in the Puget Sound lowlands. ... City of Ellensburg, Washington ... is a lead author of a report for EPA Region X in which she identified major water uses within designated subregions of Puget Sound.

  13. A Low Cost GPS System for Real-Time Tracking of Sounding Rockets

    NASA Technical Reports Server (NTRS)

    Markgraf, M.; Montenbruck, O.; Hassenpflug, F.; Turner, P.; Bull, B.; Bauer, Frank (Technical Monitor)

    2001-01-01

    In an effort to minimize the need for costly, complex tracking radars, the German Space Operations Center has set up a research project for GPS-based tracking of sounding rockets. As part of this project, a GPS receiver based on commercial technology for terrestrial applications has been modified to allow its use under the highly dynamic conditions of a sounding rocket flight. In addition, new antenna concepts are studied as an alternative to proven but costly wrap-around antennas.

  14. Dispersion of sound in a combustion duct by fuel droplets and soot particles

    NASA Technical Reports Server (NTRS)

    Miles, J. H.; Raftopoulos, D. D.

    1979-01-01

    Dispersion and attenuation of acoustic plane wave disturbances propagating in a ducted combustion system are studied. The dispersion and attenuation are caused by fuel droplet and soot emissions from a jet engine combustor. The attenuation and dispersion are due to heat transfer and mass transfer and viscous drag forces between the emissions and the ambient gas. Theoretical calculations show sound propagation at speeds below the isentropic speed of sound at low frequencies. Experimental results are in good agreement with the theory.

  15. Thin structured rigid body for acoustic absorption

    NASA Astrophysics Data System (ADS)

    Starkey, T. A.; Smith, J. D.; Hibbins, A. P.; Sambles, J. R.; Rance, H. J.

    2017-01-01

    We present a thin acoustic metamaterial absorber, comprised of only rigid metal and air, that gives rise to near unity absorption of airborne sound on resonance. This simple, easily fabricated, robust structure comprising a perforated metal plate separated from a rigid wall by a deeply subwavelength channel of air is an ideal candidate for a sound absorbing panel. The strong absorption in the system is attributed to the thermo-viscous losses arising from a sound wave guided between the plate and the wall, defining the subwavelength channel.
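
    A rough first estimate of where such a perforated-plate-over-cavity structure resonates comes from the classical perforated-panel (Helmholtz) formula, f = (c/2π)·sqrt(σ/(l·d)), with porosity σ, effective hole length l and cavity depth d. This textbook estimate is offered only as context; it is not the authors' thermo-viscous model:

```python
import math

def panel_resonance(porosity, hole_length, cavity_depth, c=343.0):
    """Classical perforated-panel (Helmholtz) resonance estimate in Hz."""
    return c / (2 * math.pi) * math.sqrt(porosity / (hole_length * cavity_depth))

# Hypothetical dimensions: 5% open area, 2 mm effective hole length.
f1 = panel_resonance(0.05, 0.002, 0.01)
f2 = panel_resonance(0.05, 0.002, 0.02)  # deeper cavity -> lower resonance
```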

  16. Analysis of High Temporal and Spatial Observations of Hurricane Joaquin During TCI-15

    NASA Technical Reports Server (NTRS)

    Creasey, Robert; Elsberry, Russell L.; Velden, Chris; Cecil, Daniel J.; Bell, Michael; Hendricks, Eric A.

    2016-01-01

    Objectives: Provide an example of why analysis of high-density soundings across Hurricane Joaquin also requires highly accurate center positions; describe a technique for calculating 3-D zero-wind center positions from the highly accurate GPS positions of sequences of High-Density Sounding System (HDSS) soundings as they fall from 10 km to the ocean surface; illustrate the vertical tilt of the vortex above 4-5 km during two center passes through Hurricane Joaquin on 4 October 2015.

  17. Evaluation of Routine Atmospheric Sounding Measurements using Unmanned Systems (ERASMUS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bland, Geoffrey

    2016-06-30

    The use of small unmanned aircraft systems (sUAS) with miniature sensor systems for atmospheric research is an important capability to develop. The Evaluation of Routine Atmospheric Sounding Measurements using Unmanned Systems (ERASMUS) project, led by Dr. Gijs de Boer of the Cooperative Institute for Research in Environmental Sciences (CIRES, a partnership of NOAA and CU-Boulder), is a significant milestone in realizing this new potential. This project has clearly demonstrated that the concept of sUAS utilization is valid, and that miniature instrumentation can be used to further our understanding of the atmospheric boundary layer in the Arctic.

  18. Comparison of L-system applications towards plant modelling, music rendering and score generation using visual language programming

    NASA Astrophysics Data System (ADS)

    Lim, Chen Kim; Tan, Kian Lam; Yusran, Hazwanni; Suppramaniam, Vicknesh

    2017-10-01

    Visual language, or visual representation, has been used in the past few years to express knowledge graphically. One important graphical element is the fractal, and L-Systems is a mathematics-based grammatical model for modelling cell development and plant topology. From the plant model, L-Systems can be interpreted as musical sound and score. In this paper, LSound, a Visual Language Programming (VLP) framework, has been developed to model plants as musical sound and to generate music scores, and vice versa. The objectives of this research are threefold: (i) to expand the grammar dictionary of L-Systems music based on visual programming, (ii) to design and produce a user-friendly, icon-based visual language framework for L-Systems musical score generation that helps beginning learners in the musical field, and (iii) to generate music scores from plant models, and vice versa, using the L-Systems method. This research follows a four-phase methodology in which the plant is first modelled, the music is then interpreted, the musical sound is output through MIDI, and finally the score is generated. LSound is technically compared to other existing applications in terms of its capability for modelling the plant, rendering the music, and generating the sound. It has been found that LSound is a flexible framework in which the plant can be easily altered through arrow-based programming and the music score can be altered through music symbols and notes. This work encourages non-experts to understand L-Systems and music hand-in-hand.
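    The grammar-expansion step at the heart of any L-System can be illustrated with a minimal sketch. The rewrite rules, the symbol-to-note mapping, and the function names below are illustrative assumptions for a classic fractal-plant grammar, not details taken from the LSound paper itself.

    ```python
    # Minimal L-system rewriter: repeatedly replace every symbol using
    # its production rule (symbols without a rule are copied unchanged).

    def expand(axiom: str, rules: dict, iterations: int) -> str:
        """Apply the production rules to every symbol, `iterations` times."""
        s = axiom
        for _ in range(iterations):
            s = "".join(rules.get(ch, ch) for ch in s)
        return s

    # Classic fractal-plant rules: F = draw forward, +/- = turn, [] = branch.
    rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}
    plant = expand("X", rules, 2)

    # A naive musical interpretation (an assumption, not LSound's mapping):
    # drawing symbols are mapped to pitches, so the same string drives
    # both the plant geometry and a note sequence.
    note_map = {"F": "C4", "+": "E4", "-": "G4"}
    notes = [note_map[ch] for ch in plant if ch in note_map]
    print(len(plant), len(notes))
    ```

    The same expanded string could then be rendered as turtle-graphics geometry or serialized to MIDI events, which is the dual plant/music interpretation the paper describes.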

  19. Characteristics of gunshot sound displays by North Atlantic right whales in the Bay of Fundy.

    PubMed

    Parks, Susan E; Hotchkin, Cara F; Cortopassi, Kathryn A; Clark, Christopher W

    2012-04-01

    North Atlantic right whales (Eubalaena glacialis) produce a loud, broadband signal referred to as the gunshot sound. These distinctive sounds may be suitable for passive acoustic monitoring and detection of right whales; however, little is known about the prevalence of these sounds in important right whale habitats, such as the Bay of Fundy. This study investigates the timing and distribution of gunshot sound production on the summer feeding grounds using an array of five marine acoustic recording units deployed in the Bay of Fundy, Canada, in mid-summer 2004 and 2005. Gunshot sounds were common, detected on 37 of 38 recording days. Stereotyped gunshot bouts averaged 1.5 h, with some bouts exceeding 7 h in duration and with up to seven individuals producing gunshots at any one time. Bouts were more commonly detected in the late afternoon and evening than during the morning hours. Locations of gunshots in bouts indicated that whales producing the sounds were either stationary or showed directional travel, suggesting gunshots have different communication functions depending on behavioral context. These results indicate that gunshots are a common right whale sound produced during the summer months and are an important component of the acoustic communication system of this endangered species.

  20. Neural Correlates of Sound Localization in Complex Acoustic Environments

    PubMed Central

    Zündorf, Ida C.; Lewald, Jörg; Karnath, Hans-Otto

    2013-01-01

    Listening to and understanding people in a “cocktail-party situation” is a remarkable feature of the human auditory system. Here we investigated, in healthy subjects, the neural correlates of the ability to localize a particular sound among others in an acoustically cluttered environment. In a sound localization task, five different natural sounds were presented from five virtual spatial locations during functional magnetic resonance imaging (fMRI). Activity related to auditory stream segregation was revealed in the posterior superior temporal gyrus bilaterally, the anterior insula, the supplementary motor area, and a frontoparietal network. Moreover, the results indicated critical roles of the left planum temporale in extracting the sound of interest among acoustic distracters and of the precuneus in orienting spatial attention to the target sound. We hypothesized that the left-sided lateralization of the planum temporale activation is related to the higher specialization of the left hemisphere for analysis of spectrotemporal sound features. Furthermore, the precuneus, a brain area known to be involved in the computation of spatial coordinates across diverse frames of reference for reaching to objects, also seems to be a crucial area for accurately determining the locations of auditory targets in an acoustically complex scene of multiple sound sources. The precuneus thus may not only be involved in visuo-motor processes, but may also subserve related functions in the auditory modality. PMID:23691185
