Sample records for second sound

  1. First and second sound in a strongly interacting Fermi gas

    NASA Astrophysics Data System (ADS)

    Taylor, E.; Hu, H.; Liu, X.-J.; Pitaevskii, L. P.; Griffin, A.; Stringari, S.

    2009-11-01

    Using a variational approach, we solve the equations of two-fluid hydrodynamics for a uniform and trapped Fermi gas at unitarity. In the uniform case, we find that the first and second sound modes are remarkably similar to those in superfluid helium, a consequence of strong interactions. In the presence of harmonic trapping, first and second sound become degenerate at certain temperatures. At these points, second sound hybridizes with first sound and is strongly coupled with density fluctuations, giving a promising way of observing second sound. We also discuss the possibility of exciting second sound by generating local heat perturbations.
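
The hybridization of first and second sound at the degeneracy points described above behaves like a generic avoided crossing of two coupled modes. A minimal sketch of that behavior; the 2×2 mode-coupling matrix and the coupling strength g are illustrative assumptions, not quantities taken from the paper:

```python
import numpy as np

def hybridized_frequencies(w1, w2, g):
    """Eigenfrequencies of two coupled modes with bare frequencies w1, w2
    and coupling g: the eigenvalues of [[w1, g], [g, w2]]."""
    avg = 0.5 * (w1 + w2)
    split = np.sqrt(0.25 * (w1 - w2) ** 2 + g ** 2)
    return avg - split, avg + split

# At degeneracy (w1 == w2) the modes repel by 2*g instead of crossing.
lo, hi = hybridized_frequencies(1.0, 1.0, 0.1)
print(hi - lo)  # gap of 2*g
```

Away from degeneracy (g → 0) the two branches reduce to the uncoupled first- and second-sound frequencies.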

  2. Second Sound in Systems of One-Dimensional Fermions

    DOE PAGES

    Matveev, K. A.; Andreev, A. V.

    2017-12-27

    We study sound in Galilean invariant systems of one-dimensional fermions. At low temperatures, we find a broad range of frequencies in which, in addition to the waves of density, there is a second sound corresponding to ballistic propagation of heat in the system. The damping of the second sound mode is weak, provided the frequency is large compared to a relaxation rate that is exponentially small at low temperatures. At lower frequencies, the second sound mode is damped, and the propagation of heat is diffusive.

  3. Second Sound in Systems of One-Dimensional Fermions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matveev, K. A.; Andreev, A. V.

    We study sound in Galilean invariant systems of one-dimensional fermions. At low temperatures, we find a broad range of frequencies in which, in addition to the waves of density, there is a second sound corresponding to ballistic propagation of heat in the system. The damping of the second sound mode is weak, provided the frequency is large compared to a relaxation rate that is exponentially small at low temperatures. At lower frequencies, the second sound mode is damped, and the propagation of heat is diffusive.

  4. Second Sound in Systems of One-Dimensional Fermions

    NASA Astrophysics Data System (ADS)

    Matveev, K. A.; Andreev, A. V.

    2017-12-01

    We study sound in Galilean invariant systems of one-dimensional fermions. At low temperatures, we find a broad range of frequencies in which, in addition to the waves of density, there is a second sound corresponding to the ballistic propagation of heat in the system. The damping of the second sound mode is weak, provided the frequency is large compared to a relaxation rate that is exponentially small at low temperatures. At lower frequencies, the second sound mode is damped, and the propagation of heat is diffusive.

  5. Second sound and the density response function in uniform superfluid atomic gases

    NASA Astrophysics Data System (ADS)

    Hu, H.; Taylor, E.; Liu, X.-J.; Stringari, S.; Griffin, A.

    2010-04-01

    Recently, there has been renewed interest in second sound in superfluid Bose and Fermi gases. By using two-fluid hydrodynamic theory, we review the density response χnn(q, ω) of these systems as a tool to identify second sound in experiments based on density probes. Our work generalizes the well-known studies of the dynamic structure factor S(q, ω) in superfluid 4He in the critical region. We show that, in the unitary limit of uniform superfluid Fermi gases, the relative weight of second versus first sound in the compressibility sum rule is given by the Landau-Placzek ratio ε_LP ≡ (c̄_p − c̄_v)/c̄_v for all temperatures below Tc. In contrast to superfluid 4He, ε_LP is much larger in strongly interacting Fermi gases, being already of order unity for T ~ 0.8Tc, thereby providing promising opportunities to excite second sound with density probes. The relative weights of first and second sound are quite different in S(q, ω) (measured in pulse propagation studies) as compared with Imχnn(q, ω) (measured in two-photon Bragg scattering). We show that first and second sound in S(q, ω) in a strongly interacting Bose-condensed gas are similar to those in a Fermi gas at unitarity. However, in a weakly interacting Bose gas, first and second sound are mainly uncoupled oscillations of the thermal cloud and condensate, respectively, and second sound has most of the spectral weight in S(q, ω). We also discuss the behaviour of the superfluid and normal fluid velocity fields involved in first and second sound.
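
The Landau-Placzek ratio quoted above is a simple function of the two specific heats. A sketch; the numerical inputs are illustrative only, not values from the paper:

```python
def landau_placzek_ratio(cp, cv):
    """epsilon_LP = (cp - cv) / cv: per the abstract, the relative weight of
    second versus first sound in the compressibility sum rule."""
    return (cp - cv) / cv

# Illustrative numbers: the paper reports epsilon_LP already of order unity
# at T ~ 0.8 Tc in a strongly interacting Fermi gas, e.g. cp = 2 cv gives 1.
print(landau_placzek_ratio(2.0, 1.0))
```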

  6. Hydrodynamic phonon drift and second sound in a (20,20) single-wall carbon nanotube

    NASA Astrophysics Data System (ADS)

    Lee, Sangyeop; Lindsay, Lucas

    2017-05-01

    Two hydrodynamic features of phonon transport, phonon drift and second sound, in a (20,20) single-wall carbon nanotube (SWCNT) are discussed using lattice dynamics calculations employing an optimized Tersoff potential for atomic interactions. We formally derive a formula for the contribution of the drift motion of phonons to the total heat flux at steady state. It is found that the drift motion of phonons carries more than 70% and 90% of the heat at 300 and 100 K, respectively, indicating that phonon flow can be reasonably approximated as hydrodynamic if the SWCNT is long enough to avoid ballistic phonon transport. The dispersion relation of second sound is derived from the Peierls-Boltzmann transport equation with Callaway's scattering model and quantifies the speed of second sound and its relaxation. The speed of second sound is around 4000 m/s in a (20,20) SWCNT, and second sound can propagate more than 10 µm in an isotopically pure (20,20) SWCNT for frequencies around 1 GHz at 100 K.
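
A quick consistency check on the numbers in the abstract: at the quoted speed and frequency, the second-sound wavelength is a few micrometres, comfortably below the >10 µm propagation length, so several wavelengths fit before the wave damps out:

```python
def wavelength(speed_m_s, freq_hz):
    # lambda = v / f
    return speed_m_s / freq_hz

# Numbers from the abstract: v2 ~ 4000 m/s at ~1 GHz in a (20,20) SWCNT.
lam = wavelength(4000.0, 1e9)
print(lam)  # 4e-06 m, i.e. 4 um
```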

  7. Acceptability of VTOL aircraft noise determined by absolute subjective testing

    NASA Technical Reports Server (NTRS)

    Sternfeld, H., Jr.; Hinterkeuser, E. G.; Hackman, R. B.; Davis, J.

    1972-01-01

    A program was conducted during which test subjects evaluated the simulated sounds of a helicopter, a tilt wing aircraft, and a 15-second, 90 PNdB (indoors) turbojet aircraft used as reference. Over 20,000 evaluations were made while the test subjects were engaged in work and leisure activities. The effects of level, exposure time, distance and aircraft design on subjective acceptability were evaluated. Some of the important conclusions are: (1) To be judged equal in annoyance to the reference jet sound, the helicopter and tilt wing sounds must be 4 to 5 PNdB lower for a 15-second duration. (2) To be judged significantly more acceptable than the reference jet sound, the helicopter sound must be 10 PNdB lower for a 15-second duration. (3) To be judged significantly more acceptable than the reference jet sound, the tilt wing sound must be 12 PNdB lower for a 15-second duration. (4) The relative effect of changing the duration of a sound upon its subjectively rated annoyance diminishes with increasing duration. It varies from 2 PNdB per doubling of duration for intervals of 15 to 30 seconds, to 0.75 PNdB per doubling of duration for intervals of 120 to 240 seconds.
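
Conclusion (4) defines a duration tradeoff expressed in PNdB per doubling of duration. A small helper applying that rule; the function name and the third example are illustrative, and only the two quoted rates come from the abstract:

```python
import math

def annoyance_change_pndb(t0_s, t1_s, rate_per_doubling):
    """Change in rated annoyance (PNdB) when duration goes from t0_s to t1_s,
    at a given tradeoff rate per doubling of duration (conclusion 4)."""
    return rate_per_doubling * math.log2(t1_s / t0_s)

print(annoyance_change_pndb(15, 30, 2.0))     # one doubling at 2 PNdB/doubling
print(annoyance_change_pndb(120, 240, 0.75))  # one doubling at 0.75 PNdB/doubling
```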

  8. Hydrodynamic phonon drift and second sound in a (20,20) single-wall carbon nanotube

    DOE PAGES

    Lee, Sangyeop; Lindsay, Lucas

    2017-05-18

    Here, two hydrodynamic features of phonon transport, phonon drift and second sound, in a (20,20) single-wall carbon nanotube (SWCNT) are discussed using lattice dynamics calculations employing an optimized Tersoff potential for atomic interactions. We formally derive a formula for the contribution of the drift motion of phonons to the total heat flux at steady state. It is found that the drift motion of phonons carries more than 70% and 90% of the heat at 300 K and 100 K, respectively, indicating that phonon flow can be reasonably approximated as hydrodynamic if the SWCNT is long enough to avoid ballistic phonon transport. The dispersion relation of second sound is derived from the Peierls-Boltzmann transport equation with Callaway's scattering model and quantifies the speed of second sound and its relaxation. The speed of second sound is around 4000 m/s in a (20,20) SWCNT, and second sound can propagate more than 10 µm in an isotopically pure (20,20) SWCNT for frequencies around 1 GHz at 100 K.

  9. Hydrodynamic phonon drift and second sound in a (20,20) single-wall carbon nanotube

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sangyeop; Lindsay, Lucas

    Here, two hydrodynamic features of phonon transport, phonon drift and second sound, in a (20,20) single-wall carbon nanotube (SWCNT) are discussed using lattice dynamics calculations employing an optimized Tersoff potential for atomic interactions. We formally derive a formula for the contribution of the drift motion of phonons to the total heat flux at steady state. It is found that the drift motion of phonons carries more than 70% and 90% of the heat at 300 K and 100 K, respectively, indicating that phonon flow can be reasonably approximated as hydrodynamic if the SWCNT is long enough to avoid ballistic phonon transport. The dispersion relation of second sound is derived from the Peierls-Boltzmann transport equation with Callaway's scattering model and quantifies the speed of second sound and its relaxation. The speed of second sound is around 4000 m/s in a (20,20) SWCNT, and second sound can propagate more than 10 µm in an isotopically pure (20,20) SWCNT for frequencies around 1 GHz at 100 K.

  10. Propagation of second sound in a superfluid Fermi gas in the unitary limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arahata, Emiko; Nikuni, Tetsuro

    2009-10-15

    We study sound propagation in a uniform superfluid gas of Fermi atoms in the unitary limit. The existence of normal and superfluid components leads to the appearance of two sound modes in the collisional regime, referred to as first and second sound. Second sound is of particular interest as it is a clear signal of a superfluid component. Using Landau's two-fluid hydrodynamic theory, we calculate the hydrodynamic sound velocities and their weights in the density response function. The latter is used to calculate the response to a sudden modification of the external potential generating pulse propagation. The amplitude of a pulse, which is proportional to the weight in the response function, is calculated on the basis of the approach of Nozières and Schmitt-Rink for the BCS-BEC crossover. We show that, in a superfluid Fermi gas at unitarity, the second-sound pulse is excited with an appreciable amplitude by density perturbations.

  11. Smart phone monitoring of second heart sound split.

    PubMed

    Thiyagaraja, Shanti R; Vempati, Jagannadh; Dantu, Ram; Sarma, Tom; Dantu, Siva

    2014-01-01

    Heart auscultation (listening to heart sounds) is the basic element of cardiac diagnosis. The interpretation of these sounds is a difficult skill to acquire. In this work we have developed an application to detect, monitor, and analyze the split in the second heart sound (S2) using a smart phone. The application records the heartbeat using a stethoscope connected to the smart phone. The audio signal is converted into the frequency domain using the Fast Fourier Transform to detect the first and second heart sounds (S1 and S2). S2 is extracted and fed into the Discrete Wavelet Transform (DWT) and then into the Continuous Wavelet Transform (CWT) to detect the aortic (A2) and pulmonic (P2) components, which are used to calculate the split in S2. With our application, users can continuously monitor their second heart sound, irrespective of age, and check for a split with low-cost, easily available equipment.
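
The measurement the abstract describes (resolve the A2 and P2 components of S2 and measure their separation) can be illustrated on synthetic data. This numpy-only sketch substitutes an FFT-based analytic envelope and peak picking for the paper's DWT/CWT stages; the sampling rate, burst frequencies, and the 30 ms split are made-up test values, not clinical data:

```python
import numpy as np

fs = 4000                       # Hz, assumed sampling rate
t = np.arange(0, 0.15, 1 / fs)  # 150 ms window around S2

def burst(center_s, freq_hz):
    # Gaussian-windowed tone burst standing in for a valve-closure sound
    return np.exp(-((t - center_s) / 0.005) ** 2) * np.sin(2 * np.pi * freq_hz * t)

# Synthetic S2: aortic (A2) then pulmonic (P2) component, 30 ms apart.
s2 = burst(0.06, 120) + burst(0.09, 90)

def envelope(x):
    # analytic-signal magnitude via the FFT (a crude stand-in for the CWT stage)
    n = x.size
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0
    h[1:n // 2] = 2.0
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

env = envelope(s2)
mid = env.size // 2             # boundary between the two components (t = 75 ms)
i_a2 = int(np.argmax(env[:mid]))
i_p2 = mid + int(np.argmax(env[mid:]))
split_ms = (t[i_p2] - t[i_a2]) * 1000
print(f"estimated A2-P2 split: {split_ms:.1f} ms")  # close to the 30 ms built in
```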

  12. Development of an ICT-Based Air Column Resonance Learning Media

    NASA Astrophysics Data System (ADS)

    Purjiyanta, Eka; Handayani, Langlang; Marwoto, Putut

    2016-08-01

    Commonly, the sound source used in the air column resonance experiment is a tuning fork, which has the disadvantage that the sound it produces grows progressively weaker, giving suboptimal resonance results. In this study we generated tones of varying frequency using the Audacity software and stored them on a mobile phone to serve as the sound source. One advantage of this source is the stability of the resulting sound, which remains equally strong throughout. The movement of water in a glass tube mounted on the resonance apparatus, together with the tone emitted by the mobile phone, was recorded with a video camera. The first, second, and third resonances were recorded for each tone frequency. Because the sound persists, it can be used for the first, second, third, and subsequent resonance experiments. This study aimed to (1) explain how to create tones that can substitute for the tuning-fork sound used in air column resonance experiments, (2) illustrate the sound wave that occurs at the first, second, and third resonances in the experiment, and (3) determine the speed of sound in air. The study used an experimental method. It was concluded that (1) substitute tones for a tuning-fork sound can be made using the Audacity software; (2) the form of the sound waves that occur at the first, second, and third resonances in the air column can be drawn from the video recordings of the air column resonance; and (3) based on the experimental results, the speed of sound in air is 346.5 m/s, while based on chart analysis with the Logger Pro software it is 343.9 ± 0.3171 m/s.
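
The third aim (the speed of sound in air) follows from the closed-tube resonance condition L_n = (2n − 1)λ/4, so successive resonance lengths are λ/2 apart and v = fλ. A sketch with hypothetical resonance lengths chosen to reproduce the paper's 346.5 m/s; the actual tone frequencies and water levels are not given in the abstract:

```python
def speed_of_sound(freq_hz, first_resonance_m, second_resonance_m):
    """Closed-tube resonances occur at lengths L_n = (2n-1)*lambda/4, so the
    spacing between successive resonances is lambda/2; then v = f*lambda."""
    wavelength = 2.0 * (second_resonance_m - first_resonance_m)
    return freq_hz * wavelength

# Hypothetical 512 Hz tone with resonances at 16.92 cm and 50.76 cm:
print(speed_of_sound(512.0, 0.1692, 0.5076))  # ~346.5 m/s
```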

  13. Second sound shock waves and critical velocities in liquid helium 2. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Turner, T. N.

    1979-01-01

    Large amplitude second-sound shock waves were generated and the experimental results compared to the theory of nonlinear second-sound. The structure and thickness of second-sound shock fronts are calculated and compared to experimental data. Theoretically it is shown that at T = 1.88 K, where the nonlinear wave steepening vanishes, the thickness of a very weak shock must diverge. In a region near this temperature, a finite-amplitude shock pulse evolves into an unusual double-shock configuration consisting of a front steepened, temperature raising shock followed by a temperature lowering shock. Double-shocks are experimentally verified. It is experimentally shown that very large second-sound shock waves initiate a breakdown in the superfluidity of helium 2, which is dramatically displayed as a limit to the maximum attainable shock strength. The value of the maximum shock-induced relative velocity represents a significant lower bound to the intrinsic critical velocity of helium 2.

  14. First and second sound in a two-dimensional harmonically trapped Bose gas across the Berezinskii–Kosterlitz–Thouless transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Xia-Ji, E-mail: xiajiliu@swin.edu.au; Hu, Hui, E-mail: hhu@swin.edu.au

    2014-12-15

    We theoretically investigate first and second sound of a two-dimensional (2D) atomic Bose gas in harmonic traps by solving Landau’s two-fluid hydrodynamic equations. For an isotropic trap, we find that first and second sound modes become degenerate at certain temperatures and exhibit typical avoided crossings in mode frequencies. At these temperatures, second sound has significant density fluctuation due to its hybridization with first sound and has a divergent mode frequency towards the Berezinskii–Kosterlitz–Thouless (BKT) transition. For a highly anisotropic trap, we derive the simplified one-dimensional hydrodynamic equations and discuss the sound-wave propagation along the weakly confined direction. Due to the universal jump of the superfluid density inherent to the BKT transition, we show that the first sound velocity exhibits a kink across the transition. These predictions might be readily examined in current experimental setups for 2D dilute Bose gases with a sufficiently large number of atoms, where the finite-size effect due to harmonic traps is relatively weak.

  15. Influence of double stimulation on sound-localization behavior in barn owls.

    PubMed

    Kettler, Lutz; Wagner, Hermann

    2014-12-01

    Barn owls do not immediately approach a source after they hear a sound, but wait for a second sound before they strike. This represents a gain in striking behavior by avoiding responses to random incidents. However, the first stimulus is also expected to change the threshold for perceiving the subsequent second sound, thus possibly introducing some costs. We mimicked this situation in a behavioral double-stimulus paradigm utilizing saccadic head turns of owls. The first stimulus served as an adapter, was presented in frontal space, and did not elicit a head turn. The second stimulus, emitted from a peripheral source, elicited the head turn. The time interval between both stimuli was varied. Data obtained with double stimulation were compared with data collected with a single stimulus from the same positions as the second stimulus in the double-stimulus paradigm. Sound-localization performance was quantified by the response latency, accuracy, and precision of the head turns. Response latency was increased with double stimuli, while accuracy and precision were decreased. The effect depended on the inter-stimulus interval. These results suggest that waiting for a second stimulus may indeed impose costs on sound localization by adaptation and this reduces the gain obtained by waiting for a second stimulus.

  16. Second harmonic sound field after insertion of a biological tissue sample

    NASA Astrophysics Data System (ADS)

    Zhang, Dong; Gong, Xiu-Fen; Zhang, Bo

    2002-01-01

    The second harmonic sound field after insertion of a biological tissue sample is investigated theoretically and experimentally. The sample is inserted perpendicular to the sound axis, and its acoustical properties differ from those of the surrounding medium (distilled water). By using the superposition of Gaussian beams and the KZK equation in the quasilinear and parabolic approximations, the second harmonic field after insertion of the sample can be derived analytically and expressed as a linear combination of self- and cross-interactions of the Gaussian beams. Egg white, egg yolk, porcine liver, and porcine fat are used as samples and inserted in the sound field radiated from a 2 MHz uniformly excited focusing source. Axial normalized sound pressure curves of the second harmonic wave before and after insertion of the sample are measured and compared with theoretical results calculated with 10 Gaussian beam terms.

  17. Probing the critical exponent of the superfluid fraction in a strongly interacting Fermi gas

    NASA Astrophysics Data System (ADS)

    Hu, Hui; Liu, Xia-Ji

    2013-11-01

    We theoretically investigate the critical behavior of a second-sound mode in a harmonically trapped ultracold atomic Fermi gas with resonant interactions. Near the superfluid phase transition with critical temperature Tc, the frequency or the sound velocity of the second-sound mode crucially depends on the critical exponent β of the superfluid fraction. In an isotropic harmonic trap, we predict that the mode frequency diverges like (1 − T/Tc)^(β−1/2) when β < 1/2. In a highly elongated trap, the speed of the second sound is reduced by a factor of 1/(2β + 1) from that in a homogeneous three-dimensional superfluid. Our prediction could readily be tested by measurements of second-sound wave propagation in a setup such as that exploited by Sidorenkov et al. [Nature (London) 498, 78 (2013)] for resonantly interacting lithium-6 atoms, once the experimental precision is improved.
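
The predicted divergence of the mode frequency near Tc can be evaluated directly. A sketch; β = 1/3 is an illustrative value only, not the exponent the paper extracts:

```python
def mode_frequency_scaling(t_reduced, beta):
    """Scaling factor (1 - T/Tc)**(beta - 1/2) for the second-sound mode
    frequency near Tc; it diverges as T -> Tc whenever beta < 1/2."""
    return (1.0 - t_reduced) ** (beta - 0.5)

# With beta = 1/3 (illustrative), the factor grows as T/Tc -> 1:
for t in (0.9, 0.99, 0.999):
    print(t, mode_frequency_scaling(t, 1.0 / 3.0))
```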

  18. Second Sound Measurements Very Near the Lambda Point

    NASA Technical Reports Server (NTRS)

    Adriaans, M.; Lipa, J.

    1999-01-01

    The sound was generated by wire-wound heaters embedded in the end opposite the sensor in each cavity. The superfluid density was determined from second sound measurements, and the critical exponent ν was obtained from fits to the data. The results for the exponent were found to be very sensitive to the treatment of systematic effects in the data.

  19. Second sound tracking system

    NASA Astrophysics Data System (ADS)

    Yang, Jihee; Ihas, Gary G.; Ekdahl, Dan

    2017-10-01

    It is common that a physical system resonates at a particular frequency that depends on physical parameters which may change in time. Often, one would like to automatically track this signal as the frequency changes, measuring, for example, its amplitude. In scientific research, one would also like to utilize standard methods, such as lock-in amplifiers, to improve the signal-to-noise ratio. We present a complete He ii second sound system that uses positive feedback to generate a sinusoidal signal of constant amplitude via automatic gain control. This signal is used to produce temperature/entropy waves (second sound) in superfluid helium-4 (He ii). A lock-in amplifier limits the oscillation to a desirable frequency and demodulates the received sound signal. Using this tracking system, a second sound signal probed turbulent decay in He ii. We present results showing that the tracking system is more reliable than a conventional fixed-frequency method; there is less correlation with temperature (frequency) fluctuation when the tracking system is used.
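
The demodulation step a lock-in amplifier performs can be sketched in software: mix the received signal with quadrature references at the drive frequency and average, which rejects noise away from that frequency. A numpy toy; the signal amplitude, frequency, and noise level are invented for the demo, not values from the apparatus:

```python
import numpy as np

def lock_in(signal, t, f_ref):
    """Mix `signal` with quadrature references at f_ref and average,
    returning the amplitude of the f_ref component."""
    x = np.mean(signal * np.cos(2 * np.pi * f_ref * t))
    y = np.mean(signal * np.sin(2 * np.pi * f_ref * t))
    return 2.0 * np.hypot(x, y)

fs = 10000
t = np.arange(0, 1.0, 1 / fs)
# A 100 Hz "second sound" signal of amplitude 0.5 buried in unit-variance noise.
rng = np.random.default_rng(0)
sig = 0.5 * np.sin(2 * np.pi * 100 * t) + rng.normal(0.0, 1.0, t.size)
amp = lock_in(sig, t, 100.0)
print(amp)  # close to 0.5 despite noise twice the signal amplitude
```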

  20. Acoustic transducer in system for gas temperature measurement in gas turbine engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeSilva, Upul P.; Claussen, Heiko

    An apparatus for controlling operation of a gas turbine engine including at least one acoustic transmitter/receiver device located on a flow path boundary structure. The acoustic transmitter/receiver device includes an elongated sound passage defined by a surface of revolution having opposing first and second ends and a central axis extending between the first and second ends, an acoustic sound source located at the first end, and an acoustic receiver located within the sound passage between the first and second ends. The boundary structure includes an opening extending from outside the boundary structure to the flow path, and the second end of the surface of revolution is affixed to the boundary structure at the opening for passage of acoustic signals between the sound passage and the flow path.

  1. 49 CFR Appendix E to Part 222 - Requirements for Wayside Horns

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., indicates that the system is not operating as intended; 4. Horn system must provide a minimum sound level of... locomotive engineer to sound the locomotive horn for at least 15 seconds prior to arrival at the crossing in...; 5. Horn system must sound at a minimum of 15 seconds prior to the train's arrival at the crossing...

  2. 49 CFR Appendix E to Part 222 - Requirements for Wayside Horns

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., indicates that the system is not operating as intended; 4. Horn system must provide a minimum sound level of... locomotive engineer to sound the locomotive horn for at least 15 seconds prior to arrival at the crossing in...; 5. Horn system must sound at a minimum of 15 seconds prior to the train's arrival at the crossing...

  3. Cultural Conceptualisations in Learning English as an L2: Examples from Persian-Speaking Learners

    ERIC Educational Resources Information Center

    Sharifian, Farzad

    2013-01-01

    Traditionally, many studies of second language acquisition (SLA) were based on the assumption that learning a new language mainly involves learning a set of grammatical rules, lexical items, and certain new sounds and sound combinations. However, for many second language learners, learning a second language may involve contact and interactions…

  4. Cascaded Amplitude Modulations in Sound Texture Perception

    PubMed Central

    McWalter, Richard; Dau, Torsten

    2017-01-01

    Sound textures, such as crackling fire or chirping crickets, represent a broad class of sounds defined by their homogeneous temporal structure. It has been suggested that the perception of texture is mediated by time-averaged summary statistics measured from early auditory representations. In this study, we investigated the perception of sound textures that contain rhythmic structure, specifically second-order amplitude modulations that arise from the interaction of different modulation rates, previously described as “beating” in the envelope-frequency domain. We developed an auditory texture model that utilizes a cascade of modulation filterbanks that capture the structure of simple rhythmic patterns. The model was examined in a series of psychophysical listening experiments using synthetic sound textures—stimuli generated using time-averaged statistics measured from real-world textures. In a texture identification task, our results indicated that second-order amplitude modulation sensitivity enhanced recognition. Next, we examined the contribution of the second-order modulation analysis in a preference task, where the proposed auditory texture model was preferred over a range of model deviants that lacked second-order modulation rate sensitivity. Lastly, the discriminability of textures that included second-order amplitude modulations appeared to be perceived using a time-averaging process. Overall, our results demonstrate that the inclusion of second-order modulation analysis generates improvements in the perceived quality of synthetic textures compared to the first-order modulation analysis considered in previous approaches. PMID:28955191

  5. Cascaded Amplitude Modulations in Sound Texture Perception.

    PubMed

    McWalter, Richard; Dau, Torsten

    2017-01-01

    Sound textures, such as crackling fire or chirping crickets, represent a broad class of sounds defined by their homogeneous temporal structure. It has been suggested that the perception of texture is mediated by time-averaged summary statistics measured from early auditory representations. In this study, we investigated the perception of sound textures that contain rhythmic structure, specifically second-order amplitude modulations that arise from the interaction of different modulation rates, previously described as "beating" in the envelope-frequency domain. We developed an auditory texture model that utilizes a cascade of modulation filterbanks that capture the structure of simple rhythmic patterns. The model was examined in a series of psychophysical listening experiments using synthetic sound textures, stimuli generated using time-averaged statistics measured from real-world textures. In a texture identification task, our results indicated that second-order amplitude modulation sensitivity enhanced recognition. Next, we examined the contribution of the second-order modulation analysis in a preference task, where the proposed auditory texture model was preferred over a range of model deviants that lacked second-order modulation rate sensitivity. Lastly, the discriminability of textures that included second-order amplitude modulations appeared to be perceived using a time-averaging process. Overall, our results demonstrate that the inclusion of second-order modulation analysis generates improvements in the perceived quality of synthetic textures compared to the first-order modulation analysis considered in previous approaches.
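
The "beating" the authors describe arises whenever two first-order modulation rates coexist: their difference frequency shows up not in the envelope itself but in the envelope of the envelope, which is what a second stage of modulation analysis detects. A numpy sketch of that idea; the carrier and modulation rates are arbitrary demo values, and simple rectify-and-smooth stages stand in for the paper's modulation filterbanks:

```python
import numpy as np

fs = 8000
t = np.arange(0, 2.0, 1 / fs)

# Two first-order modulation rates (8 and 10 Hz) on a 500 Hz carrier; their
# interaction makes the envelope itself "beat" at the 2 Hz difference rate.
env_true = 1.0 + 0.5 * np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)
x = env_true * np.sin(2 * np.pi * 500 * t)

def rectify_smooth(sig, n):
    # crude envelope extraction: full-wave rectification + n-sample moving average
    return np.convolve(np.abs(sig), np.ones(n) / n, mode="same")

env1 = rectify_smooth(x, 200)                   # first-order envelope (25 ms window)
env2 = rectify_smooth(env1 - env1.mean(), 800)  # envelope of the envelope (100 ms)

# The spectrum of the second-order envelope peaks at the 2 Hz beat rate.
spec = np.abs(np.fft.rfft(env2 - env2.mean()))
freqs = np.fft.rfftfreq(env2.size, 1 / fs)
beat = freqs[1:][np.argmax(spec[1:])]           # skip the DC bin
print(beat)  # ~2 Hz
```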

  6. 49 CFR 227.5 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... second or less. Decibel (dB) means a unit of measurement of sound pressure levels. dB(A) means the sound... operate similar equipment under similar conditions. Sound level or Sound pressure level means ten times... an eight-hour time-weighted-average sound level (TWA) of 85 dB(A), or, equivalently, a dose of 50...

  7. Sound-velocity measurements for HFC-134a and HFC-152a with a spherical resonator

    NASA Astrophysics Data System (ADS)

    Hozumi, T.; Koga, T.; Sato, H.; Watanabe, K.

    1993-07-01

    A spherical acoustic resonator was developed for measuring sound velocities in the gaseous phase and ideal-gas specific heats for new refrigerants. The radius of the spherical resonator, being about 5 cm, was determined by measuring sound velocities in gaseous argon at temperatures from 273 to 348 K and pressures up to 240 kPa. The measurements of 23 sound velocities in gaseous HFC-134a (1,1,1,2-tetrafluoroethane) at temperatures of 273 and 298 K and pressures from 10 to 250 kPa agree well with the measurements of Goodwin and Moldover. In addition, 92 sound velocities in gaseous HFC-152a (1,1-difluoroethane) with an accuracy of ±0.01% were measured at temperatures from 273 to 348 K and pressures up to 250 kPa. The ideal-gas specific heats as well as the second acoustic virial coefficients have been obtained for both these important alternative refrigerants. The second virial coefficients for HFC-152a derived from the present sound velocity measurements agree extremely well with the reported second virial coefficient values obtained with a Burnett apparatus.
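
The acoustic method rests on the ideal-gas relation c = sqrt(γRT/M), which the resonance measurements extrapolate to at zero pressure. A sketch for the argon calibration gas used to size the resonator; the inputs are textbook values for argon, not the paper's measured data:

```python
import math

def ideal_gas_sound_speed(gamma, molar_mass_kg, temp_k, gas_const=8.314462618):
    """Zero-pressure sound speed c = sqrt(gamma * R * T / M)."""
    return math.sqrt(gamma * gas_const * temp_k / molar_mass_kg)

# Argon: monatomic, gamma = 5/3, M = 39.948 g/mol, at 273.15 K.
c_ar = ideal_gas_sound_speed(5.0 / 3.0, 0.039948, 273.15)
print(c_ar)  # ~308 m/s
```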

  8. On Sound Reflection in Superfluid

    NASA Astrophysics Data System (ADS)

    Melnikovsky, L. A.

    2008-02-01

    We consider reflection of first and second sound waves by a rigid flat wall in superfluid. A nontrivial dependence of the reflection coefficients on the angle of incidence is obtained. Sound conversion is predicted at slanted incidence.

  9. Speech-sound duration processing in a second language is specific to phonetic categories.

    PubMed

    Nenonen, Sari; Shestakova, Anna; Huotilainen, Minna; Näätänen, Risto

    2005-01-01

    The mismatch negativity (MMN) component of the auditory event-related potential was used to determine the effect of native language, Russian, on the processing of speech-sound duration in a second language, Finnish, that uses duration as a cue for phonological distinction. The native-language effect was compared with Finnish vowels that either can or cannot be categorized using the Russian phonological system. The results showed that the duration-change MMN for the Finnish sounds that could be categorized through Russian was reduced in comparison with that for the Finnish sounds having no Russian equivalent. In the Finnish sounds that can be mapped through the Russian phonological system, the facilitation of the duration processing may be inhibited by the native Russian language. However, for the sounds that have no Russian equivalent, new vowel categories independent of the native Russian language have apparently been established, enabling a native-like duration processing of Finnish.

  10. Neuromimetic Sound Representation for Percept Detection and Manipulation

    NASA Astrophysics Data System (ADS)

    Zotkin, Dmitry N.; Chi, Taishih; Shamma, Shihab A.; Duraiswami, Ramani

    2005-12-01

    The acoustic wave received at the ears is processed by the human auditory system to separate different sounds along the intensity, pitch, and timbre dimensions. Conventional Fourier-based signal processing, while endowed with fast algorithms, is unable to easily represent a signal along these attributes. In this paper, we discuss the creation of maximally separable sounds in auditory user interfaces and use a recently proposed cortical sound representation, which performs a biomimetic decomposition of an acoustic signal, to represent and manipulate sound for this purpose. We briefly overview algorithms for obtaining, manipulating, and inverting a cortical representation of a sound and describe algorithms for manipulating signal pitch and timbre separately. The algorithms are also used to create sound of an instrument between a "guitar" and a "trumpet." Excellent sound quality can be achieved if processing time is not a concern, and intelligible signals can be reconstructed in reasonable processing time (about ten seconds of computational time for a one-second signal). Work on bringing the algorithms into the real-time processing domain is ongoing.

  11. Multiple response to sound in dysfunctional children.

    PubMed

    Condon, W S

    1975-03-01

    Methods and findings derived from over a decade of linguistic-kinesic microanalysis of sound films of human behavior were applied to the analysis of sound films of 25 dysfunctional children. Of the children, 17 were markedly dysfunctional (autistic-like) while 8 had milder reading problems. All of these children appeared to respond to sound more than once: when it actually occurred and again after a delay ranging from a fraction of a second up to a full second, depending on the child. Most of the children did not seem to actually hear the sound more than once; however, there is some indication that a few children may have done so. Evidence was also found suggesting a continuum from the longer delay of autistic-like children to the briefer delay of children with reading problems.

  12. Sound absorption study on acoustic panel from kapok fiber and egg tray

    NASA Astrophysics Data System (ADS)

    Kaamin, Masiri; Mahir, Nurul Syazwani Mohd; Kadir, Aslila Abd; Hamid, Nor Baizura; Mokhtar, Mardiha; Ngadiman, Norhayati

    2017-12-01

    Noise is sound, especially loud or unpleasant sound, that causes disruption. The level of noise can be reduced by using sound absorption panels. Sound absorption panels currently on the market use synthetic fibers that can be harmful to consumers' health. Awareness of natural fibers from natural materials has drawn attention to their use as sound absorbing materials. Therefore, this study was conducted to investigate the potential of a sound absorption panel made from egg trays and kapok fibers. The test involved in this study was the impedance tube test, which yields the sound absorption coefficient (SAC). The results showed good sound absorption at low frequencies from 0 Hz up to 900 Hz, where the maximum absorption coefficient was 0.950, while the maximum absorption at high frequencies was 0.799. The material produced a noise reduction coefficient (NRC) of 0.57, indicating that it is highly absorbent. In addition, a reverberation room test was carried out to obtain the reverberation time (RT) in seconds. Overall, this panel showed good results at low frequencies between 0 Hz and 1500 Hz. In that frequency range, the maximum reverberation time for the panel was 3.784 seconds, compared with a maximum of 5.798 seconds for an empty room. This study indicates that kapok fiber and egg tray absorption panels have potential as environmentally friendly and inexpensive products for absorbing sound at low frequencies.
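    The NRC figure quoted above can be reproduced from band absorption coefficients. By convention the NRC averages the SAC at 250, 500, 1000, and 2000 Hz and rounds to the nearest 0.05; whether the paper applies the rounding step is not stated, and the SAC values below are hypothetical, not the paper's data.

```python
# Sketch: computing the noise reduction coefficient (NRC) from
# impedance-tube sound absorption coefficients (SAC).
# The SAC values below are hypothetical, not the paper's data.

def noise_reduction_coefficient(sac_by_freq):
    """Mean SAC at 250, 500, 1000, 2000 Hz, rounded to nearest 0.05."""
    bands = (250, 500, 1000, 2000)
    mean = sum(sac_by_freq[f] for f in bands) / len(bands)
    return round(mean * 20) / 20  # nearest 0.05

sac = {250: 0.82, 500: 0.64, 1000: 0.46, 2000: 0.40}
print(noise_reduction_coefficient(sac))  # 0.6 for these example values
```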

  13. 10 Hz Amplitude Modulated Sounds Induce Short-Term Tinnitus Suppression

    PubMed Central

    Neff, Patrick; Michels, Jakob; Meyer, Martin; Schecklmann, Martin; Langguth, Berthold; Schlee, Winfried

    2017-01-01

    Objectives: Acoustic stimulation or sound therapy is proposed as a main treatment option for chronic subjective tinnitus. To further probe the field of acoustic stimulation for tinnitus therapy, this exploratory study compared 10 Hz amplitude modulated (AM) sounds (two pure tones, noise, music, and frequency modulated (FM) sounds) and unmodulated sounds (pure tone, noise) with respect to their temporary suppression of tinnitus loudness. First, it was hypothesized that modulated sounds elicit larger temporary loudness suppression (residual inhibition) than unmodulated sounds. Second, reducing the loudness of the modulated sounds was expected to weaken the suppression effect, and lengthening their duration to strengthen it. Methods: We recruited 29 participants with chronic tonal tinnitus from the multidisciplinary Tinnitus Clinic of the University of Regensburg. Participants underwent audiometric, psychometric and tinnitus pitch matching assessments, followed by an acoustic stimulation experiment with a tinnitus loudness growth paradigm. In the first block, participants were stimulated with each of the sounds for 3 min and rated their subjective tinnitus loudness relative to the pre-stimulus loudness every 30 s after stimulus offset. The same procedure was deployed in the second block with the pure tone AM stimuli matched to the tinnitus frequency, manipulated in length (6 min) and loudness (reduced by 30 dB and linear fade out). Repeated measures mixed model analyses of variance (ANOVA) were calculated to assess differences in loudness growth between the stimuli for each block separately. Results: First, we found that all sounds elicit a short-term suppression of tinnitus loudness (seconds to minutes), with the strongest suppression right after stimulus offset [F(6, 1331) = 3.74, p < 0.01]. Second, similar to previous findings, we found that AM sounds near the tinnitus frequency produce significantly stronger tinnitus loudness suppression than noise [vs. 
Pink noise: t(27) = −4.22, p < 0.0001]. Finally, variants of the AM sound matched to the tinnitus frequency reduced in sound level resulted in less suppression while there was no significant difference observed for a longer stimulation duration. Moreover, feasibility of the overall procedure could be confirmed as scores of both tinnitus loudness and questionnaires were lower after the experiment [tinnitus loudness: t(27) = 2.77, p < 0.01; Tinnitus Questionnaire: t(27) = 2.06, p < 0.05; Tinnitus Handicap Inventory: t(27) = 1.92, p = 0.065]. Conclusion: Taken together, these results imply that AM sounds, especially in or around the tinnitus frequency, may induce larger suppression than unmodulated sounds. Future studies should thus evaluate this approach in longitudinal studies and real life settings. Furthermore, the putative neural relation of these sound stimuli with a modulation rate in the EEG α band to the observed tinnitus suppression should be probed with respective neurophysiological methods. PMID:28579955
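    A 10 Hz AM stimulus of the kind used in this study can be sketched in a few lines. The carrier frequency here is a hypothetical stand-in for an individually matched tinnitus frequency, and the duration is shortened from the 3- and 6-minute stimuli used in the experiment.

```python
# Minimal sketch of a 10 Hz amplitude-modulated pure tone.
# Carrier frequency, depth, and duration are illustrative assumptions.
import numpy as np

fs = 44100          # sample rate (Hz)
dur = 10.0          # seconds (the study used 3- and 6-minute stimuli)
f_carrier = 4000.0  # hypothetical tinnitus-matched carrier (Hz)
f_mod = 10.0        # modulation rate in the EEG alpha band (Hz)
depth = 1.0         # full modulation depth

t = np.arange(int(fs * dur)) / fs
carrier = np.sin(2 * np.pi * f_carrier * t)
envelope = 1.0 + depth * np.sin(2 * np.pi * f_mod * t)
am = envelope * carrier / (1.0 + depth)  # scale into [-1, 1]
```

The `am` array can then be written to a sound file or played back through any audio library of choice.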

  14. Elementary Yoruba: Sound Drills and Greetings. Occasional Publication No. 18.

    ERIC Educational Resources Information Center

    Armstrong, Robert G.; Awujoola, Robert L.

    This introduction to elementary Yoruba is divided into two parts. The first section is on sound drills, and the second section concerns Yoruba greetings. The first part includes exercises to enable the student to master the Yoruba sound system. Emphasis is on pronunciation and recognition of the sounds and tones, but not memorization. A tape is…

  15. Method and apparatus for inspecting conduits

    DOEpatents

    Spisak, Michael J.; Nance, Roy A.

    1997-01-01

    An apparatus and method for ultrasonic inspection of a conduit are provided. The method involves directing a first ultrasonic pulse at a particular area of the conduit at a first angle, receiving the reflected sound from the first ultrasonic pulse, substantially simultaneously or subsequently in very close time proximity directing a second ultrasonic pulse at said area of the conduit from a substantially different angle than said first angle, receiving the reflected sound from the second ultrasonic pulse, and comparing the received sounds to determine if there is a defect in that area of the conduit. The apparatus of the invention is suitable for carrying out the above-described method. The method and apparatus of the present invention provide the ability to distinguish between sounds reflected by defects in a conduit and sounds reflected by harmless deposits associated with the conduit.

  16. Drift and geodesic effects on the ion sound eigenmode in tokamak plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elfimov, A. G., E-mail: elfimov@if.usp.br; Smolyakov, A. I., E-mail: andrei.smolyakov@usask.ca; Melnikov, A. V.

    A kinetic treatment of geodesic acoustic modes (GAMs), taking into account ion parallel dynamics, drift effects, and the second poloidal harmonic, is presented. It is shown that the first and second harmonics of the ion sound modes, which have positive and negative radial dispersion, respectively, can be coupled due to the geodesic and drift effects. This coupling results in a drift geodesic ion sound eigenmode with a frequency below the standard GAM continuum frequency. Such an eigenmode may explain the split modes observed in some experiments.

  17. Sound propagation in light-modulated carbon nanosponge suspensions

    NASA Astrophysics Data System (ADS)

    Zhou, W.; Tiwari, R. P.; Annamalai, R.; Sooryakumar, R.; Subramaniam, V.; Stroud, D.

    2009-03-01

    Single-walled carbon nanotube bundles dispersed in a highly polar fluid are found to agglomerate into a porous structure when exposed to low levels of laser radiation. The phototunable nanoscale porous structures provide an unusual way to control the acoustic properties of the suspension. Despite the high sound speed of the nanotubes, the measured speed of longitudinal-acoustic waves in the suspension decreases sharply with increasing bundle concentration. Two possible explanations for this reduction in sound speed are considered. One is simply that the sound speed decreases because of fluid heating induced by laser light absorption in the carbon nanotubes. The second is that the decrease results from the smaller sound velocity of fluid confined in a porous medium. Using a simplified description of convective heat transport, we estimate that the increase in temperature is too small to account for the observed decrease in sound velocity. To test the second explanation, we calculate the sound velocity in a porous medium using a self-consistent effective-medium approximation. The results of this calculation agree qualitatively with experiment. In this case, the observed sound wave would be the analog of the slow compressional mode of porous solids, at a structural length scale of order 100 nm.

  18. Speech-Sound Duration Processing in a Second Language is Specific to Phonetic Categories

    ERIC Educational Resources Information Center

    Nenonen, Sari; Shestakova, Anna; Huotilainen, Minna; Naatanen, Risto

    2005-01-01

    The mismatch negativity (MMN) component of the auditory event-related potential was used to determine the effect of native language, Russian, on the processing of speech-sound duration in a second language, Finnish, that uses duration as a cue for phonological distinction. The native-language effect was compared with Finnish vowels that either can…

  19. Difference in precedence effect between children and adults signifies development of sound localization abilities in complex listening tasks

    PubMed Central

    Litovsky, Ruth Y.; Godar, Shelly P.

    2010-01-01

    The precedence effect refers to the fact that humans are able to localize sound in reverberant environments, because the auditory system assigns greater weight to the direct sound (lead) than the later-arriving sound (lag). In this study, absolute sound localization was studied for single source stimuli and for dual source lead-lag stimuli in 4–5 year old children and adults. Lead-lag delays ranged from 5–100 ms. Testing was conducted in free field, with pink noise bursts emitted from loudspeakers positioned on a horizontal arc in the frontal field. Listeners indicated how many sounds were heard and the perceived location of the first- and second-heard sounds. Results suggest that at short delays (up to 10 ms), the lead dominates sound localization strongly at both ages, and localization errors are similar to those with single-source stimuli. At longer delays errors can be large, stemming from over-integration of the lead and lag, interchanging of perceived locations of the first-heard and second-heard sounds due to temporal order confusion, and dominance of the lead over the lag. The errors are greater for children than adults. Results are discussed in the context of maturation of auditory and non-auditory factors. PMID:20968369

  20. Leak locating microphone, method and system for locating fluid leaks in pipes

    DOEpatents

    Kupperman, David S.; Spevak, Lev

    1994-01-01

    A leak detecting microphone inserted directly into fluid within a pipe includes a housing having a first end being inserted within the pipe and a second opposed end extending outside the pipe. A diaphragm is mounted within the first housing end and an acoustic transducer is coupled to the diaphragm for converting acoustical signals to electrical signals. A plurality of apertures are provided in the housing first end, the apertures located both above and below the diaphragm, whereby to equalize fluid pressure on either side of the diaphragm. A leak locating system and method are provided for locating fluid leaks within a pipe. A first microphone is installed within fluid in the pipe at a first selected location and sound is detected at the first location. A second microphone is installed within fluid in the pipe at a second selected location and sound is detected at the second location. A cross-correlation is identified between the detected sound at the first and second locations for identifying a leak location.
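    The cross-correlation step described in this patent abstract can be sketched as follows. The pipe geometry, sound speed, and leak signal are synthetic assumptions for illustration, not values from the patent: the peak lag of the cross-correlation between the two microphone signals gives the arrival-time difference, which geometry converts into a leak position.

```python
# Sketch of locating a leak from two in-pipe microphones via
# cross-correlation. All numbers are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(0)
fs = 10_000           # sample rate (Hz)
c = 1480.0            # assumed sound speed in water (m/s)
D = 100.0             # distance between the two microphones (m)
x_leak = 30.0         # true leak position from microphone 1 (m)

# Arrival-time difference implied by the geometry (t2 - t1).
tau_true = (D - 2 * x_leak) / c

# Synthetic leak noise, received at mic 2 with the corresponding delay.
noise = rng.standard_normal(fs)
lag = round(tau_true * fs)
sig1 = noise
sig2 = np.roll(noise, lag)

# Cross-correlate and convert the peak lag back to a position estimate.
xcorr = np.correlate(sig2, sig1, mode="full")
best_lag = np.argmax(xcorr) - (len(sig1) - 1)
tau_est = best_lag / fs
x_est = (D - c * tau_est) / 2
print(round(x_est, 1))   # close to the true 30.0 m
```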

  1. Development of linear projecting in studies of non-linear flow. Acoustic heating induced by non-periodic sound

    NASA Astrophysics Data System (ADS)

    Perelomova, Anna

    2006-08-01

    The equation of energy balance is subdivided into two dynamic equations, one describing the evolution of the dominant sound and the second responsible for acoustic heating. The first is the famous KZK equation; the second is a novel equation governing acoustic heating. The novel dynamic equation covers both periodic and non-periodic sound. Quasi-plane flow geometry is assumed. The subdivision is based on the specific relations defining each mode. Media with arbitrary thermal T(p,ρ) and caloric e(p,ρ) equations of state are considered. The individual roles of thermal conductivity and viscosity in the heating induced by aperiodic sound, in ideal gases and in media other than ideal gases, are discussed.
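    For reference, the KZK equation mentioned above is commonly written as follows, in standard nonlinear-acoustics notation with retarded time; the symbols here are the conventional ones and not necessarily the paper's.

```latex
% Standard form of the KZK equation for the sound pressure p,
% with retarded time \tau = t - x/c_0, sound speed c_0, density \rho_0,
% diffusivity of sound \delta, and nonlinearity coefficient \beta:
\frac{\partial^2 p}{\partial x\,\partial\tau}
  = \frac{c_0}{2}\,\nabla_{\!\perp}^{2} p
  + \frac{\delta}{2c_0^{3}}\,\frac{\partial^{3} p}{\partial\tau^{3}}
  + \frac{\beta}{2\rho_0 c_0^{3}}\,\frac{\partial^{2} (p^{2})}{\partial\tau^{2}}
```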

  2. Effect of ultrasonic cavitation on measurement of sound pressure using hydrophone

    NASA Astrophysics Data System (ADS)

    Thanh Nguyen, Tam; Asakura, Yoshiyuki; Okada, Nagaya; Koda, Shinobu; Yasuda, Keiji

    2017-07-01

    Effect of ultrasonic cavitation on sound pressure at the fundamental, second harmonic, and first ultraharmonic frequencies was investigated from low to high ultrasonic intensities. The driving frequencies were 22, 304, and 488 kHz. Sound pressure was measured using a needle-type hydrophone and ultrasonic cavitation was estimated from the broadband integrated pressure (BIP). With increasing square root of electric power applied to a transducer, the sound pressure at the fundamental frequency linearly increased initially, dropped at approximately the electric power of cavitation inception, and afterward increased again. The sound pressure at the second harmonic frequency was detected just below the electric power of cavitation inception. The first ultraharmonic component appeared at around the electric power of cavitation inception at 304 and 488 kHz. However, at 22 kHz, the first ultraharmonic component appeared at a higher electric power than that of cavitation inception.
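    The spectral components discussed above (fundamental, second harmonic, and first ultraharmonic at 3/2 of the driving frequency) can be read off an FFT of the hydrophone record. The synthetic signal and its component amplitudes below are illustrative assumptions, not measured values.

```python
# Sketch: picking out the fundamental, second-harmonic, and first
# ultraharmonic (3/2 f0) components of a hydrophone-like signal.
import numpy as np

fs = 2_000_000                   # sample rate (Hz)
f0 = 22_000                      # driving frequency (Hz), as in the paper
t = np.arange(fs // 10) / fs     # 0.1 s record

# Hypothetical cavitating-field signal: fundamental plus weaker
# harmonic and ultraharmonic content (amplitudes are assumptions).
sig = (1.0 * np.sin(2 * np.pi * f0 * t)
       + 0.3 * np.sin(2 * np.pi * 2 * f0 * t)
       + 0.1 * np.sin(2 * np.pi * 1.5 * f0 * t))

spec = np.abs(np.fft.rfft(sig)) / (len(sig) / 2)   # amplitude spectrum
freqs = np.fft.rfftfreq(len(sig), 1 / fs)

def amplitude_at(f):
    """Amplitude at the FFT bin nearest to frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]

print(round(amplitude_at(f0), 2),        # ~1.0 (fundamental)
      round(amplitude_at(2 * f0), 2),    # ~0.3 (second harmonic)
      round(amplitude_at(1.5 * f0), 2))  # ~0.1 (first ultraharmonic)
```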

  3. APPLIED AUDIOLOGY FOR CHILDREN. SECOND EDITION.

    ERIC Educational Resources Information Center

    DALE, D.M.C.

    THE PURPOSE OF THE BOOK IS TO HELP TEACHERS, PARENTS, DOCTORS, AND WORKERS IN AUDIOLOGY CLINICS MAKE THE MOST OF SOUND IN THE EDUCATIONAL AND SOCIAL TREATMENT OF DEAFNESS. ASPECTS OF SOUND AMPLIFICATION CONSIDERED ARE THE NATURE OF SOUND, ELECTRICAL AMPLIFICATION, AND VARIOUS TYPES OF HEARING AIDS (INDIVIDUAL, GROUP, INDUCTION LOOP, SPEECH…

  4. Sound synthesis and evaluation of interactive footsteps and environmental sounds rendering for virtual reality applications.

    PubMed

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-09-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based on physical models. Two experiments were conducted. In the first experiment, the ability of subjects to recognize the surface they were exposed to was assessed. In the second experiment, the sound synthesis engine was enhanced with environmental sounds. Results show that, in some conditions, adding a soundscape significantly improves the recognition of the simulated environment.

  5. Cochlear implant

    MedlinePlus

    ... stimulator, which accepts, decodes, and then sends an electrical signal to the brain. The second part of ... receives the sound, converts the sound into an electrical signal, and sends it to the inside part ...

  6. Bubble dynamics in drinks

    NASA Astrophysics Data System (ADS)

    Broučková, Zuzana; Trávníček, Zdeněk; Šafařík, Pavel

    2014-03-01

    This study introduces two physical effects known from beverages: the effect of sinking bubbles and the hot chocolate sound effect. The paper presents two simple "kitchen" experiments. The first and second effects are indicated by means of a flow visualization and microphone measurement, respectively. To quantify the second (acoustic) effect, sound records are analyzed using time-frequency signal processing, and the obtained power spectra and spectrograms are discussed.

  7. Standing Sound Waves in Air with DataStudio

    ERIC Educational Resources Information Center

    Kraftmakher, Yaakov

    2010-01-01

    Two experiments related to standing sound waves in air are adapted for using the ScienceWorkshop data-acquisition system with the DataStudio software from PASCO scientific. First, the standing waves are created by reflection from a plane reflector. The distribution of the sound pressure along the standing wave is measured. Second, the resonance…

  8. UXO Detection and Characterization in the Marine Environment

    DTIC Science & Technology

    2008-11-01

    APPENDIX B – NAD Puget Sound Historical Documents… A part of Ostrich Bay adjacent to the Naval Ammunition Depot Puget Sound is shown during… EXECUTIVE SUMMARY: The second demonstration of the Marine Towed Array took place June 12-30, 2006 on Ostrich Bay (Puget Sound) in the state…

  9. Detection of the valvular split within the second heart sound using the reassigned smoothed pseudo Wigner–Ville distribution

    PubMed Central

    2013-01-01

    Background: In this paper, we developed a novel algorithm to detect the valvular split between the aortic and pulmonary components of the second heart sound, which provides valuable medical information. Methods: The algorithm is based on the reassigned smoothed pseudo Wigner–Ville distribution (RSPWVD), a modified time–frequency representation derived from the Wigner–Ville distribution. A preprocessing amplitude recovery procedure is carried out on the analysed heart sound to improve the readability of the time–frequency representation. The simulated S2 heart sounds were generated by an overlapping frequency modulated chirp–based model at different valvular split durations. Results: Simulated and real heart sounds are processed to highlight the performance of the proposed approach. The algorithm is also validated on real heart sounds from the LGB–IRCM (Laboratoire de Génie biomédical–Institut de recherches cliniques de Montréal) cardiac valve database. The A2–P2 valvular split is accurately detected by processing the obtained RSPWVD representations for both simulated and real data. PMID:23631738
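    As a much-simplified stand-in for the RSPWVD analysis, the split between two simulated S2 components can be measured from a plain amplitude envelope. The burst timings and frequencies below are hypothetical, and this sketch does not reproduce the paper's time–frequency method; it only illustrates the quantity being estimated.

```python
# Simplified envelope-based measurement of an A2-P2-style split in a
# synthetic second heart sound. NOT the paper's RSPWVD method; burst
# times, frequencies, and widths are illustrative assumptions.
import numpy as np

fs = 4000                            # sample rate (Hz)
t = np.arange(int(0.2 * fs)) / fs    # 200 ms analysis window

def burst(t0, f):
    """Gaussian-windowed tone standing in for one valve component."""
    return np.exp(-((t - t0) ** 2) / (2 * 0.005 ** 2)) * np.sin(2 * np.pi * f * t)

s2 = burst(0.05, 80) + burst(0.10, 60)   # "A2" at 50 ms, "P2" at 100 ms

# Crude envelope: rectify, then smooth with a 10 ms moving average.
win = int(0.01 * fs)
env = np.convolve(np.abs(s2), np.ones(win) / win, mode="same")

# The two largest well-separated envelope peaks give the split.
order = np.argsort(env)[::-1]
first = order[0]
second = next(i for i in order if abs(i - first) > int(0.02 * fs))
split_ms = abs(second - first) / fs * 1000
print(round(split_ms))   # ~50 ms for these synthetic bursts
```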

  10. Apparatus and method for suppressing sound in a gas turbine engine powerplant

    NASA Technical Reports Server (NTRS)

    Wynosky, Thomas A. (Inventor); Mischke, Robert J. (Inventor)

    1992-01-01

    A method and apparatus for suppressing jet noise in a gas turbine engine powerplant 10 is disclosed. Various construction details are developed for providing sound suppression at sea level take-off operative conditions and not providing sound suppression at cruise operative conditions. In one embodiment, the powerplant 10 has a lobed mixer 152 between a primary flowpath 44 and a second flowpath 46, a diffusion region downstream of the lobed mixer region (first mixing region 76), and a deployable ejector/mixer 176 in the diffusion region which forms a second mixing region 78 having a diffusion flowpath 72 downstream of the ejector/mixer and sound absorbing structure 18 bounding the flowpath throughout the diffusion region. The method includes deploying the ejector/mixer 176 at take-off and stowing the ejector/mixer at cruise.

  11. Sound speed measurements in liquid oxygen-liquid nitrogen mixtures

    NASA Technical Reports Server (NTRS)

    Zuckerwar, A. J.; Mazel, D. S.

    1985-01-01

    The sound speed in liquid oxygen (LOX), liquid nitrogen (LN2), and five LOX-LN2 mixtures was measured by an ultrasonic pulse-echo technique at temperatures in the vicinity of -195.8 C, the boiling point of N2 at a pressure of 1 atm. Under these conditions, the measurements yield the following relationship between the sound speed c in meters per second and the LN2 content M in mole percent: c = 1009.05 - 1.8275M + 0.0026507M^2. The sound speeds of 1009.05 m/sec plus or minus 0.25 percent for pure LOX and 852.8 m/sec plus or minus 0.32 percent for pure LN2 are compared with those reported by past investigators. Measurement of sound speed should prove an effective means for monitoring the contamination of LOX by LN2.
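    The quadratic fit reported above is easy to evaluate directly; doing so confirms that its endpoints reproduce the quoted pure-LOX and pure-LN2 sound speeds.

```python
# Evaluating the reported fit c(M) = 1009.05 - 1.8275*M + 0.0026507*M^2,
# with M the LN2 content in mole percent and c in m/s.

def sound_speed(M):
    return 1009.05 - 1.8275 * M + 0.0026507 * M ** 2

print(round(sound_speed(0.0), 2))    # 1009.05 m/s, pure LOX
print(round(sound_speed(100.0), 2))  # 852.81 m/s, matching the quoted 852.8 for pure LN2
```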

  12. THE USE OF ARCHITECTURAL ACOUSTICAL MATERIALS, THEORY AND PRACTICE. SECOND EDITION.

    ERIC Educational Resources Information Center

    Acoustical Materials Association, New York, NY.

    THIS DISCUSSION OF THE BASIC FUNCTION OF ACOUSTICAL MATERIALS--THE CONTROL OF SOUND BY SOUND ABSORPTION--IS BASED ON THE WAVE AND ENERGY PROPERTIES OF SOUND. IT IS STATED THAT, IN GENERAL, A MUCH LARGER VOLUME OF ACOUSTICAL MATERIALS IS NEEDED TO REMOVE DISTRACTING NOISE FROM CLASSROOMS AND OFFICES, FOR EXAMPLE, THAN FROM AUDITORIUMS, WHERE A…

  13. Unattended Exposure to Components of Speech Sounds Yields Same Benefits as Explicit Auditory Training

    ERIC Educational Resources Information Center

    Seitz, Aaron R.; Protopapas, Athanassios; Tsushima, Yoshiaki; Vlahou, Eleni L.; Gori, Simone; Grossberg, Stephen; Watanabe, Takeo

    2010-01-01

    Learning a second language as an adult is particularly effortful when new phonetic representations must be formed. Therefore the processes that allow learning of speech sounds are of great theoretical and practical interest. Here we examined whether perception of single formant transitions, that is, sound components critical in speech perception,…

  14. 49 CFR 210.31 - Operation standards (stationary locomotives at 30 meters).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... prescribed in paragraph (a)(2) of this section, the A-weighted sound level reading in decibels shall be... A-weighted sound level reading in decibels that is observed during the 30-second period of time... test; (3) Date of test; and (4) The A-weighted sound level reading in decibels obtained during the...

  15. A New Approach to Physiologic Triggering in Medical Imaging Using Multiple Heart Sounds Alone.

    NASA Astrophysics Data System (ADS)

    Groch, Mark Walter

    A new method for physiological synchronization of medical image acquisition using both the first and second heart sound has been developed. Heart sounds gating (HSG) circuitry has been developed which identifies, individually, both the first (S1) and second (S2) heart sounds from their timing relationship alone, and provides two synchronization points during the cardiac cycle. Identification of first and second heart sounds from their timing relationship alone and application to medical imaging has, heretofore, not been performed in radiology or nuclear medicine. The heart sounds are obtained as conditioned analog signals from a piezoelectric transducer microphone placed on the patient's chest. The timing relationships between the S1 to S2 pulses and the S2 to S1 pulses are determined using a logic scheme capable of distinguishing the S1 and S2 pulses from the heart sounds themselves, using their timing relationships, and the assumption that initially the S1-S2 interval will be shorter than the S2-S1 interval. Digital logic circuitry is utilized to continually track the timing intervals and extend the S1/S2 identification to heart rates up to 200 beats per minute (where the S1-S2 interval is not shorter than the S2-S1 interval). Clinically, first heart sound gating may be performed to assess the systolic ejection portion of the cardiac cycle, with S2 gating utilized for reproduction of the diastolic filling portion of the cycle. One application of HSG used for physiologic synchronization is in multigated blood pool (MGBP) imaging in nuclear medicine. Heart sounds gating has been applied to twenty patients who underwent analysis of ventricular function in Nuclear Medicine, and compared to conventional ECG gated MGBP. Left ventricular ejection fractions calculated from MGBP studies using a S1 and a S2 heart sound trigger correlated well with conventional ECG gated acquisitions in patients adequately gated by HSG and ECG. 
Heart sounds gating provided superior definition of the diastolic filling phase of the cardiac cycle by qualitative assessment of the left ventricular volume time -activity curves. Heart sounds physiological synchronization has potential to be used in other imaging modalities, such as magnetic resonance imaging, where the ECG is distorted due to the electromagnetic environment within the imager.
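    The timing rule described above (the systolic S1-to-S2 interval is initially shorter than the diastolic S2-to-S1 interval) can be sketched as a labelling routine. The pulse times below are hypothetical, and this sketch omits the circuitry's extension to high heart rates where that assumption breaks down.

```python
# Sketch of the S1/S2 identification logic: label a stream of detected
# heart-sound pulse times using only the assumption that the S1->S2
# (systolic) gap is shorter than the S2->S1 (diastolic) gap.
# Pulse times are hypothetical.

def label_heart_sounds(pulse_times):
    """Return 'S1'/'S2' labels for alternating heart-sound pulses."""
    gaps = [b - a for a, b in zip(pulse_times, pulse_times[1:])]
    # If the first gap is the shorter one, the stream starts on S1.
    starts_with_s1 = gaps[0] < gaps[1]
    first = "S1" if starts_with_s1 else "S2"
    second = "S2" if starts_with_s1 else "S1"
    return [first if i % 2 == 0 else second for i in range(len(pulse_times))]

# 75 bpm-like rhythm: 0.3 s systole, 0.5 s diastole (illustrative).
times = [0.0, 0.3, 0.8, 1.1, 1.6, 1.9]
print(label_heart_sounds(times))  # ['S1', 'S2', 'S1', 'S2', 'S1', 'S2']
```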

  16. Interactive Sound Propagation using Precomputation and Statistical Approximations

    NASA Astrophysics Data System (ADS)

    Antani, Lakulish

    Acoustic phenomena such as early reflections, diffraction, and reverberation have been shown to improve the user experience in interactive virtual environments and video games. These effects arise due to repeated interactions between sound waves and objects in the environment. In interactive applications, these effects must be simulated within a prescribed time budget. We present two complementary approaches for computing such acoustic effects in real time, with plausible variation in the sound field throughout the scene. The first approach, Precomputed Acoustic Radiance Transfer, precomputes a matrix that accounts for multiple acoustic interactions between all scene objects. The matrix is used at run time to provide sound propagation effects that vary smoothly as sources and listeners move. The second approach couples two techniques---Ambient Reverberance, and Aural Proxies---to provide approximate sound propagation effects in real time, based on only the portion of the environment immediately visible to the listener. These approaches lie at different ends of a space of interactive sound propagation techniques for modeling sound propagation effects in interactive applications. The first approach emphasizes accuracy by modeling acoustic interactions between all parts of the scene; the second approach emphasizes efficiency by only taking the local environment of the listener into account. These methods have been used to efficiently generate acoustic walkthroughs of architectural models. They have also been integrated into a modern game engine, and can enable realistic, interactive sound propagation on commodity desktop PCs.

  17. Is 1/f sound more effective than simple resting in reducing stress response?

    PubMed

    Oh, Eun-Joo; Cho, Il-Young; Park, Soon-Kwon

    2014-01-01

    It has been previously demonstrated that listening to 1/f sound effectively reduces stress. However, these findings have been inconsistent and further study on the relationship between 1/f sound and the stress response is consequently necessary. The present study examined whether sound with 1/f properties (1/f sound) affects stress-induced electroencephalogram (EEG) changes. Twenty-six subjects who voluntarily participated in the study were randomly assigned to the experimental or control group. Data from four participants were excluded because of EEG artifacts. A mental arithmetic task was used as a stressor. Participants in the experiment group listened to 1/f sound for 5 minutes and 33 seconds, while participants in the control group sat quietly for the same duration. EEG recordings were obtained at various points throughout the experiment. After the experiment, participants completed a questionnaire on the affective impact of the 1/f sound. The results indicated that the mental arithmetic task effectively induced a stress response measurable by EEG. Relative theta power at all electrode sites was significantly lower than baseline in both the control and experimental group. Relative alpha power was significantly lower, and relative beta power was significantly higher in the T3 and T4 areas. Secondly, 1/f sound and simple resting affected task-associated EEG changes in a similar manner. Finally, participants reported in the questionnaire that they experienced a positive feeling in response to the 1/f sound. Our results suggest that a commercialized 1/f sound product is not more effective than simple resting in alleviating the physiological stress response.

  18. Auditory-Cortex Short-Term Plasticity Induced by Selective Attention

    PubMed Central

    Jääskeläinen, Iiro P.; Ahveninen, Jyrki

    2014-01-01

    The ability to concentrate on relevant sounds in the acoustic environment is crucial for everyday function and communication. Converging lines of evidence suggests that transient functional changes in auditory-cortex neurons, “short-term plasticity”, might explain this fundamental function. Under conditions of strongly focused attention, enhanced processing of attended sounds can take place at very early latencies (~50 ms from sound onset) in primary auditory cortex and possibly even at earlier latencies in subcortical structures. More robust selective-attention short-term plasticity is manifested as modulation of responses peaking at ~100 ms from sound onset in functionally specialized nonprimary auditory-cortical areas by way of stimulus-specific reshaping of neuronal receptive fields that supports filtering of selectively attended sound features from task-irrelevant ones. Such effects have been shown to take effect in ~seconds following shifting of attentional focus. There are findings suggesting that the reshaping of neuronal receptive fields is even stronger at longer auditory-cortex response latencies (~300 ms from sound onset). These longer-latency short-term plasticity effects seem to build up more gradually, within tens of seconds after shifting the focus of attention. Importantly, some of the auditory-cortical short-term plasticity effects observed during selective attention predict enhancements in behaviorally measured sound discrimination performance. PMID:24551458

  19. Onomatopeya, Derivacion y el Sufijo -azo. (Onomatopeia, Derivation, and the Suffix -azo).

    ERIC Educational Resources Information Center

    Corro, Raymond L.

    1985-01-01

    The nature and source of onomatopoeic words in Spanish are discussed in order of decreasing resemblance to the sound imitated. The first group of onomatopoeic words are the interjections, in which sound effects and animal sounds are expressed. Repetition is often used to enhance the effect. The second group includes verbs and nouns derived from the…

  20. Is 9 louder than 1? Audiovisual cross-modal interactions between number magnitude and judged sound loudness.

    PubMed

    Alards-Tomalin, Doug; Walker, Alexander C; Shaw, Joshua D M; Leboe-McGowan, Launa C

    2015-09-01

    The cross-modal impact of number magnitude (i.e., Arabic digits) on perceived sound loudness was examined. Participants compared a target sound's intensity level against a previously heard reference sound (which they judged as quieter or louder). Paired with each target sound was a task-irrelevant Arabic digit that varied in magnitude, being either small (1, 2, 3) or large (7, 8, 9). The degree to which the sound and the digit were synchronized was manipulated, with the digit and sound occurring simultaneously in Experiment 1, and the digit preceding the sound in Experiment 2. Firstly, when target sounds and digits occurred simultaneously, sounds paired with large digits were categorized as loud more frequently than sounds paired with small digits. Secondly, when the events were separated, number magnitude ceased to bias sound intensity judgments. In Experiment 3, the events were still separated; however, the participants held the number in short-term memory. In this instance the bias returned. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Eye-movements intervening between two successive sounds disrupt comparisons of auditory location

    PubMed Central

    Pavani, Francesco; Husain, Masud; Driver, Jon

    2008-01-01

    Many studies have investigated how saccades may affect the internal representation of visual locations across eye-movements. Here we studied instead whether eye-movements can affect auditory spatial cognition. In two experiments, participants judged the relative azimuth (same/different) of two successive sounds presented from a horizontal array of loudspeakers, separated by a 2.5-s delay. Eye-position was either held constant throughout the trial (being directed in a fixed manner to the far left or right of the loudspeaker array), or had to be shifted to the opposite side of the array during the retention delay between the two sounds, after the first sound but before the second. Loudspeakers were either visible (Experiment 1) or occluded from sight (Experiment 2). In both cases, shifting eye-position during the silent delay-period affected auditory performance in the successive auditory comparison task, even though the auditory inputs to be judged were equivalent. Sensitivity (d′) for the auditory discrimination was disrupted, specifically when the second sound shifted in the opposite direction to the intervening eye-movement with respect to the first sound. These results indicate that eye-movements affect internal representation of auditory location. PMID:18566808

  2. Eye-movements intervening between two successive sounds disrupt comparisons of auditory location.

    PubMed

    Pavani, Francesco; Husain, Masud; Driver, Jon

    2008-08-01

    Many studies have investigated how saccades may affect the internal representation of visual locations across eye-movements. Here, we studied, instead, whether eye-movements can affect auditory spatial cognition. In two experiments, participants judged the relative azimuth (same/different) of two successive sounds presented from a horizontal array of loudspeakers, separated by a 2.5-s delay. Eye-position was either held constant throughout the trial (being directed in a fixed manner to the far left or right of the loudspeaker array) or had to be shifted to the opposite side of the array during the retention delay between the two sounds, after the first sound but before the second. Loudspeakers were either visible (Experiment 1) or occluded from sight (Experiment 2). In both cases, shifting eye-position during the silent delay-period affected auditory performance in the successive auditory comparison task, even though the auditory inputs to be judged were equivalent. Sensitivity (d') for the auditory discrimination was disrupted, specifically when the second sound shifted in the opposite direction to the intervening eye-movement with respect to the first sound. These results indicate that eye-movements affect internal representation of auditory location.

  3. CLIVAR Mode Water Dynamics Experiment (CLIMODE), Fall 2006 R/V Oceanus Voyage 434, November 16, 2006-December 3, 2006

    DTIC Science & Technology

    2007-12-01

    except for the dive zero time which needed to be programmed during the cruise when the deployment schedule dates were confirmed. _ ACM - Aanderaa ACM...guards bolted on to complete the frame prior to deployment. Sound Source - Sound sources were scheduled to be redeployed. Sound sources were originally...battery voltages and a vacuum. A +27 second time drift was noted and the time was reset. The sound source was scheduled to go to full power on November

  4. Konstantinov effect in helium II

    NASA Astrophysics Data System (ADS)

    Melnikovsky, L. A.

    2008-04-01

    The reflection of first and second sound waves by a rigid flat wall in helium II is considered. A nontrivial dependence of the reflection coefficients on the angle of incidence is obtained. Sound conversion is predicted at oblique incidence.

  5. Variation in effectiveness of a cardiac auscultation training class with a cardiology patient simulator among heart sounds and murmurs.

    PubMed

    Kagaya, Yutaka; Tabata, Masao; Arata, Yutaro; Kameoka, Junichi; Ishii, Seiichi

    2017-08-01

    Effectiveness of simulation-based education in cardiac auscultation training is controversial and may vary among a variety of heart sounds and murmurs. We investigated whether a single auscultation training class using a cardiology patient simulator for medical students provides the competence required for clinical clerkship, and whether students' proficiency after the training differs among heart sounds and murmurs. A total of 324 fourth-year medical students (93-117/year for 3 years) were divided into groups of 6-8 students; each group participated in a three-hour training session using a cardiology patient simulator. After a mini-lecture and facilitated training, each student took two different tests. In the first test, they tried to identify three sounds of Category A (non-split, respiratory split, and abnormally wide split S2s) in random order, after being informed that they were from Category A. They then did the same with sounds of Category B (S3, S4, and S3+S4) and Category C (four heart murmurs). In the second test, they tried to identify only one sound from each of the three categories in random order without any category information. The overall accuracy rate declined from 80.4% in the first test to 62.0% in the second test (p<0.0001). The accuracy rate for all the heart murmurs was similar in the first (81.3%) and second tests (77.5%). That for all the heart sounds (S2/S3/S4) decreased from 79.9% to 54.3% in the second test (p<0.0001). The individual accuracy rate decreased in the second test as compared with the first test for all three S2s, S3, and S3+S4 (p<0.0001). Medical students may be less likely to correctly identify S2/S3/S4 as compared with heart murmurs in a situation close to a clinical setting, even immediately after training. We may have to consider such a characteristic of students when we provide them with cardiac auscultation training. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.

  6. A novel method for pediatric heart sound segmentation without using the ECG.

    PubMed

    Sepehri, Amir A; Gharehbaghi, Arash; Dutoit, Thierry; Kocharian, Armen; Kiani, A

    2010-07-01

    In this paper, we propose a novel method for pediatric heart sound segmentation by paying special attention to the physiological effects of respiration on pediatric heart sounds. The segmentation is accomplished in three steps. First, the envelope of a heart sound signal is obtained with emphasis on the first heart sound (S1) and the second heart sound (S2) by using short-time spectral energy and autoregressive (AR) parameters of the signal. Then, the basic heart sounds are extracted taking into account the repetitive and spectral characteristics of S1 and S2 sounds by using a Multi-Layer Perceptron (MLP) neural network classifier. In the final step, by considering the diastolic and systolic interval variations due to the effect of a child's respiration, a complete and precise heart sound end-pointing and segmentation is achieved. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
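The envelope step described above can be sketched in a minimal form. This illustrates only the short-time spectral energy part of the method (the AR-parameter features and the MLP classifier are omitted), applied to a synthetic signal with two Gaussian-windowed bursts standing in for S1 and S2; all names and parameter values here are hypothetical, not taken from the paper.

```python
import numpy as np

def short_time_energy(x, frame_len, hop):
    """Short-time energy envelope: sum of squared samples per frame."""
    frames = [x[i:i + frame_len] for i in range(0, len(x) - frame_len + 1, hop)]
    return np.array([np.sum(f ** 2) for f in frames])

# Synthetic phonocardiogram: two bursts standing in for S1 and S2.
fs = 2000                              # sampling rate, Hz (hypothetical)
t = np.arange(0, 1.0, 1 / fs)
pcg = np.zeros_like(t)
for center in (0.2, 0.5):              # "S1" at 0.2 s, "S2" at 0.5 s
    pcg += np.exp(-((t - center) / 0.01) ** 2) * np.sin(2 * np.pi * 100 * (t - center))

env = short_time_energy(pcg, frame_len=64, hop=32)
# Peaks of env line up with the S1 and S2 bursts; thresholding this
# envelope is one plausible basis for end-pointing the heart sounds.
```

In the full method, this envelope would be combined with AR-derived features before the neural-network classification step.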

  7. Influence of the steady background turbulence level on second sound dynamics in He II II

    NASA Astrophysics Data System (ADS)

    Dalban-Canassy, M.; Hilton, D. K.; Sciver, S. W. Van

    2007-01-01

    We report complementary results to our previous publication [Dalban-Canassy M, Hilton DK, Van Sciver SW. Influence of the steady background turbulence level on second sound dynamics in He II. Adv Cryo Eng 2006;51:371-8], both of which are aimed at determining the influence of background turbulence on the breakpoint energy of second sound pulses in He II. The apparatus consists of a channel 175 mm long and 242 mm2 in cross section immersed in a saturated bath of He II at 1.7 K. A heater at the bottom end generates both background turbulence, through a low-level steady heat flux (up to qs = 2.6 kW/m2), and high-intensity square second sound pulses (qp = 100 or 200 kW/m2) of variable duration Δt0 (up to 1 ms). Two superconducting filament sensors, located 25.4 mm and 127 mm above the heater, measure the temperature profiles of the traveling pulses. We present here an analysis of the measurements gathered on the top sensor, and compare them to similar results for the bottom sensor [1]. The strong dependence of the breakpoint energy on the background heat flux previously illustrated is also observed on the top sensor. The present work shows that the ratio of energy received at the top sensor to that at the bottom sensor diminishes with increasing background heat flux.

  8. Quench dynamics in SRF cavities: can we locate the quench origin with 2nd sound?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maximenko, Yulia; /Moscow, MIPT; Segatskov, Dmitri A.

    2011-03-01

    A newly developed method of locating quenches in SRF cavities by detecting second-sound waves has been gaining popularity in SRF laboratories. The technique is based on measurements of time delays between the quench as determined by the RF system and arrival of the second-sound wave to the multiple detectors placed around the cavity in superfluid helium. Unlike multi-channel temperature mapping, this approach requires only a few sensors and simple readout electronics; it can be used with SRF cavities of almost arbitrary shape. One of its drawbacks is that, being an indirect method, it requires one to solve an inverse problem to find the location of a quench. We tried to solve this inverse problem by using a parametric forward model. By analyzing the data we found that the approximation where the second-sound emitter is a near-singular source does not describe the physical system well enough. A time-dependent analysis of the quench process can help us to put forward a more adequate model. We present here our current algorithm to solve the inverse problem and discuss the experimental results.
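The time-of-flight inversion underlying such methods can be illustrated with a deliberately simple sketch. It assumes known sensor positions, a second-sound speed of roughly 20 m/s (its approximate value in He II near 1.7 K), and the point-source approximation that the authors found inadequate for real quenches; the geometry and all names are hypothetical, and the paper's actual parametric forward model is more elaborate than this brute-force least-squares search.

```python
import numpy as np

C2 = 20.0  # m/s, approximate second-sound velocity in He II near 1.7 K

def arrival_times(origin, sensors, c2=C2):
    """Ideal time-of-flight from a point-like quench origin to each sensor."""
    return np.linalg.norm(sensors - origin, axis=1) / c2

def locate_quench(sensors, t_meas, grid, c2=C2):
    """Brute-force least-squares inversion: pick the grid point whose
    predicted arrival times best match the measured delays."""
    best, best_err = None, np.inf
    for p in grid:
        err = np.sum((arrival_times(p, sensors, c2) - t_meas) ** 2)
        if err < best_err:
            best, best_err = p, err
    return best

# Hypothetical geometry: 4 sensors on a square above the cavity (meters).
sensors = np.array([[0.1, 0.1, 0.1], [-0.1, 0.1, 0.1],
                    [0.1, -0.1, 0.1], [-0.1, -0.1, 0.1]])
true_origin = np.array([0.05, 0.07, 0.0])
t_meas = arrival_times(true_origin, sensors)   # noiseless synthetic delays

xs = np.linspace(-0.1, 0.1, 21)
grid = np.array([[x, y, z] for x in xs for y in xs for z in xs])
est = locate_quench(sensors, t_meas, grid)
```

A gradient-based solver would replace the grid search in practice, and a time-dependent source model would replace the point-source assumption, as the abstract suggests.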

  9. Simplified Rotation In Acoustic Levitation

    NASA Technical Reports Server (NTRS)

    Barmatz, M. B.; Gaspar, M. S.; Trinh, E. H.

    1989-01-01

    New technique based on old discovery used to control orientation of object levitated acoustically in axisymmetric chamber. Method does not require expensive equipment like additional acoustic drivers of precisely adjustable amplitude, phase, and frequency. Reflecting object acts as second source of sound. If reflecting object large enough, close enough to levitated object, or focuses reflected sound sufficiently, Rayleigh torque exerted on levitated object by reflected sound controls orientation of object.

  10. Design and development of second order MEMS sound pressure gradient sensor

    NASA Astrophysics Data System (ADS)

    Albahri, Shehab

    The design and development of a second order MEMS sound pressure gradient sensor is presented in this dissertation. Inspired by the directional hearing ability of the parasitoid fly, Ormia ochracea, a novel first order directional microphone that mimics the mechanical structure of the fly's ears and detects the sound pressure gradient has been developed. While first order directional microphones can be very beneficial in a large number of applications, there is great potential for remarkable improvements in performance through the use of second order systems. The second order directional microphone is able to provide a theoretical improvement in signal-to-noise ratio (SNR) of 9.5 dB, compared to the first-order system, which has a maximum SNR of 6 dB. Although the second order microphone is more sensitive to the sound angle of incidence, the nature of the design and fabrication process imposes different factors that could lead to deterioration in its performance. The first Ormia ochracea second order directional microphone was designed in 2004 and fabricated in 2006 at Binghamton University. The results of the tested parts indicate that the Ormia ochracea second order directional microphone performs mostly as an omnidirectional microphone. In this work, the previous design is reexamined and analyzed to explain the unexpected results. A more sophisticated tool implementing the finite element package ANSYS is used to examine the previous design's response. This new tool is used to study factors that were ignored in the previous design, namely response mismatch and fabrication uncertainty. A continuous model using Hamilton's principle is introduced to verify the results of the new method. Both models agree well and suggest a new way of optimizing the second order directional microphone through geometrical manipulation. In this work we also introduce a new fabrication process flow to increase the fabrication yield.
The newly suggested method uses the layered-shell analysis method in ANSYS. The developed models simulate the fabricated chips at different stages, with the stress at each layer introduced using thermal loading. The results indicate a new fabrication process flow that increases the rigidity of the composite layers and counters the deformation caused by the high stress in the thermal oxide layer.

  11. Time-frequency characterisation of paediatric heart sounds

    NASA Astrophysics Data System (ADS)

    Leung, Terence Sze-Tat

    1998-08-01

    The operation of the heart can be monitored by the sounds it emits. Structural defects or malfunction of the heart valves will cause additional abnormal sounds such as murmurs and ejection clicks. This thesis aims to characterise the heart sounds of three groups of children who either have an Atrial Septal Defect (ASD), a Ventricular Septal Defect (VSD), or are normal. Two aspects of heart sounds have been specifically investigated: the time-frequency analysis of systolic murmurs and the identification of splitting patterns in the second heart sound. The analysis is based on 42 paediatric heart sound recordings. Murmurs are sounds generated by turbulent flow of blood in the heart. They can be found in patients with both pathological and non-pathological conditions. The acoustic quality of the murmurs generated in each heart condition is different. The first aspect of this work is to characterise the three types of murmurs in the time-frequency domain. Modern time-frequency methods, including the Wigner-Ville Distribution, Smoothed Pseudo Wigner-Ville Distribution, Choi-Williams Distribution and spectrogram, have been applied to characterise the murmurs. It was found that the three classes of murmurs exhibited different signatures in their time-frequency representations. By performing Discriminant Analysis, it was shown that spectral features extracted from the time-frequency representations can be used to distinguish between the three classes. The second aspect of the research is to identify splitting patterns in the second heart sound, which consists of two acoustic components due to the closure of the aortic valve and pulmonary valve. The aortic valve usually closes before the pulmonary valve, introducing a time delay known as the 'split'. The split normally varies in duration over the respiratory cycle. In certain pathologies such as the ASD, the split becomes fixed over the respiration cycle. 
A technique based on adaptive signal decomposition is developed to measure the split and hence to identify the splitting pattern as either 'variable' or 'fixed'. This work has successfully characterised the murmurs and splitting patterns in the three groups of patients. Features extracted can be used for diagnostic purposes.

  12. Problems in nonlinear acoustics: Pulsed finite amplitude sound beams, nonlinear acoustic wave propagation in a liquid layer, nonlinear effects in asymmetric cylindrical sound beams, effects of absorption on the interaction of sound beams, and parametric receiving arrays

    NASA Astrophysics Data System (ADS)

    Hamilton, Mark F.

    1990-12-01

    This report discusses five projects, all of which involve basic theoretical research in nonlinear acoustics: (1) pulsed finite amplitude sound beams are studied with a recently developed time domain computer algorithm that solves the KZK nonlinear parabolic wave equation; (2) nonlinear acoustic wave propagation in a liquid layer is a study of harmonic generation and acoustic soliton formation in a liquid between a rigid and a free surface; (3) nonlinear effects in asymmetric cylindrical sound beams is a study of source asymmetries and scattering of sound by sound at high intensity; (4) effects of absorption on the interaction of sound beams is a completed study of the role of absorption in second harmonic generation and scattering of sound by sound; and (5) parametric receiving arrays is a completed study of parametric reception in a reverberant environment.

  13. Second order hydrodynamics for a special class of gravity duals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Springer, T.

    2009-04-15

    The sound mode hydrodynamic dispersion relation is computed up to order q^3 for a class of gravitational duals which includes both Schwarzschild AdS and Dp-brane metrics. The implications for second order transport coefficients are examined within the context of Israel-Stewart theory. These sound mode results are compared with previously known results for the shear mode. This comparison allows one to determine the third order hydrodynamic contributions to the shear mode for the class of metrics considered here.

  14. Airborne sound transmission loss characteristics of wood-frame construction

    NASA Astrophysics Data System (ADS)

    Rudder, F. F., Jr.

    1985-03-01

    This report summarizes the available data on the airborne sound transmission loss properties of wood-frame construction and evaluates the methods for predicting the airborne sound transmission loss. The first part of the report comprises a summary of sound transmission loss data for wood-frame interior walls and floor-ceiling construction. Data bases describing the sound transmission loss characteristics of other building components, such as windows and doors, are discussed. The second part of the report presents the prediction of the sound transmission loss of wood-frame construction. Appropriate calculation methods are described both for single-panel and for double-panel construction with sound absorption material in the cavity. With available methods, single-panel construction and double-panel construction with the panels connected by studs may be adequately characterized. Technical appendices are included that summarize laboratory measurements, compare measurement with theory, describe details of the prediction methods, and present sound transmission loss data for common building materials.

  15. NEW THORACIC MURMURS, WITH TWO NEW INSTRUMENTS, THE REFRACTOSCOPE AND THE PARTIAL STETHOSCOPE

    PubMed Central

    Parker, Frederick D.

    1918-01-01

    1. An understanding of the physics of sound is essential for a better comprehension of refined auscultation, tone analysis, and the use of these instruments. 2. The detection of variations of the third heart sound should prove a valuable aid in predicting mitral disease. 3. The variations of the outflow sound should prove a valuable aid in determining early aortic lesions with the type of accompanying intimal changes. 4. The character of chamber timbre as distinct from loudness heard as the first and second heart sounds denotes more often the condition of heart muscle, and must not be confounded with valvular disease. 5. The full significance of sound shadows is uncertain. Cardiac sound shadows appear normally in the right axilla and below the left clavicle. Their mode of production is quite clear. 6. Both the third heart sound and the outflow sound may be heard with the ordinary stethoscope. PMID:19868281

  16. Dimensions of vehicle sounds perception.

    PubMed

    Wagner, Verena; Kallus, K Wolfgang; Foehl, Ulrich

    2017-10-01

    Vehicle sounds play an important role in customer satisfaction and can serve as a differentiating factor between brands. With an online survey of 1762 German and American customers, the requirement characteristics of high-quality vehicle sounds were determined. On the basis of these characteristics, a requirement profile was generated for every analyzed sound. These profiles were investigated in a second study with 78 customers using real vehicles. The assessment results of the vehicle sounds can be represented using the dimensions "timbre", "loudness", and "roughness/sharpness". The comparison of the requirement profiles and the assessment results shows that the sounds which are perceived as pleasant and high-quality more often correspond to the requirement profile. High-quality sounds are characterized by the fact that they are rather gentle, soft and reserved, rich, a bit dark and not too rough. For those sounds which are assessed worse by the customers, recommendations for improvements can be derived. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Emergence of band-pass filtering through adaptive spiking in the owl's cochlear nucleus

    PubMed Central

    MacLeod, Katrina M.; Lubejko, Susan T.; Steinberg, Louisa J.; Köppl, Christine; Peña, Jose L.

    2014-01-01

    In the visual, auditory, and electrosensory modalities, stimuli are defined by first- and second-order attributes. The fast time-pressure signal of a sound, a first-order attribute, is important, for instance, in sound localization and pitch perception, while its slow amplitude-modulated envelope, a second-order attribute, can be used for sound recognition. Ascending the auditory pathway from ear to midbrain, neurons increasingly show a preference for the envelope and are most sensitive to particular envelope modulation frequencies, a tuning considered important for encoding sound identity. The level at which this tuning property emerges along the pathway varies across species, and the mechanism of how this occurs is a matter of debate. In this paper, we target the transition between auditory nerve fibers and the cochlear nucleus angularis (NA). While the owl's auditory nerve fibers simultaneously encode the fast and slow attributes of a sound, one synapse further, NA neurons encode the envelope more efficiently than the auditory nerve. Using in vivo and in vitro electrophysiology and computational analysis, we show that a single-cell mechanism inducing spike threshold adaptation can explain the difference in neural filtering between the two areas. We show that spike threshold adaptation can explain the increased selectivity to modulation frequency, as input level increases in NA. These results demonstrate that a spike generation nonlinearity can modulate the tuning to second-order stimulus features, without invoking network or synaptic mechanisms. PMID:24790170

  18. Békésy's contributions to our present understanding of sound conduction to the inner ear.

    PubMed

    Puria, Sunil; Rosowski, John J

    2012-11-01

    In our daily lives we hear airborne sounds that travel primarily through the external and middle ear to the cochlear sensory epithelium. We also hear sounds that travel to the cochlea via a second sound-conduction route, bone conduction. This second pathway is excited by vibrations of the head and body that result from substrate vibrations, direct application of vibrational stimuli to the head or body, or vibrations induced by airborne sound. The sensation of bone-conducted sound is affected by the presence of the external and middle ear, but is not completely dependent upon their function. Measurements of the differential sensitivity of patients to airborne sound and direct vibration of the head are part of the routine battery of clinical tests used to separate conductive and sensorineural hearing losses. Georg von Békésy designed a careful set of experiments and pioneered many measurement techniques on human cadaver temporal bones, in physical models, and in human subjects to elucidate the basic mechanisms of air- and bone-conducted sound. Looking back, one marvels at the sheer number of experiments he performed on sound conduction, mostly by himself without the aid of students or research associates. Békésy's work had a profound impact on the field of middle-ear mechanics and bone conduction fifty years ago when he received his Nobel Prize. Today many of Békésy's ideas continue to be investigated and extended, some have been supported by new evidence, some have been refuted, while others remain to be tested. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. Cognitive integration of asynchronous natural or non-natural auditory and visual information in videos of real-world events: an event-related potential study.

    PubMed

    Liu, B; Wang, Z; Wu, G; Meng, X

    2011-04-28

    In this paper, we aim to study the cognitive integration of asynchronous natural or non-natural auditory and visual information in videos of real-world events. Videos with asynchronous semantically consistent or inconsistent natural sound or speech were used as stimuli in order to compare the difference and similarity between multisensory integrations of videos with asynchronous natural sound and speech. The event-related potential (ERP) results showed that N1 and P250 components were elicited irrespective of whether natural sounds were consistent or inconsistent with critical actions in videos. Videos with inconsistent natural sound could elicit N400-P600 effects compared to videos with consistent natural sound, which was similar to the results from unisensory visual studies. Videos with semantically consistent or inconsistent speech could both elicit N1 components. Meanwhile, videos with inconsistent speech would elicit N400-LPN effects in comparison with videos with consistent speech, which showed that this semantic processing was probably related to recognition memory. Moreover, the N400 effect elicited by videos with semantically inconsistent speech was larger and later than that elicited by videos with semantically inconsistent natural sound. Overall, multisensory integration of videos with natural sound or speech could be roughly divided into two stages. For the videos with natural sound, the first stage might reflect the connection between the received information and the stored information in memory; and the second one might stand for the evaluation process of inconsistent semantic information. For the videos with speech, the first stage was similar to the first stage of videos with natural sound; while the second one might be related to recognition memory process. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.

  20. Dynamic Spatial Hearing by Human and Robot Listeners

    NASA Astrophysics Data System (ADS)

    Zhong, Xuan

    This study consisted of several related projects on dynamic spatial hearing by both human and robot listeners. The first experiment investigated the maximum number of sound sources that human listeners could localize at the same time. Speech stimuli were presented simultaneously from different loudspeakers at multiple time intervals. The maximum number of perceived sound sources was close to four. The second experiment asked whether the amplitude modulation of multiple static sound sources could lead to the perception of auditory motion. On the horizontal and vertical planes, four independent noise sound sources with 60° spacing were amplitude modulated with consecutively larger phase delay. At lower modulation rates, motion could be perceived by human listeners in both cases. The third experiment asked whether several sources at static positions could serve as "acoustic landmarks" to improve the localization of other sources. Four continuous speech sound sources were placed on the horizontal plane with 90° spacing and served as the landmarks. The task was to localize a noise that was played for only three seconds while the listener was passively rotated in a chair in the middle of the loudspeaker array. The human listeners were better able to localize the sound sources with landmarks than without. The remaining experiments used an acoustic manikin in an attempt to fuse binaural recordings and motion data to localize sound sources. A dummy head with recording devices was mounted on top of a rotating chair and motion data were collected. The fourth experiment showed that an Extended Kalman Filter could be used to localize sound sources in a recursive manner. The fifth experiment demonstrated the use of a fitting method for separating multiple sound sources.
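The recursive localization idea from the fourth experiment can be sketched under strong simplifying assumptions: a single fixed source, head yaw known exactly from motion data, and an ITD-like binaural cue modeled as the sine of the head-relative azimuth. This is not the study's actual filter, just a minimal scalar EKF illustrating how rotation plus binaural measurements pin down a source direction; all values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

true_theta = np.deg2rad(40.0)                 # fixed source azimuth, world frame
phis = np.deg2rad(np.linspace(0, 180, 60))    # head yaw samples from motion data
R = 0.02 ** 2                                 # measurement noise variance

def itd_cue(theta, phi):
    """Binaural cue modeled as sin of the head-relative source azimuth."""
    return np.sin(theta - phi)

# Simulated noisy binaural measurements while the chair rotates.
ys = itd_cue(true_theta, phis) + rng.normal(0, 0.02, phis.size)

# Scalar EKF for a static state (the source azimuth); no process noise.
theta, P = 0.0, np.deg2rad(60.0) ** 2         # initial guess and variance
for phi, y in zip(phis, ys):
    H = np.cos(theta - phi)                   # Jacobian of the measurement model
    S = H * P * H + R                         # innovation variance
    K = P * H / S                             # Kalman gain
    theta = theta + K * (y - itd_cue(theta, phi))
    P = (1 - K * H) * P

# theta converges toward true_theta as measurements accumulate.
```

A real system would track multiple sources, handle front-back ambiguity, and fold head-related transfer function cues into the measurement model.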

  1. The opponent channel population code of sound location is an efficient representation of natural binaural sounds.

    PubMed

    Młynarski, Wiktor

    2015-05-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.

  2. Are You Listening to Your Computer?

    ERIC Educational Resources Information Center

    Shugg, Alan

    1992-01-01

    Accepting the great motivational value of computers in second-language learning, this article describes ways to use authentic language recorded on a computer with HyperCard. Graphics, sound, and hardware/software requirements are noted, along with brief descriptions of programming with sound and specific programs. (LB)

  3. Nonlinear Sound Field by Interdigital Transducers in Water

    NASA Astrophysics Data System (ADS)

    Maezawa, Miyuki; Kamada, Rui; Kamakura, Tomoo; Matsuda, Kazuhisa

    2008-05-01

    Nonlinear ultrasound beams in water radiated by a surface acoustic wave (SAW) device are examined experimentally and theoretically. SAWs on a 128° X-cut Y-propagation LiNbO3 substrate are excited by 50 pairs of interdigital transducers (IDTs). The device, with a 2 ×10 mm2 rectangular aperture and a center frequency of 20 MHz, radiates two ultrasound beams in the direction of the Rayleigh angle, which is determined by the propagation speed of the SAW on the device and that of the longitudinal wave in water. The Rayleigh angle is 22° in the present experimental situation. The fundamental and second-harmonic sound pressures are measured along and across the beam using a miniature hydrophone whose active element is 0.4 mm in diameter and whose frequency response is calibrated up to 40 MHz. The Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation is utilized to theoretically predict the sound pressure amplitudes. The theoretical predictions of both the fundamental and second-harmonic pressures agree well with the measured sound pressures.
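
    As a rough numerical check of the quoted Rayleigh angle, the leaky-SAW radiation angle follows from matching trace wavelengths at the interface, sin θ = c_water / c_SAW. The speeds below are typical textbook values assumed for illustration, not taken from the record.

```python
import math

# Assumed typical values (not from the record): longitudinal sound
# speed in water and Rayleigh-wave speed on 128-deg LiNbO3.
c_water = 1490.0   # m/s
c_saw = 3990.0     # m/s

# Coincidence (Rayleigh) angle: the SAW leaks into water at the angle
# where the trace wavelengths match.
theta_deg = math.degrees(math.asin(c_water / c_saw))   # about 22 degrees
```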

  4. A new method for the automatic interpretation of Schlumberger and Wenner sounding curves

    USGS Publications Warehouse

    Zohdy, A.A.R.

    1989-01-01

    A fast iterative method for the automatic interpretation of Schlumberger and Wenner sounding curves is based on obtaining interpreted depths and resistivities from shifted electrode spacings and adjusted apparent resistivities, respectively. The method is fully automatic. It does not require an initial guess of the number of layers, their thicknesses, or their resistivities; and it does not require extrapolation of incomplete sounding curves. The number of layers in the interpreted model equals the number of digitized points on the sounding curve. The resulting multilayer model is always well-behaved with no thin layers of unusually high or unusually low resistivities. For noisy data, interpretation is done in two sets of iterations (two passes). Anomalous layers, created because of noise in the first pass, are eliminated in the second pass. Such layers are eliminated by considering the best-fitting curve from the first pass to be a smoothed version of the observed curve and automatically reinterpreting it (second pass). The application of the method is illustrated by several examples.
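
    The flavor of the iterative adjustment can be sketched as follows: each digitized point becomes a layer, and each layer's resistivity is repeatedly scaled by the ratio of observed to calculated apparent resistivity. The forward model here is a deliberately crude placeholder (log-domain smoothing), not the true Schlumberger kernel, so this illustrates only the update rule, not the published algorithm.

```python
import math

def forward(model):
    """Placeholder forward model (NOT the true Schlumberger kernel):
    'apparent resistivity' as a weighted smoothing of log-resistivities."""
    logs = [math.log(r) for r in model]
    n = len(logs)
    return [
        math.exp(0.25 * logs[max(0, i - 1)]
                 + 0.5 * logs[i]
                 + 0.25 * logs[min(n - 1, i + 1)])
        for i in range(n)
    ]

def iterate(observed, n_iter=200):
    # One layer per digitized point; start from the observed curve itself.
    model = list(observed)
    for _ in range(n_iter):
        calc = forward(model)
        # Multiplicative correction: scale each layer resistivity by the
        # observed/calculated ratio at the corresponding spacing.
        model = [m * (o / c) for m, o, c in zip(model, observed, calc)]
    return model

observed = [10, 12, 20, 45, 80, 60, 30, 25]   # hypothetical digitized curve
model = iterate(observed)
final_misfit = max(abs(o - c) / o for o, c in zip(observed, forward(model)))
```

    With a real forward kernel, the same loop drives the calculated curve toward the observed one while the layer boundaries come from the shifted electrode spacings.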

  5. Implications of diadochokinesia in children with speech sound disorder.

    PubMed

    Wertzner, Haydée Fiszbein; Pagan-Neves, Luciana de Oliveira; Alves, Renata Ramos; Barrozo, Tatiane Faria

    2013-01-01

    To verify the performance of children with and without speech sound disorder in oral motor skills measured by oral diadochokinesia, according to age and gender, and to compare the results of two different methods of analysis. Participants were 72 subjects aged 5 years to 7 years and 11 months, divided into four subgroups according to the presence of speech sound disorder (Study Group and Control Group) and age (<6 years and 5 months and >6 years and 5 months). Diadochokinesia skills were assessed by the repetition of the sequences 'pa', 'ta', 'ka' and 'pataka', measured both manually and by the Motor Speech Profile® software. Gender distribution differed statistically between groups, but gender did not influence the number of sequences per second produced. A correlation between the number of sequences per second and age was observed for all sequences (except 'ka') only for the control group children. Comparison between groups did not indicate differences in the number of sequences per second by age. Results showed strong agreement between the values of oral diadochokinesia measured manually and by MSP. This research demonstrated the importance of using different methods of analysis in the functional evaluation of oro-motor processing in children with speech sound disorder, and highlighted the oro-motor difficulties of children under eight years of age.
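
    The diadochokinesia measure itself is a simple rate, which a minimal helper makes explicit (the trial numbers below are hypothetical, not from the study):

```python
def ddk_rate(n_sequences, duration_s):
    """Oral diadochokinesia rate: repetitions of a syllable sequence
    (e.g. 'pataka') per second over a timed trial."""
    return n_sequences / duration_s

# Hypothetical trial: 18 repetitions of 'pataka' in 8.2 seconds.
rate = ddk_rate(18, 8.2)   # sequences per second
```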

  6. First and second sound in cylindrically trapped gases.

    PubMed

    Bertaina, G; Pitaevskii, L; Stringari, S

    2010-10-08

    We investigate the propagation of density and temperature waves in a cylindrically trapped gas with radial harmonic confinement. Starting from two-fluid hydrodynamic theory, we derive effective 1D equations for the chemical potential and the temperature which explicitly account for the effects of viscosity and thermal conductivity. Unlike in quantum fluids confined by rigid walls, the harmonic confinement allows for the propagation of both first and second sound in the long wavelength limit. We provide quantitative predictions for the two sound velocities of a superfluid Fermi gas at unitarity. For shorter wavelengths we discover a surprising new class of excitations continuously spread over a finite interval of frequencies. This results in a nondissipative damping in the response function, which is calculated analytically in the limiting case of a classical ideal gas.
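
    For context, in a uniform superfluid the two sound speeds follow from Landau two-fluid hydrodynamics (a standard result, quoted here for orientation; the paper derives effective 1D analogues):

```latex
c_1^2 = \left(\frac{\partial P}{\partial \rho}\right)_{\bar{s}},
\qquad
c_2^2 = \frac{\rho_s}{\rho_n}\,\frac{T\,\bar{s}^{\,2}}{\bar{c}_v},
```

    where $\rho_s$ and $\rho_n$ are the superfluid and normal densities, $\bar{s}$ is the entropy per unit mass, and $\bar{c}_v$ is the specific heat per unit mass.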

  7. Sounding Rockets as a Real Flight Platform for Aerothermodynamic Cfd Validation of Hypersonic Flight Experiments

    NASA Astrophysics Data System (ADS)

    Stamminger, A.; Turner, J.; Hörschgen, M.; Jung, W.

    2005-02-01

    This paper describes the potential of sounding rockets to provide a platform for flight experiments under hypersonic conditions as a supplement to wind tunnel tests. Real flight data from measurement durations longer than 30 seconds can be compared with predictions from CFD calculations. The paper reviews projects flown on sounding rockets, but mainly describes current efforts at the Mobile Rocket Base, DLR, on the SHarp Edge Flight EXperiment (SHEFEX).

  8. Low-momentum dynamic structure factor of a strongly interacting Fermi gas at finite temperature: A two-fluid hydrodynamic description

    NASA Astrophysics Data System (ADS)

    Hu, Hui; Zou, Peng; Liu, Xia-Ji

    2018-02-01

    We provide a description of the dynamic structure factor of a homogeneous unitary Fermi gas at low momentum and low frequency, based on the dissipative two-fluid hydrodynamic theory. The viscous relaxation time is estimated and is used to determine the regime where the hydrodynamic theory is applicable and to understand the nature of sound waves in the density response near the superfluid phase transition. Collecting the best currently available knowledge of the shear viscosity and thermal conductivity, we calculate the various diffusion coefficients and obtain the damping widths of the (first and second) sounds. We find that the damping width of the first sound is greatly enhanced across the superfluid transition, and that very close to the transition the second sound might be resolved in the density response for transferred momenta up to half the Fermi momentum. Our work is motivated by the recent measurement of the local dynamic structure factor at low momentum at Swinburne University of Technology and the ongoing experiment on sound attenuation of a homogeneous unitary Fermi gas at Massachusetts Institute of Technology. We discuss how the measurement of the velocity and damping width of the sound modes in the low-momentum dynamic structure factor may lead to an improved determination of the universal superfluid density, shear viscosity, and thermal conductivity of a unitary Fermi gas.

  9. Experimental Simulation of Active Control With On-line System Identification on Sound Transmission Through an Elastic Plate

    NASA Technical Reports Server (NTRS)

    1998-01-01

    An adaptive control algorithm with on-line system identification capability has been developed. A great advantage of this scheme is that no additional system identification mechanism, such as an uncorrelated random signal generator serving as the identification source, is required. A time-varying plate-cavity system is used to demonstrate the control performance of the algorithm. The time-varying system consists of a stainless-steel plate bolted down over a rigid cavity opening whose depth is changed with respect to time. For a given externally located harmonic sound excitation, system identification and control are executed simultaneously to minimize the transmitted sound in the cavity. The control performance of the algorithm is examined for two cases. In the first, the cavity contains no water and the external disturbance frequency is swept at 1 Hz/s. The result shows excellent frequency-tracking capability, with cavity internal sound suppression of 40 dB. In the second case, the cavity is initially empty and is then filled with water to 3/20 of its depth over 60 seconds while the external sound excitation is held at a fixed frequency. Hence, the cavity resonant frequency decreases and passes through the external excitation frequency. The algorithm achieves 40 dB of transmitted noise suppression without compromising its system identification tracking capability.
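
    The essential ingredient, identifying a plant on-line from its ordinary operating input and output with no extra probe signal, can be sketched with a standard LMS adaptive filter. This is a generic illustration, not the paper's algorithm; the unknown plant here is an assumed 3-tap FIR system.

```python
import random

def lms_identify(plant, n_taps, mu, n_steps, rng):
    """Adapt an FIR model of an unknown plant from its input/output
    alone (no extra identification signal), via the LMS update."""
    w = [0.0] * n_taps           # adaptive filter weights
    buf = [0.0] * n_taps         # input delay line
    err = 0.0
    for _ in range(n_steps):
        x = rng.uniform(-1, 1)   # the ordinary operating input
        buf = [x] + buf[:-1]
        d = plant(buf)           # observed plant output
        y = sum(wi * xi for wi, xi in zip(w, buf))
        err = d - y              # identification error
        w = [wi + mu * err * xi for wi, xi in zip(w, buf)]
    return w, err

# Hypothetical unknown plant: a fixed 3-tap FIR response.
true_taps = [0.6, -0.3, 0.1]
plant = lambda buf: sum(h * x for h, x in zip(true_taps, buf))

rng = random.Random(0)
w, final_err = lms_identify(plant, n_taps=3, mu=0.1, n_steps=2000, rng=rng)
```

    In the paper's setting the identified model is updated continuously while the controller runs, which is what lets it track the cavity as the water level changes.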

  10. 46 CFR 95.15-30 - Alarms.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Dioxide Extinguishing Systems, Details § 95.15-30 Alarms. (a) Spaces which are protected by a carbon... audible alarm in such spaces which will be automatically sounded when the carbon dioxide is admitted to... sound during the 20 second delay period prior to the discharge of carbon dioxide into the space, and the...

  11. 46 CFR 193.15-30 - Alarms.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... to persons on board while the vessel is being navigated which are protected by a carbon dioxide... automatically sounded when the carbon dioxide is admitted to the space. The alarm shall be conspicuously and... arranged as to sound during the 20-second delay period prior to the discharge of carbon dioxide into the...

  12. 46 CFR 193.15-30 - Alarms.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... to persons on board while the vessel is being navigated which are protected by a carbon dioxide... automatically sounded when the carbon dioxide is admitted to the space. The alarm shall be conspicuously and... arranged as to sound during the 20-second delay period prior to the discharge of carbon dioxide into the...

  13. Blood pressure reprogramming adapter assists signal recording

    NASA Technical Reports Server (NTRS)

    Vick, H. A.

    1967-01-01

    Blood pressure reprogramming adapter separates the two components of a blood pressure signal, a dc pressure signal and an ac Korotkoff-sound signal, so that the Korotkoff sounds are recorded on one channel as received, while the dc pressure signal is converted to FM and recorded on a second channel.

  14. Techniques for decoding speech phonemes and sounds: A concept

    NASA Technical Reports Server (NTRS)

    Lokerson, D. C.; Holby, H. G.

    1975-01-01

    Techniques studied involve conversion of speech sounds into machine-compatible pulse trains. (1) Voltage-level quantizer produces number of output pulses proportional to amplitude characteristics of vowel-type phoneme waveforms. (2) Pulses produced by quantizer of first speech formants are compared with pulses produced by second formants.

  15. The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

    PubMed Central

    Młynarski, Wiktor

    2015-01-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match the specific demands of the sound localization task. This work provides evidence that the experimentally identified properties of spatial auditory neurons follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left- and right-ear sounds were separately encoded by a population of complex-valued basis functions that separated phase and amplitude; both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. The spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match the tuning characteristics of neurons in the mammalian auditory cortex well. This study connects neuronal coding of auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373

  16. A Series of Case Studies of Tinnitus Suppression With Mixed Background Stimuli in a Cochlear Implant

    PubMed Central

    Keiner, A. J.; Walker, Kurt; Deshpande, Aniruddha K.; Witt, Shelley; Killian, Matthijs; Ji, Helena; Patrick, Jim; Dillier, Norbert; van Dijk, Pim; Lai, Wai Kong; Hansen, Marlan R.; Gantz, Bruce

    2015-01-01

    Purpose Background sounds provided by a wearable sound playback device were mixed with the acoustical input picked up by a cochlear implant speech processor in an attempt to suppress tinnitus. Method First, patients were allowed to listen to several sounds and to select up to 4 sounds that they thought might be effective. These stimuli were programmed to loop continuously in the wearable playback device. Second, subjects were instructed to use 1 background sound each day on the wearable device, and they sequenced the selected background sounds during a 28-day trial. Patients were instructed to go to a website at the end of each day and rate the loudness and annoyance of the tinnitus as well as the acceptability of the background sound. Patients completed the Tinnitus Primary Function Questionnaire (Tyler, Stocking, Secor, & Slattery, 2014) at the beginning of the trial. Results Results indicated that background sounds were very effective at suppressing tinnitus. There was considerable variability in sounds preferred by the subjects. Conclusion The study shows that a background sound mixed with the microphone input can be effective for suppressing tinnitus during daily use of the sound processor in selected cochlear implant users. PMID:26001407

  17. Hybrid mode-scattering/sound-absorbing segmented liner system and method

    NASA Technical Reports Server (NTRS)

    Walker, Bruce E. (Inventor); Hersh, Alan S. (Inventor); Rice, Edward J. (Inventor)

    1999-01-01

    A hybrid mode-scattering/sound-absorbing segmented liner system and method in which an initial sound field within a duct is steered or scattered into higher-order modes in a first mode-scattering segment such that it is more readily and effectively absorbed in a second sound-absorbing segment. The mode-scattering segment is preferably a series of active control components positioned along the annulus of the duct, each of which includes a controller and a resonator into which a piezoelectric transducer generates the steering noise. The sound-absorbing segment is positioned acoustically downstream of the mode-scattering segment, and preferably comprises a honeycomb-backed passive acoustic liner. The invention is particularly adapted for use in turbofan engines, both in the inlet and exhaust.

  18. Use of quantitative ultrasonography in differentiating osteomalacia from osteoporosis: preliminary study.

    PubMed

    Luisetto, G; Camozzi, V; De Terlizzi, F

    2000-04-01

    The aim of this work was to use ultrasonographic technology to differentiate osteoporosis from osteomalacia on the basis of different patterns of the graphic trace. Three patients with osteomalacia and three with osteoporosis, all with the same lumbar spine bone mineral density, were studied. The velocity of the ultrasound beam in bone was measured by a DBM Sonic 1,200/I densitometer at the proximal phalanges of the hands in all the patients. The ultrasound beam velocity was measured when the first peak of the waveform reached a predetermined minimum amplitude value (amplitude-dependent speed of sound) as well as at the lowest point prior to the first and second peaks, before they reached the predetermined minimum amplitude value (first and second minimum speeds of sound). The graphic traces were further analyzed by Fourier analysis, and both the main frequency (f0) and the width of the peak centered in the f0 (full width at half maximum) were measured. The first and second minimum speeds of sound were significantly lower in the patients with osteomalacia than in the osteoporosis group. The first minimum speed of sound was 2,169 +/- 73 m/s in osteoporosis and 1,983 +/- 61 m/s in osteomalacia (P < 0.0001); the second minimum peak speed of sound was 1,895 +/-59 m/s in osteoporosis and 1,748 +/- 38 m/s in osteomalacia (P < 0.0001). The f0 was similar in the two groups (osteoporosis, 0.85 +/- 0.14 MHz; osteomalacia, 0.9 +/- 0.22 MHz; P = 0.72), and the full width at half maximum was significantly higher in the osteomalacia patients (0.52 +/- 0.14 MHz) than in the osteoporosis patients (0.37 +/- 0.15 MHz) (P = 0.022). This study confirms that ultrasonography is a promising, noninvasive method that could be used to differentiate osteoporosis from osteomalacia, but further studies should be carried out before this method can be introduced into clinical practice.

  19. On the Correction of Shipboard Miniradiosondes of the Western Mediterranean Circulation Experiment - June 1986

    DTIC Science & Technology

    1989-03-01

    but no attempt was made at correction. The modification of the ambient atmospheric and oceanic environments due to the presence of a ship has been...in June, 1986. Two cruises were aboard the research vessel USNS Lynch. On the first cruise, 13 soundings were made in the western Mediterranean...between Spain and Algeria; on the second, 26 soundings were made near the Strait of Gibraltar. The third cruise, for which 16 soundings are available, was

  20. Aeroacoustic Improvements to Fluidic Chevron Nozzles

    NASA Technical Reports Server (NTRS)

    Henderson, Brenda; Kinzie, Kevin; Whitmire, Julia; Abeysinghe, Amal

    2006-01-01

    Fluidic chevrons use injected air near the trailing edge of a nozzle to emulate mixing and jet noise reduction characteristics of mechanical chevrons. While previous investigations of "first generation" fluidic chevron nozzles showed only marginal improvements in effective perceived noise levels when compared to nozzles without injection, significant improvements in noise reduction characteristics were achieved through redesigned "second generation" nozzles on a bypass ratio 5 model system. The second-generation core nozzles had improved injection passage contours, external nozzle contour lines, and nozzle trailing edges. The new fluidic chevrons resulted in reduced overall sound pressure levels over that of the baseline nozzle for all observation angles. Injection ports with steep injection angles produced lower overall sound pressure levels than those produced by shallow injection angles. The reductions in overall sound pressure levels were the result of noise reductions at low frequencies. In contrast to the first-generation nozzles, only marginal increases in high frequency noise over that of the baseline nozzle were observed for the second-generation nozzles. The effective perceived noise levels of the new fluidic chevrons are shown to approach those of the core mechanical chevrons.

  1. Using second-sound shock waves to probe the intrinsic critical velocity of liquid helium II

    NASA Technical Reports Server (NTRS)

    Turner, T. N.

    1983-01-01

    A critical velocity truly intrinsic to liquid helium II is experimentally sought in the bulk fluid far from the apparatus walls. Termed the 'fundamental critical velocity,' it is necessarily caused by mutual interactions which operate between the two fluid components and which are activated at large relative velocities. It is argued that flow induced by second-sound shock waves provides the ideal means by which to activate and isolate the fundamental critical velocity from other extraneous fluid-wall interactions. Experimentally it is found that large-amplitude second-sound shock waves initiate a breakdown in the superfluidity of helium II, which is dramatically manifested as a limit to the maximum attainable shock strength. This breakdown is shown to be caused by a fundamental critical velocity. Secondary effects include boiling for ambient pressures near the saturated vapor pressure or the formation of helium I boundary layers at higher ambient pressures. When compared to the intrinsic critical velocity discovered in highly restricted geometries, the shock-induced critical velocity displays a similar temperature dependence and is of the same order of magnitude.

  2. Diffuse spreading of inhomogeneities in the ionospheric dusty plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shalimov, S. L., E-mail: pmsk7@mail.ru; Kozlovsky, A.

    2015-08-15

    According to results of sounding of the lower ionosphere at altitudes of about 100 km, the duration of radio reflections from sufficiently dense ionized meteor trails, which characterizes their lifetime, can reach a few tens of seconds to several tens of minutes. This is much longer than the characteristic spreading time (on the order of fractions of a second to several seconds) typical in meteor radar measurements. The presence of dust in the lower ionosphere is shown to affect the ambipolar diffusion coefficient, which determines the spreading of plasma inhomogeneities. It is found that the diffusion coefficient depends substantially on the charge and size of dust grains, which allows one to explain the results of ionospheric sounding.
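
    The "characteristic spreading time" scale can be illustrated with the classical decay time of an underdense meteor radar echo, which is set by ambipolar diffusion of the trail: τ = λ² / (16π²D_a). The values below are illustrative assumptions, not taken from the record.

```python
import math

# Illustrative (assumed) values: a typical VHF meteor-radar wavelength
# and an ambipolar diffusion coefficient near 95 km altitude.
wavelength = 7.0    # m
d_ambipolar = 5.0   # m^2/s

# Classical underdense-echo decay time: tau = lambda^2 / (16 * pi^2 * D).
tau = wavelength**2 / (16 * math.pi**2 * d_ambipolar)   # seconds
```

    A smaller effective diffusion coefficient, as argued for dusty plasma, lengthens this time correspondingly.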

  3. Ultrasonic Recovery and Modification of Food Ingredients

    NASA Astrophysics Data System (ADS)

    Vilkhu, Kamaljit; Manasseh, Richard; Mawson, Raymond; Ashokkumar, Muthupandian

    There are two general classes of effects that sound, and ultrasound in particular, can have on a fluid. First, very significant modifications to the nature of food and food ingredients can be due to the phenomena of bubble acoustics and cavitation. The applied sound oscillates bubbles in the fluid, creating intense forces at microscopic scales thus driving chemical changes. Second, the sound itself can cause the fluid to flow vigorously, both on a large scale and on a microscopic scale; furthermore, the sound can cause particles in the fluid to move relative to the fluid. These streaming phenomena can redistribute materials within food and food ingredients at both microscopic and macroscopic scales.

  4. 33 CFR 83.35 - Sound signals in restricted visibility (Rule 35).

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... more than 2 minutes two prolonged blasts in succession with an interval of about 2 seconds between them... than 2 minutes, three blasts in succession; namely, one prolonged followed by two short blasts. (d..., shall at intervals of not more than 2 minutes sound four blasts in succession; namely, one prolonged...

  5. 33 CFR 83.35 - Sound signals in restricted visibility (Rule 35).

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... more than 2 minutes two prolonged blasts in succession with an interval of about 2 seconds between them... than 2 minutes, three blasts in succession; namely, one prolonged followed by two short blasts. (d..., shall at intervals of not more than 2 minutes sound four blasts in succession; namely, one prolonged...

  6. 33 CFR 83.35 - Sound signals in restricted visibility (Rule 35).

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... more than 2 minutes two prolonged blasts in succession with an interval of about 2 seconds between them... than 2 minutes, three blasts in succession; namely, one prolonged followed by two short blasts. (d..., shall at intervals of not more than 2 minutes sound four blasts in succession; namely, one prolonged...

  7. 33 CFR 83.35 - Sound signals in restricted visibility (Rule 35).

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... more than 2 minutes two prolonged blasts in succession with an interval of about 2 seconds between them... than 2 minutes, three blasts in succession; namely, one prolonged followed by two short blasts. (d..., shall at intervals of not more than 2 minutes sound four blasts in succession; namely, one prolonged...

  8. The Relationship between Inexperienced Listeners' Perceptions and Acoustic Correlates of Children's /r/ Productions

    ERIC Educational Resources Information Center

    Klein, Harriet B.; Grigos, Maria I.; Byun, Tara McAllister; Davidson, Lisa

    2012-01-01

    This study examined inexperienced listeners' perceptions of children's naturally produced /r/ sounds with reference to levels of accuracy determined by consensus between two expert clinicians. Participants rated /r/ sounds as fully correct, distorted or incorrect/non-rhotic. Second and third formant heights were measured to explore the…

  9. Identifying Residual Speech Sound Disorders in Bilingual Children: A Japanese-English Case Study

    ERIC Educational Resources Information Center

    Preston, Jonathan L.; Seki, Ayumi

    2011-01-01

    Purpose: To describe (a) the assessment of residual speech sound disorders (SSDs) in bilinguals by distinguishing speech patterns associated with second language acquisition from patterns associated with misarticulations and (b) how assessment of domains such as speech motor control and phonological awareness can provide a more complete…

  10. Autonomy in Second Language Phonology: Choice vs. Limits

    ERIC Educational Resources Information Center

    Moyer, Alene

    2017-01-01

    Learning a new sound system poses challenges of a social, psychological, and cognitive nature, but the learner's decisions are key to ultimate attainment. This presentation focuses on two essential concepts: CHOICE, or how one wants to sound in the target language; and LIMITS, or various challenges to one's goals vis-a-vis accent. Qualitative and…

  11. Children's Moral and Ecological Reasoning about the Prince William Sound Oil Spill.

    ERIC Educational Resources Information Center

    Kahn, Peter H., Jr.; Friedman, Batya

    This study investigated children's moral and ecological conceptions and values about an actual, environmentally destructive accident, the large oil spill that occurred in Prince William Sound, Alaska in 1989. Sixty children from second, fifth, and eighth grades were interviewed on children's reasoning and understandings about the oil spill which…

  12. Learning Midlevel Auditory Codes from Natural Sound Statistics.

    PubMed

    Młynarski, Wiktor; McDermott, Josh H

    2018-03-01

    Interaction with the world requires an organism to transform sensory signals into representations in which behaviorally meaningful properties of the environment are made explicit. These representations are derived through cascades of neuronal processing stages in which neurons at each stage recode the output of preceding stages. Explanations of sensory coding may thus involve understanding how low-level patterns are combined into more complex structures. To gain insight into such midlevel representations for sound, we designed a hierarchical generative model of natural sounds that learns combinations of spectrotemporal features from natural stimulus statistics. In the first layer, the model forms a sparse convolutional code of spectrograms using a dictionary of learned spectrotemporal kernels. To generalize from specific kernel activation patterns, the second layer encodes patterns of time-varying magnitude of multiple first-layer coefficients. When trained on corpora of speech and environmental sounds, some second-layer units learned to group similar spectrotemporal features. Others instantiated opponency between distinct sets of features. Such groupings might be instantiated by neurons in the auditory cortex, providing a hypothesis for midlevel neuronal computation.
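
    The first-layer computation, a sparse code over a fixed dictionary, can be sketched with plain ISTA (iterative shrinkage-thresholding) on a toy problem. The dictionary, signal, and penalty below are arbitrary illustrations, not the trained spectrotemporal kernels of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse coding: represent a signal as a sparse combination of
# dictionary atoms by minimizing 0.5*||D a - x||^2 + lam*||a||_1.
n_dim, n_atoms = 16, 32
D = rng.standard_normal((n_dim, n_atoms))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms

truth = np.zeros(n_atoms)
truth[[3, 17]] = [1.5, -2.0]                # signal built from 2 atoms
x = D @ truth

lam = 0.05
L = np.linalg.norm(D, 2) ** 2               # Lipschitz constant of the gradient
a = np.zeros(n_atoms)
for _ in range(500):
    grad = D.T @ (D @ a - x)
    z = a - grad / L
    a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold

recon_err = np.linalg.norm(D @ a - x)       # small: x is well reconstructed
n_active = int(np.sum(np.abs(a) > 1e-3))    # few atoms carry the signal
```

    The paper's second layer would then model the time-varying magnitudes of coefficients like `a` across many sounds.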

  13. Auditory laterality in a nocturnal, fossorial marsupial (Lasiorhinus latifrons) in response to bilateral stimuli.

    PubMed

    Descovich, K A; Reints Bok, T E; Lisle, A T; Phillips, C J C

    2013-01-01

    Behavioural lateralisation is evident across most animal taxa, although few marsupial and no fossorial species have been studied. Twelve wombats (Lasiorhinus latifrons) were bilaterally presented with eight sounds from different contexts (threat, neutral, food) to test for auditory laterality. Head turns were recorded prior to and immediately following sound presentation. Behaviour was recorded for 150 seconds after presentation. Although sound differentiation was evident by the amount of exploration, vigilance, and grooming performed after different sound types, this did not result in different patterns of head turn direction. Similarly, left-right proportions of head turns, walking events, and food approaches in the post-sound period were comparable across sound types. A comparison of head turns performed before and after sound showed a significant change in turn direction (χ²(1) = 10.65, p = .001) from a left preference during the pre-sound period (mean 58% left head turns, CI 49-66%) to a right preference in the post-sound period (mean 43% left head turns, CI 40-45%). This provides evidence of a right auditory bias in response to the presentation of the sound. This study therefore demonstrates that laterality is evident in southern hairy-nosed wombats in response to a sound stimulus, although side biases were not altered by sounds of varying context.

  14. Spectral timbre perception in ferrets: discrimination of artificial vowels under different listening conditions.

    PubMed

    Bizley, Jennifer K; Walker, Kerry M M; King, Andrew J; Schnupp, Jan W H

    2013-01-01

    Spectral timbre is an acoustic feature that enables human listeners to determine the identity of a spoken vowel. Despite its importance to sound perception, little is known about the neural representation of sound timbre and few psychophysical studies have investigated timbre discrimination in non-human species. In this study, ferrets were positively conditioned to discriminate artificial vowel sounds in a two-alternative-forced-choice paradigm. Animals quickly learned to discriminate the vowel sound /u/ from /ε/ and were immediately able to generalize across a range of voice pitches. They were further tested in a series of experiments designed to assess how well they could discriminate these vowel sounds under different listening conditions. First, a series of morphed vowels was created by systematically shifting the location of the first and second formant frequencies. Second, the ferrets were tested with single formant stimuli designed to assess which spectral cues they could be using to make their decisions. Finally, vowel discrimination thresholds were derived in the presence of noise maskers presented from either the same or a different spatial location. These data indicate that ferrets show robust vowel discrimination behavior across a range of listening conditions and that this ability shares many similarities with human listeners.

  16. Effects of sounding temperature assimilation on weather forecasting - Model dependence studies

    NASA Technical Reports Server (NTRS)

    Ghil, M.; Halem, M.; Atlas, R.

    1979-01-01

    In comparing various methods for the assimilation of remote sounding information into numerical weather prediction (NWP) models, the question of how model-dependent the different results are becomes important. The paper investigates two aspects of the model dependence question: (1) the effect of increasing horizontal resolution within a given model on the assimilation of sounding data, and (2) the effect of using two entirely different models with the same assimilation method and sounding data. Two tentative conclusions are reached. First, model improvement, as exemplified by increased resolution, can act in the same direction as judicious 4-D assimilation of remote sounding information to improve 2-3 day numerical weather forecasts. Second, the time-continuous 4-D methods developed at GLAS have similar beneficial effects when used in the assimilation of remote sounding information into NWP models with very different numerical and physical characteristics.

  17. Correction factors in determining speed of sound among freshmen in undergraduate physics laboratory

    NASA Astrophysics Data System (ADS)

    Lutfiyah, A.; Adam, A. S.; Suprapto, N.; Kholiq, A.; Putri, N. P.

    2018-03-01

    This paper identifies the correction factors in the determination of the speed of sound by freshmen in an undergraduate physics laboratory. The freshmen's results are compared with the speed of sound determined by senior students using the same instrument, a resonance tube with apparatus. The speed of sound obtained by the seniors was 333.38 m/s, deviating from theory by about 3.98%. The freshmen's results fell into three categories: accurate values (52.63%), middle values (31.58%), and lower values (15.79%). Based on this analysis, several correction factors were identified: human error in locating the first and second harmonics, the end correction related to tube diameter, and environmental factors such as temperature, humidity, density, and pressure.
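
    The resonance-tube procedure above can be sketched numerically. The readings below (tuning-fork frequency and the first two resonance lengths) are invented for the example, not taken from the study; the sketch shows how using both resonances cancels the end correction that the abstract identifies as a correction factor.

```python
# Hypothetical readings from a resonance-tube experiment (illustrative only).
f = 512.0          # tuning-fork frequency in Hz (assumed value)
L1 = 0.160         # first resonance length in m (assumed)
L2 = 0.485         # second resonance length in m (assumed)

# For a tube closed at one end, resonances occur at L_n + e = (2n-1) * v / (4f),
# so the difference L2 - L1 = v / (2f) is free of the end correction e.
v = 2 * f * (L2 - L1)
print(f"speed of sound ≈ {v:.1f} m/s")   # 332.8 m/s with these readings

# The end correction itself (≈ 0.3 × tube diameter) can then be recovered:
e = v / (4 * f) - L1
print(f"end correction ≈ {e * 100:.2f} cm")
```
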

  18. Embedded System Implementation of Sound Localization in Proximal Region

    NASA Astrophysics Data System (ADS)

    Iwanaga, Nobuyuki; Matsumura, Tomoya; Yoshida, Akihiro; Kobayashi, Wataru; Onoye, Takao

    A sound localization method for the proximal region is proposed, based on a low-cost 3D sound localization algorithm that uses head-related transfer functions (HRTFs). The auditory parallax model is applied to the current algorithm so that more accurate HRTFs can be used for sound localization in the proximal region. In addition, head-shadowing effects based on a rigid-sphere model are reproduced in the proximal region by means of a second-order IIR filter. A subjective listening test demonstrates the effectiveness of the proposed method. An embedded system implementation of the method is also described, showing that it improves sound effects in the proximal region with only a 5.1% increase in memory capacity and an 8.3% increase in computational cost.
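
    As a rough illustration of the filtering step, here is a minimal second-order IIR (biquad) low-pass in the standard RBJ-cookbook form. The cutoff frequency and Q are assumed values, and this is not the authors' actual head-shadowing filter, which is fitted to a rigid-sphere model; the sketch only shows what "second-order IIR filter" means operationally.

```python
import math

def biquad_lowpass(fc, fs, q=0.707):
    """Standard RBJ-cookbook biquad low-pass coefficients (normalized)."""
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    b0 = (1 - math.cos(w0)) / 2
    b1 = 1 - math.cos(w0)
    b2 = b0
    a0 = 1 + alpha
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, (-2 * math.cos(w0)) / a0, (1 - alpha) / a0]

def filt(b, a, x):
    # Direct Form I difference equation:
    # y[n] = b0 x[n] + b1 x[n-1] + b2 x[n-2] - a1 y[n-1] - a2 y[n-2]
    y = []
    for n in range(len(x)):
        xn1 = x[n - 1] if n >= 1 else 0.0
        xn2 = x[n - 2] if n >= 2 else 0.0
        yn1 = y[n - 1] if n >= 1 else 0.0
        yn2 = y[n - 2] if n >= 2 else 0.0
        y.append(b[0] * x[n] + b[1] * xn1 + b[2] * xn2 - a[1] * yn1 - a[2] * yn2)
    return y

# Assumed parameters: 2 kHz cutoff at 44.1 kHz sampling rate.
b, a = biquad_lowpass(fc=2000.0, fs=44100.0)
h = filt(b, a, [1.0] + [0.0] * 63)   # impulse response
print(sum(h))                        # DC gain of a low-pass approaches 1
```
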

  19. Numerical and Physical Modeling of the Response of Resonator Liners to Intense Sound and Grazing Flow

    NASA Technical Reports Server (NTRS)

    Hersh, Alan S.; Tam, Christopher

    2009-01-01

    Two significant advances have been made in the application of computational aeroacoustics methodology to acoustic liner technology. The first is the finding that temperature effects for discrete sound are not the same as for broadband noise. For discrete sound, the normalized resistance appears to be insensitive to temperature except at high SPL. However, the reactance is significantly lower in absolute value at high temperature. The second is the numerical investigation of the acoustic performance of a liner by direct numerical simulation. Liner impedance is affected by the non-uniformity of the incident sound waves, which identifies the importance of the pressure gradient. Preliminary one- and two-dimensional impedance design models were developed for designing sound-absorbing liners in the presence of intense sound and grazing flow. The two-dimensional model offers the potential to empirically determine the incident sound pressure at the face-plate at some distance from the resonator orifices. This represents an important initial step in improving our understanding of how to use the Dean two-microphone impedance measurement method effectively.

  20. Musical aptitude and second language pronunciation skills in school-aged children: neural and behavioral evidence.

    PubMed

    Milovanov, Riia; Huotilainen, Minna; Välimäki, Vesa; Esquef, Paulo A A; Tervaniemi, Mari

    2008-02-15

    The main focus of this study was to examine the relationship between musical aptitude and second language pronunciation skills. We investigated whether children with superior performance in foreign language production represent musical sound features more readily in the preattentive level of neural processing compared with children with less-advanced production skills. Sound processing accuracy was examined in elementary school children by means of event-related potential (ERP) recordings and behavioral measures. Children with good linguistic skills had better musical skills as measured by the Seashore musicality test than children with less accurate linguistic skills. The ERP data accompany the results of the behavioral tests: children with good linguistic skills showed more pronounced sound-change evoked activation with the music stimuli than children with less accurate linguistic skills. Taken together, the results imply that musical and linguistic skills could partly be based on shared neural mechanisms.

  1. The 'sail sound' and tricuspid regurgitation in Ebstein's anomaly: the value of echocardiography in evaluating their mechanisms.

    PubMed

    Oki, T; Fukuda, N; Tabata, T; Yamada, H; Manabe, K; Fukuda, K; Abe, M; Iuchi, A; Ito, S

    1997-03-01

    We describe a patient with Ebstein's anomaly in whom Doppler echocardiography was used to clarify the mechanism responsible for 'sail sound' and tricuspid regurgitation associated with this condition. Phonocardiography revealed an additional early systolic heart sound, consisting of a first low-amplitude component (T1) and a second high-amplitude component (T2, 'sail sound'). In simultaneous recordings of the tricuspid valve motion using M mode echocardiography and phonocardiography, the closing of the tricuspid valve occurred with T1 which originated at the tip of the tricuspid leaflets, while T2 originated from the body of the tricuspid leaflets. Using color Doppler imaging, the tricuspid regurgitant signal was detected during pansystole, indicating a blue signal during the phase corresponding to T1 and a mosaic signal during the phase corresponding to T2 at end-systole. Thus, 'sail sound' in patients with Ebstein's anomaly is not simply a closing sound of the tricuspid valve, but a complex closing sound which includes a sudden stopping sound after the anterior and/or other tricuspid leaflets balloon out at systole.

  2. Listeners' identification and discrimination of digitally manipulated sounds as prolongations.

    PubMed

    Kawai, Norimune; Healey, E Charles; Carrell, Thomas D

    2007-08-01

    The present study had two main purposes. The first was to examine whether listeners perceive gradually increasing durations of a voiceless fricative categorically ("fluent" versus "stuttered") or continuously (gradient perception from fluent to stuttered). The second purpose was to investigate whether there are gender differences in how listeners perceive various durations of sounds as "prolongations." Forty-four listeners were instructed to rate the duration of the /ʃ/ in the word "shape" produced by a normally fluent speaker. The target word was embedded in the middle of an experimental phrase, and the initial /ʃ/ sound was digitally manipulated to create a range of fluent to stuttered sounds. This was accomplished by creating 20 ms stepwise increments for sounds ranging from 120 to 500 ms in duration. Listeners were instructed to give a rating of 1 for a fluent word and a rating of 100 for a stuttered word. The results showed that listeners perceived the range of sounds continuously. Also, there was a significant gender difference: males rated fluent sounds higher than females, whereas females rated stuttered sounds higher than males. The implications of these results are discussed.
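
    The duration continuum described above is easy to reconstruct; the sketch below lists only the manipulated fricative durations (the audio splicing itself is of course not reproduced here).

```python
# Fricative durations of the stimulus continuum: 120 to 500 ms
# in 20 ms steps, as described in the abstract.
durations_ms = list(range(120, 501, 20))
print(len(durations_ms), "steps:", durations_ms[0], "...", durations_ms[-1])
# 20 steps spanning 120 ... 500 ms
```
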

  3. Fluttering wing feathers produce the flight sounds of male streamertail hummingbirds.

    PubMed

    Clark, Christopher James

    2008-08-23

    Sounds produced continuously during flight potentially play important roles in avian communication, but the mechanisms underlying these sounds have received little attention. Adult male Red-billed Streamertail hummingbirds (Trochilus polytmus) bear elongated tail streamers and produce a distinctive 'whirring' flight sound, whereas subadult males and females do not. The production of this sound, which is a pulsed tone with a mean frequency of 858 Hz, has been attributed to these distinctive tail streamers. However, tail-less streamertails can still produce the flight sound. Three lines of evidence implicate the wings instead. First, it is pulsed in synchrony with the 29 Hz wingbeat frequency. Second, a high-speed video showed that primary feather eight (P8) bends during each downstroke, creating a gap between P8 and primary feather nine (P9). Manipulating either P8 or P9 reduced the production of the flight sound. Third, laboratory experiments indicated that both P8 and P9 can produce tones over a range of 700-900 Hz. The wings therefore produce the distinctive flight sound, enabled via subtle morphological changes to the structure of P8 and P9.

  4. The Perception of Second Language Sounds in Early Bilinguals: New Evidence from an Implicit Measure

    ERIC Educational Resources Information Center

    Navarra, Jordi; Sebastian-Galles, Nuria; Soto-Faraco, Salvador

    2005-01-01

    Previous studies have suggested that nonnative (L2) linguistic sounds are accommodated to native language (L1) phonemic categories. However, this conclusion may be compromised by the use of explicit discrimination tests. The present study provides an implicit measure of L2 phoneme discrimination in early bilinguals (Catalan and Spanish).…

  5. 78 FR 76040 - Airworthiness Directives; Piper Aircraft, Inc. Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-16

    ... machined step wedge made of 4340 steel (or similar steel with equivalent sound velocity) or at least three... procedure is used to set the sound velocity. 6. Obtain a step wedge or steel shims per item 3 of the... that the gate is triggered by the second backwall reflection of the thick section. If the digital...

  6. 78 FR 49221 - Airworthiness Directives; Piper Aircraft, Inc. Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-13

    ... machined step wedge made of 4340 steel (or similar steel with equivalent sound velocity) or at least three... procedure is used to set the sound velocity. 6. Obtain a step wedge or steel shims per item 3 of the... that the gate is triggered by the second backwall reflection of the thick section. If the digital...

  7. 78 FR 3356 - Airworthiness Directives; Various Aircraft Equipped With Wing Lift Struts

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-16

    ... (or similar steel with equivalent sound velocity) or at least three shim samples of same material will... procedure is used to set the sound velocity. 6. Obtain a step wedge or steel shims per item 3 of the... that the gate is triggered by the second backwall reflection of the thick section. If the digital...

  8. Propagation of Finite Amplitude Sound in Multiple Waveguide Modes.

    NASA Astrophysics Data System (ADS)

    van Doren, Thomas Walter

    1993-01-01

    This dissertation describes a theoretical and experimental investigation of the propagation of finite amplitude sound in multiple waveguide modes. Quasilinear analytical solutions of the full second-order nonlinear wave equation, the Westervelt equation, and the KZK parabolic wave equation are obtained for the fundamental and second harmonic sound fields in a rectangular rigid-wall waveguide. It is shown that the Westervelt equation is an acceptable approximation of the full nonlinear wave equation for describing guided sound waves of finite amplitude. A system of first-order equations based on both a modal and harmonic expansion of the Westervelt equation is developed for waveguides with locally reactive wall impedances. Fully nonlinear numerical solutions of the system of coupled equations are presented for waveguides formed by two parallel planes which are either both rigid, or one rigid and one pressure release. These numerical solutions are compared to finite-difference solutions of the KZK equation, and it is shown that solutions of the KZK equation are valid only at frequencies which are high compared to the cutoff frequencies of the most important modes of propagation (i.e., for which sound propagates at small grazing angles). Numerical solutions of both the Westervelt and KZK equations are compared to experiments performed in an air-filled, rigid-wall, rectangular waveguide. Solutions of the Westervelt equation are in good agreement with experiment for low source frequencies, at which sound propagates at large grazing angles, whereas solutions of the KZK equation are not valid for these cases. At higher frequencies, at which sound propagates at small grazing angles, agreement between numerical solutions of the Westervelt and KZK equations and experiment is only fair, because of problems in specifying the experimental source condition with sufficient accuracy.
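
    The cutoff frequencies that control where the KZK description is valid follow from elementary waveguide theory. A minimal sketch, with assumed duct dimensions (not those of the experiment): mode (m, n) of a rigid-wall rectangular duct propagates only above its cutoff, and sound near cutoff travels at large grazing angles.

```python
import math

c = 343.0            # speed of sound in air, m/s
a, b = 0.10, 0.05    # duct cross-section in m (assumed values)

def cutoff(m, n):
    """Cutoff frequency of mode (m, n) in a rigid-wall rectangular duct."""
    return (c / 2.0) * math.sqrt((m / a) ** 2 + (n / b) ** 2)

for m, n in [(1, 0), (0, 1), (1, 1), (2, 0)]:
    print(f"mode ({m},{n}): f_c = {cutoff(m, n):.0f} Hz")
```
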

  9. A comparative study of electronic stethoscopes for cardiac auscultation.

    PubMed

    Pinto, C; Pereira, D; Ferreira-Coimbra, J; Portugues, J; Gama, V; Coimbra, M

    2017-07-01

    There are several electronic stethoscopes available on the market today, with very high potential for healthcare, namely telemedicine, assisted decision-making, and education. However, no recent comparative studies have been published on the recording quality of auscultation sounds. In this study we aim to: a) define a ranking, according to expert opinion, of 6 of the most relevant electronic stethoscopes on the market today; b) verify whether there are any relations between a stethoscope's performance and the type of pathology present; c) analyze whether some pathologies are more easily identified than others when using electronic auscultation. Our methodology consisted of creating two study groups: the first group included 18 cardiologists and cardiology house officers, acting as the gold standard of this work; the second included 30 medical students. Using a database of heart sounds recorded in real hospital environments, we applied questionnaires to observers from each group. The first group listened to 60 cardiac auscultations recorded by the 6 stethoscopes, and each observer was asked to identify the pathological sound present: aortic stenosis, mitral regurgitation, or normal. The second group was asked to choose, between two auscultation recordings, the one with the best sound quality for the identification of pathological sounds. Results include a total of 1080 evaluations, in which 72% of cases were correctly diagnosed. A detailed breakdown of these results is presented in this paper. In conclusion, the results showed that the impact of the differences between stethoscopes is very small, given that we did not find statistically significant differences between any pairs of stethoscopes. Normal sounds proved easier to identify than pathological sounds, but we did not find differences between stethoscopes in this identification.

  10. Use of tracheal auscultation for the assessment of bronchial responsiveness in asthmatic children.

    PubMed Central

    Sprikkelman, A. B.; Grol, M. H.; Lourens, M. S.; Gerritsen, J.; Heymans, H. S.; van Aalderen, W. M.

    1996-01-01

    BACKGROUND: It can be difficult to assess bronchial responsiveness in children because of their inability to perform spirometric tests reliably. In bronchial challenges lung sounds could be used to detect the required 20% fall in the forced expiratory volume in one second (FEV1). A study was undertaken to determine whether a change in lung sounds corresponded with a 20% fall in FEV1 after methacholine challenge, and whether the occurrence of wheeze was the most important change. METHODS: Fifteen children with asthma (eight boys) of mean age 10.8 years (range 8-15) were studied. All had normal chest auscultation before the methacholine challenge test. Lung sounds were recorded over the trachea for one minute and stored on tape. They were analysed directly and also scored blindly from the tape recording by a second investigator. Wheeze, cough, increase in respiratory rate, and prolonged expiration were assessed. RESULTS: The total cumulative methacholine dose causing a fall in FEV1 of 20% or more (PD20) was detected in 12 children by a change in lung sounds - in four by wheeze and in eight by cough, increased respiratory rate, and/or prolonged expiration. In two subjects altered lung sounds were detectable one dose step before PD20 was reached. In three cases in whom no fall in FEV1 occurred, no change in lung sounds could be detected at the highest methacholine dose. CONCLUSION: Changes in lung sounds correspond well with a 20% fall in FEV1 after methacholine challenge. Wheeze is an insensitive indicator for assessing bronchial responsiveness. Cough, increase in respiratory rate, and prolonged expiration occurs more frequently. PMID:8779140

  11. Frequency Dynamics of the First Heart Sound

    NASA Astrophysics Data System (ADS)

    Wood, John Charles

    Cardiac auscultation is a fundamental clinical tool but first heart sound origins and significance remain controversial. Previous clinical studies have implicated resonant vibrations of both the myocardium and the valves. Accordingly, the goals of this thesis were threefold, (1) to characterize the frequency dynamics of the first heart sound, (2) to determine the relative contribution of the myocardium and the valves in determining first heart sound frequency, and (3) to develop new tools for non-stationary signal analysis. A resonant origin for first heart sound generation was tested through two studies in an open-chest canine preparation. Heart sounds were recorded using ultralight acceleration transducers cemented directly to the epicardium. The first heart sound was observed to be non-stationary and multicomponent. The most dominant feature was a powerful, rapidly-rising frequency component that preceded mitral valve closure. Two broadband components were observed; the first coincided with mitral valve closure while the second significantly preceded aortic valve opening. The spatial frequency of left ventricular vibrations was both high and non-stationary which indicated that the left ventricle was not vibrating passively in response to intracardiac pressure fluctuations but suggested instead that the first heart sound is a propagating transient. In the second study, regional myocardial ischemia was induced by left coronary circumflex arterial occlusion. Acceleration transducers were placed on the ischemic and non-ischemic myocardium to determine whether ischemia produced local or global changes in first heart sound amplitude and frequency. The two zones exhibited disparate amplitude and frequency behavior indicating that the first heart sound is not a resonant phenomenon. 
To objectively quantify the presence and orientation of signal components, Radon transformation of the time -frequency plane was performed and found to have considerable potential for pattern classification. Radon transformation of the Wigner spectrum (Radon-Wigner transform) was derived to be equivalent to dechirping in the time and frequency domains. Based upon this representation, an analogy between time-frequency estimation and computed tomography was drawn. Cohen's class of time-frequency representations was subsequently shown to result from simple changes in reconstruction filtering parameters. Time-varying filtering, adaptive time-frequency transformation and linear signal synthesis were also performed from the Radon-Wigner representation.

  12. [Echolocation calls of free-flying Himalayan swiftlets (Aerodramus brevirostris)].

    PubMed

    Wang, Bin; Ma, Jian-Zhang; Chen, Yi; Tan, Liang-Jing; Liu, Qi; Shen, Qi-Qi; Liao, Qing-Yi; Zhang, Li-Biao

    2013-02-01

    Here we present echolocation calls of free-flying Himalayan swiftlets (Aerodramus brevirostris), recorded in Shenjing Cave, Hupingshan National Reserve, Shimen County, Hunan Province in June 2012, using an Avisoft-UltraSoundGate 116(e). We noted that after foraging at dusk, the Himalayan swiftlets flew quickly into the cave without emitting clicks, then slowed down in the dark area of the cave and began emitting sounds. The echolocation sounds of Himalayan swiftlets are broadband, double noise-burst clicks separated by a short pause. The inter-pulse intervals between double clicks (99.3±3.86 ms) were longer than those within double clicks (6.6±0.42 ms) (P<0.01). With the exception of peak frequency (6.2±0.08 kHz vs 6.2±0.10 kHz, P>0.05) and pulse duration (2.9±0.12 ms vs 3.2±0.17 ms, P>0.05), the other parameters (maximum frequency, minimum frequency, frequency bandwidth, and power) differed significantly between the first and second clicks. The maximum frequency of the first pulse (20.1±1.10 kHz) was higher than that of the second (15.4±0.98 kHz) (P<0.01), while the minimum frequency of the first pulse (3.7±0.12 kHz) was lower than that of the second (4.0±0.09 kHz) (P<0.05); as a result, the frequency bandwidth of the first pulse (16.5±1.17 kHz) was wider than that of the second (11.4±1.01 kHz) (P<0.01). The power of the first pulse (-32.5±0.60 dB) was higher than that of the second (-35.2±0.94 dB) (P<0.05). More importantly, we found that Himalayan swiftlets emitted echolocation pulses containing ultrasonic sound, with a maximum frequency reaching 33.2 kHz.

  13. Sound quality indicators for urban places in Paris cross-validated by Milan data.

    PubMed

    Ricciardi, Paola; Delaitre, Pauline; Lavandier, Catherine; Torchia, Francesca; Aumond, Pierre

    2015-10-01

    A specific smartphone application was developed to collect perceptive and acoustic data in Paris. About 3400 questionnaires were analyzed, regarding the global sound environment characterization, the perceived loudness of some emergent sources, and the presence time ratio of sources that do not emerge from the background. Sound pressure level was recorded each second from the mobile phone's microphone during a 10-min period. The aim of this study is to propose indicators of urban sound quality based on linear regressions with perceptive variables. A cross-validation of the quality models extracted from the Paris data was carried out by conducting the same survey in Milan. The proposed general sound quality model is correlated with the real perceived sound quality (72%). Another model, without visual amenity and familiarity, is 58% correlated with perceived sound quality. In order to improve the sound quality indicator, a site classification was performed with Kohonen's Artificial Neural Network algorithm, and seven class-specific models were developed. These specific models attribute more importance to source events and are slightly closer to the individual data than the global model. In general, the Parisian models underestimate the sound quality of Milan environments as assessed by Italian people.

  14. Perception of touch quality in piano tones.

    PubMed

    Goebl, Werner; Bresin, Roberto; Fujinaga, Ichiro

    2014-11-01

    Both timbre and dynamics of isolated piano tones are determined exclusively by the speed with which the hammer hits the strings. This physical view has been challenged by pianists who emphasize the importance of the way the keyboard is touched. This article presents empirical evidence from two perception experiments showing that touch-dependent sound components make sounds with identical hammer velocities but produced with different touch forms clearly distinguishable. The first experiment focused on finger-key sounds: musicians could identify pressed and struck touches. When the finger-key sounds were removed from the sounds, the effect vanished, suggesting that these sounds were the primary identification cue. The second experiment looked at key-keyframe sounds that occur when the key reaches key-bottom. Key-bottom impact was identified from key motion measured by a computer-controlled piano. Musicians were able to discriminate between piano tones that contain a key-bottom sound from those that do not. However, this effect might be attributable to sounds associated with the mechanical components of the piano action. In addition to the demonstrated acoustical effects of different touch forms, visual and tactile modalities may play important roles during piano performance that influence the production and perception of musical expression on the piano.

  15. Speech research

    NASA Astrophysics Data System (ADS)

    1992-06-01

    Phonology is traditionally seen as the discipline that concerns itself with the building blocks of linguistic messages. It is the study of the structure of sound inventories of languages and of the participation of sounds in rules or processes. Phonetics, in contrast, concerns speech sounds as produced and perceived. Two extreme positions on the relationship between phonological messages and phonetic realizations are represented in the literature. One holds that the primary home for linguistic symbols, including phonological ones, is the human mind, itself housed in the human brain. The second holds that their primary home is the human vocal tract.

  16. Direct speed of sound measurement within the atmosphere during a national holiday in New Zealand

    NASA Astrophysics Data System (ADS)

    Vollmer, M.

    2018-05-01

    Measuring the speed of sound belongs to almost any physics curriculum. Two methods dominate: measuring resonance phenomena of standing waves, and time-of-flight measurements. The second type is conceptually simpler; however, performing such experiments over distances of meters usually requires precise electronic time measurement equipment if accurate results are to be obtained. Here, a time-of-flight measurement from a video recording is reported over a distance of several km, with an accuracy for the speed of sound of the order of 1%.
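
    The arithmetic behind such a measurement is simple enough to sketch. All numbers below are illustrative assumptions, not the article's data: a distant flash is seen essentially instantly, the bang arrives later, and counting video frames between the two gives the delay.

```python
distance_m = 3400.0     # assumed distance to the source, m
fps = 30.0              # video frame rate
frames_between = 298    # frames between visible flash and audible bang (assumed)

delay_s = frames_between / fps
v = distance_m / delay_s
print(f"speed of sound ≈ {v:.0f} m/s")

# Frame quantization limits accuracy: one frame at 30 fps over a ~10 s
# delay is a fraction of a percent of the measured time.
rel_err = (1 / fps) / delay_s
print(f"timing resolution ≈ {100 * rel_err:.2f}% of the delay")
```
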

  17. Priming Gestures with Sounds

    PubMed Central

    Lemaitre, Guillaume; Heller, Laurie M.; Navolio, Nicole; Zúñiga-Peñaranda, Nicolas

    2015-01-01

    We report a series of experiments about a little-studied type of compatibility effect between a stimulus and a response: the priming of manual gestures via sounds associated with these gestures. The goal was to investigate the plasticity of the gesture-sound associations mediating this type of priming. Five experiments used a primed choice-reaction task. Participants were cued by a stimulus to perform response gestures that produced response sounds; those sounds were also used as primes before the response cues. We compared arbitrary associations between gestures and sounds (key lifts and pure tones) created during the experiment (i.e. no pre-existing knowledge) with ecological associations corresponding to the structure of the world (tapping gestures and sounds, scraping gestures and sounds) learned through the entire life of the participant (thus existing prior to the experiment). Two results were found. First, the priming effect exists for ecological as well as arbitrary associations between gestures and sounds. Second, the priming effect is greatly reduced for ecologically existing associations and is eliminated for arbitrary associations when the response gesture stops producing the associated sounds. These results provide evidence that auditory-motor priming is mainly created by rapid learning of the association between sounds and the gestures that produce them. Auditory-motor priming is therefore mediated by short-term associations between gestures and sounds that can be readily reconfigured regardless of prior knowledge. PMID:26544884

  18. The XQC microcalorimeter sounding rocket: a stable LTD platform 30 seconds after rocket motor burnout

    NASA Astrophysics Data System (ADS)

    Porter, F. S.; Almy, R.; Apodaca, E.; Figueroa-Feliciano, E.; Galeazzi, M.; Kelley, R.; McCammon, D.; Stahle, C. K.; Szymkowiak, A. E.; Sanders, W. T.

    2000-04-01

    The XQC microcalorimeter sounding rocket experiment is designed to provide a stable thermal environment for an LTD detector system within 30 s of the burnout of its second-stage rocket motor. The detector system used for this instrument is a 36-pixel microcalorimeter array operated at 60 mK with a single-stage adiabatic demagnetization refrigerator (ADR). The ADR is mounted on a space-pumped liquid helium tank with vapor-cooled shields, which is vibration-isolated from the rocket structure. We present here some of the design and performance details of this mature LTD instrument, which has just completed its third suborbital flight.

  19. Steady-state and second-sound measurements of Kapitza resistance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katerberg, James Alan

    1980-01-01

    Published steady-state (dc) and second-sound (ac) measurements of the Kapitza resistance (R_K) have differed in reports of the temperature dependence of R_K. The two types of measurements were also seen to conflict on the measured effects of sample damage on the magnitude of R_K. To resolve these differences, measurements of R_K have been made using both techniques on the same sample, during the same experimental run. Our measurements, made on copper-liquid helium interfaces from 1.1 to 2.1 K, show excellent agreement between the dc and ac results. No evidence is seen for a frequency-dependent Kapitza resistance. Our measurements show an increase in R_K when the sample is damaged, agreeing with published ac measurements, but disagreeing with published dc measurements. The temperature dependence of R_K in our measurements is approximately T^-3 from 1.5 to 2.1 K, in agreement with published dc measurements. A T^-4 dependence has been seen in the published ac experiments. In our experiments, a T^-4 dependence is observed only when second sound is coupled from the generating cavity to the helium bath.
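
    The T^-3 versus T^-4 question above is a power-law exponent estimate. A minimal sketch of how such an exponent is extracted, using synthetic data (not the paper's measurements): least-squares on log-log axes recovers the exponent as the negative slope.

```python
import math

T = [1.1, 1.3, 1.5, 1.7, 1.9, 2.1]       # temperature, K
RK = [5.0 * t ** -3 for t in T]          # synthetic, exactly T^-3 data

# Linear least-squares fit of log(R_K) against log(T).
x = [math.log(t) for t in T]
y = [math.log(r) for r in RK]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
print(f"fitted exponent ≈ {-slope:.2f}")   # recovers 3 for exact T^-3 data
```
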

  20. Transition edge sensors for quench localization in SRF cavity tests

    NASA Astrophysics Data System (ADS)

    Furci, H.; Kovács, Z.; Koettig, T.; Vandoni, G.

    2017-12-01

    Transition Edge Sensors (TES) are bolometers based on the gradual superconducting transition of a thin-film alloy. As part of efforts to improve non-contact thermal mapping for quench localisation in SRF cavity tests, TES have been developed in-house at CERN. A fabrication method based on modern photolithography techniques has been established and used to produce TES from Au-Sn alloys. The superconducting transitions of the fabricated sensors were characterised. The sensitive temperature range of the sensors spans 100 mK to 200 mK, and its centre can be shifted between 1.5 K and 2.1 K by the applied bias current. With a maximum sensitivity in the range of 0.5 mV/mK, it is possible to detect fast temperature variations (in the 50 μs range) below 1 mK. All these characteristics are assets for the detection of second sound. Second sound was produced by heaters, and the TES were able to detect it distinctly. The speed of second sound was determined and agrees remarkably well with literature values. Furthermore, there is a clear correlation between signal intensity and distance, opening possibilities for more precise signal interpretation in quench localisation.
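
    The localisation idea the abstract alludes to is time-of-flight: second sound travels at a known, modest speed in He II (roughly 20 m/s near 1.8 K), so differences in arrival times at several sensors constrain the quench origin. A hedged sketch of that geometry (the function name, sensor layout, and brute-force grid search are illustrative assumptions, not CERN's actual processing):

```python
import math

# Assumed speed of second sound in He II near 1.8 K (literature is ~20 m/s)
V2 = 20.0

def locate_quench(sensors, arrival_times, grid_step=0.001, extent=0.1):
    """Grid-search the quench origin from second-sound arrival times.

    sensors: list of (x, y) positions in metres.
    arrival_times: absolute arrival time at each sensor; the emission time
    is unknown, so only pairwise time differences are used.
    """
    best, best_err = None, float("inf")
    n = int(extent / grid_step)
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            x, y = i * grid_step, j * grid_step
            d = [math.hypot(x - sx, y - sy) for sx, sy in sensors]
            # residual of range differences against time-of-flight differences
            err = 0.0
            for a in range(len(sensors)):
                for b in range(a + 1, len(sensors)):
                    dt = arrival_times[a] - arrival_times[b]
                    err += ((d[a] - d[b]) - V2 * dt) ** 2
            if err < best_err:
                best, best_err = (x, y), err
    return best
```

    Feeding in synthetic arrival times generated from a known point recovers that point to within one grid step, which is the sense in which arrival-time differences localise the quench.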

  1. Revisit of the relationship between the elastic properties and sound velocities at high pressures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chenju; Yan, Xiaozhen; Institute of Atomic and Molecular Sciences, Sichuan University, Chengdu 610065

    2014-09-14

    The second-order elastic constants and stress-strain coefficients are defined, respectively, as the second derivatives of the total energy and the first derivatives of the stress with respect to strain. Since either the Lagrangian or the infinitesimal strain may be used in each of the two definitions above, the second-order elastic constants and the stress-strain coefficients each split into two categories. In general, any of the four resulting quantities is employed to characterize the elastic properties of materials without differentiation. Nevertheless, differences may exist among them at non-zero pressures, especially high pressures. Having explored this confusing issue systematically in the present work, we find that the four quantities are indeed different from each other at high pressures and that these differences depend on the initial stress applied to the material. Moreover, the various relations between the four quantities and the high-pressure sound velocities are also derived from the elastic wave equations. As examples, we calculated the high-pressure sound velocities of cubic tantalum and hexagonal rhenium using these relations. The excellent agreement of our results with available experimental data suggests the general applicability of the relations.
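
    At zero pressure, the textbook link between elastic constants and sound velocities is simple: for a cubic crystal along [100], v_L = sqrt(C11/ρ) and v_T = sqrt(C44/ρ). A sketch of that zero-stress baseline, which is precisely the relation the paper shows must be corrected for initial stress at high pressure (the tantalum numbers are rough ambient-condition values for illustration, not the paper's data):

```python
import math

def cubic_velocities_100(c11, c44, rho):
    """Longitudinal and transverse sound speeds along [100] of a cubic
    crystal from the zero-stress relations v_L = sqrt(C11/rho),
    v_T = sqrt(C44/rho); at high pressure these need initial-stress
    corrections, which is the paper's central point."""
    return math.sqrt(c11 / rho), math.sqrt(c44 / rho)

# Illustrative ambient-condition values for tantalum (Pa and kg/m^3)
vL, vT = cubic_velocities_100(c11=266e9, c44=82.5e9, rho=16650.0)
```

    The longitudinal speed comes out near 4 km/s and the transverse speed near 2.2 km/s, in the right range for tantalum at ambient conditions.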

  2. Sounding the Alert: Designing an Effective Voice for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Burkett, E. R.; Given, D. D.

    2015-12-01

    The USGS is working with partners to develop the ShakeAlert Earthquake Early Warning (EEW) system (http://pubs.usgs.gov/fs/2014/3083/) to protect life and property along the U.S. West Coast, where the highest national seismic hazard is concentrated. EEW sends an alert that shaking from an earthquake is on its way (in seconds to tens of seconds) to allow recipients or automated systems to take appropriate actions at their location to protect themselves and/or sensitive equipment. ShakeAlert is transitioning toward a production prototype phase in which test users might begin testing applications of the technology. While a subset of uses will be automated (e.g., opening fire house doors), other applications will alert individuals by radio or cellphone notifications and require behavioral decisions to protect themselves (e.g., "Drop, Cover, Hold On"). The project needs to select and move forward with a consistent alert sound to be widely and quickly recognized as an earthquake alert. In this study we combine EEW science and capabilities with an understanding of human behavior from the social and psychological sciences to provide insight toward the design of effective sounds to help best motivate proper action by alert recipients. We present a review of existing research and literature, compiled as considerations and recommendations for alert sound characteristics optimized for EEW. We do not yet address wording of an audible message about the earthquake (e.g., intensity and timing until arrival of shaking or possible actions), although it will be a future component to accompany the sound. We consider pitch(es), loudness, rhythm, tempo, duration, and harmony. Important behavioral responses to sound to take into account include that people respond to discordant sounds with anxiety, can be calmed by harmony and softness, and are innately alerted by loud and abrupt sounds, although levels high enough to be auditory stressors can negatively impact human judgment.

  3. The Impact of Multisensory Instruction on Learning Letter Names and Sounds, Word Reading, and Spelling

    ERIC Educational Resources Information Center

    Schlesinger, Nora W.; Gray, Shelley

    2017-01-01

    The purpose of this study was to investigate whether the use of simultaneous multisensory structured language instruction promoted better letter name and sound production, word reading, and word spelling for second grade children with typical development (N = 6) or with dyslexia (N = 5) than structured language instruction alone. The use of…

  4. Communication Sciences Laboratory Quarterly Progress Report, Volume 9, Number 3: Research Programs of Some of the Newer Members of CSL.

    ERIC Educational Resources Information Center

    Feinstein, Stephen H.; And Others

    The research reported in these papers covers a variety of communication problems. The first paper covers research on sound navigation by the blind and involves echo perception research and relevant aspects of underwater sound localization. The second paper describes a research program in acoustic phonetics and concerns such related issues as…

  5. Playthings as Art Objects: Ideas and Resources. Kites and Sound Making Objects and Playing Cards and Dolls.

    ERIC Educational Resources Information Center

    City of Birmingham Polytechnic (England). Dept. of Art.

    Five booklets focusing on playthings as art objects draw together information about historical, ethnographic, and play traditions of various cultures of the world. The first booklet provides an overview of ideas and resources about kites, sound making objects, playing cards, and dolls. The second booklet on kites discusses the distribution and…

  6. 78 FR 73997 - Airworthiness Directives; Various Aircraft Equipped with Wing Lift Struts

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-10

    ... wedge made of 4340 steel (or similar steel with equivalent sound velocity) or at least three shim... procedure is used to set the sound velocity. 6. Obtain a step wedge or steel shims per item 3 of the... width so that the gate is triggered by the second backwall reflection of the thick section. If the...

  7. Difficulty in Learning Similar-Sounding Words: A Developmental Stage or a General Property of Learning?

    ERIC Educational Resources Information Center

    Pajak, Bozena; Creel, Sarah C.; Levy, Roger

    2016-01-01

    How are languages learned, and to what extent are learning mechanisms similar in infant native-language (L1) and adult second-language (L2) acquisition? In terms of vocabulary acquisition, we know from the infant literature that the ability to discriminate similar-sounding words at a particular age does not guarantee successful word-meaning…

  8. Contrast of Hemispheric Lateralization for Oro-Facial Movements between Learned Attention-Getting Sounds and Species-Typical Vocalizations in Chimpanzees: Extension in a Second Colony

    ERIC Educational Resources Information Center

    Wallez, Catherine; Schaeffer, Jennifer; Meguerditchian, Adrien; Vauclair, Jacques; Schapiro, Steven J.; Hopkins, William D.

    2012-01-01

    Studies involving oro-facial asymmetries in nonhuman primates have largely demonstrated a right hemispheric dominance for communicative signals and conveyance of emotional information. A recent study on chimpanzee reported the first evidence of significant left-hemispheric dominance when using attention-getting sounds and rightward bias for…

  9. Thinking Aloud about L2 Decoding: An Exploration of the Strategies Used by Beginner Learners when Pronouncing Unfamiliar French Words

    ERIC Educational Resources Information Center

    Woore, Robert

    2010-01-01

    "Decoding"--converting the written symbols (or graphemes) of an alphabetical writing system into the sounds (or phonemes) they represent, using knowledge of the language's symbol/sound correspondences--has been argued to be an important but neglected skill in the teaching of second language (L2) French in English secondary schools.…

  10. Characterization of swallowing sounds with the use of sonar Doppler in full-term and preterm newborns.

    PubMed

    Lagos, Hellen Nataly Correia; Santos, Rosane Sampaio; Abdulmassih, Edna Marcia da Silva; Gallinea, Liliane Friedrich; Langone, Mariangela

    2013-10-01

    Introduction Technological advances have provided a large variety of instruments to view the swallowing event, aiding in the evaluation, diagnosis, and monitoring of disturbances. These advances include surface electromyography, dynamic video fluoroscopy, and most recently sonar Doppler. Objective To characterize swallowing sounds in typical children through the use of sonar Doppler. Method Thirty newborns participated in this prospective study. All newborns received breast milk through either their mother's breasts or bottles during data collection. The newborns were placed in either right lateral or left lateral positions when given breast milk through their mother's breasts and in a sitting position when given a bottle. Five variables were measured: initial frequency of the sound wave (FoI), frequency of the first peak of the sound wave (FoP1), frequency of the second peak of the sound wave (FoP2), initial and final intensity of the sound wave (II and IF), and swallowing length (T), the time elapsed from the beginning until the end of the analyzed acoustic signal measured by the audio signal, in seconds. Results The initial frequency values obtained for the babies had a mean of 850 Hz. In terms of frequency of the first peak, only three presented with a subtle peak, which was due to the elevated larynx position. Conclusion The use of sonar Doppler as a complementary exam for clinical evaluations is of utmost importance because it is nonintrusive and painless, and it is not necessary to place patients in a special room or expose them to radiation.

  12. Early lexical and phonological acquisition and its relationships.

    PubMed

    Wiethan, Fernanda Marafiga; Nóro, Letícia Arruda; Mota, Helena Bolli

    2014-01-01

    This study verified likely relationships between the lexical and phonological development of children aged from 1 year to 1 year, 11 months, and 29 days who were enrolled in public kindergarten schools of Santa Maria (RS). The sample consisted of 18 children of both genders with typical language development, separated into three age subgroups. Video recordings of each child's spontaneous speech were collected, and then a lexical analysis of the types of lexical items produced and a phonological assessment were performed. Acquired and partially acquired sounds were counted together, considering the 19 sounds and two allophones of Brazilian Portuguese. For the statistical analysis, the Kruskal-Wallis and Wilcoxon tests were used, with a significance level of p < 0.05. When the mean percentages of acquired sounds and of acquired plus partially acquired sounds were compared, there were differences between the first and second age subgroups and between the first and third subgroups. In the comparison of the mean numbers of lexical items produced among the age subgroups, there were again differences between the first and second subgroups and between the first and third subgroups. In the comparison between lexical items produced and acquired or partially acquired sounds within each age subgroup, a difference appeared only in the subgroup aged 1 year and 8 months to 1 year, 11 months, and 29 days, in which the sounds stood out. The phonological and lexical domains develop as a growing process and influence each other, with phonology holding a slight advantage.

  13. Visual Presentation Effects on Identification of Multiple Environmental Sounds

    PubMed Central

    Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio

    2016-01-01

    This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478

  14. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds.

    PubMed

    Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L

    2018-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice- and sound-specificity effects. In Experiment 1, we examined two conditions where integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: A change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.

  15. DETECTION AND IDENTIFICATION OF SPEECH SOUNDS USING CORTICAL ACTIVITY PATTERNS

    PubMed Central

    Centanni, T.M.; Sloan, A.M.; Reed, A.C.; Engineer, C.T.; Rennaker, R.; Kilgard, M.P.

    2014-01-01

    We have developed a classifier capable of locating and identifying speech sounds using activity from rat auditory cortex with an accuracy equivalent to behavioral performance without the need to specify the onset time of the speech sounds. This classifier can identify speech sounds from a large speech set within 40 ms of stimulus presentation. To compare the temporal limits of the classifier to behavior, we developed a novel task that requires rats to identify individual consonant sounds from a stream of distracter consonants. The classifier successfully predicted the ability of rats to accurately identify speech sounds for syllable presentation rates up to 10 syllables per second (up to 17.9 ± 1.5 bits/sec), which is comparable to human performance. Our results demonstrate that the spatiotemporal patterns generated in primary auditory cortex can be used to quickly and accurately identify consonant sounds from a continuous speech stream without prior knowledge of the stimulus onset times. Improved understanding of the neural mechanisms that support robust speech processing in difficult listening conditions could improve the identification and treatment of a variety of speech processing disorders. PMID:24286757

  16. Effect of additional warning sounds on pedestrians' detection of electric vehicles: An ecological approach.

    PubMed

    Fleury, Sylvain; Jamet, Éric; Roussarie, Vincent; Bosc, Laure; Chamard, Jean-Christophe

    2016-12-01

    Virtually silent electric vehicles (EVs) may pose a risk for pedestrians. This paper describes two studies that were conducted to assess the influence of different types of external sounds on EV detectability. In the first study, blindfolded participants had to detect an approaching EV with either no warning sounds at all or one of three types of sound we tested. In the second study, designed to replicate the results of the first one in an ecological setting, the EV was driven along a road and the experimenters counted the number of people who turned their heads in its direction. Results of the first study showed that adding external sounds improves EV detection, and that modulating the frequency and increasing the pitch of these sounds makes them more effective. This improvement was confirmed in the ecological context. Consequently, pitch variation and frequency modulation should both be taken into account in future AVAS design.

  17. The influence of formant levels on the perception of synthetic vowel sounds

    NASA Astrophysics Data System (ADS)

    Kubzdela, Henryk; Owsianny, Mariuz

    A computer model of a generator of periodic complex sounds simulating consonants was developed. The system makes possible independent regulation of the level of each of the formants and instant generation of the sound. A trapezoid approximates the curve of the spectrum within the range of the formant. Using this model, each person in a group of six listeners experimentally selected synthesis parameters for six sounds that seemed to him optimal approximations of Polish consonants. From these, another six sounds were selected that were identified by a majority of the six persons and several additional listeners as being best qualified to serve as prototypes of Polish consonants. These prototypes were then used to randomly create sounds with various combinations of the levels of the second and third formants, and these were presented to seven listeners for identification. The results of the identifications are presented in tabular form in three variants and are described from the point of view of the requirements of automatic recognition of consonants in continuous speech.

  18. Atmospheric sound propagation

    NASA Technical Reports Server (NTRS)

    Cook, R. K.

    1969-01-01

    The propagation of sound waves at infrasonic frequencies (oscillation periods 1.0 - 1000 seconds) in the atmosphere is being studied by a network of seven stations separated geographically by distances of the order of thousands of kilometers. The stations measure the following characteristics of infrasonic waves: (1) the amplitude and waveform of the incident sound pressure, (2) the direction of propagation of the wave, (3) the horizontal phase velocity, and (4) the distribution of sound wave energy at various frequencies of oscillation. Some infrasonic sources which were identified and studied include the aurora borealis, tornadoes, volcanos, gravity waves on the oceans, earthquakes, and atmospheric instability waves caused by winds at the tropopause. Waves of unknown origin seem to radiate from several geographical locations, including one in the Argentine.

  19. Audiovisual Delay as a Novel Cue to Visual Distance.

    PubMed

    Jaekl, Philip; Seidlitz, Jakob; Harris, Laurence R; Tadin, Duje

    2015-01-01

    For audiovisual sensory events, sound arrives with a delay relative to light that increases with event distance. It is unknown, however, whether humans can use these ubiquitous sound delays as an information source for distance computation. Here, we tested the hypothesis that audiovisual delays can both bias and improve human perceptual distance discrimination, such that visual stimuli paired with auditory delays are perceived as more distant and are thereby an ordinal distance cue. In two experiments, participants judged the relative distance of two repetitively displayed three-dimensional dot clusters, both presented with sounds of varying delays. In the first experiment, dot clusters presented with a sound delay were judged to be more distant than dot clusters paired with equivalent sound leads. In the second experiment, we confirmed that the presence of a sound delay was sufficient to cause stimuli to appear as more distant. Additionally, we found that ecologically congruent pairing of more distant events with a sound delay resulted in an increase in the precision of distance judgments. A control experiment determined that the sound delay duration influencing these distance judgments was not detectable, thereby eliminating decision-level influence. In sum, we present evidence that audiovisual delays can be an ordinal cue to visual distance.
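
    The natural audiovisual delay the study exploits is set by the speeds of sound and light: roughly 2.9 ms of audio lag per metre of event distance. A minimal sketch (the function name is an illustrative assumption, not from the paper):

```python
def av_delay(distance_m, v_sound=343.0, v_light=2.998e8):
    """Seconds by which sound naturally lags light for an event at the
    given distance; the light-travel term is negligible at everyday scales."""
    return distance_m / v_sound - distance_m / v_light
```

    At 34.3 m the lag is about 0.1 s, and it grows linearly with distance, which is what makes the delay usable as an ordinal distance cue.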

  20. General analytical approach for sound transmission loss analysis through a thick metamaterial plate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oudich, Mourad; Zhou, Xiaoming; Badreddine Assouar, M., E-mail: Badreddine.Assouar@univ-lorraine.fr

    We report theoretically and numerically on the sound transmission loss performance of a thick plate-type acoustic metamaterial made of spring-mass resonators attached to the surface of a homogeneous elastic plate. Two general analytical approaches based on plane wave expansion were developed to calculate both the sound transmission loss through the metamaterial plate (thick or thin) and its band structure. The first can be applied to thick-plate systems to study the sound transmission for any normal or oblique incident sound pressure. The second gives the metamaterial's dispersion behavior, describing the vibrational motions of the plate, which helps to explain the physics behind sound radiation into air by the structure. Computed results show that a high sound transmission loss of up to 72 dB at 2 kHz is reached with a thick metamaterial plate, while only 23 dB is obtained for a simple homogeneous plate of the same thickness. Such a plate-type acoustic metamaterial can be a very effective solution for high-performance sound insulation and structural vibration shielding in the very low-frequency range.
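
    The paper's plane-wave-expansion approaches are not reproduced here, but the standard baseline they improve on is the normal-incidence mass law for a limp homogeneous panel, TL = 10 log10(1 + (π f m / ρc)²). A hedged sketch of that baseline (illustrative only; the resonant metamaterial's behaviour is far richer than this formula):

```python
import math

def mass_law_tl(freq_hz, surface_density, rho_air=1.21, c_air=343.0):
    """Normal-incidence mass law for a limp homogeneous panel:
    TL = 10 log10(1 + (pi * f * m / (rho * c))^2), with m the mass
    per unit area in kg/m^2.  Predicts ~6 dB more loss per doubling
    of frequency or of surface density."""
    x = math.pi * freq_hz * surface_density / (rho_air * c_air)
    return 10.0 * math.log10(1.0 + x * x)
```

    The mass law's ~6 dB-per-octave slope is exactly why a 72 dB loss at 2 kHz from a resonant metamaterial, versus 23 dB for a plain plate, is such a striking result.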

  1. Sound suppression mixer

    NASA Technical Reports Server (NTRS)

    Brown, William H. (Inventor)

    1994-01-01

    A gas turbine engine flow mixer includes at least one chute having first and second spaced apart sidewalls joined together at a leading edge, with the sidewalls having first and second trailing edges defining therebetween a chute outlet. The first trailing edge is spaced longitudinally downstream from the second trailing edge for defining a septum in the first sidewall extending downstream from the second trailing edge. The septum includes a plurality of noise attenuating apertures.

  2. Underwater auditory localization by a swimming harbor seal (Phoca vitulina).

    PubMed

    Bodson, Anais; Miersch, Lars; Mauck, Bjoern; Dehnhardt, Guido

    2006-09-01

    The underwater sound localization acuity of a swimming harbor seal (Phoca vitulina) was measured in the horizontal plane at 13 different positions. The stimulus was either a double sound (two 6-kHz pure tones lasting 0.5 s separated by an interval of 0.2 s) or a single continuous sound of 1.2 s. Testing was conducted in a 10-m-diameter underwater half-circle arena with hidden loudspeakers installed at the exterior perimeter. The animal was trained to swim along the diameter of the half circle and to change its course towards the sound source as soon as the signal was given. The seal indicated the sound source by touching its assumed position at the board of the half circle. The deviation of the seal's choice from the actual sound source was measured by means of video analysis. In trials with the double sound, the seal localized the sound sources with a mean deviation of 2.8 degrees; in trials with the single sound, with a mean deviation of 4.5 degrees. In a second experiment, minimum audible angles of the stationary animal were found to be 9.8 degrees in front of and 9.7 degrees behind the seal's head.

  3. Making Ultraviolet Spectro-Polarimetry Polarization Measurements with the MSFC Solar Ultraviolet Magnetograph Sounding Rocket

    NASA Technical Reports Server (NTRS)

    West, Edward; Cirtain, Jonathan; Kobayashi, Ken; Davis, John; Gary, Allen

    2011-01-01

    This paper will describe the Marshall Space Flight Center's Solar Ultraviolet Magnetograph Investigation (SUMI) sounding rocket program. This paper will concentrate on SUMI's VUV optics, and discuss their spectral, spatial and polarization characteristics. While SUMI's first flight (7/30/2010) met all of its mission success criteria, there are several areas that will be improved for its second and third flights. This paper will emphasize the MgII linear polarization measurements and describe the changes that will be made to the sounding rocket and how those changes will improve the scientific data acquired by SUMI.

  4. 46 CFR 38.05-2 - Design and construction of cargo tanks-general-TB/ALL.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... ocean; Great Lakes; lakes, bays, and sounds; or coastwise service shall be designed to withstand...° half amplitude (24°) in 7 seconds. (3) Heaving L/80′ half amplitude (L/20′) in 8 seconds. (e) Cargo...

  5. 46 CFR 38.05-2 - Design and construction of cargo tanks-general-TB/ALL.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... ocean; Great Lakes; lakes, bays, and sounds; or coastwise service shall be designed to withstand...° half amplitude (24°) in 7 seconds. (3) Heaving L/80′ half amplitude (L/20′) in 8 seconds. (e) Cargo...

  6. 46 CFR 38.05-2 - Design and construction of cargo tanks-general-TB/ALL.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... ocean; Great Lakes; lakes, bays, and sounds; or coastwise service shall be designed to withstand...° half amplitude (24°) in 7 seconds. (3) Heaving L/80′ half amplitude (L/20′) in 8 seconds. (e) Cargo...

  7. Narrative Ability of Children with Speech Sound Disorders and the Prediction of Later Literacy Skills

    ERIC Educational Resources Information Center

    Wellman, Rachel L.; Lewis, Barbara A.; Freebairn, Lisa A.; Avrich, Allison A.; Hansen, Amy J.; Stein, Catherine M.

    2011-01-01

    Purpose: The main purpose of this study was to examine how children with isolated speech sound disorders (SSDs; n = 20), children with combined SSDs and language impairment (LI; n = 20), and typically developing children (n = 20), ages 3;3 (years;months) to 6;6, differ in narrative ability. The second purpose was to determine if early narrative…

  8. Students' Constitutional Right to a Sound Basic Education: New York State's Unfinished Agenda. Part 2. Filling the Regulatory Gaps

    ERIC Educational Resources Information Center

    Rebell, Michael A.; Wolff, Jessica R.; Rogers, Joseph R., Jr.; Saleh, Matthew

    2016-01-01

    This is the second in a series of reports that are the culmination of two years of research by the Campaign for Educational Equity, a policy and research center at Teachers College, Columbia University, and significant input from the Safeguarding Sound Basic Education Task Force, a statewide group made up of representatives from New York's leading…

  9. Speed of Sound in Metal Pipes: An Inexpensive Lab

    ERIC Educational Resources Information Center

    Huggins, Elisha

    2008-01-01

    Our favorite demonstration for sound waves is to set up a compressional pulse on a horizontally stretched Slinky[TM]. One can easily watch the pulse move back and forth at a speed of the order of one meter per second. Watching this demonstration, it occurred to us that the same thing might happen in a steel pipe if you hit the end of the pipe with…
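
The speed the article's lab measures in a steel pipe can be estimated from handbook material constants; a minimal sketch, assuming typical values for mild steel (the specific values are not taken from the article):

```python
import math

# Typical handbook values for mild steel (assumed, not from the article)
E = 200e9      # Young's modulus, Pa
rho = 7850.0   # density, kg/m^3

# Longitudinal (thin-bar) wave speed: c = sqrt(E / rho)
c = math.sqrt(E / rho)
print(round(c), "m/s")  # roughly 5000 m/s, about 15x the speed of sound in air
```

A pulse traversing a few meters of pipe at this speed returns in about a millisecond, which is why the classroom measurement needs timing electronics rather than a stopwatch.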

  10. Quantization noise in digital speech. M.S. Thesis- Houston Univ.

    NASA Technical Reports Server (NTRS)

    Schmidt, O. L.

    1972-01-01

The amount of quantization noise generated in a digital-to-analog converter is dependent on the number of bits or quantization levels used to digitize the analog signal in the analog-to-digital converter. The minimum number of quantization levels and the minimum sample rate were derived for a digital voice channel. A sample rate of 6000 samples per second and lowpass filters with a 3 dB cutoff of 2400 Hz are required for 100 percent sentence intelligibility. Consonant sounds are the first speech components to be degraded by quantization noise. A compression amplifier can be used to increase the weighting of the consonant sound amplitudes in the analog-to-digital converter. An expansion network must be installed at the output of the digital-to-analog converter to restore the original weighting of the consonant sounds. This technique results in 100 percent sentence intelligibility for a sample rate of 5000 samples per second, eight quantization levels, and lowpass filters with a 3 dB cutoff of 2000 Hz.
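
The compression/expansion scheme described above can be illustrated with a companding quantizer. The μ-law curve below is a standard telephony choice used purely for illustration; the thesis does not specify which compressor characteristic was used:

```python
import math

MU = 255.0  # mu-law parameter (illustrative assumption, not from the thesis)

def compress(x):
    """Compress amplitude in [-1, 1] so low-level (consonant-like)
    sounds span more of the quantizer's levels."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):
    """Inverse of compress: restores the original amplitude weighting."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def quantize(x, levels=8):
    """Uniform quantizer with the given number of levels over [-1, 1]."""
    step = 2.0 / (levels - 1)
    return round(x / step) * step

x = 0.05                                   # quiet, consonant-like amplitude
plain = quantize(x)                        # quantized without companding
companded = expand(quantize(compress(x)))  # compress -> quantize -> expand
# Companding preserves far more of the small amplitude:
print(abs(plain - x) > abs(companded - x))  # True
```

With only eight levels, the uniform quantizer rounds the 0.05 amplitude to zero, while the companded path keeps it distinguishable, mirroring the thesis's finding that companding protects consonant intelligibility at low bit counts.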

  11. Comparing Average Levels and Peak Occurrence of Overnight Sound in the Medical Intensive Care Unit on A-weighted and C-weighted Decibel Scales

    PubMed Central

    Knauert, Melissa; Jeon, Sangchoon; Murphy, Terrence E.; Yaggi, H. Klar; Pisani, Margaret A.; Redeker, Nancy S.

    2016-01-01

    Purpose Sound levels in the intensive care unit (ICU) are universally elevated and are believed to contribute to sleep and circadian disruption. The purpose of this study is to compare overnight ICU sound levels and peak occurrence on A- versus C-weighted scales. Materials and Methods This was a prospective observational study of overnight sound levels in 59 medical ICU patient rooms. Sound level was recorded every 10 seconds on A- and C-weighted decibel scales. Equivalent sound level (Leq) and sound peaks were reported for full and partial night periods. Results The overnight A-weighted Leq of 53.6 dBA was well above World Health Organization (WHO) recommendations; overnight C-weighted Leq was 63.1 dBC (no WHO recommendations). Peak sound occurrence ranged from 1.8 to 23.3 times per hour. Illness severity, mechanical ventilation and delirium were not associated with Leq or peak occurrence. Leq and peak measures for A- and C-weighted decibel scales were significantly different from each other. Conclusions Sound levels in the medical ICU are high throughout the night. Patient factors were not associated with Leq or peak occurrence. Significant discordance between A- and C-weighted values suggests that low frequency sound is a meaningful factor in the medical ICU environment. PMID:27546739
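
The equivalent sound level (Leq) reported above is an energy average of the periodic dB samples, not an arithmetic mean of decibel readings; a minimal sketch with made-up sample values (not data from the study):

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level: the steady level carrying the
    same total acoustic energy as the time-varying samples."""
    mean_energy = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

# Illustrative overnight samples (dBA), one every 10 s as in the protocol
samples = [50, 52, 55, 49, 60, 53]
print(round(leq(samples), 1))  # 54.9
```

Note that the single 60 dBA sample pulls the Leq well above the arithmetic mean of 53.2 dBA, which is why brief loud peaks matter so much in ICU noise assessments.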

  12. Investigation of coaxial jet noise and inlet choking using an F-111A airplane

    NASA Technical Reports Server (NTRS)

    Putnam, T. W.

    1973-01-01

Measurements of engine noise generated by an F-111A airplane positioned on a thrust-measuring platform were made at angles of 0 deg to 160 deg from the aircraft heading. Sound power levels, power spectra, and directivity patterns are presented for jet exit velocities between 260 and 2400 feet per second. The test results indicate that the total acoustic power was proportional to the eighth power of the core jet velocity for core exhaust velocities greater than 300 meters per second (985 feet per second) and that little or no mixing of the core and fan streams occurred. The maximum sideline noise was most accurately predicted by using the average jet velocity for velocities above 300 meters per second (985 feet per second). The acoustic power spectrum was essentially the same for the single jet flow of afterburner operation and the coaxial flow of the nonafterburning condition. By varying the inlet geometry and cowl position, reductions in the sound pressure level of the blade passing frequency on the order of 15 to 25 decibels were observed for inlet Mach numbers of 0.8 to 0.9.
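
The eighth-power dependence found here is the classic Lighthill scaling for jet noise. As a quick check of what it implies, doubling the jet velocity raises the radiated power level by about 24 dB; the velocities below are illustrative, not test points from the report:

```python
import math

def power_ratio_db(v1, v2, exponent=8):
    """Change in acoustic power level (dB) when jet velocity goes from
    v1 to v2, under a U**exponent (Lighthill) scaling law."""
    return 10 * exponent * math.log10(v2 / v1)

# Doubling the core jet velocity under the eighth-power law:
print(round(power_ratio_db(300.0, 600.0), 1))  # 24.1 dB
```

This steep scaling is why modest reductions in exhaust velocity (for example via high-bypass fans) yield large noise reductions.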

  13. Two-dimensional adaptation in the auditory forebrain

    PubMed Central

    Nagel, Katherine I.; Doupe, Allison J.

    2011-01-01

    Sensory neurons exhibit two universal properties: sensitivity to multiple stimulus dimensions, and adaptation to stimulus statistics. How adaptation affects encoding along primary dimensions is well characterized for most sensory pathways, but if and how it affects secondary dimensions is less clear. We studied these effects for neurons in the avian equivalent of primary auditory cortex, responding to temporally modulated sounds. We showed that the firing rate of single neurons in field L was affected by at least two components of the time-varying sound log-amplitude. When overall sound amplitude was low, neural responses were based on nonlinear combinations of the mean log-amplitude and its rate of change (first time differential). At high mean sound amplitude, the two relevant stimulus features became the first and second time derivatives of the sound log-amplitude. Thus a strikingly systematic relationship between dimensions was conserved across changes in stimulus intensity, whereby one of the relevant dimensions approximated the time differential of the other dimension. In contrast to stimulus mean, increases in stimulus variance did not change relevant dimensions, but selectively increased the contribution of the second dimension to neural firing, illustrating a new adaptive behavior enabled by multidimensional encoding. Finally, we demonstrated theoretically that inclusion of time differentials as additional stimulus features, as seen so prominently in the single-neuron responses studied here, is a useful strategy for encoding naturalistic stimuli, because it can lower the necessary sampling rate while maintaining the robustness of stimulus reconstruction to correlated noise. PMID:21753019

  14. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs

    PubMed Central

    Ponnath, Abhilash; Farris, Hamilton E.

    2014-01-01

    Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3–10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and varied forms of modulation: modulation lasted <2 s and, in different cells, excitability either decreased, increased or shifted in latency. Within cells, the modulatory effect of sound-by-sound electrical stimulation varied between different acoustic stimuli, including for different male calls, suggesting modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene. PMID:25120437

  15. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs.

    PubMed

    Ponnath, Abhilash; Farris, Hamilton E

    2014-01-01

    Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and varied forms of modulation: modulation lasted <2 s and, in different cells, excitability either decreased, increased or shifted in latency. Within cells, the modulatory effect of sound-by-sound electrical stimulation varied between different acoustic stimuli, including for different male calls, suggesting modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.

  16. Sound localization skills in children who use bilateral cochlear implants and in children with normal acoustic hearing

    PubMed Central

    Grieco-Calub, Tina M.; Litovsky, Ruth Y.

    2010-01-01

    Objectives To measure sound source localization in children who have sequential bilateral cochlear implants (BICIs); to determine if localization accuracy correlates with performance on a right-left discrimination task (i.e., spatial acuity); to determine if there is a measurable bilateral benefit on a sound source identification task (i.e., localization accuracy) by comparing performance under bilateral and unilateral listening conditions; to determine if sound source localization continues to improve with longer durations of bilateral experience. Design Two groups of children participated in this study: a group of 21 children who received BICIs in sequential procedures (5–14 years old) and a group of 7 typically-developing children with normal acoustic hearing (5 years old). Testing was conducted in a large sound-treated booth with loudspeakers positioned on a horizontal arc with a radius of 1.2 m. Children participated in two experiments that assessed spatial hearing skills. Spatial hearing acuity was assessed with a discrimination task in which listeners determined if a sound source was presented on the right or left side of center; the smallest angle at which performance on this task was reliably above chance is the minimum audible angle. Sound localization accuracy was assessed with a sound source identification task in which children identified the perceived position of the sound source from a multi-loudspeaker array (7 or 15); errors are quantified using the root-mean-square (RMS) error. Results Sound localization accuracy was highly variable among the children with BICIs, with RMS errors ranging from 19°–56°. Performance of the NH group, with RMS errors ranging from 9°–29° was significantly better. Within the BICI group, in 11/21 children RMS errors were smaller in the bilateral vs. unilateral listening condition, indicating bilateral benefit. 
There was a significant correlation between spatial acuity and sound localization accuracy (R2=0.68, p<0.01), suggesting that children who achieve small RMS errors tend to have the smallest MAAs. Although there was large intersubject variability, testing of 11 children in the BICI group at two sequential visits revealed a subset of children who show improvement in spatial hearing skills over time. Conclusions A subset of children who use sequential BICIs can acquire sound localization abilities, even after long intervals between activation of hearing in the first- and second-implanted ears. This suggests that children with activation of the second implant later in life may be capable of developing spatial hearing abilities. The large variability in performance among the children with BICIs suggests that maturation of sound localization abilities in children with BICIs may be dependent on various individual subject factors such as age of implantation and chronological age. PMID:20592615
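
The RMS error used above to quantify localization accuracy is computed from perceived versus actual source azimuths; a minimal sketch with hypothetical trial data (not the study's measurements):

```python
import math

def rms_error(perceived, actual):
    """Root-mean-square sound-localization error in degrees."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(perceived, actual))
                     / len(actual))

# Hypothetical trials: loudspeaker azimuths vs. a child's responses (deg)
actual    = [-45, -15, 0, 15, 45]
perceived = [-30, -30, 15, 0, 30]
print(round(rms_error(perceived, actual), 1))  # 15.0
```

An RMS error of 15 degrees on this toy data would sit between the best BICI users (19 degrees) and the normal-hearing group's best performers (9 degrees) reported above.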

  17. Development of the low-cost multi-channel analyzer system for γ-ray spectroscopy with a PC sound card

    NASA Astrophysics Data System (ADS)

    Sugihara, Kenkoh; Nakamura, Satoshi N.; Chiga, Nobuyuki; Fujii, Yuu; Tamura, Hirokazu

    2013-10-01

A low-cost multi-channel analyzer (MCA) system was developed using a custom-built interface circuit and a PC sound card. The performance of the system was studied using γ-ray spectroscopy measurements with a NaI(Tl) scintillation detector. Our system successfully measured the energy of γ-rays at a rate of 1000 counts per second (cps).

  18. Prediction of truly random future events using analysis of prestimulus electroencephalographic data

    NASA Astrophysics Data System (ADS)

    Baumgart, Stephen L.; Franklin, Michael S.; Jimbo, Hiroumi K.; Su, Sharon J.; Schooler, Jonathan

    2017-05-01

Our hypothesis is that pre-stimulus physiological data can be used to predict truly random events tied to perceptual stimuli (e.g., lights and sounds). Our experiment presents light and sound stimuli to a passive human subject while recording electrocortical potentials using a 32-channel electroencephalography (EEG) system. For every trial a quantum random number generator (qRNG) chooses from three possible selections with equal probability: a light stimulus, a sound stimulus, and no stimulus. Time epochs are defined preceding and following each stimulus, for which mean potentials were computed across all trials for the three possible stimulus types. Data from three regions of the brain were examined. In all three regions the mean potential for light stimuli was generally enhanced relative to baseline during the period starting approximately 2 seconds before the stimulus. For sound stimuli, the mean potential decreased relative to baseline during the period starting approximately 2 seconds before the stimulus. These changes from baseline may indicate the presence of evoked potentials arising from the stimulus. A P200 peak was observed in data recorded from frontal electrodes. The P200 is a well-known potential arising from the brain's processing of visual stimuli, and its presence represents a replication of a known neurological phenomenon.
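
The pre-stimulus averaging described is ordinary epoch averaging over a fixed window before each event; a sketch on synthetic data, where the sampling rate and channel are assumptions, not details from the abstract:

```python
import numpy as np

FS = 250    # sampling rate in Hz (assumed; the abstract does not state one)
PRE = 2.0   # seconds of pre-stimulus data to average, per the abstract

def prestimulus_mean(eeg, stim_samples):
    """Mean potential across trials in the PRE-second window
    immediately before each stimulus onset."""
    n = int(PRE * FS)
    epochs = [eeg[s - n:s] for s in stim_samples if s >= n]
    return np.mean(epochs, axis=0)

rng = np.random.default_rng(0)
eeg = rng.standard_normal(5000)   # one synthetic EEG channel
stims = [1000, 2000, 3000, 4000]  # stimulus onsets, in samples
erp = prestimulus_mean(eeg, stims)
print(erp.shape)  # (500,)
```

Averaging across trials suppresses activity uncorrelated with the stimulus, which is what makes any systematic pre-stimulus deviation from baseline visible at all.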

  19. Second Annual Career Guidance Institute: Final Report.

    ERIC Educational Resources Information Center

    Schenck, Norma Elaine

    The document reports on the organization and implementation plans for Indiana's Second Annual Career Guidance Institute and the sound/slide programs developed on six career cluster areas. An extensive evaluation analyzes the Institute in light of its objectives, offers insights gained on career opportunities, gives changes in attitude regarding…

  20. Nonlinear acoustics in cicada mating calls enhance sound propagation.

    PubMed

    Hughes, Derke R; Nuttall, Albert H; Katz, Richard A; Carter, G Clifford

    2009-02-01

    An analysis of cicada mating calls, measured in field experiments, indicates that the very high levels of acoustic energy radiated by this relatively small insect are mainly attributed to the nonlinear characteristics of the signal. The cicada emits one of the loudest sounds in all of the insect population with a sound production system occupying a physical space typically less than 3 cc. The sounds made by tymbals are amplified by the hollow abdomen, functioning as a tuned resonator, but models of the signal based solely on linear techniques do not fully account for a sound radiation capability that is so disproportionate to the insect's size. The nonlinear behavior of the cicada signal is demonstrated by combining the mutual information and surrogate data techniques; the results obtained indicate decorrelation when the phase-randomized and non-phase-randomized data separate. The Volterra expansion technique is used to fit the nonlinearity in the insect's call. The second-order Volterra estimate provides further evidence that the cicada mating calls are dominated by nonlinear characteristics and also suggests that the medium contributes to the cicada's efficient sound propagation. Application of the same principles has the potential to improve radiated sound levels for sonar applications.

  1. NESSTI: Norms for Environmental Sound Stimuli

    PubMed Central

    Hocking, Julia; Dzafic, Ilvana; Kazovsky, Maria; Copland, David A.

    2013-01-01

In this paper we provide normative data along multiple cognitive and affective variable dimensions for a set of 110 sounds, including living and manmade stimuli. Environmental sounds are being increasingly utilized as stimuli in the cognitive, neuropsychological and neuroimaging fields, yet there is no comprehensive set of normative information for these types of stimuli available for use across these experimental domains. Experiment 1 collected data from 162 participants in an on-line questionnaire, which included measures of identification and categorization as well as cognitive and affective variables. A subsequent experiment collected response times to these sounds. Sounds were normalized to the same length (1 second) in order to maximize usage across multiple paradigms and experimental fields. These sounds can be freely downloaded for use, and all response data have also been made available in order that researchers can choose one or many of the cognitive and affective dimensions along which they would like to control their stimuli. Our hope is that the availability of such information will assist researchers in the fields of cognitive and clinical psychology and the neuroimaging community in choosing well-controlled environmental sound stimuli, and allow comparison across multiple studies. PMID:24023866

  2. Scanning silence: mental imagery of complex sounds.

    PubMed

    Bunzeck, Nico; Wuestenberg, Torsten; Lutz, Kai; Heinze, Hans-Jochen; Jancke, Lutz

    2005-07-15

    In this functional magnetic resonance imaging (fMRI) study, we investigated the neural basis of mental auditory imagery of familiar complex sounds that did not contain language or music. In the first condition (perception), the subjects watched familiar scenes and listened to the corresponding sounds that were presented simultaneously. In the second condition (imagery), the same scenes were presented silently and the subjects had to mentally imagine the appropriate sounds. During the third condition (control), the participants watched a scrambled version of the scenes without sound. To overcome the disadvantages of the stray acoustic scanner noise in auditory fMRI experiments, we applied sparse temporal sampling technique with five functional clusters that were acquired at the end of each movie presentation. Compared to the control condition, we found bilateral activations in the primary and secondary auditory cortices (including Heschl's gyrus and planum temporale) during perception of complex sounds. In contrast, the imagery condition elicited bilateral hemodynamic responses only in the secondary auditory cortex (including the planum temporale). No significant activity was observed in the primary auditory cortex. The results show that imagery and perception of complex sounds that do not contain language or music rely on overlapping neural correlates of the secondary but not primary auditory cortex.

  3. Exploratory investigation of sound pressure level in the wake of an oscillating airfoil in the vicinity of stall

    NASA Technical Reports Server (NTRS)

    Gray, R. B.; Pierce, G. A.

    1972-01-01

    Wind tunnel tests were performed on two oscillating two-dimensional lifting surfaces. The first of these models had an NACA 0012 airfoil section while the second simulated the classical flat plate. Both of these models had a mean angle of attack of 12 degrees while being oscillated in pitch about their midchord with a double amplitude of 6 degrees. Wake surveys of sound pressure level were made over a frequency range from 16 to 32 Hz and at various free stream velocities up to 100 ft/sec. The sound pressure level spectrum indicated significant peaks in sound intensity at the oscillation frequency and its first harmonic near the wake of both models. From a comparison of these data with that of a sound level meter, it is concluded that most of the sound intensity is contained within these peaks and no appreciable peaks occur at higher harmonics. It is concluded that within the wake the sound intensity is largely pseudosound while at one chord length outside the wake, it is largely true vortex sound. For both the airfoil and flat plate the peaks appear to be more strongly dependent upon the airspeed than on the oscillation frequency. Therefore reduced frequency does not appear to be a significant parameter in the generation of wake sound intensity.

  4. Anticipated Effectiveness of Active Noise Control in Propeller Aircraft Interiors as Determined by Sound Quality Tests

    NASA Technical Reports Server (NTRS)

    Powell, Clemans A.; Sullivan, Brenda M.

    2004-01-01

    Two experiments were conducted, using sound quality engineering practices, to determine the subjective effectiveness of hypothetical active noise control systems in a range of propeller aircraft. The two tests differed by the type of judgments made by the subjects: pair comparisons in the first test and numerical category scaling in the second. Although the results of the two tests were in general agreement that the hypothetical active control measures improved the interior noise environments, the pair comparison method appears to be more sensitive to subtle changes in the characteristics of the sounds which are related to passenger preference.

  5. Pulse-echo sound speed estimation using second order speckle statistics

    NASA Astrophysics Data System (ADS)

    Rosado-Mendez, Ivan M.; Nam, Kibo; Madsen, Ernest L.; Hall, Timothy J.; Zagzebski, James A.

    2012-10-01

This work presents a phantom-based evaluation of a method for estimating soft-tissue speeds of sound using pulse-echo data. The method is based on the improvement of image sharpness as the sound speed value assumed during beamforming is systematically matched to the tissue sound speed. The novelty of this work is the quantitative assessment of image sharpness by measuring the resolution cell size from the autocovariance matrix of echo signals from a random distribution of scatterers, thus eliminating the need for strong reflectors. Envelope data were obtained from a fatty-tissue mimicking (FTM) phantom (sound speed = 1452 m/s) and a nonfatty-tissue mimicking (NFTM) phantom (1544 m/s) scanned with a linear array transducer on a clinical ultrasound system. Dependence on pulse characteristics was tested by varying the pulse frequency and amplitude. On average, sound speed estimation errors were -0.7% for the FTM phantom and -1.1% for the NFTM phantom. In general, no significant difference was found among errors from different pulse frequencies and amplitudes. The method is currently being optimized for the differentiation of diffuse liver diseases.

  6. Photoacoustics and speed-of-sound dual mode imaging with a long depth-of-field by using annular ultrasound array.

    PubMed

    Ding, Qiuning; Tao, Chao; Liu, Xiaojun

    2017-03-20

Speed-of-sound and optical absorption reflect the structure and function of tissues from different aspects. A dual-mode microscopy system based on a concentric annular ultrasound array is proposed to simultaneously acquire long depth-of-field images of speed-of-sound and optical absorption in inhomogeneous samples. First, speed-of-sound is decoded from the signal delay between the elements of the annular array. The measured speed-of-sound not only serves as an image contrast mechanism but also improves the resolution and the accuracy of spatial localization of the photoacoustic image in acoustically inhomogeneous media. Second, benefiting from the dynamic focusing of the annular array and the measured speed-of-sound, an advanced acoustic-resolution photoacoustic microscopy with precise localization and a long depth-of-field is achieved. The performance of the dual-mode imaging system was experimentally examined using a custom-made annular array. The proposed dual-mode microscopy may be significant for monitoring physiological and pathological processes in biological tissue.

  7. Red Sea Outflow Experiment (REDSOX): DLD2 RAFOS Float Data Report February 2001 - March 2003

    DTIC Science & Technology

    2005-01-01

Description of the DLD2 Float and Dual-Release System; Sound Sources. The DLD2 is a second-generation RAFOS (Ranging And Fixing Of Sound) float with several improvements over the traditional RAFOS float (see Rossby et al., 1986, for a complete description of the RAFOS system). The float data and their processing are described in detail.

  8. Qualities of Single Electrode Stimulation as a Function of Rate and Place of Stimulation with a Cochlear Implant

    PubMed Central

    Landsberger, David M.; Vermeire, Katrien; Claes, Annes; Van Rompaey, Vincent; Van de Heyning, Paul

    2015-01-01

    Objectives Although it has been previously shown that changes in temporal coding produce changes in pitch in all cochlear regions, research has suggested that temporal coding might be best encoded in relatively apical locations. We hypothesized that although temporal coding may provide useable information at any cochlear location, low rates of stimulation might provide better sound quality in apical regions that are more likely to encode temporal information in the normal ear. In the present study, sound qualities of single electrode pulse trains were scaled to provide insight into the combined effects of cochlear location and stimulation rate on sound quality. Design Ten long term users of MED-EL cochlear implants with 31 mm electrode arrays (Standard or FLEXSOFT) were asked to scale the sound quality of single electrode pulse trains in terms of how “Clean”, “Noisy”, “High”, and “Annoying” they sounded. Pulse trains were presented on most electrodes between 1 and 12 representing the entire range of the long electrode array at stimulation rates of 100, 150, 200, 400, or 1500 pulses per second. Results While high rates of stimulation are scaled as having a “Clean” sound quality across the entire array, only the most apical electrodes (typically 1 through 3) were considered “Clean” at low rates. Low rates on electrodes 6 through 12 were not rated as “Clean” while the low rate quality of electrodes 4 and 5 were typically in between. Scaling of “Noisy” responses provided an approximately inverse pattern as “Clean” responses. “High” responses show the trade-off between rate and place of stimulation on pitch. Because “High” responses did not correlate with “Clean” responses, subjects were not rating sound quality based on pitch. Conclusions If explicit temporal coding is to be provided in a cochlear implant, it is likely to sound better when provided apically. 
Additionally, the finding that low rates sound clean only at apical places of stimulation is consistent with previous findings that a change in rate of stimulation corresponds to an equivalent change in perceived pitch at apical locations. Collectively, the data strongly suggests that temporal coding with a cochlear implant is optimally provided by electrodes placed well into the second cochlear turn. PMID:26583480

  9. Earth Observation

    NASA Image and Video Library

    2014-08-08

    ISS040-E-089959 (8 Aug. 2014) --- King Sound on the northwest coast of Australia is featured in this image photographed by an Expedition 40 crew member on the International Space Station. The Fitzroy River, one of Australia's largest, empties into the Sound, a large gulf in Western Australia (approximately 120 kilometers long). King Sound has the highest tides in Australia, in the range of 11-12 meters, the second highest in the world after the Bay of Fundy on the east coast of North America. The strong brown smudge at the head of the Sound contrasts with the clearer blue water along the rest of the coast. This is mud stirred up by the tides and also supplied by the Fitzroy River. The bright reflection point of the sun obscures the blue water of the Indian Ocean (top left). Just to the west of the Sound, thick plumes of wildfire smoke, driven by northeast winds, obscure the coastline. A wide field of “popcorn cumulus” clouds (right) is a common effect of daily heating of the ground surface.

  10. Experiments on the mechanism of underwater hearing.

    PubMed

    Pau, Hans Wilhelm; Warkentin, Mareike; Specht, Olaf; Krentz, Helga; Herrmann, Anne; Ehrt, Karsten

    2011-12-01

The findings suggest that underwater sound perception is realized by the middle ear rather than by bone conduction, at least in shallow-water conditions. The aim was to determine whether underwater sound perception is effected by bone conduction or by conduction via the middle ear. Five divers, breathing through snorkels, were tested in a swimming pool to determine whether a sound was louder when the acoustic source was placed in front of the head than when it was applied laterally, facing the ear region. The second experiment investigated whether sound perception is influenced by ear protection plugs in underwater conditions. Also, the effect of a 5 mm thick neoprene hood was determined, with and without an additional perforation in the ear region. Sounds were louder when applied from a position laterally facing the ear, louder without than with a protection plug, louder without than with a neoprene hood on, and louder when the neoprene hood had a perforation in the region of the ear than with an intact hood.

  11. Acquisition of Japanese contracted sounds in L1 phonology

    NASA Astrophysics Data System (ADS)

    Tsurutani, Chiharu

    2002-05-01

    Japanese possesses a group of palatalized consonants, known to Japanese scholars as the contracted sounds, [CjV]. English learners of Japanese appear to treat them initially as consonant + glide clusters, where there is an equivalent [Cj] cluster in English, or otherwise tend to insert an epenthetic vowel [CVjV]. The acquisition of the Japanese contracted sounds by first language (L1) learners has not been widely studied compared with the consonant clusters in English with which they bear a close phonetic resemblance but have quite a different phonological status. This is a study to investigate the L1 acquisition process of the Japanese contracted sounds (a) in order to observe how the palatalization gesture is acquired in Japanese and (b) to investigate differences in the sound acquisition processes of first and second language (L2) learners: Japanese children compared with English learners. To do this, the productions of Japanese children ranging in age from 2.5 to 3.5 years were transcribed and the pattern of misproduction was observed.

  12. Integrating Articulatory Constraints into Models of Second Language Phonological Acquisition

    ERIC Educational Resources Information Center

    Colantoni, Laura; Steele, Jeffrey

    2008-01-01

    Models such as Eckman's markedness differential hypothesis, Flege's speech learning model, and Brown's feature-based theory of perception seek to explain and predict the relative difficulty second language (L2) learners face when acquiring new or similar sounds. In this paper, we test their predictive adequacy as concerns native English speakers'…

  13. An Investigation of Selected Readiness Variables As Predictors of Reading Achievement at Second Grade Level.

    ERIC Educational Resources Information Center

    Seals, Caryl Neman

    This study was designed to determine the relationship of selected readiness variables to achievement in reading at the second grade level. The readiness variables were environment, mathematics, letters and sounds, aural comprehension, visual perception, auditory perception, vocabulary and concepts, word meaning, listening, matching, alphabet,…

  14. 78 FR 78311 - Approval and Promulgation of Implementation Plans; Washington: Kent, Seattle, and Tacoma Second...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-26

    ... Amendments. The Washington Department of Ecology (Ecology) and the Puget Sound Clean Air Agency (PSCAA... international pollution. The second comment requested that Ecology expand the Kent maintenance area boundary and... determined that Ecology's responses were appropriate and adequate. This SIP revision was submitted by the...

  15. "Pour nos petits Manitobains," Exposure Package for Grade 2 Basic/Conversational French Program.

    ERIC Educational Resources Information Center

    Manitoba Dept. of Education, Winnipeg. Bureau of French Education.

    This guide outlines the Manitoban Department of Education's conversational French-as-a-second-language curriculum for second grade. The program is designed to introduce young children to the French language and culture through the learning of French sounds, vocabulary, and some sentence patterns. An introductory section explains the program's…

  16. LQAS: User Beware.

    PubMed

    Rhoda, Dale A; Fernandez, Soledad A; Fitch, David J; Lemeshow, Stanley

    2010-02-01

    Researchers around the world are using Lot Quality Assurance Sampling (LQAS) techniques to assess public health parameters and evaluate program outcomes. In this paper, we report that there are actually two methods being called LQAS in the world today, and that one of them is badly flawed. This paper reviews fundamental LQAS design principles, and compares and contrasts the two LQAS methods. We raise four concerns with the simply-written, freely-downloadable training materials associated with the second method. The first method is founded on sound statistical principles and is carefully designed to protect the vulnerable populations that it studies. The language used in the training materials for the second method is simple, but not at all clear, so the second method sounds very much like the first. On close inspection, however, the second method is found to promote study designs that are biased in favor of finding programmatic or intervention success, and therefore biased against the interests of the population being studied. We outline several recommendations, and issue a call for a new high standard of clarity and face validity for those who design, conduct, and report LQAS studies.
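    The LQAS design principles the paper reviews rest on a simple binomial decision rule, whose operating characteristics can be sketched numerically. The sample size and decision threshold below (n = 19, accept if at least 13 of 19 sampled individuals have the attribute) are the textbook illustration widely used in health-coverage surveys, not figures taken from this paper:

    ```python
    from math import comb

    def lqas_accept_prob(p, n=19, threshold=13):
        """Probability that a lot (e.g., a health district) with true
        coverage p is classified 'acceptable', i.e., that at least
        `threshold` of the n sampled individuals have the attribute.
        X ~ Binomial(n, p); returns P(X >= threshold)."""
        return sum(comb(n, x) * p**x * (1 - p)**(n - x)
                   for x in range(threshold, n + 1))

    # Operating characteristics of the classic n = 19, d = 13 design:
    # a district with 80% true coverage is almost always accepted,
    # one with only 50% coverage is almost always rejected.
    high = lqas_accept_prob(0.80)   # ~0.93
    low = lqas_accept_prob(0.50)    # ~0.08
    ```

    Both misclassification risks stay below 10%, which is the protective design property a statistically sound LQAS study is built around; shifting the threshold or redefining the target changes these risks and can bias a design toward finding apparent success.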

  17. Associative cueing of attention through implicit feature-location binding.

    PubMed

    Girardi, Giovanna; Nico, Daniele

    2017-09-01

    In order to assess associative learning between two task-irrelevant features in cueing spatial attention, we devised a task in which participants had to make an identity comparison between two sequential visual stimuli. Unbeknownst to them, the location of the second stimulus could be predicted by the colour of the first or by a concurrent sound. Although unnecessary for performing the identity-matching judgment, the predictive features thus provided an arbitrary association favouring the spatial anticipation of the second stimulus. A significant advantage was found, with faster responses at predicted compared to non-predicted locations. Results clearly demonstrated an associative cueing of attention via a second-order arbitrary feature/location association, but with a substantial discrepancy depending on the sensory modality of the predictive feature. With colour as the predictive feature, significant advantages emerged only after the completion of three blocks of trials. By contrast, sound affected responses from the first block of trials, and significant advantages were manifest from the beginning of the second. The possible mechanisms underlying the associative cueing of attention in both conditions are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Localizing semantic interference from distractor sounds in picture naming: A dual-task study.

    PubMed

    Mädebach, Andreas; Kieseler, Marie-Luise; Jescheniak, Jörg D

    2017-10-13

    In this study we explored the locus of semantic interference in a novel picture-sound interference task in which participants name pictures while ignoring environmental distractor sounds. In a previous study using this task (Mädebach, Wöhner, Kieseler, & Jescheniak, in Journal of Experimental Psychology: Human Perception and Performance, 43, 1629-1646, 2017), we showed that semantically related distractor sounds (e.g., BARKING dog) interfere with a picture-naming response (e.g., "horse") more strongly than unrelated distractor sounds do (e.g., DRUMMING drum). In the experiment reported here, we employed the psychological refractory period (PRP) approach to explore the locus of this effect. We combined a geometric form classification task (square vs. circle; Task 1) with the picture-sound interference task (Task 2). The stimulus onset asynchrony (SOA) between the tasks was systematically varied (0 vs. 500 ms). There were three central findings. First, the semantic interference effect from distractor sounds was replicated. Second, picture naming (in Task 2) was slower with the short than with the long task SOA. Third, both effects were additive-that is, the semantic interference effects were of similar magnitude at both task SOAs. This suggests that the interference arises during response selection or later stages, not during early perceptual processing. This finding corroborates the theory that semantic interference from distractor sounds reflects a competitive selection mechanism in word production.

  19. Misconceptions About Sound Among Engineering Students

    NASA Astrophysics Data System (ADS)

    Pejuan, Arcadi; Bohigas, Xavier; Jaén, Xavier; Periago, Cristina

    2012-12-01

    Our first objective was to detect misconceptions about the microscopic nature of sound among senior university students enrolled in different engineering programmes (from chemistry to telecommunications). We sought to determine how these misconceptions are expressed (qualitative aspect) and, only very secondarily, to gain a general idea of the extent to which they are held (quantitative aspect). Our second objective was to explore other misconceptions about wave aspects of sound. We have also considered the degree of consistency in the model of sound used by each student. Forty students answered a questionnaire including open-ended questions. Based on their free, spontaneous answers, the main results were as follows: a large majority of students answered most of the questions regarding the microscopic model of sound according to the scientifically accepted model; however, only a small number answered consistently. The main model misconception found was the notion that sound is propagated through the travelling of air particles, even in solids. Misconceptions and mental-model inconsistencies tended to depend on the engineering programme in which the student was enrolled. However, students in general were also inconsistent in applying their model of sound to individual sound properties. The main conclusion is that our students have not truly internalised the scientifically accepted model that they have allegedly learnt. This implies a need to design learning activities that take these findings into account in order to be truly efficient.

  20. Sexual dimorphism of sonic apparatus and extreme intersexual variation of sounds in Ophidion rochei (Ophidiidae): first evidence of a tight relationship between morphology and sound characteristics in Ophidiidae

    PubMed Central

    2012-01-01

    Background Many Ophidiidae are active in dark environments and display complex sonic apparatus morphologies. However, sound recordings are scarce and little is known about acoustic communication in this family. This paper focuses on Ophidion rochei, which is known to display an important sexual dimorphism in the swimbladder and anterior skeleton. The aims of this study were to compare the sound-producing morphology and the resulting sounds in juveniles, females and males of O. rochei. Results Males, females, and juveniles possessed different morphotypes. Females and juveniles contrasted with males because of dramatic differences in the morphology of their sonic muscles, swimbladder, supraoccipital crest, and first vertebrae and associated ribs. Further, they lacked the ‘rocker bone’ typically found in males. Sounds from each morphotype were highly divergent. Males generally produced non-harmonic, multiple-pulsed sounds that lasted for several seconds (3.5 ± 1.3 s) with a pulse period of ca. 100 ms. Juvenile and female sounds were recorded for the first time in ophidiids. Female sounds were harmonic, had a shorter pulse period (±3.7 ms), and never exceeded a few dozen milliseconds (18 ± 11 ms). Moreover, unlike male sounds, female sounds did not have alternating long and short pulse periods. Juvenile sounds were weaker but appear to be similar to female sounds. Conclusions Although males and females of O. rochei cannot be distinguished externally, their sonic apparatus and sounds are dramatically different. This difference is likely due to their nocturnal habits, which may have favored the evolution of internal secondary sexual characters that help to distinguish males from females and that could facilitate mate choice by females. Moreover, the comparison of different morphotypes in this study shows that these morphological differences result from a peramorphosis that takes place during the development of the gonads. PMID:23217241

  1. Where is the level of neutral buoyancy for deep convection?

    NASA Astrophysics Data System (ADS)

    Takahashi, Hanii; Luo, Zhengzhao

    2012-08-01

    This study revisits an old concept in meteorology, the level of neutral buoyancy (LNB). The classic definition of LNB is derived from parcel theory and can be estimated from the ambient sounding (LNB_sounding) without having to observe any actual convective cloud development. In reality, however, convection interacts with the environment in complicated ways; it will eventually find its own effective LNB and manifest it through detraining masses and developing anvils (LNB_observation). This study conducts a near-global survey of LNB_observation for tropical deep convection using CloudSat data and makes a comparison with the corresponding LNB_sounding. The principal findings are as follows: First, although LNB_sounding provides a reasonable upper bound for convective development, the correlation between LNB_sounding and LNB_observation is low, suggesting that the ambient sounding contains limited information for accurately predicting the actual LNB. Second, the maximum mass outflow is located more than 3 km below LNB_sounding. Hence, from a convective transport perspective, LNB_sounding significantly overestimates the “destination” height of the detrained mass. Third, LNB_observation is consistently higher over land than over ocean, although LNB_sounding is similar between land and ocean. This difference is likely related to the contrasts in convective strength and environment between land and ocean. Finally, we estimate the bulk entrainment rates associated with the observed deep convection, which can serve as an observational basis for adjusting GCM cumulus parameterizations.

  2. A non-local model of fractional heat conduction in rigid bodies

    NASA Astrophysics Data System (ADS)

    Borino, G.; di Paola, M.; Zingales, M.

    2011-03-01

    In recent years several applications of fractional differential calculus have been proposed in physics and chemistry as well as in engineering. Fractional-order integrals and derivatives extend the well-known definitions of integer-order primitives and derivatives of the ordinary differential calculus to real-order operators. Engineering applications of fractional operators range from viscoelastic models and stochastic dynamics to thermoelasticity. In the latter field, one of the main attractions of fractional operators is their capability to interpolate between the heat flux and its time rate of change, which is related to the well-known second sound effect. In other recent studies, a fractional non-local thermoelastic model has been proposed as a particular case of the non-local, integral thermoelasticity introduced in the mid-seventies. In this study the authors introduce a different non-local model of extended irreversible thermodynamics to account for the second sound effect. A long-range heat flux is defined that involves the integral part of the spatial Marchaud fractional derivative of the temperature field, whereas the second sound effect is accounted for by introducing the time derivative of the heat flux in the transport equation. It is shown that the proposed model does not suffer from the pathological problems of non-homogeneous boundary conditions. Moreover, the proposed model coincides with the Povstenko fractional models in unbounded domains.
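    Adding a time derivative of the heat flux to the transport equation, as described above, yields the Maxwell–Cattaneo form, which turns diffusive heat transport into a damped wave (second sound). A minimal 1-D finite-difference sketch of that classical limiting case, with illustrative parameters not taken from the paper:

    ```python
    import numpy as np

    # Maxwell-Cattaneo heat conduction in 1-D (illustrative parameters):
    #   tau * dq/dt + q = -k * dT/dx      (relaxed Fourier law)
    #   rho_c * dT/dt   = -dq/dx          (energy balance)
    # The relaxation time tau gives heat a finite propagation speed
    # c = sqrt(k / (rho_c * tau)) instead of instantaneous diffusion.

    def cattaneo_step(T, q, dx, dt, k=1.0, rho_c=1.0, tau=0.05):
        """Advance temperature T and heat flux q by one explicit Euler step
        on a periodic grid using central differences."""
        dTdx = (np.roll(T, -1) - np.roll(T, 1)) / (2 * dx)
        dqdx = (np.roll(q, -1) - np.roll(q, 1)) / (2 * dx)
        q = q + dt * (-(q + k * dTdx) / tau)
        T = T + dt * (-dqdx / rho_c)
        return T, q

    # A localized heat pulse: with tau > 0 it splits into two damped
    # wave fronts rather than spreading diffusively.
    n, dx, dt = 400, 0.01, 1e-4
    x = np.arange(n) * dx
    T = np.exp(-((x - 2.0) / 0.1) ** 2)
    q = np.zeros(n)
    for _ in range(2000):
        T, q = cattaneo_step(T, q, dx, dt)
    ```

    On the periodic domain the total heat content is conserved while the pulse amplitude decays, the wave-like signature of second sound; taking tau toward zero recovers ordinary Fourier diffusion.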

  3. Laser microphone

    DOEpatents

    Veligdan, James T.

    2000-11-14

    A microphone for detecting sound pressure waves includes a laser resonator having a laser gain material aligned coaxially between a pair of first and second mirrors for producing a laser beam. A reference cell is disposed between the laser material and one of the mirrors for transmitting a reference portion of the laser beam between the mirrors. A sensing cell is disposed between the laser material and one of the mirrors, and is laterally displaced from the reference cell for transmitting a signal portion of the laser beam, with the sensing cell being open for receiving the sound waves. A photodetector is disposed in optical communication with the first mirror for receiving the laser beam, and produces an acoustic signal therefrom for the sound waves.

  4. Popcorn: critical temperature, jump and sound

    PubMed Central

    Virot, Emmanuel; Ponomarenko, Alexandre

    2015-01-01

    Popcorn bursts open, jumps and emits a ‘pop’ sound within a few hundredths of a second. The physical origin of these three observations remains unclear in the literature. We show that the critical temperature of 180°C, at which almost all popcorn pops, is consistent with an elementary pressure-vessel scenario. We observe that popcorn jumps with a ‘leg’ of starch which is compressed on the ground. As a result, popcorn is midway between two categories of moving systems: explosive plants using fracture mechanisms and jumping animals using muscles. By synchronizing video recordings with acoustic recordings, we propose that the familiar ‘pop’ sound of the popcorn is caused by the release of water vapour. PMID:25673298

  5. Experimental validation of boundary element methods for noise prediction

    NASA Technical Reports Server (NTRS)

    Seybert, A. F.; Oswald, Fred B.

    1992-01-01

    Experimental validation of methods to predict radiated noise is presented. A combined finite element and boundary element model was used to predict the vibration and noise of a rectangular box excited by a mechanical shaker. The predicted noise was compared to sound power measured by the acoustic intensity method. Inaccuracies in the finite element model shifted the resonance frequencies by about 5 percent. The predicted and measured sound power levels agree within about 2.5 dB. In a second experiment, measured vibration data was used with a boundary element model to predict noise radiation from the top of an operating gearbox. The predicted and measured sound power for the gearbox agree within about 3 dB.

  6. Differences in phonetic discrimination stem from differences in psychoacoustic abilities in learning the sounds of a second language: Evidence from ERP research

    PubMed Central

    Mo, Lei

    2017-01-01

    The scientific community has been divided as to the origin of individual differences in perceiving the sounds of a second language (L2). There are two alternative explanations: a general psychoacoustic origin vs. a speech-specific one. A previous study showed that such individual variability is linked to the perceivers’ speech-specific capabilities, rather than the perceivers’ psychoacoustic abilities. However, we assume that the selection of participants and the parameters of the sound stimuli might not have been appropriate. Therefore, we adjusted the sound stimuli and recorded event-related potentials (ERPs) from two groups of early, proficient Cantonese (L1)-Mandarin (L2) bilinguals who differed in their mastery of the Mandarin (L2) phonetic contrast /in-ing/, to explore whether the individual differences in perceiving L2 stem from participants’ ability to discriminate various pure tones (frequency, duration and pattern). To precisely measure the participants’ acoustic discrimination, mismatch negativity (MMN) elicited by the oddball paradigm was recorded in the experiment. The results showed that significant differences between good perceivers (GPs) and poor perceivers (PPs) were found in the three general acoustic conditions (frequency, duration and pattern), and the MMN amplitude for GPs was significantly larger than for PPs. Therefore, our results support a general psychoacoustic origin of individual variability in L2 phonetic mastery. PMID:29176886

  7. Differences in phonetic discrimination stem from differences in psychoacoustic abilities in learning the sounds of a second language: Evidence from ERP research.

    PubMed

    Lin, Yi; Fan, Ruolin; Mo, Lei

    2017-01-01

    The scientific community has been divided as to the origin of individual differences in perceiving the sounds of a second language (L2). There are two alternative explanations: a general psychoacoustic origin vs. a speech-specific one. A previous study showed that such individual variability is linked to the perceivers' speech-specific capabilities, rather than the perceivers' psychoacoustic abilities. However, we assume that the selection of participants and the parameters of the sound stimuli might not have been appropriate. Therefore, we adjusted the sound stimuli and recorded event-related potentials (ERPs) from two groups of early, proficient Cantonese (L1)-Mandarin (L2) bilinguals who differed in their mastery of the Mandarin (L2) phonetic contrast /in-ing/, to explore whether the individual differences in perceiving L2 stem from participants' ability to discriminate various pure tones (frequency, duration and pattern). To precisely measure the participants' acoustic discrimination, mismatch negativity (MMN) elicited by the oddball paradigm was recorded in the experiment. The results showed that significant differences between good perceivers (GPs) and poor perceivers (PPs) were found in the three general acoustic conditions (frequency, duration and pattern), and the MMN amplitude for GPs was significantly larger than for PPs. Therefore, our results support a general psychoacoustic origin of individual variability in L2 phonetic mastery.

  8. The sound of arousal in music is context-dependent

    PubMed Central

    Blumstein, Daniel T.; Bryant, Gregory A.; Kaye, Peter

    2012-01-01

    Humans, and many non-human animals, produce and respond to harsh, unpredictable, nonlinear sounds when alarmed, possibly because these are produced when acoustic production systems (vocal cords and syrinxes) are overblown in stressful, dangerous situations. Humans can simulate nonlinearities in music and soundtracks through the use of technological manipulations. Recent work found that film soundtracks from different genres differentially contain such sounds. We designed two experiments to determine specifically how simulated nonlinearities in soundtracks influence perceptions of arousal and valence. Subjects were presented with emotionally neutral musical exemplars that had neither noise nor abrupt frequency transitions, or versions of these musical exemplars that had noise or abrupt frequency upshifts or downshifts experimentally added. In a second experiment, these acoustic exemplars were paired with benign videos. Judgements of both arousal and valence were altered by the addition of these simulated nonlinearities in the first, music-only, experiment. In the second, multi-modal, experiment, valence (but not arousal) decreased with the addition of noise or frequency downshifts. Thus, the presence of a video image suppressed the ability of simulated nonlinearities to modify arousal. This is the first study examining how nonlinear simulations in music affect emotional judgements. These results demonstrate that the perception of potentially fearful or arousing sounds is influenced by the perceptual context and that the addition of a visual modality can antagonistically suppress the response to an acoustic stimulus. PMID:22696288

  9. Mommy is only happy! Dutch mothers' realisation of speech sounds in infant-directed speech expresses emotion, not didactic intent.

    PubMed

    Benders, Titia

    2013-12-01

    Exaggeration of the vowel space in infant-directed speech (IDS) is well documented for English, but not consistently replicated in other languages or for other speech-sound contrasts. A second attested, but less discussed, pattern of change in IDS is an overall rise of the formant frequencies, which may reflect an affective speaking style. The present study investigates longitudinally how Dutch mothers change their corner vowels, voiceless fricatives, and pitch when speaking to their infant at 11 and 15 months of age. In comparison to adult-directed speech (ADS), Dutch IDS has a smaller vowel space, higher second and third formant frequencies in the vowels, and a higher spectral frequency in the fricatives. The formants of the vowels and spectral frequency of the fricatives are raised more strongly for infants at 11 than at 15 months, while the pitch is more extreme in IDS to 15-month-olds. These results show that enhanced positive affect is the main factor influencing Dutch mothers' realisation of speech sounds in IDS, especially to younger infants. This study provides evidence that mothers' expression of emotion in IDS can influence the realisation of speech sounds, and that the loss or gain of speech clarity may be secondary effects of affect. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. The sound of arousal in music is context-dependent.

    PubMed

    Blumstein, Daniel T; Bryant, Gregory A; Kaye, Peter

    2012-10-23

    Humans, and many non-human animals, produce and respond to harsh, unpredictable, nonlinear sounds when alarmed, possibly because these are produced when acoustic production systems (vocal cords and syrinxes) are overblown in stressful, dangerous situations. Humans can simulate nonlinearities in music and soundtracks through the use of technological manipulations. Recent work found that film soundtracks from different genres differentially contain such sounds. We designed two experiments to determine specifically how simulated nonlinearities in soundtracks influence perceptions of arousal and valence. Subjects were presented with emotionally neutral musical exemplars that had neither noise nor abrupt frequency transitions, or versions of these musical exemplars that had noise or abrupt frequency upshifts or downshifts experimentally added. In a second experiment, these acoustic exemplars were paired with benign videos. Judgements of both arousal and valence were altered by the addition of these simulated nonlinearities in the first, music-only, experiment. In the second, multi-modal, experiment, valence (but not arousal) decreased with the addition of noise or frequency downshifts. Thus, the presence of a video image suppressed the ability of simulated nonlinearities to modify arousal. This is the first study examining how nonlinear simulations in music affect emotional judgements. These results demonstrate that the perception of potentially fearful or arousing sounds is influenced by the perceptual context and that the addition of a visual modality can antagonistically suppress the response to an acoustic stimulus.

  11. Developmental Changes in Locating Voice and Sound in Space

    PubMed Central

    Kezuka, Emiko; Amano, Sachiko; Reddy, Vasudevi

    2017-01-01

    We know little about how infants locate voice and sound in a complex multi-modal space. Using a naturalistic laboratory experiment the present study tested 35 infants at 3 ages: 4 months (15 infants), 5 months (12 infants), and 7 months (8 infants). While they were engaged frontally with one experimenter, infants were presented with (a) a second experimenter’s voice and (b) castanet sounds from three different locations (left, right, and behind). There were clear increases with age in the successful localization of sounds from all directions, and a decrease in the number of repetitions required for success. Nonetheless even at 4 months two-thirds of the infants attempted to search for the voice or sound. At all ages localizing sounds from behind was more difficult and was clearly present only at 7 months. Perseverative errors (looking at the last location) were present at all ages and appeared to be task specific (only present in the 7-month-olds for the behind location). Spontaneous attention shifts by the infants between the two experimenters, evident at 7 months, suggest early evidence for infant initiation of triadic attentional engagements. There was no advantage found for voice over castanet sounds in this study. Auditory localization is a complex and contextual process emerging gradually in the first half of the first year. PMID:28979220

  12. Correspondence between sound propagation in discrete and continuous random media with application to forest acoustics.

    PubMed

    Ostashev, Vladimir E; Wilson, D Keith; Muhlestein, Michael B; Attenborough, Keith

    2018-02-01

    Although sound propagation in a forest is important in several applications, there are currently no rigorous yet computationally tractable prediction methods. Due to the complexity of sound scattering in a forest, it is natural to formulate the problem stochastically. In this paper, it is demonstrated that the equations for the statistical moments of the sound field propagating in a forest have the same form as those for sound propagation in a turbulent atmosphere if the scattering properties of the two media are expressed in terms of the differential scattering and total cross sections. Using the existing theories for sound propagation in a turbulent atmosphere, this analogy enables the derivation of several results for predicting forest acoustics. In particular, the second-moment parabolic equation is formulated for the spatial correlation function of the sound field propagating above an impedance ground in a forest with micrometeorology. Effective numerical techniques for solving this equation have been developed in atmospheric acoustics. In another example, formulas are obtained that describe the effect of a forest on the interference between the direct and ground-reflected waves. The formulated correspondence between wave propagation in discrete and continuous random media can also be used in other fields of physics.

  13. Basilar membrane vibration is not involved in the reverse propagation of otoacoustic emissions

    PubMed Central

    He, W.; Ren, T.

    2013-01-01

    To understand how the inner ear-generated sound, i.e., otoacoustic emission, exits the cochlea, we created a sound source electrically in the second turn and measured basilar membrane vibrations at two longitudinal locations in the first turn in living gerbil cochleae using a laser interferometer. For a given longitudinal location, electrically evoked basilar membrane vibrations showed the same tuning and phase lag as those induced by sounds. For a given frequency, the phase measured at a basal location led that at a more apical location, indicating that either an electrical or an acoustical stimulus evoked a forward travelling wave. Under postmortem conditions, the electrically evoked emissions showed no significant change while the basilar membrane vibration nearly disappeared. The current data indicate that basilar membrane vibration was not involved in the backward propagation of otoacoustic emissions and that sounds exit the cochlea probably through alternative media, such as cochlear fluids. PMID:23695199

  14. Active/Passive Control of Sound Radiation from Panels using Constrained Layer Damping

    NASA Technical Reports Server (NTRS)

    Gibbs, Gary P.; Cabell, Randolph H.

    2003-01-01

    A hybrid passive/active noise control system utilizing constrained layer damping and model predictive feedback control is presented. This system is used to control the sound radiation of panels due to broadband disturbances. To facilitate the hybrid system design, a methodology for placement of constrained layer damping which targets selected modes based on their relative radiated sound power is developed. The placement methodology is utilized to determine two constrained layer damping configurations for experimental evaluation of a hybrid system. The first configuration targets the (4,1) panel mode which is not controllable by the piezoelectric control actuator, and the (2,3) and (5,2) panel modes. The second configuration targets the (1,1) and (3,1) modes. The experimental results demonstrate the improved reduction of radiated sound power using the hybrid passive/active control system as compared to the active control system alone.

  15. New Research on MEMS Acoustic Vector Sensors Used in Pipeline Ground Markers

    PubMed Central

    Song, Xiaopeng; Jian, Zeming; Zhang, Guojun; Liu, Mengran; Guo, Nan; Zhang, Wendong

    2015-01-01

    To meet the demands of current pipeline detection systems, above-ground marker (AGM) systems based on the sound detection principle have become a major development trend in pipeline technology. A novel MEMS acoustic vector sensor for AGM systems, which has the advantages of high sensitivity, high signal-to-noise ratio (SNR), and good low-frequency performance, is put forward. Firstly, it is shown that the frequency of the detected sound signal is concentrated in a lower frequency range, and that sound attenuation is relatively low in soil. Secondly, the MEMS acoustic vector sensor structure and basic principles are introduced. Finally, experimental tests are conducted and the results show that in the range of 0°∼90°, when r = 5 m, the proposed MEMS acoustic vector sensor can effectively detect sound signals in soil. The measurement errors of all angles are less than 5°. PMID:25609046

  16. Third order harmonic imaging for biological tissues using three phase-coded pulses.

    PubMed

    Ma, Qingyu; Gong, Xiufen; Zhang, Dong

    2006-12-22

    Compared to fundamental and second harmonic imaging, third harmonic imaging shows significant improvements in image quality due to its better resolution, but it is degraded by lower sound pressure and signal-to-noise ratio (SNR). In this study, a phase-coded pulse technique is proposed to selectively enhance the sound pressure of the third harmonic by 9.5 dB, whereas the fundamental and the second harmonic components are efficiently suppressed and SNR is increased by 4.7 dB. Based on the solution of the KZK nonlinear equation, the axial and lateral beam profiles of harmonics radiated from a planar piston transducer were theoretically simulated and experimentally examined. Finally, third harmonic images using this technique were obtained for several biological tissues and compared with the images obtained by fundamental and second harmonic imaging. Results demonstrate that the phase-coded pulse technique yields a dramatically cleaner and sharper contrast image.
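    The selective enhancement can be illustrated with a toy model (not the authors' implementation; the harmonic amplitudes below are invented for illustration). An echo transmitted with phase offset phi returns its n-th harmonic carrying phase n·phi, so summing the echoes of three pulses phased 0°, 120°, and 240° apart cancels every harmonic whose order is not a multiple of three and triples the third:

    ```python
    import numpy as np

    # Toy receive model: the n-th harmonic of an echo carries n times the
    # transmit phase offset.  Harmonic amplitudes are illustrative only.
    fs, f0, n_samp = 100_000, 1_000, 1000   # sample rate, fundamental, samples
    t = np.arange(n_samp) / fs
    amps = {1: 1.0, 2: 0.3, 3: 0.1}

    def echo(phi):
        """Received signal for a transmit pulse with phase offset phi."""
        return sum(a * np.cos(2 * np.pi * n * f0 * t + n * phi)
                   for n, a in amps.items())

    # Sum the echoes of three pulses phased 0, 2*pi/3, 4*pi/3 apart:
    # harmonics with n % 3 != 0 cancel, n % 3 == 0 add coherently.
    summed = echo(0.0) + echo(2 * np.pi / 3) + echo(4 * np.pi / 3)

    spectrum = np.abs(np.fft.rfft(summed)) / n_samp
    freqs = np.fft.rfftfreq(n_samp, 1 / fs)

    def level(f):
        """Spectral amplitude at frequency f (half the cosine amplitude)."""
        return spectrum[np.argmin(np.abs(freqs - f))]
    ```

    In this idealized model the fundamental and second harmonic vanish exactly while the third harmonic amplitude triples; in practice the gain is limited by motion between pulses and imperfect phase control, hence the measured 9.5 dB figure.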

  17. An Approach for Embedding Critical Thinking in Second Language Paragraph Writing

    ERIC Educational Resources Information Center

    Chason, Lisa; Loyet, Dianne; Sorenson, Luann; Stoops, Anastasia

    2017-01-01

    Writing textbooks for English language learners frequently teach a paragraph pattern that is limited to topic sentence, support, and concluding sentence. Although beginning second language (L2) writers benefit from having a structured way to organize their ideas, as they advance, this type of writing can sound trite and uncritical. To provide a…

  18. 133. View of former oil switch breaker room (on second ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    133. View of former oil switch breaker room (on second floor, north of the control room), looking south. The oil switch breakers were replaced with vacuum switches, along the wall to the right. Photo by Jet Lowe, HAER, 1989. - Puget Sound Power & Light Company, White River Hydroelectric Project, 600 North River Avenue, Dieringer, Pierce County, WA

  19. The Effect of De-Contextualized Multimedia Software on Taiwanese College Level Students' English Vocabulary Development

    ERIC Educational Resources Information Center

    Yan, Yaw-liang

    2010-01-01

    Computer technology has been applied widely as an educational tool in second language learning for a long time. There have been many studies discussing the application of computer technology to different aspects in second language learning. However, the learning effect of both de-contextualized multimedia software and sound gloss on second…

  20. The Ecology and Silviculture of Oaks, Second Edition: A new book

    Treesearch

    Paul S. Johnson; Stephen R. Shifley; Robert Rogers

    2011-01-01

    The second edition of The Ecology and Silviculture of Oaks was recently published (Johnson and others 2009). The approach of the book is fundamentally silvicultural, but the content is based on the premise that effective and environmentally sound management and protection of oak forests and associated landscapes must be grounded in ecological...

  1. Cordwood volume tables for second-growth Douglas-fir.

    Treesearch

    George R. Staebler; Elmer W. Shaw

    1949-01-01

    The increasing harvest of second-growth Douglas-fir for pulpwood makes cordwood volume tables, based on the conventional measures of tree diameter and height, useful tools for the pulpwood operators and forest managers seeking to determine the merchantable contents of their stands. Recent investigations in the Puget Sound vicinity into the cubic-foot content of a cord...

  2. Short-term Second Language and Music Training Induces Lasting Functional Brain Changes in Early Childhood

    PubMed Central

    Moreno, Sylvain; Lee, Yunjo

    2014-01-01

    Immediate and lasting effects of music or second-language training were examined in early childhood using event-related potentials (ERPs). ERPs were recorded for French vowels and musical notes in a passive oddball paradigm in 36 four- to six-year-old children who received either French or music training. Following training, both groups showed enhanced late discriminative negativity (LDN) in their trained condition (music group–musical notes; French group–French vowels) and reduced LDN in the untrained condition. These changes reflect improved processing of relevant (trained) sounds, and an increased capacity to suppress irrelevant (untrained) sounds. After one year, training-induced brain changes persisted and new hemispheric changes appeared. Such results provide evidence for the lasting benefit of early intervention in young children. PMID:25346534

  3. Reconstruction of spatial distributions of sound velocity and absorption in soft biological tissues using model ultrasonic tomographic data

    NASA Astrophysics Data System (ADS)

    Burov, V. A.; Zotov, D. I.; Rumyantseva, O. D.

    2014-07-01

    A two-step algorithm is used to reconstruct the spatial distributions of the acoustic characteristics of soft biological tissues: the sound velocity and the absorption coefficient. Knowing these distributions is important for early detection of benign and malignant neoplasms in biological tissues, primarily in the breast. At the first step, large-scale distributions are estimated; at the second step, they are refined with high resolution. Results of reconstruction based on model initial data are presented. The fundamental need to reconstruct the large-scale distributions first and then take them into account at the second step is illustrated. The use of CUDA technology for processing makes it possible to obtain final images of 1024 × 1024 samples in only a few minutes.

  4. Exposure and materiality of the secondary room and its impact on the impulse response of coupled-volume concert halls

    NASA Astrophysics Data System (ADS)

    Ermann, Michael; Johnson, Marty

    2005-06-01

    How does sound decay when one room is partially exposed to another (acoustically coupled)? More specifically, this research aims to quantify how operational and design decisions impact sound fields in the design of concert halls with acoustical coupling. By adding a second room to a concert hall, and designing doors to control the sonic transparency between the two rooms, designers can create a new, coupled acoustic. Concert halls use coupling to achieve a variable, longer, and distinct reverberant quality for their musicians and listeners. For this study a coupled-volume shoebox concert hall is conceived with a fixed geometric volume, form, and primary-room sound absorption. Aperture size and secondary-room sound absorption levels are established as variables. Statistical analysis of sound decay in this simulated hall suggests a highly sensitive relationship between the double-sloped condition and (1) architectural composition, as defined by the aperture size exposing the chamber and (2) materiality, as defined by the sound absorptance in the coupled volume. The theoretical, mathematical predictions are compared with coupled-volume concert hall field measurements and guidelines are suggested for future designs of coupled-volume concert halls.

  5. Spatial Cues Provided by Sound Improve Postural Stabilization: Evidence of a Spatial Auditory Map?

    PubMed Central

    Gandemer, Lennie; Parseihian, Gaetan; Kronland-Martinet, Richard; Bourdin, Christophe

    2017-01-01

    It has long been suggested that sound plays a role in the postural control process. Few studies, however, have explored sound and posture interactions. The present paper focuses on the specific impact of audition on posture, seeking to determine the attributes of sound that may be useful for postural purposes. We investigated the postural sway of young, healthy blindfolded subjects in two experiments involving different static auditory environments. In the first experiment, we compared the effect on sway of a simple environment built from three static sound sources in two different rooms: a normal room vs. an anechoic room. In the second experiment, the same auditory environment was enriched in various ways, including an ambisonics synthesis of an immersive environment, and subjects stood on two different surfaces: a foam vs. a normal surface. The results of both experiments suggest that the spatial cues provided by sound can be used to improve postural stability. The richer the auditory environment, the better this stabilization. We interpret these results by invoking the “spatial hearing map” theory: listeners build their own mental representation of their surrounding environment, which provides them with spatial landmarks that help them to better stabilize. PMID:28694770

  6. Fatigue sensation induced by the sounds associated with mental fatigue and its related neural activities: revealed by magnetoencephalography.

    PubMed

    Ishii, Akira; Tanaka, Masaaki; Iwamae, Masayoshi; Kim, Chongsoo; Yamano, Emi; Watanabe, Yasuyoshi

    2013-06-13

    It has been proposed that an inappropriately conditioned fatigue sensation could be one cause of chronic fatigue. Although classical conditioning of the fatigue sensation has been reported in rats, there have been no reports in humans. Our aim was to examine whether classical conditioning of the mental fatigue sensation can take place in humans and to clarify the neural mechanisms of fatigue sensation using magnetoencephalography (MEG). Ten and nine healthy volunteers participated in a conditioning and a control experiment, respectively. In the conditioning experiment, we used metronome sounds as conditioned stimuli and two-back task trials as unconditioned stimuli to cause fatigue sensation. Participants underwent MEG measurement while listening to the metronome sounds for 6 min. Thereafter, fatigue-inducing mental task trials (two-back task trials), which are demanding working-memory task trials, were performed for 60 min; metronome sounds were started 30 min after the start of the task trials (conditioning session). The next day, neural activities while listening to the metronome for 6 min were measured. Levels of fatigue sensation were also assessed using a visual analogue scale. In the control experiment, participants listened to the metronome on the first and second days, but they did not perform the conditioning session. MEG was not recorded in the control experiment. The level of fatigue sensation caused by listening to the metronome on the second day was significantly higher relative to that on the first day only when participants performed the conditioning session on the first day. Equivalent current dipoles (ECDs) in the insular cortex, with mean latencies of approximately 190 ms, were observed in six of eight participants after the conditioning session, although ECDs were not identified in any participant before the conditioning session. We demonstrated that the metronome sounds can cause mental fatigue sensation as a result of repeated pairings of the sounds with mental fatigue and that the insular cortex is involved in the neural substrates of this phenomenon.

  7. Hearing on the Reauthorization of the Higher Education Act of 1965; Sallie Mae--Safety and Soundness. Hearing before the Subcommittee on Postsecondary Education of the Committee on Education and Labor. House of Representatives, One Hundred Second Congress, First Session.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Subcommittee on Postsecondary Education.

    As part of a series of hearings on the reauthorization of the Higher Education Act of 1965, testimony was heard on the safety and soundness of the Student Loan Marketing Association (Sallie Mae). Witnesses discussed many issues surrounding financial oversight of federal agencies and financial risk to the taxpayer through the potential failure of…

  8. Noise-induced hearing impairment and handicap

    NASA Technical Reports Server (NTRS)

    1984-01-01

    A permanent, noise-induced hearing loss has a doubly harmful effect on speech communication. First, the elevation of the threshold of hearing means that many speech sounds are too weak to be heard; second, very intense speech sounds may appear distorted. The whole question of the impact of noise-induced hearing loss on the impairments and handicaps experienced by people with such hearing losses was somewhat controversial, partly because of the economic aspects of related practical noise control and workmen's compensation.

  9. Cosmic X-ray physics

    NASA Technical Reports Server (NTRS)

    Mccammon, D.; Cox, D. P.; Kraushaar, W. L.; Sanders, W. T.

    1987-01-01

    The soft X-ray sky survey data are combined with the results from the UXT sounding rocket payload. Very strong constraints can then be placed on models of the origin of the soft diffuse background. Additional observational constraints force more complicated and realistic models. Significant progress was made in the extraction of more detailed spectral information from the UXT data set. Work was begun on a second generation proportional counter response model. The first flight of the sounding rocket will have a collimator to study the diffuse background.

  10. Ionospheric Results with Sounding Rockets and the Explorer VIII Satellite (1960 )

    NASA Technical Reports Server (NTRS)

    Bourdeau, R. E.

    1961-01-01

    A review is made of ionospheric data reported since the IGY from rocket and satellite-borne ionospheric experiments. These include rocket results on electron density (RF impedance probe), D-region conductivity (Gerdien condenser), and electron temperature (Langmuir probe). Also included are data in the 1000 kilometer region on ion concentration (ion current monitor) and electron temperature from the Explorer VIII Satellite (1960 xi). The review includes suggestions for second generation experiments and combinations thereof particularly suited for small sounding rockets.

  11. Effect of Intense Sound Waves on a Stationary Gas Flame

    NASA Technical Reports Server (NTRS)

    Hahnemann, H; Ehret, L

    1950-01-01

    Intense sound waves with a resonant frequency of 5000 cycles per second were imposed on a stationary propane-air flame issuing from a nozzle. In addition to a slight increase of the flame velocity, a fundamental change both in the shape of the burning zone and in the flow pattern could be observed. An attempt is made to explain the origin of the variations in the flame configuration on the basis of transition at the nozzle from jet flow to potential flow.

  12. Hydrologic and salinity characteristics of Currituck Sound and selected tributaries in North Carolina and Virginia, 1998–99

    USGS Publications Warehouse

    Caldwell, William Scott

    2001-01-01

    Data collected at three sites in Currituck Sound and three tributary sites between March 1, 1998, and February 28, 1999, were used to describe hydrologic and salinity characteristics of Currituck Sound. Water levels and salinity were measured at West Neck Creek at Pungo and at Albemarle and Chesapeake Canal near Princess Anne in Virginia, and at Coinjock, Bell Island, Poplar Branch, and Point Harbor in North Carolina. Flow velocity also was measured at the West Neck Creek and Coinjock sites. The maximum water-level range during the study period was observed near the lower midpoint of Currituck Sound at Poplar Branch. Generally, water levels at all sites were highest during March and April, and lowest during November and December. Winds from the south typically produced higher water levels in Currituck Sound, whereas winds from the north typically produced lower water levels. Although wind over Currituck Sound is associated with fluctuations in water level within the sound, other mechanisms, such as the effects of wind on Albemarle Sound and on other water bodies south of Currituck Sound, likely affect low-frequency water-level variations in Currituck Sound. Flow in West Neck Creek ranged from 313 cubic feet per second to the south to -227 cubic feet per second to the north (negative indicates flow to the north). Flow at the Coinjock site ranged from 15,300 cubic feet per second to the south to -11,700 cubic feet per second to the north. Flow was to the south 68 percent of the time at the West Neck Creek site and 44 percent of the time at the Coinjock site. Daily flow volumes were calculated as the sum of the instantaneous flow volumes. The West Neck Creek site had a cumulative flow volume to the south of 7.69 × 10^8 cubic feet for the period March 1, 1998, to February 28, 1999; the Coinjock site had a cumulative flow volume to the north of -1.33 × 10^10 cubic feet for the same study period. Wind direction and speed influence flow at the West Neck Creek and Coinjock sites, whereas precipitation alone has little effect on flow at these sites. Flow at the West Neck Creek site is semidiurnal but is affected by wind direction and speed. Flow to the south (positive flow) was associated with wind speeds averaging more than 15 miles per hour from the northwest; flow to the north (negative flow) was associated with wind speeds averaging more than 15 miles per hour from the south and southwest. Flow at the Coinjock site reacted in a more unpredictable manner and was not affected by winds or tides in the same manner as West Neck Creek, with few tidal characteristics evident in the record. Throughout the study period, maximum salinity exceeded 3.5 parts per thousand at all sites; however, mean and median salinities were below 3.5 parts per thousand at all sites except the Point Harbor site (3.6 and 4.2 parts per thousand, respectively) at the southern end of the sound. Salinities were less than or equal to 3.5 parts per thousand nearly 100 percent of the time at the Bell Island and Poplar Branch sites in Currituck Sound and about 86 percent of the time at the Albemarle and Chesapeake Canal site north of the sound. Salinity at the West Neck Creek and Coinjock sites was less than or equal to 3.5 parts per thousand about 82 percent of the time. During this study, prevailing winds from the north were associated with flow to the south and tended to increase salinity at the West Neck Creek and the Albemarle and Chesapeake Canal sites. Conversely, these same winds tended to decrease salinity at the other sites. Prevailing winds from the south and southwest were associated with flow to the north and tended to increase salinity at the Poplar Branch and Point Harbor sites in Currituck Sound and at the Coinjock site, but these same winds tended to decrease salinity at the West Neck Creek and the Albemarle and Chesapeake Canal sites. The greatest variations in salinity were observed at the northernmost site, West Neck Creek, and the southernmost site, Point Harbor. The least variation in salinity was observed at the upper midpoint of the sound at the Bell Island site. Daily salt loads were computed for 364 days at the West Neck Creek site and 348 days at the Coinjock site from March 1, 1998, to February 28, 1999. The cumulative salt load at West Neck Creek was 28,170 tons to the south, and the cumulative salt load at the Coinjock site was -872,750 tons to the north. The cumulative salt load passing the West Neck Creek site during the study period would be 0.01 part per thousand if uniformly distributed throughout the sound (approximately 489,600 acre-feet in North Carolina). If the cumulative salt load passing the Coinjock site were uniformly distributed throughout the sound, the salinity in the sound would be 0.32 part per thousand. The net transport at the West Neck Creek and Coinjock sites indicates inflow of salt into the sound. A constant inflow of freshwater from tributaries and ground-water sources also occurs; however, the net flow volumes from these freshwater sources are not documented, and the significance of these freshwater inflows toward diluting the net import of salt into the sound is beyond the scope of this study.
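    The daily-volume and salt-load bookkeeping described in this abstract can be sketched as follows. The 15-minute sampling interval, the function names, and the use of a freshwater density constant are illustrative assumptions, not details from the report:

```python
# Hypothetical sketch: summing instantaneous flows into daily volumes and
# converting a flow volume plus salinity into a salt load in tons.
WATER_DENSITY_LB_PER_FT3 = 62.4   # assumed freshwater density
LB_PER_TON = 2000.0
INTERVAL_S = 15 * 60              # assumed 15-minute measurement interval

def daily_flow_volume(flows_cfs):
    """Sum instantaneous flows (ft^3/s) into a daily volume (ft^3).
    Positive = southward, negative = northward, as in the study."""
    return sum(q * INTERVAL_S for q in flows_cfs)

def salt_load_tons(volume_ft3, salinity_ppt):
    """Convert a flow volume and a salinity in parts per thousand
    (by mass) into a salt load in tons."""
    water_mass_lb = volume_ft3 * WATER_DENSITY_LB_PER_FT3
    return water_mass_lb * (salinity_ppt / 1000.0) / LB_PER_TON
```

A cumulative load is then just the sum of daily loads, with the sign convention carrying the direction of transport.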

  13. Word learning in adults with second-language experience: effects of phonological and referent familiarity.

    PubMed

    Kaushanskaya, Margarita; Yoo, Jeewon; Van Hecke, Stephanie

    2013-04-01

    The goal of this research was to examine whether phonological familiarity exerts different effects on novel word learning for familiar versus unfamiliar referents and whether successful word learning is associated with increased second-language experience. Eighty-one adult native English speakers with various levels of Spanish knowledge learned phonologically familiar novel words (constructed using English sounds) or phonologically unfamiliar novel words (constructed using non-English and non-Spanish sounds) in association with either familiar or unfamiliar referents. Retention was tested via a forced-choice recognition task. A median-split procedure identified high-ability and low-ability word learners in each condition, and the two groups were compared on measures of second-language experience. Findings suggest that the ability to accurately match newly learned novel names to their appropriate referents is facilitated by phonological familiarity only for familiar referents but not for unfamiliar referents. Moreover, more extensive second-language learning experience characterized superior learners primarily in one word-learning condition: the one in which phonologically unfamiliar novel words were paired with familiar referents. Together, these findings indicate that phonological familiarity facilitates novel word learning only for familiar referents and that experience with learning a second language may have a specific impact on novel vocabulary learning in adults.

  14. Word learning in adults with second language experience: Effects of phonological and referent familiarity

    PubMed Central

    Kaushanskaya, Margarita; Yoo, Jeewon; Van Hecke, Stephanie

    2014-01-01

    Purpose: The goal of this research was to examine whether phonological familiarity exerts different effects on novel word learning for familiar vs. unfamiliar referents, and whether successful word-learning is associated with increased second-language experience. Method: Eighty-one adult native English speakers with various levels of Spanish knowledge learned phonologically-familiar novel words (constructed using English sounds) or phonologically-unfamiliar novel words (constructed using non-English and non-Spanish sounds) in association with either familiar or unfamiliar referents. Retention was tested via a forced-choice recognition-task. A median-split procedure identified high-ability and low-ability word-learners in each condition, and the two groups were compared on measures of second-language experience. Results: Findings suggest that the ability to accurately match newly-learned novel names to their appropriate referents is facilitated by phonological familiarity only for familiar referents but not for unfamiliar referents. Moreover, more extensive second-language learning experience characterized superior learners primarily in one word-learning condition: where phonologically-unfamiliar novel words were paired with familiar referents. Conclusions: Together, these findings indicate that phonological familiarity facilitates novel word learning only for familiar referents, and that experience with learning a second language may have a specific impact on novel vocabulary learning in adults. PMID:22992709

  15. Acoustic design by topology optimization

    NASA Astrophysics Data System (ADS)

    Dühring, Maria B.; Jensen, Jakob S.; Sigmund, Ole

    2008-11-01

    Bringing down noise levels in human surroundings is an important issue, and a method to reduce noise by means of topology optimization is presented here. The acoustic field is modeled by the Helmholtz equation, and the topology optimization method is based on continuous material interpolation functions in the density and bulk modulus. The objective function is the squared sound pressure amplitude. First, room acoustic problems are considered, and it is shown that the sound level can be reduced in a certain part of the room by an optimized distribution of reflecting material in a design domain along the ceiling or by a distribution of absorbing and reflecting material along the walls. We obtain well-defined optimized designs for a single frequency or a frequency interval for both 2D and 3D problems when considering low frequencies. Second, it is shown that the method can be applied to design outdoor sound barriers in order to reduce the sound level in the shadow zone behind the barrier. A reduction of up to 10 dB for a single barrier, and almost 30 dB when using two barriers, is achieved compared with conventional sound barriers.

  16. Learning about the Dynamic Sun through Sounds

    NASA Astrophysics Data System (ADS)

    Quinn, M.; Peticolas, L. M.; Luhmann, J.; MacCallum, J.

    2008-06-01

    Can we hear the Sun or its solar wind? Not in the sense that they make sound. But we can take the particle, magnetic field, electric field, and image data and turn it into sound to demonstrate what the data tells us. We present work on turning data from the two-satellite NASA mission called STEREO (Solar TErrestrial RElations Observatory) into sounds and music (sonification). STEREO has two satellites orbiting the Sun near Earth's orbit to study the coronal mass ejections (CMEs) from the Corona. One sonification project aims to inspire musicians, museum patrons, and the public to learn more about CMEs by downloading STEREO data and using it to make music. We demonstrate the software and discuss the way in which it was developed. A second project aims to produce a museum exhibit using STEREO imagery and sounds from STEREO data. We demonstrate a "walk across the Sun" created for this exhibit so people can hear the features on solar images. We show how pixel intensity translates into pitches from selectable scales with selectable musical scale size and octave locations. We also share our successes and lessons learned.
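    The pixel-intensity-to-pitch mapping this abstract describes can be sketched as follows. The scale table, octave span, and MIDI note conventions are assumptions for illustration, not details of the exhibit software:

```python
# Hypothetical sonification sketch: a pixel brightness in [0, 255] indexes
# into a selectable musical scale spanning a chosen number of octaves.
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]   # semitone offsets within one octave

def intensity_to_midi(intensity, scale=MAJOR_SCALE, octaves=2, base_note=60):
    """Map a pixel intensity in [0, 255] to a MIDI note number drawn from
    `scale`, covering `octaves` octaves above `base_note` (middle C = 60)."""
    steps = len(scale) * octaves
    index = min(int(intensity / 256.0 * steps), steps - 1)
    octave, degree = divmod(index, len(scale))
    return base_note + 12 * octave + scale[degree]
```

Changing `scale`, `octaves`, or `base_note` corresponds to the "selectable musical scale size and octave locations" mentioned in the abstract.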

  17. Design and evaluation of a parametric model for cardiac sounds.

    PubMed

    Ibarra-Hernández, Roilhi F; Alonso-Arévalo, Miguel A; Cruz-Gutiérrez, Alejandro; Licona-Chávez, Ana L; Villarreal-Reyes, Salvador

    2017-10-01

    Heart sound analysis plays an important role in the auscultative diagnosis process to detect the presence of cardiovascular diseases. In this paper we propose a novel parametric heart sound model that accurately represents normal and pathological cardiac audio signals, also known as phonocardiograms (PCG). The proposed model considers that the PCG signal is formed by the sum of two parts: one deterministic and the other stochastic. The first part contains most of the acoustic energy. This part is modeled by the Matching Pursuit (MP) algorithm, which performs an analysis-synthesis procedure to represent the PCG signal as a linear combination of elementary waveforms. The second part, also called the residual, is obtained after subtracting the deterministic signal from the original heart sound recording and can be accurately represented as an autoregressive process using the Linear Predictive Coding (LPC) technique. We evaluate the proposed heart sound model by performing subjective and objective tests using signals corresponding to different pathological cardiac sounds. The results of the objective evaluation show an average Percentage of Root-Mean-Square Difference of approximately 5% between the original heart sound and the reconstructed signal. For the subjective test we conducted a formal methodology for perceptual evaluation of audio quality with the assistance of medical experts. Statistical results of the subjective evaluation show that our model provides a highly accurate approximation of real heart sound signals. We are not aware of any previous heart sound model as rigorously evaluated as our proposal. Copyright © 2017 Elsevier Ltd. All rights reserved.
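    A minimal sketch of the two-part decomposition this abstract describes: a greedy matching-pursuit pass extracts the deterministic part against a dictionary of unit-norm atoms, and the residual is fit with a least-squares autoregressive (LPC-style) model. The dictionary contents and the least-squares AR estimator are simplifying assumptions; the paper's actual atoms and LPC method are not specified here:

```python
# Sketch of MP analysis-synthesis plus AR modeling of the residual.
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy MP: `dictionary` columns are unit-norm atoms. Returns the
    deterministic approximation and the stochastic residual."""
    residual = signal.astype(float).copy()
    approx = np.zeros_like(residual)
    for _ in range(n_atoms):
        corr = dictionary.T @ residual          # project residual on atoms
        k = np.argmax(np.abs(corr))             # best-matching atom
        approx += corr[k] * dictionary[:, k]
        residual -= corr[k] * dictionary[:, k]
    return approx, residual

def lpc_coefficients(residual, order):
    """Least-squares fit of an order-p AR model to the residual."""
    X = np.column_stack([residual[order - i - 1:len(residual) - i - 1]
                         for i in range(order)])
    y = residual[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a
```

The reconstruction error quoted in the abstract would then be measured between `signal` and `approx` plus the AR-synthesized residual.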

  18. Rising tones and rustling noises: Metaphors in gestural depictions of sounds

    PubMed Central

    Scurto, Hugo; Françoise, Jules; Bevilacqua, Frédéric; Houix, Olivier; Susini, Patrick

    2017-01-01

    Communicating an auditory experience with words is a difficult task and, in consequence, people often rely on imitative non-verbal vocalizations and gestures. This work explored the combination of such vocalizations and gestures to communicate auditory sensations and representations elicited by non-vocal everyday sounds. Whereas our previous studies have analyzed vocal imitations, the present research focused on gestural depictions of sounds. To this end, two studies investigated the combination of gestures and non-verbal vocalizations. A first, observational study examined a set of vocal and gestural imitations of recordings of sounds representative of a typical everyday environment (ecological sounds) with manual annotations. A second, experimental study used non-ecological sounds whose parameters had been specifically designed to elicit the behaviors highlighted in the observational study, and used quantitative measures and inferential statistics. The results showed that these depicting gestures are based on systematic analogies between a referent sound, as interpreted by a receiver, and the visual aspects of the gestures: auditory-visual metaphors. The results also suggested a different role for vocalizations and gestures. Whereas the vocalizations reproduce all features of the referent sounds as faithfully as vocally possible, the gestures focus on one salient feature with metaphors based on auditory-visual correspondences. Both studies highlighted two metaphors consistently shared across participants: the spatial metaphor of pitch (mapping different pitches to different positions on the vertical dimension), and the rustling metaphor of random fluctuations (rapid shaking of the hands and fingers). We interpret these metaphors as the result of two kinds of representations elicited by sounds: auditory sensations (pitch and loudness) mapped to spatial position, and causal representations of the sound sources (e.g. rain drops, rustling leaves) pantomimed and embodied by the participants’ gestures. PMID:28750071

  19. The auditory P50 component to onset and offset of sound

    PubMed Central

    Pratt, Hillel; Starr, Arnold; Michalewski, Henry J.; Bleich, Naomi; Mittelman, Nomi

    2008-01-01

    Objective: The auditory Event-Related Potential (ERP) components P50 to sound onset and offset have been reported to be similar, but their magnetic homologue has been reported absent to sound offset. We compared the spatio-temporal distribution of cortical activity during P50 to sound onset and offset, without confounds of spectral change. Methods: ERPs were recorded in response to onsets and offsets of silent intervals of 0.5 s (gaps) appearing randomly in otherwise continuous white noise and compared to ERPs to randomly distributed click pairs with half-second separation presented in silence. Subjects were awake and distracted from the stimuli by reading a complicated text. Measures of P50 included peak latency and amplitude, as well as source current density estimates for the clicks and the sound onsets and offsets. Results: P50 occurred in response to noise onsets and to clicks but was absent to noise offsets. The latency of P50 was similar for noise onsets (56 msec) and clicks (53 msec). Sources of P50 to noise onsets and clicks included bilateral superior parietal areas. In contrast, noise offsets activated left inferior temporal and occipital areas at the time of P50. Source current density was significantly higher to noise onset than offset in the vicinity of the temporo-parietal junction. Conclusions: P50 to sound offset is absent, in contrast to the distinct P50 to sound onset and to clicks, which arises at different intracranial sources. P50 to stimulus onset and to clicks appears to reflect preattentive arousal by a new sound in the scene. Sound offset does not involve a new sound, hence the absent P50. Significance: Stimulus onset activates distinct early cortical processes that are absent at offset. PMID:18055255

  20. An open access database for the evaluation of heart sound algorithms.

    PubMed

    Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D

    2016-12-01

    In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have potential for accurate detection of pathology in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total, collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visit) environments and equipment. The length of the recordings varied from several seconds to several minutes. This article reports detailed information about the subjects/patients, including demographics (number, age, gender), recordings (number, location, state, and time length), associated synchronously recorded signals, sampling frequency, and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand-corrected annotations for different heart sound states, the scoring mechanism, and associated open source code, is provided. In addition, several potential benefits of the public heart sound database are discussed.

  1. [Sound improves distinction of low intensities of light in the visual cortex of a rabbit].

    PubMed

    Polianskiĭ, V B; Alymkulov, D E; Evtikhin, D V; Chernyshev, B V

    2011-01-01

    Electrodes were implanted into the cranium above the primary visual cortex of four rabbits (Oryctolagus cuniculus). At the first stage, visual evoked potentials (VEPs) were recorded in response to substitution of threshold visual stimuli (0.28 and 0.31 cd/m2). Then a sound (2000 Hz, 84 dB, duration 40 ms) was added simultaneously to every visual stimulus. Single sounds (without visual stimuli) did not produce a VEP response. It was found that the amplitude of VEP component N1 (85-110 ms) in response to complex stimuli (visual and sound) increased 1.6 times as compared to "simple" visual stimulation. At the second stage, paired substitutions of 8 different visual stimuli (range 0.38-20.2 cd/m2) by each other were performed. Sensory spaces of intensity were reconstructed on the basis of factor analysis. Sensory spaces of complexes were reconstructed in a similar way for simultaneous visual and sound stimulation. Comparison of the vectors representing the stimuli in the spaces showed that the addition of a sound led to a 1.4-fold expansion of the space occupied by the smaller intensities (0.28; 1.02; 3.05; 6.35 cd/m2). Also, the addition of the sound led to an arrangement of intensities in ascending order. At the same time, the sound narrowed the space of the larger intensities (8.48; 13.7; 16.8; 20.2 cd/m2) 1.33-fold. It is suggested that the addition of a sound improves the distinction of smaller intensities and impairs the distinction of larger intensities. The sensory spaces revealed by complex stimuli were two-dimensional. This fact can be a consequence of the integration of sound and light into a unified complex during simultaneous stimulation.

  2. Spectral analysis of /s/ sound with changing angulation of the maxillary central incisors.

    PubMed

    Runte, Christoph; Tawana, Djafar; Dirksen, Dieter; Runte, Bettina; Lamprecht-Dinnesen, Antoinette; Bollmann, Friedhelm; Seifert, Eberhard; Danesh, Gholamreza

    2002-01-01

    The aim of the study was to measure the influence of the position of the maxillary central incisors on /s/ sound production, free from adaptation phenomena, using spectral analysis. The maxillary dentures of 18 subjects were duplicated. The central incisors were fixed in a pivoting appliance so that their position could be changed from the labial to the palatal direction. A mechanical push/pull cable enabled the incisor section to be handled extraorally. Connected to the control was a sound generator producing a sine wave whose frequency was related to the central incisor angulation. This acoustic signal was recorded on one channel of a digital tape recorder. After calibration of the unit, the denture duplicate was inserted into the subject's mouth, and the signal of the /s/ sounds subsequently produced by the subject was recorded on the second channel during alteration of the inclination angle, simultaneously with the generator signal. Spectral analysis was performed using a Kay Speech-Lab 4300B. Labial displacement in particular produced significant changes in spectral characteristics, with the lower boundary frequency of the /s/ sound being raised and the upper boundary frequency being reduced. Maxillary incisor position influences /s/ sound production, and displacement of the maxillary incisors must be considered a cause of immediate /s/ sound distortion. Therefore, denture teeth should be placed in the original tooth position as accurately as possible. Our results also indicate that neuromuscular reactions are more important for initial speech sound distortions than are aerodynamic changes in the anterior speech sound-producing areas.

  3. Nonlinear frequency compression: effects on sound quality ratings of speech and music.

    PubMed

    Parsa, Vijay; Scollie, Susan; Glista, Danielle; Seelisch, Andreas

    2013-03-01

    Frequency lowering technologies offer an alternative amplification solution for severe to profound high frequency hearing losses. While frequency lowering technologies may improve audibility of high frequency sounds, the very nature of this processing can affect the perceived sound quality. This article reports the results from two studies that investigated the impact of a nonlinear frequency compression (NFC) algorithm on perceived sound quality. In the first study, the cutoff frequency and compression ratio parameters of the NFC algorithm were varied, and their effect on speech quality was measured subjectively with 12 normal hearing adults, 12 normal hearing children, 13 hearing impaired adults, and 9 hearing impaired children. In the second study, 12 normal hearing and 8 hearing impaired adult listeners rated the quality of speech in quiet, speech in noise, and music after processing with a different set of NFC parameters. Results showed that the cutoff frequency parameter had more impact on sound quality ratings than the compression ratio, and that the hearing impaired adults were more tolerant of increased frequency compression than normal hearing adults. No statistically significant differences were found in the sound quality ratings of speech-in-noise and music stimuli processed through various NFC settings by hearing impaired listeners. These findings suggest that there may be an acceptable range of NFC settings for hearing impaired individuals where sound quality is not adversely affected. These results may assist an audiologist in clinical NFC hearing aid fittings in achieving a balance between high frequency audibility and sound quality.
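The cutoff frequency and compression ratio parameters can be pictured with a simple input-output frequency map. The sketch below uses one common formulation of NFC (log-domain compression above the cutoff); the formula and parameter values are illustrative assumptions, not the specific hearing-aid implementation evaluated in the study:

```python
def nfc_map(f_in, cutoff=2000.0, ratio=2.0):
    """Map an input frequency (Hz) to an output frequency under NFC.

    Frequencies at or below the cutoff pass through unchanged; above it,
    the log-frequency distance from the cutoff is divided by the
    compression ratio. This is one common formulation, assumed here for
    illustration only.
    """
    if f_in <= cutoff:
        return f_in
    return cutoff * (f_in / cutoff) ** (1.0 / ratio)

# A lower cutoff or higher ratio compresses more of the high-frequency
# band into the audible range, at a potential cost in sound quality.
for f in (1000.0, 4000.0, 8000.0):
    print(f"{f:.0f} Hz -> {nfc_map(f):.0f} Hz")
```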

  4. Popcorn: critical temperature, jump and sound.

    PubMed

    Virot, Emmanuel; Ponomarenko, Alexandre

    2015-03-06

    Popcorn bursts open, jumps and emits a 'pop' sound in a few hundredths of a second. The physical origin of these three observations remains unclear in the literature. We show that the critical temperature of 180°C at which almost all popcorn pops is consistent with an elementary pressure vessel scenario. We observe that popcorn jumps with a 'leg' of starch which is compressed on the ground. As a result, popcorn is midway between two categories of moving systems: explosive plants using fracture mechanisms and jumping animals using muscles. By synchronizing video recordings with acoustic recordings, we propose that the familiar 'pop' sound of the popcorn is caused by the release of water vapour. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
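The pressure-vessel scenario can be checked with a back-of-envelope calculation: at the critical temperature, the saturated water-vapor pressure inside the kernel is on the order of 10 bar. The sketch below uses the Antoine equation with standard published constants for water above 100°C (an assumption taken from reference tables, not the authors' model):

```python
# Back-of-envelope check of the pressure-vessel scenario: saturated
# water-vapor pressure at the reported 180 degC critical temperature.
# Antoine constants for water (valid roughly 99-374 degC, pressure in
# mmHg) are assumed from standard tables.
A, B, C = 8.14019, 1810.94, 244.485

def vapor_pressure_bar(t_celsius):
    """Saturated vapor pressure of water via the Antoine equation."""
    p_mmhg = 10.0 ** (A - B / (C + t_celsius))
    return p_mmhg / 750.06  # mmHg -> bar

print(round(vapor_pressure_bar(180.0), 1))  # roughly 10 bar in the kernel
```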

  5. Impact of toothpaste on abrasion of sound and eroded enamel: An in vitro white light interferometer study.

    PubMed

    Nakamura, Maria; Kitasako, Yuichi; Nakashima, Syozi; Sadr, Alireza; Tagami, Junji

    2015-10-01

    To evaluate the influence of brushing with toothpastes marketed under different categories on the abrasion of sound and eroded enamel in vitro at the nanometer scale using a white light interferometer (WLI). The enamel surfaces of resin-embedded bovine incisors were fine-polished with diamond slurry and divided into a testing area (approximately 2 mm x 4 mm) and a reference area using nail varnish. The enamel specimens were randomly assigned to 10 groups (n = 10 each), six of which were subjected to an erosive challenge. The testing area in these eroded groups was exposed to 10 ml of Coca-Cola for 90 seconds and then rinsed for 10 seconds in deionized water (DW). The enamel specimens, except for those in one eroded group, were brushed by an automatic brushing machine with 120 linear motion strokes in 60 seconds under a load of 250 g, with or without toothpaste slurry. After the toothbrushing abrasion, each specimen was rinsed for 10 seconds with DW followed by immersion in artificial saliva for 2 hours. Toothpaste slurries were prepared containing one of the four toothpastes used and DW in a ratio of 1:2. The erosion-abrasion cycle was repeated three times. Then, the nail varnish was removed and enamel surface loss (SL) was measured by the WLI. Data were statistically analyzed by one-way ANOVA followed by Bonferroni's correction at a significance level of 0.05. For eroded specimens, the mean SL values of the groups not brushed and brushed with no toothpaste were not significantly different, but were significantly lower than those of the whitening, anti-erosion and anti-caries toothpaste groups (P < 0.001). The whitening toothpaste group showed significantly higher SL than all other groups (P < 0.001). For sound enamel specimens, no SL was measured except in the whitening toothpaste group.

  6. Socialized Perception and L2 Pronunciation among Spanish-Speaking Learners of English in Puerto Rico

    ERIC Educational Resources Information Center

    Perez, Marisol Santiago

    2017-01-01

    The purpose of this study is to validate the following hypotheses: first, that spoken accents have a major influence and can affect listeners' personal attitudes; and second, that native Puerto Rican speakers will speak English as a second language without wanting to sound like North American English speakers. This study will contribute to research on the…

  7. 1. Context view includes Building 59 (second from left). Camera ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. Context view includes Building 59 (second from left). Camera is pointed ENE along Farragut Avenue. Buildings on the left side of the street are, from left: Building 856, Building 59 and Building 107. On the right side of the street they are, from right: Building 38, Building 452 and Building 460. - Puget Sound Naval Shipyard, Pattern Shop, Farragut Avenue, Bremerton, Kitsap County, WA

  8. A Culinary Cornucopia at the Second Annual Ethnic Food Cook-off | Poster

    Cancer.gov

    Imagine traveling the globe, sampling cuisines from countries as disparate as Russia, Switzerland, India, and Ethiopia. Sounds like a dream vacation, doesn’t it? Thanks to the Employee Diversity Team (EDT) and its group of gastronomic volunteers, the NCI at Frederick community was taken on this culinary journey, gratis, at the second annual Ethnic Food Cook-off. Read more...

  9. Effects of Sound, Vocabulary, and Grammar Learning Aptitude on Adult Second Language Speech Attainment in Foreign Language Classrooms

    ERIC Educational Resources Information Center

    Saito, Kazuya

    2017-01-01

    This study examines the relationship between different types of language learning aptitude (measured via the LLAMA test) and adult second language (L2) learners' attainment in speech production in English-as-a-foreign-language (EFL) classrooms. Picture descriptions elicited from 50 Japanese EFL learners from varied proficiency levels were analyzed…

  10. The Negotiations of Group Authorship among Second Graders Using Multimedia Composing Software. Apple Classrooms of Tomorrow.

    ERIC Educational Resources Information Center

    Reilly, Brian

    Beginning with a review of relevant literature on learning and computers, this report focuses on a group of five second graders in the process of creating a multimedia presentation for their class. Using "StoryShow," software that combines images, sound, and text, the students took on a variety of production roles. Each one contributed…

  11. Second flight of the Focusing Optics X-ray Solar Imager sounding rocket [FOXSI-2]

    NASA Astrophysics Data System (ADS)

    Buitrago-Casas, J. C.; Krucker, S.; Christe, S.; Glesener, L.; Ishikawa, S. N.; Ramsey, B.; Foster, N. D.

    2015-12-01

    The Focusing Optics X-ray Solar Imager (FOXSI) is a sounding rocket experiment that has flown twice to test a direct focusing method for measuring solar hard X-rays (HXRs). These HXRs are associated with the particle acceleration mechanisms at work in powering solar flares and aid us in investigating the role of nanoflares in heating the solar corona. FOXSI-1 flew successfully for the first time on November 2, 2012. After some upgrades, including the addition of extra mirrors to two optics modules and the inclusion of new fine-pitch CdTe strip detectors alongside the Si detectors from FOXSI-1, the FOXSI-2 payload flew successfully again on December 11, 2014. During the second flight, four targets on the Sun were observed, including at least three active regions, two microflares, and ~1 minute of quiet Sun observation. This work focuses on giving an overview of the FOXSI rocket program and a detailed description of the upgrades for the second flight. In addition, we show images and spectra investigating the presence of nonthermal emission for each of the flaring targets observed during the second flight.

  12. Linking the shapes of alphabet letters to their sounds: the case of Hebrew

    PubMed Central

    Levin, Iris; Kessler, Brett

    2011-01-01

    Learning the sounds of letters is an important part of learning a writing system. Most previous studies of this process have examined English, focusing on variations in the phonetic iconicity of letter names as a reason why some letter sounds (such as that of b, where the sound is at the beginning of the letter’s name) are easier to learn than others (such as that of w, where the sound is not in the name). The present study examined Hebrew, where variations in the phonetic iconicity of letter names are minimal. In a study of 391 Israeli children with a mean age of 5 years, 10 months, we used multilevel models to examine the factors that are associated with knowledge of letter sounds. One set of factors involved letter names: Children sometimes attributed to a letter a consonant–vowel sound consisting of the first phonemes of the letter’s name. A second set of factors involved contrast: Children had difficulty when there was relatively little contrast in shape between one letter and others. Frequency was also important, encompassing both child-specific effects, such as a benefit for the first letter of a child’s forename, and effects that held true across children, such as a benefit for the first letters of the alphabet. These factors reflect general properties of human learning. PMID:22345901

  13. Event-Related Brain Potential Investigation of Preparation for Speech Production in Late Bilinguals

    PubMed Central

    Wu, Yan Jing; Thierry, Guillaume

    2011-01-01

    It has been debated how bilinguals select the intended language and prevent interference from the unintended language when speaking. Here, we studied the nature of the mental representations accessed by late fluent bilinguals during a rhyming judgment task relying on covert speech production. We recorded event-related brain potentials in Chinese–English bilinguals and monolingual speakers of English while they indicated whether the names of pictures presented on a screen rhymed.  Whether bilingual participants focussed on rhyming selectively in English or Chinese, we found a significant priming effect of language-specific sound repetition. Surprisingly, however, sound repetitions in Chinese elicited significant priming effects even when the rhyming task was performed in English. This cross-language priming effect was delayed by ∼200  ms as compared to the within-language effect and was asymmetric, since there was no priming effect of sound repetitions in English when participants were asked to make rhyming judgments in Chinese. These results demonstrate that second language production hinders, but does not seal off, activation of the first language, whereas native language production appears immune to competition from the second language. PMID:21687468

  14. Lexical representation of novel L2 contrasts

    NASA Astrophysics Data System (ADS)

    Hayes-Harb, Rachel; Masuda, Kyoko

    2005-04-01

    There is much interest among psychologists and linguists in the influence of the native language sound system on the acquisition of second languages (Best, 1995; Flege, 1995). Most studies of second language (L2) speech focus on how learners perceive and produce L2 sounds, but we know of only two that have considered how novel sound contrasts are encoded in learners' lexical representations of L2 words (Pallier et al., 2001; Ota et al., 2002). In this study we investigated how native speakers of English encode Japanese consonant quantity contrasts in their developing Japanese lexicons at different stages of acquisition (Japanese contrasts singleton versus geminate consonants but English does not). Monolingual English speakers, native English speakers learning Japanese for one year, and native speakers of Japanese were taught a set of Japanese nonwords containing singleton and geminate consonants. Subjects then performed memory tasks eliciting perception and production data to determine whether they encoded the Japanese consonant quantity contrast lexically. Overall accuracy in these tasks was a function of Japanese language experience, and acoustic analysis of the production data revealed non-native-like patterns of differentiation of singleton and geminate consonants among the L2 learners of Japanese. Implications for theories of L2 speech are discussed.

  15. Emission of sound from the mammalian inner ear

    NASA Astrophysics Data System (ADS)

    Reichenbach, Tobias; Stefanovic, Aleksandra; Nin, Fumiaki; Hudspeth, A. J.

    2013-03-01

    The mammalian inner ear, or cochlea, not only acts as a detector of sound but can also produce tones itself. These otoacoustic emissions are a striking manifestation of the mechanical active process that sensitizes the cochlea and sharpens its frequency discrimination. It remains uncertain how these signals propagate back to the middle ear, from which they are emitted as sound. Although reverse propagation might occur through waves on the cochlear basilar membrane, experiments suggest the existence of a second component in otoacoustic emissions. We have combined theoretical and experimental studies to show that mechanical signals can also be transmitted by waves on Reissner's membrane, a second elastic structure within the cochlea. We have developed a theoretical description of wave propagation on the parallel Reissner's and basilar membranes and its role in the emission of distortion products. By scanning laser interferometry we have measured traveling waves on Reissner's membrane in the gerbil, guinea pig, and chinchilla. The results accord with the theory and thus support a role for Reissner's membrane in otoacoustic emission. T. R. holds a Career Award at the Scientific Interface from the Burroughs Wellcome Fund; A. J. H. is an Investigator of Howard Hughes Medical Institute.

  16. Performance analysis of an IMU-augmented GNSS tracking system on board the MAIUS-1 sounding rocket

    NASA Astrophysics Data System (ADS)

    Braun, Benjamin; Grillenberger, Andreas; Markgraf, Markus

    2018-05-01

    Satellite navigation receivers are adequate tracking sensors for range safety of both orbital launch vehicles and suborbital sounding rockets. Due to its high accuracy and low system complexity, satellite navigation is seen as a well-suited supplement to or replacement for conventional tracking systems like radar. With the well-known shortcomings of satellite navigation, such as deliberate or unintentional interference, in mind, it is proposed to augment the satellite navigation receiver with an inertial measurement unit (IMU) to enhance the continuity and availability of localization. The augmented receiver is thus enabled to output at least an inertial position solution in case of signal outages. In a previous study, it was shown by means of simulation, using the example of Ariane 5, that the performance of a low-grade microelectromechanical IMU is sufficient to bridge expected outages of some tens of seconds while still meeting the range safety requirements in effect. In this publication, these theoretical findings are substantiated by real flight data recorded on MAIUS-1, a sounding rocket launched from Esrange, Sweden, in early 2017. The analysis reveals that the chosen representative of a microelectromechanical IMU is suitable for bridging outages of up to thirty seconds.
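Why a low-grade MEMS IMU can bridge outages of this length follows from the quadratic error growth of inertial dead reckoning: an uncompensated accelerometer bias b produces a position drift of roughly 0.5·b·t² over an outage of duration t. The sketch below uses an assumed 1 mg bias as a typical MEMS figure; it is an illustrative error-growth model, not the paper's flight-data analysis:

```python
# Illustrative inertial drift model during a GNSS outage:
# position error ~ 0.5 * bias * t^2 for an uncompensated
# accelerometer bias (1 mg assumed here, typical MEMS grade).
g = 9.81                           # m/s^2
bias = 1.0e-3 * g                  # 1 mg bias in m/s^2

for t in (10.0, 30.0):
    drift = 0.5 * bias * t ** 2
    print(f"{t:>4.0f} s outage -> ~{drift:.1f} m position drift")
```

At this bias level a thirty-second outage costs only a few metres of position error, which is small compared with typical range-safety corridors.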

  17. Method of synthesizing silica nanofibers using sound waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Jaswinder K.; Datskos, Panos G.

    A method for synthesizing silica nanofibers using sound waves is provided. The method includes providing a solution of polyvinyl pyrrolidone, adding sodium citrate and ammonium hydroxide to form a first mixture, adding a silica-based compound to the solution to form a second mixture, and sonicating the second mixture to synthesize a plurality of silica nanofibers having an average cross-sectional diameter of less than 70 nm and having a length on the order of at least several hundred microns. The method can be performed without heating or electrospinning, and instead includes less energy intensive strategies that can be scaled up to an industrial scale. The resulting nanofibers can achieve a decreased mean diameter over conventional fibers. The decreased diameter generally increases the tensile strength of the silica nanofibers, as defects and contaminations decrease with the decreasing diameter.

  18. Method of synthesizing silica nanofibers using sound waves

    DOEpatents

    Sharma, Jaswinder K.; Datskos, Panos G.

    2015-09-15

    A method for synthesizing silica nanofibers using sound waves is provided. The method includes providing a solution of polyvinyl pyrrolidone, adding sodium citrate and ammonium hydroxide to form a first mixture, adding a silica-based compound to the solution to form a second mixture, and sonicating the second mixture to synthesize a plurality of silica nanofibers having an average cross-sectional diameter of less than 70 nm and having a length on the order of at least several hundred microns. The method can be performed without heating or electrospinning, and instead includes less energy intensive strategies that can be scaled up to an industrial scale. The resulting nanofibers can achieve a decreased mean diameter over conventional fibers. The decreased diameter generally increases the tensile strength of the silica nanofibers, as defects and contaminations decrease with the decreasing diameter.

  19. Bird sound spectrogram decomposition through Non-Negative Matrix Factorization for the acoustic classification of bird species.

    PubMed

    Ludeña-Choez, Jimmy; Quispe-Soncco, Raisa; Gallardo-Antolín, Ascensión

    2017-01-01

    Feature extraction for Acoustic Bird Species Classification (ABSC) tasks has traditionally been based on parametric representations that were specifically developed for speech signals, such as Mel Frequency Cepstral Coefficients (MFCC). However, the discrimination capabilities of these features for ABSC could be enhanced by accounting for the vocal production mechanisms of birds, and, in particular, the spectro-temporal structure of bird sounds. In this paper, a new front-end for ABSC is proposed that incorporates this specific information through the non-negative decomposition of bird sound spectrograms. It consists of the following two different stages: short-time feature extraction and temporal feature integration. In the first stage, which aims at providing a better spectral representation of bird sounds on a frame-by-frame basis, two methods are evaluated. In the first method, cepstral-like features (NMF_CC) are extracted by using a filter bank that is automatically learned by means of the application of Non-Negative Matrix Factorization (NMF) on bird audio spectrograms. In the second method, the features are directly derived from the activation coefficients of the spectrogram decomposition as performed through NMF (H_CC). The second stage summarizes the most relevant information contained in the short-time features by computing several statistical measures over long segments. The experiments show that the use of NMF_CC and H_CC in conjunction with temporal integration significantly improves the performance of a Support Vector Machine (SVM)-based ABSC system with respect to conventional MFCC.
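The first-stage decomposition described above can be sketched compactly: NMF factors the spectrogram into spectral basis vectors and their activations, and H_CC-style features come from log-compressing and decorrelating the activations. The snippet below is a minimal illustration on a random non-negative matrix standing in for a bird sound spectrogram; the dimensions, component count, and DCT step are assumptions in the spirit of the paper, not its exact pipeline:

```python
import numpy as np
from scipy.fft import dct
from sklearn.decomposition import NMF

# Toy magnitude spectrogram standing in for a bird sound recording:
# 257 frequency bins x 200 time frames (all values non-negative).
rng = np.random.default_rng(0)
spectrogram = rng.random((257, 200))

# Decompose S ~ W @ H: columns of W are spectral patterns (a learned
# "filter bank"), rows of H are their activations over time.
nmf = NMF(n_components=16, init="nndsvda", max_iter=400, random_state=0)
W = nmf.fit_transform(spectrogram)   # (257, 16) spectral basis
H = nmf.components_                  # (16, 200) per-frame activations

# H_CC-style short-time features: log-compress the activations, then
# apply a DCT to decorrelate, analogous to the cepstral step in MFCCs.
h_cc = dct(np.log(H + 1e-9), axis=0, norm="ortho")

# Temporal integration: summarize the frame-level features over the
# segment with simple statistics, one fixed-length vector per recording.
segment_features = np.concatenate([h_cc.mean(axis=1), h_cc.std(axis=1)])
print(segment_features.shape)  # ready for an SVM classifier
```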

  20. Neural Correlates of Central Inhibition during Physical Fatigue

    PubMed Central

    Tanaka, Masaaki; Ishii, Akira; Watanabe, Yasuyoshi

    2013-01-01

    Central inhibition plays a pivotal role in determining physical performance during physical fatigue. Classical conditioning of central inhibition is believed to be associated with the pathophysiology of chronic fatigue. We tried to determine whether classical conditioning of central inhibition can really occur and to clarify the neural mechanisms of central inhibition related to classical conditioning during physical fatigue using magnetoencephalography (MEG). Eight right-handed volunteers participated in this study. We used metronome sounds as conditioned stimuli and maximum handgrip trials as unconditioned stimuli to cause central inhibition. Participants underwent MEG recording during imagery of maximum grips of the right hand guided by metronome sounds for 10 min. Thereafter, fatigue-inducing maximum handgrip trials were performed for 10 min; the metronome sounds were started 5 min after the beginning of the handgrip trials. The next day, neural activities during imagery of maximum grips of the right hand guided by metronome sounds were measured for 10 min. Levels of fatigue sensation and sympathetic nerve activity on the second day were significantly higher relative to those of the first day. Equivalent current dipoles (ECDs) in the posterior cingulate cortex (PCC), with latencies of approximately 460 ms, were observed in all the participants on the second day, although ECDs were not identified in any of the participants on the first day. We demonstrated that classical conditioning of central inhibition can occur and that the PCC is involved in the neural substrates of central inhibition related to classical conditioning during physical fatigue. PMID:23923034

  1. Bird sound spectrogram decomposition through Non-Negative Matrix Factorization for the acoustic classification of bird species

    PubMed Central

    Quispe-Soncco, Raisa

    2017-01-01

    Feature extraction for Acoustic Bird Species Classification (ABSC) tasks has traditionally been based on parametric representations that were specifically developed for speech signals, such as Mel Frequency Cepstral Coefficients (MFCC). However, the discrimination capabilities of these features for ABSC could be enhanced by accounting for the vocal production mechanisms of birds, and, in particular, the spectro-temporal structure of bird sounds. In this paper, a new front-end for ABSC is proposed that incorporates this specific information through the non-negative decomposition of bird sound spectrograms. It consists of the following two different stages: short-time feature extraction and temporal feature integration. In the first stage, which aims at providing a better spectral representation of bird sounds on a frame-by-frame basis, two methods are evaluated. In the first method, cepstral-like features (NMF_CC) are extracted by using a filter bank that is automatically learned by means of the application of Non-Negative Matrix Factorization (NMF) on bird audio spectrograms. In the second method, the features are directly derived from the activation coefficients of the spectrogram decomposition as performed through NMF (H_CC). The second stage summarizes the most relevant information contained in the short-time features by computing several statistical measures over long segments. The experiments show that the use of NMF_CC and H_CC in conjunction with temporal integration significantly improves the performance of a Support Vector Machine (SVM)-based ABSC system with respect to conventional MFCC. PMID:28628630

  2. The privileged status of locality in consonant harmony

    PubMed Central

    Finley, Sara

    2011-01-01

    While the vast majority of linguistic processes apply locally, consonant harmony appears to be an exception. In this phonological process, consonants share the same value of a phonological feature, such as secondary place of articulation. In sibilant harmony, [s] and [ʃ] (‘sh’) alternate such that if a word contains the sound [ʃ], all [s] sounds become [ʃ]. This can apply locally as a first-order or non-locally as a second-order pattern. In the first-order case, no consonants intervene between the two sibilants (e.g., [pisasu], [piʃaʃu]). In the second-order case, a consonant may intervene (e.g., [sipasu], [ʃipaʃu]). The fact that there are languages that allow second-order non-local agreement of consonant features has led some to question whether locality constraints apply to consonant harmony. This paper presents the results from two artificial grammar learning experiments that demonstrate the privileged role of locality constraints, even in patterns that allow second-order non-local interactions. In Experiment 1, we show that learners do not extend first-order non-local relationships in consonant harmony to second-order non-local relationships. In Experiment 2, we show that learners will extend a consonant harmony pattern with second-order long distance relationships to a consonant harmony with first-order long distance relationships. Because second-order non-local application implies first-order non-local application, but first-order non-local application does not imply second-order non-local application, we establish that local constraints are privileged even in consonant harmony. PMID:21686094

  3. Augmenting Comprehension of Speech in Noise with a Facial Avatar and Its Effect on Performance

    DTIC Science & Technology

    2010-12-01

    develop some aspects of speech more slowly than sighted children. In addition to “bleeping” or blanking the sound of censored words, network...the speech. Movie files were exported at a resolution of 600 by 800 pixels at 30 frames per second and were four seconds in length. It should be...noted that the speech, and synchronized facial movements, began one second after each movie file started. This delay was designed to ensure that the

  4. Evaluation of Trauma Team Performance Using an Advanced Human Patient Simulator for Resuscitation Training

    DTIC Science & Technology

    2002-06-01

    Breathing 1. Breathing assessed 1= 3-5 minutes 2= <3 minutes a. Auscultation 0= >60 seconds 1= 30-60 seconds 2= <30 seconds 2. Recognized tension...pneumothorax a. Difference in auscultated breath sounds 0= >3 minutes (time to awareness of difference) b. Time to decompression of ptx 3. Needle...vitals 2. Time to oxygen applied 3. Time to adequate pressure applied to extremity 4. Time to auscultation 5. Time to recognition of pneumothorax 6

  5. Efficient techniques for wave-based sound propagation in interactive applications

    NASA Astrophysics Data System (ADS)

    Mehra, Ravish

    Sound propagation techniques model the effect of the environment on sound waves and predict their behavior from point of emission at the source to the final point of arrival at the listener. Sound is a pressure wave produced by mechanical vibration of a surface that propagates through a medium such as air or water, and the problem of sound propagation can be formulated mathematically as a second-order partial differential equation called the wave equation. Accurate techniques based on solving the wave equation, also called the wave-based techniques, are too expensive computationally and memory-wise. Therefore, these techniques face many challenges in terms of their applicability in interactive applications including sound propagation in large environments, time-varying source and listener directivity, and high simulation cost for mid-frequencies. In this dissertation, we propose a set of efficient wave-based sound propagation techniques that solve these three challenges and enable the use of wave-based sound propagation in interactive applications. Firstly, we propose a novel equivalent source technique for interactive wave-based sound propagation in large scenes spanning hundreds of meters. It is based on the equivalent source theory used for solving radiation and scattering problems in acoustics and electromagnetics. Instead of using a volumetric or surface-based approach, this technique takes an object-centric approach to sound propagation. The proposed equivalent source technique generates realistic acoustic effects and takes orders of magnitude less runtime memory compared to prior wave-based techniques. Secondly, we present an efficient framework for handling time-varying source and listener directivity for interactive wave-based sound propagation. The source directivity is represented as a linear combination of elementary spherical harmonic sources. 
    This spherical harmonic-based representation of source directivity can support analytical, data-driven, rotating, or time-varying directivity functions at runtime. Unlike previous approaches, the listener directivity approach can be used to compute spatial audio (3D audio) for a moving, rotating listener at interactive rates. Lastly, we propose an efficient GPU-based time-domain solver for the wave equation that enables wave simulation up to the mid-frequency range in tens of minutes on a desktop computer. It is demonstrated that by carefully mapping all the components of the wave simulator to the parallel processing capabilities of graphics processors, significant performance improvements can be achieved compared to CPU-based simulators, while maintaining numerical accuracy. We validate these techniques with offline numerical simulations and measured data recorded in an outdoor scene. We present results of preliminary user evaluations conducted to study the impact of these techniques on users' immersion in virtual environments. We have integrated these techniques with the Half-Life 2 game engine, Oculus Rift head-mounted display, and Xbox game controller to enable users to experience high-quality acoustic effects and spatial audio in virtual environments.
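
    The dissertation's GPU solver is far beyond an abstract, but the underlying wave equation can be illustrated with a minimal 1-D finite-difference time-domain (FDTD) sketch. The grid size, step sizes, and impulse source below are assumptions for illustration, not parameters from the work.

```python
import numpy as np

# Leapfrog FDTD update for the 1-D wave equation u_tt = c^2 u_xx
# with rigid (zero) boundaries. Illustrative sketch only.
c = 343.0            # speed of sound in air, m/s
dx = 0.05            # spatial step, m
dt = 0.9 * dx / c    # time step chosen to satisfy the CFL condition
n = 400              # number of grid points

u_prev = np.zeros(n)
u = np.zeros(n)
u[n // 2] = 1.0      # impulsive point source at the grid centre

r2 = (c * dt / dx) ** 2
for _ in range(200):
    u_next = np.zeros(n)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next
```

    Real wave-based solvers extend this idea to 3-D domains, absorbing boundaries, and much finer grids, which is where the computational and memory cost discussed above comes from.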

  6. The reduction of gunshot noise and auditory risk through the use of firearm suppressors and low-velocity ammunition.

    PubMed

    Murphy, William J; Flamme, Gregory A; Campbell, Adam R; Zechmann, Edward L; Tasko, Stephen M; Lankford, James E; Meinke, Deanna K; Finan, Donald S; Stewart, Michael

    2018-02-01

    This research assessed the reduction in peak levels, equivalent energy, and sound power achieved by firearm suppressors. The first study evaluated the effect of three suppressors at four microphone positions around four firearms. The second study assessed the suppressor-related reduction of sound power with a 3 m hemispherical microphone array for two firearms. The suppressors reduced exposures at the ear by between 17 and 24 dB peak sound pressure level and reduced the 8 h equivalent A-weighted energy by between 9 and 21 dB, depending upon the firearm and ammunition. Noise reductions observed at the instructor's position, about a metre behind the shooter, were between 20 and 28 dB peak sound pressure level and between 11 and 26 dB LAeq,8h. Firearm suppressors reduced the measured sound power levels by between 2 and 23 dB. Sound power reductions were greater for low-velocity ammunition than for the same firearms fired with high-velocity ammunition, owing to the N-waves produced by a supersonic bullet. Firearm suppressors may reduce noise exposure, but the cumulative exposure from suppressed firearms can still present a significant hearing risk. Therefore, firearm users should always wear hearing protection when target shooting or hunting.
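
    The link between a single shot's sound exposure level (LAE) and the reported 8 h equivalent level follows the standard equal-energy conversion; the function below is an illustrative sketch of that relationship, not the paper's own computation.

```python
import math

def laeq_8h(lae_per_shot_db, n_shots):
    """8-h A-weighted equivalent level from a per-shot sound exposure
    level (LAE, referenced to 1 s):
    LAeq,8h = LAE + 10*log10(N) - 10*log10(28800 s)."""
    return lae_per_shot_db + 10 * math.log10(n_shots) - 10 * math.log10(8 * 3600)
```

    Each tenfold increase in shot count raises the 8 h equivalent level by 10 dB, which is why cumulative exposure can remain hazardous even with a suppressor fitted.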

  7. Pilot study: Exposure and materiality of the secondary room and its impact in the impulse response of coupled-volume concert halls

    NASA Astrophysics Data System (ADS)

    Ermann, Michael; Johnson, Marty E.

    2002-05-01

    What does one room sound like when it is partially exposed to another (acoustically coupled)? More specifically, this research aims to quantify how operational and design decisions impact aural impressions in the design of concert halls with acoustical coupling. By adding a second room to a concert hall, and designing doors to control the sonic transparency between the two rooms, designers can create a new, coupled acoustic. Concert halls use coupling to achieve a variable, longer, and distinct reverberant quality for their musicians and listeners. For this study, a coupled-volume shoebox concert hall was conceived with a fixed geometric volume, form, and primary-room sound absorption. Aperture size and secondary-room sound-absorption levels were established as variables. Statistical analysis of sound decay in this simulated hall suggests a highly sensitive relationship between the double-sloped condition and (1) Architectural composition, as defined by the aperture size exposing the chamber and (2) Materiality, as defined by the sound absorbance in the coupled volume. Preliminary calculations indicate that the double-sloped sound decay condition only appears when the total aperture area is less than 1.5% of the total shoebox surface area and the average absorption coefficient of the coupled volume is less than 0.07.
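
    The two preliminary thresholds quoted above can be expressed as a simple predicate; the function and its argument names are illustrative, not from the study.

```python
def double_sloped_decay_expected(aperture_area, shoebox_surface_area, coupled_alpha):
    """Heuristic from the pilot study: a double-sloped decay appears only
    when the aperture is under 1.5% of the total shoebox surface area AND
    the coupled volume's average absorption coefficient is under 0.07."""
    return (aperture_area / shoebox_surface_area < 0.015) and (coupled_alpha < 0.07)
```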

  8. Magnetoencephalographic responses in relation to temporal and spatial factors of sound fields

    NASA Astrophysics Data System (ADS)

    Soeta, Yoshiharu; Nakagawa, Seiji; Tonoike, Mitsuo; Hotehama, Takuya; Ando, Yoichi

    2004-05-01

    To establish guidelines based on brain function for designing sound fields such as concert halls and opera houses, human brain activity in response to the temporal and spatial factors of the sound field has been investigated using magnetoencephalography (MEG). MEG is a noninvasive technique for investigating neuronal activity in the human brain. First, the auditory evoked responses to changes in the magnitude of the interaural cross-correlation (IACC) were analyzed. The IACC is a spatial factor that strongly influences the degree of subjective preference and perceived diffuseness of sound fields. The results indicated that the peak amplitude of N1m, found over the left and right temporal lobes around 100 ms after stimulus onset, decreased with increasing IACC. Second, the responses corresponding to subjective preference for one of the typical temporal factors, i.e., the initial delay gap between the direct sound and the first reflection, were investigated. The results showed that the effective duration of the autocorrelation function of MEG activity between 8 and 13 Hz became longer during presentation of a preferred stimulus. These results indicate that the brain may relax and repeat a similar temporal rhythm under preferred sound fields.
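
    The IACC referred to above is the peak of the normalised interaural cross-correlation within roughly ±1 ms of lag. A minimal sketch (function name and lag window are assumptions, not from the paper):

```python
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """Peak magnitude of the normalised cross-correlation between the
    two ear signals over lags of +/- max_lag_ms."""
    max_lag = int(fs * max_lag_ms / 1000.0)
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    vals = []
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            v = np.sum(left[lag:] * right[:len(right) - lag])
        else:
            v = np.sum(left[:lag] * right[-lag:])
        vals.append(v / norm)
    return float(np.max(np.abs(vals)))
```

    Identical signals at both ears give IACC = 1 (a fully correlated, subjectively "narrow" sound field); diffuse fields give values closer to 0.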

  9. The Influence of refractoriness upon comprehension of non-verbal auditory stimuli.

    PubMed

    Crutch, Sebastian J; Warrington, Elizabeth K

    2008-01-01

    An investigation of non-verbal auditory comprehension in two patients with global aphasia following stroke is reported. The primary aim of the investigation was to establish whether refractory access disorders can affect non-verbal input modalities. All previous reports of refractoriness, a cognitive syndrome characterized by response inconsistency, sensitivity to temporal factors and insensitivity to item frequency, have involved comprehension tasks which have a verbal component. Two main experiments are described. The first consists of a novel sound-to-picture and sound-to-word matching task in which comprehension of environmental sounds is probed under conditions of semantic relatedness and semantic unrelatedness. In addition to the two stroke patients, the performance of a group of 10 control patients with non-vascular pathology is reported, along with evidence of semantic relatedness effects in sound comprehension. The second experiment examines environmental sound comprehension within a repetitive probing paradigm which affords assessment of the effects of semantic relatedness, response consistency and presentation rate. It is demonstrated that the two stroke patients show a significant increase in error rate across multiple probes of the same set of sound stimuli, indicating the presence of refractoriness within this non-verbal domain. The implications of the results are discussed with reference to our current understanding of the mechanisms of refractoriness.

  10. A description of externally recorded womb sounds in human subjects during gestation

    PubMed Central

    Daland, Robert; Kesavan, Kalpashri; Macey, Paul M.; Zeltzer, Lonnie; Harper, Ronald M.

    2018-01-01

    Objective: Reducing environmental noise benefits premature infants in neonatal intensive care units (NICU), but excessive reduction may lead to sensory deprivation, compromising development. Instead of minimal noise levels, environments that mimic intrauterine soundscapes may facilitate infant development by providing a sound environment reflecting fetal life. This soundscape may support autonomic and emotional development in preterm infants. We aimed to assess the efficacy and feasibility of external non-invasive recordings in pregnant women, endeavoring to capture intra-abdominal or womb sounds during pregnancy with electronic stethoscopes and build a womb sound library to assess sound trends with gestational development. We also compared these sounds to popular commercial womb sounds marketed to new parents. Study design: Intra-abdominal sounds from 50 mothers in their second and third trimesters (13 to 40 weeks) of pregnancy were recorded for 6 minutes in a quiet clinic room with 4 electronic stethoscopes, placed in the right upper and lower quadrants and left upper and lower quadrants of the abdomen. These recordings were partitioned into 2-minute intervals in three different positions: standing, sitting and lying supine. Maternal and gestational age, Body Mass Index (BMI) and time since last meal were collected during recordings. Recordings were analyzed using long-term average spectral and waveform analysis, and compared to sounds from non-pregnant abdomens and commercially-marketed womb sounds selected for their availability, popularity, and claims they mimic the intrauterine environment. Results: Maternal sounds shared certain common characteristics, but varied with gestational age. With fetal development, the maternal abdomen filtered high (500–5,000 Hz) and mid-frequency (100–500 Hz) energy bands, but no change appeared in contributions from low-frequency signals (10–100 Hz) with gestational age.
    Variation appeared between mothers, suggesting a resonant chamber role for intra-abdominal space. Compared to commercially-marketed sounds, womb signals were dominated by bowel sounds, were of lower frequency, and showed more variation in intensity. Conclusions: High-fidelity intra-abdominal or womb sounds during pregnancy can be recorded non-invasively. Recordings vary with gestational age, and show a predominance of low-frequency noise and bowel sounds which are distinct from popular commercial products. Such recordings may be utilized to determine whether sounds influence preterm infant development in the NICU. PMID:29746604

  11. A description of externally recorded womb sounds in human subjects during gestation.

    PubMed

    Parga, Joanna J; Daland, Robert; Kesavan, Kalpashri; Macey, Paul M; Zeltzer, Lonnie; Harper, Ronald M

    2018-01-01

    Reducing environmental noise benefits premature infants in neonatal intensive care units (NICU), but excessive reduction may lead to sensory deprivation, compromising development. Instead of minimal noise levels, environments that mimic intrauterine soundscapes may facilitate infant development by providing a sound environment reflecting fetal life. This soundscape may support autonomic and emotional development in preterm infants. We aimed to assess the efficacy and feasibility of external non-invasive recordings in pregnant women, endeavoring to capture intra-abdominal or womb sounds during pregnancy with electronic stethoscopes and build a womb sound library to assess sound trends with gestational development. We also compared these sounds to popular commercial womb sounds marketed to new parents. Intra-abdominal sounds from 50 mothers in their second and third trimester (13 to 40 weeks) of pregnancy were recorded for 6 minutes in a quiet clinic room with 4 electronic stethoscopes, placed in the right upper and lower quadrants, and left upper and lower quadrants of the abdomen. These recordings were partitioned into 2-minute intervals in three different positions: standing, sitting and lying supine. Maternal and gestational age, Body Mass Index (BMI) and time since last meal were collected during recordings. Recordings were analyzed using long-term average spectral and waveform analysis, and compared to sounds from non-pregnant abdomens and commercially-marketed womb sounds selected for their availability, popularity, and claims they mimic the intrauterine environment. Maternal sounds shared certain common characteristics, but varied with gestational age. With fetal development, the maternal abdomen filtered high (500-5,000 Hz) and mid-frequency (100-500 Hz) energy bands, but no change appeared in contributions from low-frequency signals (10-100 Hz) with gestational age.
Variation appeared between mothers, suggesting a resonant chamber role for intra-abdominal space. Compared to commercially-marketed sounds, womb signals were dominated by bowel sounds, were of lower frequency, and showed more variation in intensity. High-fidelity intra-abdominal or womb sounds during pregnancy can be recorded non-invasively. Recordings vary with gestational age, and show a predominance of low frequency noise and bowel sounds which are distinct from popular commercial products. Such recordings may be utilized to determine whether sounds influence preterm infant development in the NICU.
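
    The three energy bands reported in both records above (low 10-100 Hz, mid 100-500 Hz, high 500-5,000 Hz) can be computed from a recording with a simple spectral sketch; the function name and the dB referencing to the strongest band are assumptions for illustration.

```python
import numpy as np

def band_levels(signal, fs):
    """Energy in the study's three frequency bands, in dB relative to
    the strongest band."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    bands = {"low": (10, 100), "mid": (100, 500), "high": (500, 5000)}
    energy = {name: spec[(freqs >= lo) & (freqs < hi)].sum()
              for name, (lo, hi) in bands.items()}
    ref = max(energy.values())
    return {name: 10 * np.log10(e / ref) if e > 0 else float("-inf")
            for name, e in energy.items()}
```

    The study's long-term average spectra amount to tracking how these relative band levels change with gestational age.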

  12. More Bits and Pieces: A Second Physics Miscellany

    ERIC Educational Resources Information Center

    Siddons, J. C.

    1976-01-01

    Described are five physics experiments utilizing inexpensive, readily available materials or materials normally found in a physics laboratory. Included are investigations of electrical charge, sound detection, thermal expansion, doppler effects, and the cycloid. (SL)

  13. How and When Does the Second Language Influence the Production of Native Speech Sounds: A Literature Review

    ERIC Educational Resources Information Center

    Kartushina, Natalia; Frauenfelder, Ulrich H.; Golestani, Narly

    2016-01-01

    In bilinguals and second language learners, the native (L1) and nonnative (L2) languages coexist and interact. The L1 influences L2 production via forward transfer, as is seen with foreign accents. However, language transfer is bidirectional: even brief experience with an L2 can affect L1 production, via backward transfer. Here, we review the…

  14. HIFiRE Flight 2 Flowpath Design Update (PREPRINT)

    DTIC Science & Technology

    2009-12-01

    will use a sounding rocket stack and a novel second-stage ignition approach to achieve a nearly constant flight dynamic pressure over this range of ... Mach numbers. The experimental payload will remain attached to the second-stage rocket motor and the experiment will occur while accelerating through ... weight and drag estimates necessary for trajectory analyses to be conducted using candidate rocket motors. The preliminary trajectory analyses

  15. Lung sound intensity in patients with emphysema and in normal subjects at standardised airflows.

    PubMed Central

    Schreur, H J; Sterk, P J; Vanderschoot, J; van Klink, H C; van Vollenhoven, E; Dijkman, J H

    1992-01-01

    BACKGROUND: A common auscultatory finding in pulmonary emphysema is a reduction of lung sounds. This might be due to a reduction in the generation of sounds due to the accompanying airflow limitation or to poor transmission of sounds due to destruction of parenchyma. Lung sound intensity was investigated in normal and emphysematous subjects in relation to airflow. METHODS: Eight normal men (45-63 years, FEV1 79-126% predicted) and nine men with severe emphysema (50-70 years, FEV1 14-63% predicted) participated in the study. Emphysema was diagnosed according to pulmonary history, results of lung function tests, and radiographic criteria. All subjects underwent phonopneumography during standardised breathing manoeuvres between 0.5 and 2 l below total lung capacity, with inspiratory and expiratory target airflows of 2 and 1 l/s respectively, for 50 seconds. The synchronous measurements included airflow at the mouth and lung volume changes, and lung sounds at four locations on the right chest wall. For each microphone, airflow-dependent power spectra were computed using fast Fourier transformation. Lung sound intensity was expressed as log power (in dB) at 200 Hz at inspiratory flow rates of 1 and 2 l/s and at an expiratory flow rate of 1 l/s. RESULTS: Lung sound intensity was repeatable on two separate days, with intraclass correlation coefficients ranging from 0.77 to 0.94 across the four microphones. The intensity was strongly influenced by microphone location and airflow. There was, however, no significant difference in lung sound intensity at any flow rate between the normal and the emphysema group. CONCLUSION: Airflow-standardised lung sound intensity does not differ between normal and emphysematous subjects. This suggests that the auscultatory finding of diminished breath sounds during the regular physical examination in patients with emphysema is due predominantly to airflow limitation. PMID:1440459

  16. Imaging of sound speed using reflection ultrasound tomography.

    PubMed

    Nebeker, Jakob; Nelson, Thomas R

    2012-09-01

    The goal of this work was to obtain and evaluate measurements of tissue sound speed in the breast, particularly dense breasts, using backscatter ultrasound tomography. An automated volumetric breast ultrasound scanner was constructed for imaging the prone patient. A 5- to 7-MHz linear array transducer acquired 17,920 radiofrequency pulse-echo A-lines from the breast while a back-wall reflector rotated over 360° in 25 seconds. Sound speed images used reflector echoes that, after preprocessing, were uploaded to a graphics processing unit for filtered back-projection reconstruction. A velocimeter was also constructed to measure the sound speed and attenuation for comparison to scanner performance. Measurements were made using the following: (1) deionized water from 22°C to 90°C; (2) various fluids with sound speeds from 1240 to 1904 m/s; (3) acrylamide gel test objects with features from 1 to 15 mm in diameter; and (4) healthy volunteers. The mean error ± SD between sound speed reference and image data was -0.48% ± 9.1%, and the error between reference and velocimeter measurements was -1.78% ± 6.50%. Sound speed image and velocimeter measurements showed a difference of 0.10% ± 4.04%. Temperature data showed a difference between theory and imaging performance of -0.28% ± 0.22%. Images of polyacrylamide test objects showed detectability of an approximately 1% sound speed difference in a 2.4-mm cylindrical inclusion with a contrast-to-noise ratio of 7.9 dB. An automated breast scanner offers the potential to make consistent automated tomographic images of breast backscatter, sound speed, and attenuation, potentially improving diagnosis, particularly in dense breasts.

  17. Analysis of swallowing sounds using hidden Markov models.

    PubMed

    Aboofazeli, Mohammad; Moussavi, Zahra

    2008-04-01

    In recent years, acoustical analysis of the swallowing mechanism has received considerable attention due to its diagnostic potential. This paper presents a hidden Markov model (HMM) based method for swallowing sound segmentation and classification. Swallowing sound signals of 15 healthy and 11 dysphagic subjects were studied. The signals were divided into sequences of 25 ms segments, each of which was represented by seven features. The sequences of features were modeled by HMMs. Trained HMMs were used for segmentation of the swallowing sounds into three distinct phases: initial quiet period, initial discrete sounds (IDS) and bolus transit sounds (BTS). Among the seven features, segmentation accuracy of the HMM based on the multi-scale product of wavelet coefficients was higher than that of the other HMMs, and the linear prediction coefficient (LPC)-based HMM showed the weakest performance. In addition, HMMs were used for classification of the swallowing sounds of healthy subjects and dysphagic patients. Classification accuracy of different HMM configurations was investigated. When the number of states of the HMMs was increased from 4 to 8, the classification error gradually decreased; in most cases, classification error for N=9 was higher than for N=8. Among the seven features used, root mean square (RMS) and waveform fractal dimension (WFD) showed the best performance in the HMM-based classification of swallowing sounds. When the sequences of features of the IDS segment were modeled separately, the accuracy reached up to 85.5%. As a second-stage classification, a screening algorithm was used which correctly classified all subjects but one healthy subject when RMS was used as the characteristic feature of the swallowing sounds and the number of states was set to N=8.
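
    Classification with trained HMMs, as described above, amounts to picking the model under which the observed feature sequence has the highest likelihood. A minimal discrete-observation forward algorithm illustrates the idea (the paper uses continuous acoustic features; this toy version and its parameters are illustrative only):

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM).
    pi: initial state probabilities (N,), A: transition matrix (N, N),
    B: emission probabilities (N, M), obs: sequence of symbol indices."""
    alpha = pi * B[:, obs[0]]
    s = alpha.sum()
    loglik = np.log(s)
    alpha = alpha / s
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha = alpha / s
    return loglik
```

    A sequence would then be assigned to the "healthy" or "dysphagic" class according to which trained model yields the larger log-likelihood.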

  18. LANGUAGE DEVELOPMENT. The developmental dynamics of marmoset monkey vocal production.

    PubMed

    Takahashi, D Y; Fenley, A R; Teramoto, Y; Narayanan, D Z; Borjon, J I; Holmes, P; Ghazanfar, A A

    2015-08-14

    Human vocal development occurs through two parallel interactive processes that transform infant cries into more mature vocalizations, such as cooing sounds and babbling. First, natural categories of sounds change as the vocal apparatus matures. Second, parental vocal feedback sensitizes infants to certain features of those sounds, and the sounds are modified accordingly. Paradoxically, our closest living relatives, nonhuman primates, are thought to undergo few or no production-related acoustic changes during development, and any such changes are thought to be impervious to social feedback. Using early and dense sampling, quantitative tracking of acoustic changes, and biomechanical modeling, we showed that vocalizations in infant marmoset monkeys undergo dramatic changes that cannot be solely attributed to simple consequences of growth. Using parental interaction experiments, we found that contingent parental feedback influences the rate of vocal development. These findings overturn decades-old ideas about primate vocalizations and show that marmoset monkeys are a compelling model system for early vocal development in humans. Copyright © 2015, American Association for the Advancement of Science.

  19. Description and Flight Performance Results of the WASP Sounding Rocket

    NASA Technical Reports Server (NTRS)

    De Pauw, J. F.; Steffens, L. E.; Yuska, J. A.

    1968-01-01

    A general description of the design and construction of the WASP sounding rocket and of the performance of its first flight is presented. The purpose of the flight test was to place the 862-pound (391-kg) spacecraft above 250 000 feet (76.25 km) on a free-fall trajectory for at least 6 minutes in order to study the effect of "weightlessness" on a slosh dynamics experiment. The WASP sounding rocket fulfilled its intended mission requirements. The sounding rocket approximately followed a nominal trajectory. The payload was in free fall above 250 000 feet (76.25 km) for 6.5 minutes and reached an apogee altitude of 134 nautical miles (248 km). Flight data including velocity, altitude, acceleration, roll rate, and angle of attack are discussed and compared to nominal performance calculations. The effect of residual burning of the second-stage motor is analyzed. The flight vibration environment is presented and analyzed, including root mean square (RMS) and power spectral density analyses.

  20. Measurement of heart sounds with EMFi transducer.

    PubMed

    Kärki, Satu; Kääriäinen, Minna; Lekkala, Jukka

    2007-01-01

    A measurement system for heart sounds was implemented using ElectroMechanical Film (EMFi). Heart sounds are produced by vibrations of the cardiac structure. An EMFi transducer attached to the skin of the chest wall converts these mechanical vibrations into an electrical signal, which is then amplified and transmitted to a computer. The data are analyzed with Matlab software. The low-frequency components of the measured signal (respiration and pulsation of the heart) are filtered out, as is the 50 Hz mains noise, and the power spectral density (PSD) is computed. In test measurements, the signal was recorded both during normal respiration and during breath-holding. In both cases, the first (S1) and second (S2) heart sounds can be clearly seen in the filtered signal. In addition, the respiration frequency and heart rate can be determined from the raw signals. In future applications, the EMFi material could be used to implement a plaster-like transducer for measuring vital signs.
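
    The described preprocessing (removing respiration/pulsation drift and 50 Hz mains noise) can be sketched as a crude FFT-domain filter. The 20 Hz cutoff and 1 Hz notch width are assumptions, since the abstract does not give the exact filters used.

```python
import numpy as np

def clean_heart_sound(x, fs):
    """Zero out low-frequency drift (< 20 Hz) and the 50 Hz mains
    component in the frequency domain, then reconstruct the signal."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spec[freqs < 20] = 0                 # respiration and cardiac pulsation drift
    spec[np.abs(freqs - 50) < 1] = 0     # 50 Hz mains hum
    return np.fft.irfft(spec, n=len(x))
```

    In a streaming application a zero-phase IIR or FIR filter would be used instead; the FFT version is only for illustration.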

  1. Underwater Sound: Deep-Ocean Propagation: Variations of temperature and pressure have great influence on the propagation of sound in the ocean.

    PubMed

    Frosch, R A

    1964-11-13

    The absorption of sound in sea water varies markedly with frequency, being much greater at high than at low frequencies. It is sufficiently small at frequencies below several kilocycles per second, however, to permit propagation to thousands of miles. Oceanographic factors produce variations in sound velocity with depth, and these variations have a strong influence on long-range propagation. The deep ocean is characterized by a strong channel, generally at a depth of 500 to 1500 meters. In addition to guided propagation in this channel, the velocity structure gives rise to strongly peaked propagation from surface sources to surface receivers 48 to 56 kilometers away, with strong shadow zones of weak intensity in between. The near-surface shadow zone, in the latter case, may be filled in by bottom reflections or near-surface guided propagation due to a surface isothermal layer. The near-surface shadow zones can be avoided with certainty only through locating sources and receivers deep in the ocean.
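
    The deep sound channel described above is commonly idealised with Munk's canonical sound-speed profile. Note that this model postdates the article (Munk, 1974), and the constants below are the standard textbook values, not figures from the article itself.

```python
import math

def munk_sound_speed(z):
    """Munk's canonical profile: c(z) = 1500*(1 + eps*(eta - 1 + exp(-eta))),
    with eta = 2*(z - z_axis)/B, channel axis z_axis = 1300 m, scale
    B = 1300 m, and eps = 0.00737. Sound speed is minimum at the axis."""
    eta = 2.0 * (z - 1300.0) / 1300.0
    return 1500.0 * (1.0 + 0.00737 * (eta - 1.0 + math.exp(-eta)))
```

    Sound refracts toward the speed minimum, so energy launched near the axis stays trapped in the channel, which is what permits the very long propagation ranges mentioned above.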

  2. Puget Sound Dredged Disposal Analysis (PSDDA). Unconfined, Open-Water Disposal Sites for Dredged Material. Phase 1 (Central Puget Sound). National Environmental Policy Act (NEPA)/State Environmental Policy Act (SEPA)

    DTIC Science & Technology

    1988-06-01

    confined to a relatively small area. In 400 feet of water the descending cloud is approximately 250 feet in diameter (B. Trawle, personal communication) ... when it hits the bottom, occurring 30 seconds after disposal is initiated. The collapsing cloud then spreads out in all directions. Ten minutes later ... Compliance inspections and environmental monitoring, also part of disposal site management, are described in the MPR and the Management

  3. The Absent Presence of the Parental Generation: Incest and the Ordering of Experience in The Sound and The Fury

    DTIC Science & Technology

    1993-04-25

    Father and son are secret sharers in defeat, and the argument between them echoed in the second section suggests not so much a duel as a doleful ... in a duel. Failing in all five parts of Wyatt-Brown's definition of honorable conduct, Quentin shamefully relives the act in many forms throughout his ... Psychoanalysis 3 (1975): 151-62. Bleikasten, Andre. The Ink of Melancholy: Faulkner's Novels from The Sound and the Fury to Light in August. Bloomington

  4. Acoustic scale modelling of factories, part II: 1:50 scale model investigations of factory sound fields

    NASA Astrophysics Data System (ADS)

    Hodgson, M. R.; Orlowski, R. J.

    1987-03-01

    In this second part of a report on factory scale modelling use of a 1:50 scale variable model as a research tool is described. Details of the model are presented. The results of measurements of reverberation time and sound propagation, made in various model configurations, are used to investigate the main factors influencing factory sound fields, and the applicability of the Sabine theory to factories. The parameters investigated are the enclosure geometry (aspect ratio, volume and roof pitch), surface absorption and fittings (density, size, surface area, vertical distribution and specific types). Despite certain limitations and uncertainties resulting, for example, from surprising results associated with surface absorption, models are shown to be effective research tools. The inapplicability of the Sabine theory is confirmed and elucidated.
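
    The Sabine prediction that the model measurements are tested against is the classical diffuse-field formula; a one-line sketch in its metric form (0.161 constant):

```python
def sabine_rt60(volume, surface_areas, absorption_coeffs):
    """Sabine reverberation time RT60 = 0.161 * V / A, with total
    absorption A = sum(S_i * alpha_i) in metric sabins."""
    A = sum(s * a for s, a in zip(surface_areas, absorption_coeffs))
    return 0.161 * volume / A
```

    Fitted factory spaces violate the diffuse-field assumption behind this formula, which is consistent with the study's finding that Sabine theory is inapplicable.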

  5. An application of boundary element method calculations to hearing aid systems: The influence of the human head

    NASA Astrophysics Data System (ADS)

    Rasmussen, Karsten B.; Juhl, Peter

    2004-05-01

    Boundary element method (BEM) calculations are used for the purpose of predicting the acoustic influence of the human head in two cases. In the first case the sound source is the mouth and in the second case the sound is plane waves arriving from different directions in the horizontal plane. In both cases the sound field is studied in relation to two positions above the right ear being representative of hearing aid microphone positions. Both cases are relevant for hearing aid development. The calculations are based upon a direct BEM implementation in Matlab. The meshing is based on the original geometrical data files describing the B&K Head and Torso Simulator 4128 combined with a 3D scan of the pinna.

  6. A Multi-State Model for Catalyzing the Home Energy Efficiency Market

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blackmon, Glenn

    The RePower Kitsap partnership sought to jump-start the market for energy efficiency upgrades in Kitsap County, an underserved market on Puget Sound in Washington State. The Washington State Department of Commerce partnered with the Washington State University (WSU) Energy Program to supplement and extend existing utility incentives offered by Puget Sound Energy (PSE) and Cascade Natural Gas, and to offer energy efficiency finance options through the Kitsap Credit Union and Puget Sound Cooperative Credit Union (PSCCU). RePower Kitsap established a coordinated approach with a second Better Buildings Neighborhood Program project serving the two largest cities in the county – Bainbridge Island and Bremerton. These two projects shared both the "RePower" brand and implementation team (Conservation Services Group (CSG) and Earth Advantage).

  7. Use of signal analysis of heart sounds and murmurs to assess severity of mitral valve regurgitation attributable to myxomatous mitral valve disease in dogs.

    PubMed

    Ljungvall, Ingrid; Ahlstrom, Christer; Höglund, Katja; Hult, Peter; Kvart, Clarence; Borgarelli, Michele; Ask, Per; Häggström, Jens

    2009-05-01

    To investigate the use of signal analysis of heart sounds and murmurs in assessing severity of mitral regurgitation (MR) in dogs with myxomatous mitral valve disease (MMVD), cardiac sounds were recorded from 77 client-owned dogs evaluated by use of auscultatory and echocardiographic classification systems. Signal analysis techniques were developed to extract 7 sound variables (first frequency peak, murmur energy ratio, murmur duration > 200 Hz, sample entropy and first minimum of the auto mutual information function of the murmurs, and energy ratios of the first heart sound [S1] and second heart sound [S2]). Significant associations were detected between severity of MR and all sound variables except the energy ratio of S1. An increase in severity of MR resulted in a greater contribution of higher frequencies, increased signal irregularity, and a decreased energy ratio of S2. The optimal combination of variables for distinguishing dogs with high-intensity murmurs from other dogs was the energy ratio of S2 and murmur duration > 200 Hz (sensitivity, 79%; specificity, 71%) by use of the auscultatory classification. By use of the echocardiographic classification, the corresponding variables were auto mutual information, first frequency peak, and energy ratio of S2 (sensitivity, 88%; specificity, 82%). Most of the investigated sound variables were significantly associated with severity of MR, indicating powerful diagnostic potential for monitoring MMVD. Signal analysis techniques could be valuable for clinicians when performing risk assessment or determining whether special care and more extensive examinations are required.

  8. Crossmodal semantic priming by naturalistic sounds and spoken words enhances visual sensitivity.

    PubMed

    Chen, Yi-Chuan; Spence, Charles

    2011-10-01

We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? And, second, do naturalistic sounds (e.g., a dog's "woofing") and spoken words (e.g., /dɔg/) elicit similar semantic priming effects? Here, we estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sounds led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously, Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When using a dual picture detection/identification task, both kinds of auditory stimulus induced a similar semantic priming effect (Experiment 4B). Therefore, we suggest that sufficient processing time is needed for the auditory stimulus to access its associated meaning before it can modulate visual perception. Moreover, the interactions between pictures and the two types of sounds depend not only on their processing route to access semantic representations, but also on the response to be made to fulfill the requirements of the task.
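
    Sensitivity and response criterion in such a detection task are standard signal detection theory quantities. A minimal sketch of their computation from hit and false-alarm rates; the rates below are invented for illustration:

    ```python
    from statistics import NormalDist

    def sdt_measures(hit_rate, fa_rate):
        """Sensitivity d' and response criterion c from hit/false-alarm rates."""
        z = NormalDist().inv_cdf                       # inverse standard normal CDF
        d_prime = z(hit_rate) - z(fa_rate)
        criterion = -0.5 * (z(hit_rate) + z(fa_rate))
        return d_prime, criterion

    d_prime, criterion = sdt_measures(0.85, 0.30)      # illustrative rates
    ```

    A prime that raises sensitivity shows up as a larger d' at unchanged criterion, which is how the visual-sensitivity enhancement above is separated from a mere shift in response bias.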

  9. Harmonic Hopping, and Both Punctuated and Gradual Evolution of Acoustic Characters in Selasphorus Hummingbird Tail-Feathers

    PubMed Central

    Clark, Christopher James

    2014-01-01

    Models of character evolution often assume a single mode of evolutionary change, such as continuous, or discrete. Here I provide an example in which a character exhibits both types of change. Hummingbirds in the genus Selasphorus produce sound with fluttering tail-feathers during courtship. The ancestral character state within Selasphorus is production of sound with an inner tail-feather, R2, in which the sound usually evolves gradually. Calliope and Allen's Hummingbirds have evolved autapomorphic acoustic mechanisms that involve feather-feather interactions. I develop a source-filter model of these interactions. The ‘source’ comprises feather(s) that are both necessary and sufficient for sound production, and are aerodynamically coupled to neighboring feathers, which act as filters. Filters are unnecessary or insufficient for sound production, but may evolve to become sources. Allen's Hummingbird has evolved to produce sound with two sources, one with feather R3, another frequency-modulated sound with R4, and their interaction frequencies. Allen's R2 retains the ancestral character state, a ∼1 kHz “ghost” fundamental frequency masked by R3, which is revealed when R3 is experimentally removed. In the ancestor to Allen's Hummingbird, the dominant frequency has ‘hopped’ to the second harmonic without passing through intermediate frequencies. This demonstrates that although the fundamental frequency of a communication sound may usually evolve gradually, occasional jumps from one character state to another can occur in a discrete fashion. Accordingly, mapping acoustic characters on a phylogeny may produce misleading results if the physical mechanism of production is not known. PMID:24722049

  10. Visualization of Heart Sounds and Motion Using Multichannel Sensor

    NASA Astrophysics Data System (ADS)

    Nogata, Fumio; Yokota, Yasunari; Kawamura, Yoko

    2010-06-01

    As there are various difficulties associated with auscultation techniques, we have devised a technique for visualizing heart motion in order to assist in the understanding of heartbeat for both doctors and patients. Auscultatory sounds were first visualized using FFT and Wavelet analysis to visualize heart sounds. Next, to show global and simultaneous heart motions, a new technique for visualization was established. The visualization system consists of a 64-channel unit (63 acceleration sensors and one ECG sensor) and a signal/image analysis unit. The acceleration sensors were arranged in a square array (8×8) with a 20-mm pitch interval, which was adhered to the chest surface. The heart motion of one cycle was visualized at a sampling frequency of 3 kHz and quantization of 12 bits. The visualized results showed a typical waveform motion of the strong pressure shock due to closing tricuspid valve and mitral valve of the cardiac apex (first sound), and the closing aortic and pulmonic valve (second sound) in sequence. To overcome difficulties in auscultation, the system can be applied to the detection of heart disease and to the digital database management of the auscultation examination in medical areas.
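
    The 8×8, 3 kHz acquisition described above lends itself to a frame-by-frame intensity map. A minimal sketch with random data standing in for the sensor channels; the frame length and RMS measure are illustrative, not the authors' FFT/wavelet method:

    ```python
    import numpy as np

    fs = 3000                                   # 3 kHz sampling, as in the system
    n_channels, n_samples = 64, fs              # 8x8 grid, one second of data
    rng = np.random.default_rng(0)
    signals = rng.standard_normal((n_channels, n_samples))  # stand-in sensor data

    def frame_energy_maps(signals, frame=300):
        """Per-channel RMS energy per 100 ms frame, reshaped to 8x8 images."""
        n_frames = signals.shape[1] // frame
        x = signals[:, :n_frames * frame].reshape(len(signals), n_frames, frame)
        rms = np.sqrt((x ** 2).mean(axis=2))    # shape: (channels, frames)
        return rms.T.reshape(n_frames, 8, 8)    # one 8x8 intensity image per frame

    maps = frame_energy_maps(signals)           # sequence of 8x8 images
    ```

    Played in sequence, such images would show the first- and second-sound shocks sweeping across the chest wall, which is the visualization the multichannel system provides.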

  11. Nonspeech oral motor treatment issues related to children with developmental speech sound disorders.

    PubMed

    Ruscello, Dennis M

    2008-07-01

    This article examines nonspeech oral motor treatments (NSOMTs) in the population of clients with developmental speech sound disorders. NSOMTs are a collection of nonspeech methods and procedures that claim to influence tongue, lip, and jaw resting postures; increase strength; improve muscle tone; facilitate range of motion; and develop muscle control. In the case of developmental speech sound disorders, NSOMTs are employed before or simultaneous with actual speech production treatment. First, NSOMTs are defined for the reader, and there is a discussion of NSOMTs under the categories of active muscle exercise, passive muscle exercise, and sensory stimulation. Second, different theories underlying NSOMTs along with the implications of the theories are discussed. Finally, a review of pertinent investigations is presented. The application of NSOMTs is questionable due to a number of reservations that include (a) the implied cause of developmental speech sound disorders, (b) neurophysiologic differences between the limbs and oral musculature, (c) the development of new theories of movement and movement control, and (d) the paucity of research literature concerning NSOMTs. There is no substantive evidence to support NSOMTs as interventions for children with developmental speech sound disorders.

  12. 3rd grade English language learners making sense of sound

    NASA Astrophysics Data System (ADS)

    Suarez, Enrique; Otero, Valerie

    2013-01-01

    Despite the extensive body of research that supports scientific inquiry and argumentation as cornerstones of physics learning, these strategies continue to be virtually absent in most classrooms, especially those that involve students who are learning English as a second language. This study presents results from an investigation of 3rd grade students' discourse about how length and tension affect the sound produced by a string. These students came from a variety of language backgrounds, and all were learning English as a second language. Our results demonstrate varying levels, and uses, of experiential, imaginative, and mechanistic reasoning strategies. Using specific examples from students' discourse, we will demonstrate some of the productive aspects of working within multiple language frameworks for making sense of physics. Conjectures will be made about how to utilize physics as a context for English Language Learners to further conceptual understanding, while developing their competence in the English language.

  13. Energy Flux in the Cochlea: Evidence Against Power Amplification of the Traveling Wave.

    PubMed

    van der Heijden, Marcel; Versteegh, Corstiaen P C

    2015-10-01

    Traveling waves in the inner ear exhibit an amplitude peak that shifts with frequency. The peaking is commonly believed to rely on motile processes that amplify the wave by inserting energy. We recorded the vibrations at adjacent positions on the basilar membrane in sensitive gerbil cochleae and tested the putative power amplification in two ways. First, we determined the energy flux of the traveling wave at its peak and compared it to the acoustic power entering the ear, thereby obtaining the net cochlear power gain. For soft sounds, the energy flux at the peak was 1 ± 0.6 dB less than the middle ear input power. For more intense sounds, increasingly smaller fractions of the acoustic power actually reached the peak region. Thus, we found no net power amplification of soft sounds and a strong net attenuation of intense sounds. Second, we analyzed local wave propagation on the basilar membrane. We found that the waves slowed down abruptly when approaching their peak, causing an energy densification that quantitatively matched the amplitude peaking, similar to the growth of sea waves approaching the beach. Thus, we found no local power amplification of soft sounds and strong local attenuation of intense sounds. The most parsimonious interpretation of these findings is that cochlear sensitivity is not realized by amplifying acoustic energy, but by spatially focusing it, and that dynamic compression is realized by adjusting the amount of dissipation to sound intensity.

  14. Fatigue sensation induced by the sounds associated with mental fatigue and its related neural activities: revealed by magnetoencephalography

    PubMed Central

    2013-01-01

Background It has been proposed that an inappropriately conditioned fatigue sensation could be one cause of chronic fatigue. Although classical conditioning of the fatigue sensation has been reported in rats, there have been no reports in humans. Our aim was to examine whether classical conditioning of the mental fatigue sensation can take place in humans and to clarify the neural mechanisms of fatigue sensation using magnetoencephalography (MEG). Methods Ten and nine healthy volunteers participated in a conditioning and a control experiment, respectively. In the conditioning experiment, we used metronome sounds as conditioned stimuli and two-back task trials as unconditioned stimuli to cause fatigue sensation. Participants underwent MEG measurement while listening to the metronome sounds for 6 min. Thereafter, fatigue-inducing mental task trials (two-back task trials), which are demanding working-memory task trials, were performed for 60 min; metronome sounds were started 30 min after the start of the task trials (conditioning session). The next day, neural activities while listening to the metronome for 6 min were measured. Levels of fatigue sensation were also assessed using a visual analogue scale. In the control experiment, participants listened to the metronome on the first and second days, but did not perform the conditioning session. MEG was not recorded in the control experiment. Results The level of fatigue sensation caused by listening to the metronome on the second day was significantly higher relative to that on the first day only when participants performed the conditioning session on the first day. Equivalent current dipoles (ECDs) in the insular cortex, with mean latencies of approximately 190 ms, were observed in six of eight participants after the conditioning session, although ECDs were not identified in any participant before the conditioning session.
Conclusions We demonstrated that the metronome sounds can cause mental fatigue sensation as a result of repeated pairings of the sounds with mental fatigue and that the insular cortex is involved in the neural substrates of this phenomenon. PMID:23764106

  15. The Critical Period for Second Language Pronunciation: Is There Such a Thing? Ten Case Studies of Late Starters who Attained a Native-like Hebrew Accent

    ERIC Educational Resources Information Center

    Abu-Rabia, Salim; Kehat, Simona

    2004-01-01

    This paper investigates the critical period hypothesis (CPH) for the acquisition of a second language sound system (phonology) in a naturalistic setting. Ten cases of successful late-starters with a native-like Hebrew pronunciation are presented in an effort to determine possible variables that may account for their exceptional accomplishment. The…

  16. Acoustics and perception of overtone singing.

    PubMed

    Bloothooft, G; Bringmann, E; van Cappellen, M; van Luipen, J B; Thomassen, K P

    1992-10-01

Overtone singing, a technique of Asian origin, is a special type of voice production resulting in a very pronounced, high and separate tone that can be heard over a more or less constant drone. An acoustic analysis is presented of the phenomenon and the results are described in terms of the classical theory of speech production. The overtone sound may be interpreted as the result of an interaction of closely spaced formants. For the lower overtones, these may be the first and second formant, separated from the lower harmonics by a nasal pole-zero pair, as the result of a nasalized articulation shifting from /ɔ/ to /a/, or, as an alternative, the second formant alone, separated from the first formant by the nasal pole-zero pair, again as the result of a nasalized articulation around /ɔ/. For overtones with a frequency higher than 800 Hz, the overtone sound can be explained as a combination of the second and third formant as the result of a careful, retroflex, and rounded articulation from /ɔ/, via schwa /ə/ to /y/ and /i/ for the highest overtones. The results indicate a firm and relatively long closure of the glottis during overtone phonation. The corresponding short open duration of the glottis introduces a glottal formant that may enhance the amplitude of the intended overtone. Perception experiments showed that listeners categorized the overtone sounds differently from normally sung vowels, which possibly has its basis in an independent perception of the small bandwidth of the resonance underlying the overtone. Their verbal judgments were in agreement with the presented phonetic-acoustic explanation.

  17. Brain potentials to native phoneme discrimination reveal the origin of individual differences in learning the sounds of a second language

    PubMed Central

    Díaz, Begoña; Baus, Cristina; Escera, Carles; Costa, Albert; Sebastián-Gallés, Núria

    2008-01-01

    Human beings differ in their ability to master the sounds of their second language (L2). Phonetic training studies have proposed that differences in phonetic learning stem from differences in psychoacoustic abilities rather than speech-specific capabilities. We aimed at finding the origin of individual differences in L2 phonetic acquisition in natural learning contexts. We consider two alternative explanations: a general psychoacoustic origin vs. a speech-specific one. For this purpose, event-related potentials (ERPs) were recorded from two groups of early, proficient Spanish-Catalan bilinguals who differed in their mastery of the Catalan (L2) phonetic contrast /e-ε/. Brain activity in response to acoustic change detection was recorded in three different conditions involving tones of different length (duration condition), frequency (frequency condition), and presentation order (pattern condition). In addition, neural correlates of speech change detection were also assessed for both native (/o/-/e/) and nonnative (/o/-/ö/) phonetic contrasts (speech condition). Participants' discrimination accuracy, reflected electrically as a mismatch negativity (MMN), was similar between the two groups of participants in the three acoustic conditions. Conversely, the MMN was reduced in poor perceivers (PP) when they were presented with speech sounds. Therefore, our results support a speech-specific origin of individual variability in L2 phonetic mastery. PMID:18852470

  18. A Study on the Model of Detecting the Liquid Level of Sealed Containers Based on Kirchhoff Approximation Theory.

    PubMed

    Zhang, Bin; Song, Wen-Ai; Wei, Yue-Juan; Zhang, Dong-Song; Liu, Wen-Yi

    2017-06-15

By simulating the sound field of a round piston transducer with the Kirchhoff integral theorem and analyzing the shape and propagation characteristics of ultrasound beams in a metal container wall, this study presents a model for calculating the echo sound pressure based on the Kirchhoff paraxial approximation theory. Building on this model, and exploiting the difference in ultrasonic impedance between gas and liquid media, a method is proposed for detecting the liquid level from outside a sealed container. The proposed method is evaluated through two groups of experiments. In the first group, three liquid media with different ultrasonic impedances are used as detection targets, and the echo sound pressure is calculated with the proposed model for four different wall thicknesses. The variation of the echo sound pressure over the detection process is analyzed, and the effects of the liquids' differing ultrasonic impedances on the echo sound pressure are compared. In the second group, taking water as an example, two transducers with different radii are used to measure the liquid level under the four wall thicknesses. Combining these results with the sound field characteristics, the influence of transducer size on the pressure calculation and on detection resolution is discussed. Finally, the experimental results indicate that measurement uncertainty is better than ±5 mm, which meets industrial inspection requirements.

  19. A Study on the Model of Detecting the Liquid Level of Sealed Containers Based on Kirchhoff Approximation Theory

    PubMed Central

    Zhang, Bin; Song, Wen-Ai; Wei, Yue-Juan; Zhang, Dong-Song; Liu, Wen-Yi

    2017-01-01

By simulating the sound field of a round piston transducer with the Kirchhoff integral theorem and analyzing the shape and propagation characteristics of ultrasound beams in a metal container wall, this study presents a model for calculating the echo sound pressure based on the Kirchhoff paraxial approximation theory. Building on this model, and exploiting the difference in ultrasonic impedance between gas and liquid media, a method is proposed for detecting the liquid level from outside a sealed container. The proposed method is evaluated through two groups of experiments. In the first group, three liquid media with different ultrasonic impedances are used as detection targets, and the echo sound pressure is calculated with the proposed model for four different wall thicknesses. The variation of the echo sound pressure over the detection process is analyzed, and the effects of the liquids' differing ultrasonic impedances on the echo sound pressure are compared. In the second group, taking water as an example, two transducers with different radii are used to measure the liquid level under the four wall thicknesses. Combining these results with the sound field characteristics, the influence of transducer size on the pressure calculation and on detection resolution is discussed. Finally, the experimental results indicate that measurement uncertainty is better than ±5 mm, which meets industrial inspection requirements. PMID:28617326
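
    The detection principle rests on the impedance mismatch at the inner wall surface: a gas-backed wall reflects nearly all incident energy, while a liquid backing lets measurably more through. A sketch of the standard normal-incidence pressure reflection coefficient; the MRayl values are textbook-level approximations, not the paper's data:

    ```python
    def reflection_coefficient(z1, z2):
        """Pressure reflection coefficient at a normal-incidence interface."""
        return (z2 - z1) / (z2 + z1)

    # Approximate characteristic acoustic impedances in MRayl (illustrative).
    Z_STEEL, Z_WATER, Z_AIR = 45.0, 1.5, 0.0004

    r_gas = reflection_coefficient(Z_STEEL, Z_AIR)       # wall backed by gas
    r_liquid = reflection_coefficient(Z_STEEL, Z_WATER)  # wall backed by liquid
    ```

    The echo amplitude measured from outside therefore drops once the probe passes below the liquid surface, which is the signature the proposed method looks for.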

  20. Human cortical organization for processing vocalizations indicates representation of harmonic structure as a signal attribute

    PubMed Central

    Lewis, James W.; Talkington, William J.; Walker, Nathan A.; Spirou, George A.; Jajosky, Audrey; Frum, Chris

    2009-01-01

    The ability to detect and rapidly process harmonic sounds, which in nature are typical of animal vocalizations and speech, can be critical for communication among conspecifics and for survival. Single-unit studies have reported neurons in auditory cortex sensitive to specific combinations of frequencies (e.g. harmonics), theorized to rapidly abstract or filter for specific structures of incoming sounds, where large ensembles of such neurons may constitute spectral templates. We studied the contribution of harmonic structure to activation of putative spectral templates in human auditory cortex by using a wide variety of animal vocalizations, as well as artificially constructed iterated rippled noises (IRNs). Both the IRNs and vocalization sounds were quantitatively characterized by calculating a global harmonics-to-noise ratio (HNR). Using fMRI we identified HNR-sensitive regions when presenting either artificial IRNs and/or recordings of natural animal vocalizations. This activation included regions situated between functionally defined primary auditory cortices and regions preferential for processing human non-verbal vocalizations or speech sounds. These results demonstrate that the HNR of sound reflects an important second-order acoustic signal attribute that parametrically activates distinct pathways of human auditory cortex. Thus, these results provide novel support for putative spectral templates, which may subserve a major role in the hierarchical processing of vocalizations as a distinct category of behaviorally relevant sound. PMID:19228981
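
    The global harmonics-to-noise ratio used above is, in essence, the energy of the harmonic part of a sound relative to its noise part, in dB. A toy sketch in which the two components are known by construction; real HNR estimation must separate them from the recording itself, and the 200 Hz stack and noise level here are made up:

    ```python
    import numpy as np

    def hnr_db(harmonic_part, noise_part):
        """HNR in dB from separately known components (toy decomposition)."""
        return 10.0 * np.log10((harmonic_part ** 2).sum() / (noise_part ** 2).sum())

    fs = 8000
    t = np.arange(fs) / fs
    # Harmonic stack: 200 Hz fundamental plus two weaker overtones.
    harm = sum(np.sin(2 * np.pi * 200 * k * t) / k for k in (1, 2, 3))
    rng = np.random.default_rng(1)
    noise = 0.1 * rng.standard_normal(fs)
    hnr = hnr_db(harm, noise)                   # roughly 18 dB for this mixture
    ```

    Vocalizations sit at the high-HNR end of such a scale and broadband noises at the low end, which is the parametric axis along which the fMRI activation varied.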

  1. The cross-linguistic transfer of early literacy skills: the role of initial L1 and L2 skills and language of instruction.

    PubMed

    Cárdenas-Hagan, Elsa; Carlson, Coleen D; Pollard-Durodola, Sharolyn D

    2007-07-01

    The purpose of this study was to examine the effects of initial first and second language proficiencies as well as the language of instruction that a student receives on the relationship between native language ability of students who are English language learners (ELLs) and their development of early literacy skills and the second language. This study investigated the development of early language and literacy skills among Spanish-speaking students in 2 large urban school districts, 1 middle-size urban district, and 1 border district. A total of 1,016 ELLs in kindergarten participated in the study. Students were administered a comprehensive battery of tests in English and Spanish, and classroom observations provided information regarding the Spanish or English language use of the teacher. Findings from this study suggest that Spanish-speaking students with high Spanish letter name and sound knowledge tend to show high levels of English letter name and sound knowledge. ELLs with low Spanish and English letter name and sound knowledge tend to show high levels of English letter name and sound knowledge when they are instructed in English. Letter name and sound identification skills are fairly highly positively correlated across languages in the beginning of the kindergarten year. In addition, phonological awareness skills appear to be the area with the most significant and direct transfer of knowledge, and language skills do not appear to be a factor in the development of phonological awareness. Finally, the relationship between oral language skills across languages was low, suggesting little relationship between oral language skills across languages at the beginning of the kindergarten year. Results from this study suggest that pedagogical decisions for ELLs should not only consider effective instructional literacy strategies but also acknowledge that the language of instruction for Spanish-speaking ELLs may produce varying results for different students.

  2. Spectral analysis of bowel sounds in intestinal obstruction using an electronic stethoscope.

    PubMed

    Ching, Siok Siong; Tan, Yih Kai

    2012-09-07

    To determine the value of bowel sounds analysis using an electronic stethoscope to support a clinical diagnosis of intestinal obstruction. Subjects were patients who presented with a diagnosis of possible intestinal obstruction based on symptoms, signs, and radiological findings. A 3M™ Littmann(®) Model 4100 electronic stethoscope was used in this study. With the patients lying supine, six 8-second recordings of bowel sounds were taken from each patient from the lower abdomen. The recordings were analysed for sound duration, sound-to-sound interval, dominant frequency, and peak frequency. Clinical and radiological data were reviewed and the patients were classified as having either acute, subacute, or no bowel obstruction. Comparison of bowel sound characteristics was made between these subgroups of patients. In the presence of an obstruction, the site of obstruction was identified and bowel calibre was also measured to correlate with bowel sounds. A total of 71 patients were studied during the period July 2009 to January 2011. Forty patients had acute bowel obstruction (27 small bowel obstruction and 13 large bowel obstruction), 11 had subacute bowel obstruction (eight in the small bowel and three in large bowel) and 20 had no bowel obstruction (diagnoses of other conditions were made). Twenty-five patients received surgical intervention (35.2%) during the same admission for acute abdominal conditions. A total of 426 recordings were made and 420 recordings were used for analysis. There was no significant difference in sound-to-sound interval, dominant frequency, and peak frequency among patients with acute bowel obstruction, subacute bowel obstruction, and no bowel obstruction. In acute large bowel obstruction, the sound duration was significantly longer (median 0.81 s vs 0.55 s, P = 0.021) and the dominant frequency was significantly higher (median 440 Hz vs 288 Hz, P = 0.003) when compared to acute small bowel obstruction. 
No significant difference was seen between acute large bowel obstruction and large bowel pseudo-obstruction. For patients with small bowel obstruction, the sound-to-sound interval was significantly longer in those who subsequently underwent surgery compared with those treated non-operatively (median 1.29 s vs 0.63 s, P < 0.001). There was no correlation between bowel calibre and bowel sound characteristics in either acute small bowel obstruction or acute large bowel obstruction. Auscultation of bowel sounds is non-specific for diagnosing bowel obstruction. Differences in sound characteristics between large bowel and small bowel obstruction may help determine the likely site of obstruction.

  3. Spectral analysis of bowel sounds in intestinal obstruction using an electronic stethoscope

    PubMed Central

    Ching, Siok Siong; Tan, Yih Kai

    2012-01-01

    AIM: To determine the value of bowel sounds analysis using an electronic stethoscope to support a clinical diagnosis of intestinal obstruction. METHODS: Subjects were patients who presented with a diagnosis of possible intestinal obstruction based on symptoms, signs, and radiological findings. A 3M™ Littmann® Model 4100 electronic stethoscope was used in this study. With the patients lying supine, six 8-second recordings of bowel sounds were taken from each patient from the lower abdomen. The recordings were analysed for sound duration, sound-to-sound interval, dominant frequency, and peak frequency. Clinical and radiological data were reviewed and the patients were classified as having either acute, subacute, or no bowel obstruction. Comparison of bowel sound characteristics was made between these subgroups of patients. In the presence of an obstruction, the site of obstruction was identified and bowel calibre was also measured to correlate with bowel sounds. RESULTS: A total of 71 patients were studied during the period July 2009 to January 2011. Forty patients had acute bowel obstruction (27 small bowel obstruction and 13 large bowel obstruction), 11 had subacute bowel obstruction (eight in the small bowel and three in large bowel) and 20 had no bowel obstruction (diagnoses of other conditions were made). Twenty-five patients received surgical intervention (35.2%) during the same admission for acute abdominal conditions. A total of 426 recordings were made and 420 recordings were used for analysis. There was no significant difference in sound-to-sound interval, dominant frequency, and peak frequency among patients with acute bowel obstruction, subacute bowel obstruction, and no bowel obstruction. In acute large bowel obstruction, the sound duration was significantly longer (median 0.81 s vs 0.55 s, P = 0.021) and the dominant frequency was significantly higher (median 440 Hz vs 288 Hz, P = 0.003) when compared to acute small bowel obstruction. 
No significant difference was seen between acute large bowel obstruction and large bowel pseudo-obstruction. For patients with small bowel obstruction, the sound-to-sound interval was significantly longer in those who subsequently underwent surgery compared with those treated non-operatively (median 1.29 s vs 0.63 s, P < 0.001). There was no correlation between bowel calibre and bowel sound characteristics in either acute small bowel obstruction or acute large bowel obstruction. CONCLUSION: Auscultation of bowel sounds is non-specific for diagnosing bowel obstruction. Differences in sound characteristics between large bowel and small bowel obstruction may help determine the likely site of obstruction. PMID:22969233
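
    Two of the reported descriptors, dominant and peak frequency, reduce to locating maxima in the spectrum of each recording. A minimal sketch of a dominant-frequency estimate on a synthetic segment; the 440/150 Hz mixture and segment length are illustrative, not patient data:

    ```python
    import numpy as np

    def dominant_frequency(x, fs):
        """Frequency (Hz) of the largest magnitude-spectrum peak, excluding DC."""
        spec = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        return freqs[1:][np.argmax(spec[1:])]

    fs = 4000
    t = np.arange(int(0.8 * fs)) / fs               # a 0.8 s sound segment
    x = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 150 * t)
    peak_hz = dominant_frequency(x, fs)             # -> 440.0
    ```

    Applied to the study's recordings, this is the kind of measure that separated large bowel obstruction (median 440 Hz) from small bowel obstruction (median 288 Hz).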

  4. Cosmological perturbations in mimetic Horndeski gravity

    NASA Astrophysics Data System (ADS)

    Arroja, Frederico; Bartolo, Nicola; Karmakar, Purnendu; Matarrese, Sabino

    2016-04-01

We study linear scalar perturbations around a flat FLRW background in mimetic Horndeski gravity. In the absence of matter, we show that the Newtonian potential satisfies a second-order differential equation with no spatial derivatives. This implies that the sound speed for scalar perturbations is exactly zero on this background. We also show that the sound speed is likewise zero in mimetic G3 theories. We obtain the equation of motion for the comoving curvature perturbation (a first-order differential equation) and solve it to find that the comoving curvature perturbation is constant on all scales in mimetic Horndeski gravity. We find solutions of the Newtonian potential evolution equation in two simple models. Finally, we show that the sound speed is zero on all backgrounds, and therefore the system has no wave-like scalar degrees of freedom.

  5. Statistical mechanics of self-driven Carnot cycles.

    PubMed

    Smith, E

    1999-10-01

    The spontaneous generation and finite-amplitude saturation of sound, in a traveling-wave thermoacoustic engine, are derived as properties of a second-order phase transition. It has previously been argued that this dynamical phase transition, called "onset," has an equivalent equilibrium representation, but the saturation mechanism and scaling were not computed. In this work, the sound modes implementing the engine cycle are coarse-grained and statistically averaged, in a partition function derived from microscopic dynamics on criteria of scale invariance. Self-amplification performed by the engine cycle is introduced through higher-order modal interactions. Stationary points and fluctuations of the resulting phenomenological Lagrangian are analyzed and related to background dynamical currents. The scaling of the stable sound amplitude near the critical point is derived and shown to arise universally from the interaction of finite-temperature disorder, with the order induced by self-amplification.

  6. The effect of methacholine-induced acute airway narrowing on lung sounds in normal and asthmatic subjects.

    PubMed

    Schreur, H J; Vanderschoot, J; Zwinderman, A H; Dijkman, J H; Sterk, P J

    1995-02-01

    The association between lung sound alterations and airways obstruction has long been recognized in clinical practice, but the precise pathophysiological mechanisms of this relationship have not been determined. Therefore, we examined the changes in lung sounds at well-defined levels of methacholine-induced airway narrowing in eight normal and nine asthmatic subjects with normal baseline lung function. All subjects underwent phonopneumography at baseline condition and at > or = 20% fall in forced expiratory volume in one second (FEV1), and in asthmatic subjects also at > or = 40% fall in FEV1. Lung sounds were recorded at three locations on the chest wall during standardized quiet breathing, and during maximal forced breathing. Airflow-dependent power spectra were computed using fast Fourier transform. For each spectrum, we determined the intensity and frequency content of lung sounds, together with the extent of wheezing. The results were analysed using analysis of variance (ANOVA). During acute airway narrowing, the intensity and frequency content of the recorded sounds, as well as the extent of wheezing, were higher than at baseline in both groups of subjects. At similar levels of obstruction, both the pitch and the change in sound intensity with airflow were higher in asthmatics than in normal subjects. Wheezing, being nondiscriminative between the subject groups at baseline, was more prominent in asthmatics than in normal subjects at 20% fall in FEV1. We conclude that, at given levels of acute airway narrowing, lung sounds differ between asthmatics and normal subjects. This suggests that airflow-standardized phonopneumography is a sensitive method for detecting abnormalities in airway dynamics in asthma.(ABSTRACT TRUNCATED AT 250 WORDS)

  7. Adaptation in sound localization processing induced by interaural time difference in amplitude envelope at high frequencies.

    PubMed

    Kawashima, Takayuki; Sato, Takao

    2012-01-01

    When a second sound follows a long first sound, its location appears to be perceived away from the first one (the localization/lateralization aftereffect). This aftereffect has often been considered to reflect an efficient neural coding of sound locations in the auditory system. To understand determinants of the localization aftereffect, the current study examined whether it is induced by an interaural temporal difference (ITD) in the amplitude envelope of high frequency transposed tones (over 2 kHz), which is known to function as a sound localization cue. In Experiment 1, participants were required to adjust the position of a pointer to the perceived location of test stimuli before and after adaptation. Test and adapter stimuli were amplitude modulated (AM) sounds presented at high frequencies and their positional differences were manipulated solely by the envelope ITD. Results showed that the adapter's ITD systematically affected the perceived position of test sounds to the directions expected from the localization/lateralization aftereffect when the adapter was presented at ±600 µs ITD; a corresponding significant effect was not observed for a 0 µs ITD adapter. In Experiment 2, the observed adapter effect was confirmed using a forced-choice task. It was also found that adaptation to the AM sounds at high frequencies did not significantly change the perceived position of pure-tone test stimuli in the low frequency region (128 and 256 Hz). The findings in the current study indicate that ITD in the envelope at high frequencies induces the localization aftereffect. This suggests that ITD in the high frequency region is involved in adaptive plasticity of auditory localization processing.

  8. Full Spatial Resolution Infrared Sounding Application in the Preconvection Environment

    NASA Astrophysics Data System (ADS)

    Liu, C.; Liu, G.; Lin, T.

    2013-12-01

    Advanced infrared (IR) sounders such as the Atmospheric Infrared Sounder (AIRS) and Infrared Atmospheric Sounding Interferometer (IASI) provide atmospheric temperature and moisture profiles with high vertical resolution and high accuracy in preconvection environments. The derived atmospheric stability indices such as convective available potential energy (CAPE) and lifted index (LI) from advanced IR soundings can provide critical information 1-6 h before the development of severe convective storms. Three convective storms are selected for the evaluation of applying AIRS full spatial resolution soundings and the derived products for providing warning information in the preconvection environments. In the first case, the AIRS full spatial resolution soundings revealed local extremely high atmospheric instability 3 h ahead of the convection on the leading edge of a frontal system, while the second case demonstrates that the extremely high atmospheric instability is associated with the local development of a severe thunderstorm in the following hours. The third case is a local severe storm that occurred on 7-8 August 2010 in Zhou Qu, China, which caused more than 1400 deaths and left another 300 or more people missing. The AIRS full spatial resolution LI product shows the atmospheric instability 3.5 h before the storm genesis. The CAPE and LI from AIRS full spatial resolution and operational AIRS/AMSU soundings along with Geostationary Operational Environmental Satellite (GOES) Sounder derived product image (DPI) products were analyzed and compared. Case studies show that full spatial resolution AIRS retrievals provide more useful warning information in the preconvection environments for determining favorable locations for convective initiation (CI) than do the coarser spatial resolution operational soundings and lower spectral resolution GOES Sounder retrievals. The retrieved soundings are also tested in a regional WRF 3D-Var data assimilation system to evaluate their potential to assist the NWP model.

  9. Understanding and managing experiential aspects of soundscapes at Muir woods national monument.

    PubMed

    Pilcher, Ericka J; Newman, Peter; Manning, Robert E

    2009-03-01

    Research has found that human-caused noise can detract from the quality of the visitor experience in national parks and related areas. Moreover, impacts to the visitor experience can be managed by formulating indicators and standards of quality as suggested in park and outdoor recreation management frameworks, such as Visitor Experience and Resource Protection (VERP), as developed by the U.S. National Park Service. The research reported in this article supports the formulation of indicators and standards of quality for human-caused noise at Muir Woods National Monument, California. Phase I identified potential indicators of quality for the soundscape of Muir Woods. A visitor "listening exercise" was conducted, where respondents identified natural and human-caused sounds heard in the park and rated the degree to which each sound was "pleasing" or "annoying." Certain visitor-caused sounds such as groups talking were heard by most respondents and were rated as annoying, suggesting that these sounds may be a good indicator of quality. Loud groups were heard by few people but were rated as highly annoying, whereas wind and water were heard by most visitors and were rated as highly pleasing. Phase II measured standards of quality for visitor-caused noise. Visitors were presented with a series of 30-second audio clips representing increasing amounts of visitor-caused sound in the park. Respondents were asked to rate the acceptability of each audio clip on a survey. Findings suggest a threshold at which visitor-caused sound is judged to be unacceptable, and is therefore considered as noise. A parallel program of sound monitoring in the park found that current levels of visitor-caused sound sometimes violate this threshold. Study findings provide an empirical basis to help formulate noise-related indicators and standards of quality in parks and related areas.

  10. Proceedings of the Second International Congress on Recent Developments in Air- and Structure-Borne Sound and Vibration (2nd) Held in Auburn University, Alabama on 4-6 March 1992. Volume 2

    DTIC Science & Technology

    1992-03-06

    [OCR fragments from the scanned proceedings: uncorrected elastic data on lithium-zinc and lithium-cadmium ferrites (composition, bulk density, X-ray density, sound velocities, and Debye temperature, reported as decreasing with increasing zinc and cadmium content), and the listing for paper 585, "Reciprocity Method for Quantification of Airborne Sound Transfer from Machinery," Wu Qunli, Nanyang Technological University, Singapore.]

  11. Equations for normal-mode statistics of sound scattering by a rough elastic boundary in an underwater waveguide, including backscattering.

    PubMed

    Morozov, Andrey K; Colosi, John A

    2017-09-01

    Underwater sound scattering by a rough sea surface, ice, or a rough elastic bottom is studied. The study includes both the scattering from the rough boundary and the elastic effects in the solid layer. A coupled mode matrix is approximated by a linear function of one random perturbation parameter such as the ice thickness or a perturbation of the surface position. A full two-way coupled mode solution is used to derive the stochastic differential equation for the second-order statistics in a Markov approximation.

  12. Ultraviolet photometry from the Orbiting Astronomical Observatory. XXI - Absolute energy distribution of stars in the ultraviolet

    NASA Technical Reports Server (NTRS)

    Bless, R. C.; Code, A. D.; Fairchild, E. T.

    1976-01-01

    The absolute energy distribution in the ultraviolet is given for the stars alpha Vir, eta UMa, and alpha Leo. The calibration is based on absolute heterochromatic photometry between 2920 and 1370 A carried out with an Aerobee sounding rocket. The fundamental radiation standard is the synchrotron radiation from 240-MeV electrons in a certain synchrotron storage ring. On the basis of the sounding-rocket calibration, the preliminary OAO-2 spectrometer calibration has been revised; the fluxes for the three program stars are tabulated in energy per second per square centimeter per unit wavelength interval.

  13. Identification of atmospheric structure by coherent microwave sounding

    NASA Technical Reports Server (NTRS)

    Birkemeier, W. P.

    1969-01-01

    Two atmospheric probing experiments involving beyond-the-horizon propagation of microwave signals are reported. In the first experiment, the Doppler shift caused by the cross-path wind is measured by a phase-lock receiver with the common volume displaced in azimuth from the great circle. Variations in the measured Doppler-shift values are explained in terms of variations in atmospheric structure. The second experiment makes use of the pseudorandom sounding signal used in a RAKE communication system. Both multipath delay and Doppler shift are provided by the receiver, permitting the cross section of the atmospheric layer structure to be deduced.

  14. Sound Pressures and Correlations of Noise on the Fuselage of a Jet Aircraft in Flight

    NASA Technical Reports Server (NTRS)

    Shattuck, Russell D.

    1961-01-01

    Tests were conducted at altitudes of 10,000, 20,000, and 30,000 feet at speeds of Mach 0.4, 0.6, and 0.8. It was found that the sound pressure levels on the aft fuselage of a jet aircraft in flight can be estimated using an equation involving the true airspeed and the free air density. The cross-correlation coefficient over a spacing of 2.5 feet was generalized with Strouhal number. The spectrum of the noise in flight is comparatively flat up to 10,000 cycles per second.

  15. Neogene and Quaternary geology of a stratigraphic test hole on Horn Island, Mississippi Sound

    USGS Publications Warehouse

    Gohn, Gregory S.; Brewster-Wingard, G. Lynn; Cronin, Thomas M.; Edwards, Lucy E.; Gibson, Thomas G.; Rubin, Meyer; Willard, Debra A.

    1996-01-01

    During April and May, 1991, the U.S. Geological Survey (USGS) drilled a 510-ft-deep, continuously cored, stratigraphic test hole on Horn Island, Mississippi Sound, as part of a field study of the Neogene and Quaternary geology of the Mississippi coastal area. The USGS drilled two new holes at the Horn Island site. The first hole was continuously cored to a depth of 510 ft; coring stopped at this depth due to mechanical problems. To facilitate geophysical logging, an unsampled second hole was drilled to a depth of 519 ft at the same location.

  16. Acoustic Experiment to Measure the Bulk Viscosity of Near-Critical Xenon in Microgravity

    NASA Technical Reports Server (NTRS)

    Gillis, K. A.; Shinder, I.; Moldover, M. R.; Zimmerli, G. A.

    2002-01-01

    We plan a rigorous test of the theory of dynamic scaling by accurately measuring the bulk viscosity of xenon in microgravity 50 times closer to the critical temperature Tc than previous experiments. The bulk viscosity ζ (or "second viscosity" or "dilational viscosity") will be determined by measuring the sound attenuation per wavelength αλ and also measuring the frequency dependence of the speed of sound. For these measurements, we developed a unique Helmholtz resonator and specialized electro-acoustic transducers. We describe the resonator, the transducers, their performance on Earth, and their expected performance in microgravity.

  17. The Amateur Scientist.

    ERIC Educational Resources Information Center

    Walker, Jearl

    1983-01-01

    Three physics experiments are described, minimizing difficulties for amateur experimenters. One experiment demonstrates the Doppler shift of light, converting the phenomenon into sound. The second measures Planck's constant. The third measures the universal gravitational constant of Newton's theory of gravitation. (Author/JN)

  18. Profiling Canada's Families II.

    ERIC Educational Resources Information Center

    Vanier Inst. of the Family, Ottawa (Ontario).

    Noting that Canadians have witnessed profound demographic, economic, social, cultural, and technological changes over the last century and the need for sound demographic information for future planning, this report is the second to identify significant trends affecting Canada's families. Following an introductory section providing relevant…

  19. Neural plasticity associated with recently versus often heard objects.

    PubMed

    Bourquin, Nathalie M-P; Spierer, Lucas; Murray, Micah M; Clarke, Stephanie

    2012-09-01

    In natural settings the same sound source is often heard repeatedly, with variations in spectro-temporal and spatial characteristics. We investigated how such repetitions influence sound representations and in particular how auditory cortices keep track of recently vs. often heard objects. A set of 40 environmental sounds was presented twice, i.e. as prime and as repeat, while subjects categorized the corresponding sound sources as living vs. non-living. Electrical neuroimaging analyses were applied to auditory evoked potentials (AEPs) comparing primes vs. repeats (effect of presentation) and the four experimental sections. Dynamic analysis of distributed source estimations revealed i) a significant main effect of presentation within the left temporal convexity at 164-215 ms post-stimulus onset; and ii) a significant main effect of section in the right temporo-parietal junction at 166-213 ms. A 3-way repeated measures ANOVA (hemisphere×presentation×section) applied to neural activity of the above clusters during the common time window confirmed the specificity of the left hemisphere for the effect of presentation, but not that of the right hemisphere for the effect of section. In conclusion, spatio-temporal dynamics of neural activity encode the temporal history of exposure to sound objects. Rapidly occurring plastic changes within the semantic representations of the left hemisphere keep track of objects heard a few seconds before, independent of the more general sound exposure history. Progressively occurring and more long-lasting plastic changes occurring predominantly within right hemispheric networks, which are known to code for perceptual, semantic and spatial aspects of sound objects, keep track of multiple exposures. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. Sound-Making Actions Lead to Immediate Plastic Changes of Neuromagnetic Evoked Responses and Induced β-Band Oscillations during Perception.

    PubMed

    Ross, Bernhard; Barat, Masihullah; Fujioka, Takako

    2017-06-14

    Auditory and sensorimotor brain areas interact during the action-perception cycle of sound making. Neurophysiological evidence of a feedforward model of the action and its outcome has been associated with attenuation of the N1 wave of auditory evoked responses elicited by self-generated sounds, such as talking and singing or playing a musical instrument. Moreover, neural oscillations at β-band frequencies have been related to predicting the sound outcome after action initiation. We hypothesized that a newly learned action-perception association would immediately modify interpretation of the sound during subsequent listening. Nineteen healthy young adults (7 female, 12 male) participated in three magnetoencephalographic recordings while first passively listening to recorded sounds of a bell ringing, then actively striking the bell with a mallet, and then again listening to recorded sounds. Auditory cortex activity showed characteristic P1-N1-P2 waves. The N1 was attenuated during sound making, while P2 responses were unchanged. In contrast, P2 became larger when listening after sound making compared with the initial naive listening. The P2 increase occurred immediately, while in previous learning-by-listening studies P2 increases occurred on a later day. Also, reactivity of β-band oscillations, as well as θ coherence between auditory and sensorimotor cortices, was stronger in the second listening block. These changes were significantly larger than those observed in control participants (eight female, five male), who triggered recorded sounds by a key press. We propose that P2 characterizes familiarity with sound objects, whereas β-band oscillation signifies involvement of the action-perception cycle, and both measures objectively indicate functional neuroplasticity in auditory perceptual learning. 
SIGNIFICANCE STATEMENT While suppression of auditory responses to self-generated sounds is well known, it is not clear whether the learned action-sound association modifies subsequent perception. Our study demonstrated the immediate effects of sound-making experience on perception using magnetoencephalographic recordings, as reflected in the increased auditory evoked P2 wave, increased responsiveness of β oscillations, and enhanced connectivity between auditory and sensorimotor cortices. The importance of motor learning was underscored as the changes were much smaller in a control group using a key press to generate the sounds instead of learning to play the musical instrument. The results support the rapid integration of a feedforward model during perception and provide a neurophysiological basis for the application of music making in motor rehabilitation training. Copyright © 2017 the authors 0270-6474/17/375948-12$15.00/0.

  1. Prevalence of viral erythrocytic necrosis in Pacific herring and epizootics in Skagit Bay, Puget Sound, Washington.

    USGS Publications Warehouse

    Hershberger, P.K.; Elder, N.E.; Grady, C.A.; Gregg, J.L.; Pacheco, C.A.; Greene, C.; Rice, C.; Meyers, T.R.

    2009-01-01

    Epizootics of viral erythrocytic necrosis (VEN) occurred among juvenile Pacific herring Clupea pallasii in Skagit Bay, Puget Sound, Washington, during 2005-2007 and were characterized by high prevalences and intensities of cytoplasmic inclusion bodies within circulating erythrocytes. The prevalence of VEN peaked at 67% during the first epizootic in October 2005 and waned to 0% by August 2006. A second VEN epizootic occurred throughout the summer of 2007; this was characterized by disease initiation and perpetuation in the age-1, 2006 year-class, followed by involvement of the age-0, 2007 year-class shortly after the latter's metamorphosis to the juvenile stage. The disease was detected in other populations of juvenile Pacific herring throughout Puget Sound and Prince William Sound, Alaska, where the prevalences and intensities typically did not correspond to those observed in Skagit Bay. The persistence and recurrence of VEN epizootics indicate that the disease is probably common among juvenile Pacific herring throughout the eastern North Pacific Ocean, and although population-level impacts probably occur they are typically covert and not easily detected.

  2. Nonlinear wave fronts and ionospheric irregularities observed by HF sounding over a powerful acoustic source

    NASA Astrophysics Data System (ADS)

    Blanc, Elisabeth; Rickel, Dwight

    1989-06-01

    Different wave fronts affected by significant nonlinearities have been observed in the ionosphere by a pulsed HF sounding experiment at a distance of 38 km from the source point of a 4800-kg ammonium nitrate and fuel oil (ANFO) explosion on the ground. These wave fronts are revealed by partial reflections of the radio sounding waves. A small-scale irregular structure has been generated by a first wave front at the level of a sporadic E layer which characterized the ionosphere at the time of the experiment. The time scale of these fluctuations is about 1 to 2 s; its lifetime is about 2 min. Similar irregularities were also observed at the level of a second wave front in the F region. This structure appears also as diffusion on a continuous wave sounding at horizontal distances of the order of 200 km from the source. In contrast, a third front unaffected by irregularities may originate from the lowest layers of the ionosphere or from a supersonic wave front propagating at the base of the thermosphere. The origin of these structures is discussed.

  3. Integrating sensorimotor systems in a robot model of cricket behavior

    NASA Astrophysics Data System (ADS)

    Webb, Barbara H.; Harrison, Reid R.

    2000-10-01

    The mechanisms by which animals manage sensorimotor integration and coordination of different behaviors can be investigated in robot models. In previous work the first author has built a robot that localizes sound based on close modeling of the auditory and neural system in the cricket. It is known that the cricket combines its response to sound with other sensorimotor activities such as an optomotor reflex and reactions to mechanical stimulation of the antennae and cerci. Behavioral evidence suggests some ways these behaviors may be integrated. We have tested the addition of an optomotor response, using an analog VLSI circuit developed by the second author, to the sound localizing behavior and have shown that it can, as in the cricket, improve the directness of the robot's path to sound. In particular it substantially improves behavior when the robot is subject to a motor disturbance. Our aim is to better understand how the insect brain functions in controlling complex combinations of behavior, with the hope that this will also suggest novel mechanisms for sensory integration on robots.

  4. Sounds, Behaviour, and Auditory Receptors of the Armoured Ground Cricket, Acanthoplus longipes

    PubMed Central

    Kowalski, Kerstin; Lakes-Harlan, Reinhard

    2010-01-01

    The auditory sensory system of the taxon Hetrodinae has not been studied previously. Males of the African armoured ground cricket, Acanthoplus longipes (Orthoptera: Tettigoniidae: Hetrodinae) produce a calling song that lasts for minutes and consists of verses with two pulses. About three impulses are in the first pulse and about five impulses are in the second pulse. In contrast, the disturbance stridulation consists of verses with about 14 impulses that are not separated in pulses. Furthermore, the inter-impulse intervals of both types of sounds are different, whereas verses have similar durations. This indicates that the neuronal networks for sound generation are not identical. The frequency spectrum peaks at about 15 kHz in both types of sounds, whereas the hearing threshold has the greatest sensitivity between 4 and 10 kHz. The auditory afferents project into the prothoracic ganglion. The foreleg contains about 27 sensory neurons in the crista acustica; the midleg has 18 sensory neurons, and the hindleg has 14. The auditory system is similar to those of other Tettigoniidae. PMID:20569136

  5. Cross-modal detection using various temporal and spatial configurations.

    PubMed

    Schirillo, James A

    2011-01-01

    To better understand temporal and spatial cross-modal interactions, two signal detection experiments were conducted in which an auditory target was sometimes accompanied by an irrelevant flash of light. In the first, a psychometric function for detecting a unisensory auditory target in varying signal-to-noise ratios (SNRs) was derived. Then auditory target detection was measured while an irrelevant light was presented with light/sound stimulus onset asynchronies (SOAs) between 0 and ±700 ms. When the light preceded the sound by 100 ms or was coincident, target detection (d') improved for low SNR conditions. In contrast, for larger SOAs (350 and 700 ms), the behavioral gain resulted from a change in both d' and response criterion (β). However, when the light followed the sound, performance changed little. In the second experiment, observers detected multimodal target sounds at eccentricities of ±8°, and ±24°. Sensitivity benefits occurred at both locations, with a larger change at the more peripheral location. Thus, both temporal and spatial factors affect signal detection measures, effectively parsing sensory and decision-making processes.
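    The sensitivity (d') and criterion measures reported above come from standard signal detection theory. A small illustrative computation (the hit and false-alarm rates below are made-up numbers, not data from the study):

    ```python
    import math
    from statistics import NormalDist

    def sdt_measures(hit_rate, fa_rate):
        """Sensitivity d', criterion c, and likelihood-ratio criterion beta
        from hit and false-alarm rates (standard signal detection theory)."""
        z = NormalDist().inv_cdf                   # inverse standard-normal CDF
        d_prime = z(hit_rate) - z(fa_rate)         # separation of signal/noise distributions
        c = -0.5 * (z(hit_rate) + z(fa_rate))      # response bias
        beta = math.exp(c * d_prime)               # ln(beta) = c * d'
        return d_prime, c, beta

    # Hypothetical rates: 69% hits, 31% false alarms -> d' near 1, unbiased observer.
    d_prime, criterion, beta = sdt_measures(0.69, 0.31)
    ```

    A change in d' alone indicates a sensory effect, while a simultaneous shift in beta (as the study found at the 350 and 700 ms SOAs) points to a change in the decision stage.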

  6. Time-domain simulation of damped impacted plates. II. Numerical model and results.

    PubMed

    Lambourg, C; Chaigne, A; Matignon, D

    2001-04-01

    A time-domain model for the flexural vibrations of damped plates was presented in a companion paper [Part I, J. Acoust. Soc. Am. 109, 1422-1432 (2001)]. In this paper (Part II), the damped-plate model is extended to impact excitation, using Hertz's law of contact, and is solved numerically in order to synthesize sounds. The numerical method is based on the use of a finite-difference scheme of second order in time and fourth order in space. As a consequence of the damping terms, the stability and dispersion properties of this scheme are modified, compared to the undamped case. The numerical model is used for the time-domain simulation of vibrations and sounds produced by impact on isotropic and orthotropic plates made of various materials (aluminum, glass, carbon fiber and wood). The efficiency of the method is validated by comparisons with analytical and experimental data. The sounds produced show a high degree of similarity with real sounds and allow a clear recognition of each constitutive material of the plate without ambiguity.
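    The "second order in time, fourth order in space" finite-difference idea can be illustrated on the much simpler undamped 1-D wave equation; this is only an analogue of the damped-plate model in the paper, with grid size, wave speed, and CFL number chosen arbitrarily:

    ```python
    import numpy as np

    # 1-D wave equation u_tt = c^2 u_xx: leapfrog in time (2nd order),
    # 5-point 4th-order stencil for u_xx. Stable for c*dt/dx <= sqrt(3)/2.
    nx, c, dx = 200, 1.0, 1.0
    dt = 0.5 * dx / c                       # CFL number 0.5, well within the limit
    r2 = (c * dt / dx) ** 2

    x = np.arange(nx) * dx
    u_prev = np.exp(-0.01 * (x - nx * dx / 2) ** 2)  # Gaussian initial displacement
    u = u_prev.copy()                                 # zero initial velocity

    for _ in range(400):
        lap = np.zeros_like(u)
        lap[2:-2] = (-u[:-4] + 16 * u[1:-3] - 30 * u[2:-2]
                     + 16 * u[3:-1] - u[4:]) / 12.0   # 4th-order u_xx * dx^2
        u_next = 2 * u - u_prev + r2 * lap            # leapfrog update (ends pinned)
        u_prev, u = u, u_next
    ```

    In the paper's damped case, the added damping terms modify both the stability limit and the numerical dispersion of such a scheme, which is why the authors re-derive those properties rather than reuse the undamped analysis.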

  7. Sound waves in hadronic matter

    NASA Astrophysics Data System (ADS)

    Wilk, Grzegorz; Włodarczyk, Zbigniew

    2018-01-01

    We argue that recent high energy CERN LHC experiments on transverse momenta distributions of produced particles provide us new, so far unnoticed and not fully appreciated, information on the underlying production processes. To this end we concentrate on the small (but persistent) log-periodic oscillations decorating the observed pT spectra and visible in the measured ratios R = σdata(pT) / σfit (pT). Because such spectra are described by quasi-power-like formulas characterised by two parameters: the power index n and scale parameter T (usually identified with temperature T), the observed log-periodic behaviour of the ratios R can originate either from suitable modifications of n or T (or both, but such a possibility is not discussed). In the first case n becomes a complex number, and this can be related to scale invariance in the system; in the second, the scale parameter T itself exhibits log-periodic oscillations, which can be interpreted as the presence of some kind of sound waves forming in the collision system during the collision process, the wave number of which has a so-called self-similar solution of the second kind. Because the first case was already widely discussed we concentrate on the second one and on its possible experimental consequences.

  8. Optimization of sound absorbing performance for gradient multi-layer-assembled sintered fibrous absorbers

    NASA Astrophysics Data System (ADS)

    Zhang, Bo; Zhang, Weiyong; Zhu, Jian

    2012-04-01

    The transfer matrix method, based on plane wave theory, of multi-layer equivalent fluid is employed to evaluate the sound absorbing properties of two-layer-assembled and three-layer-assembled sintered fibrous sheets (generally regarded as a kind of compound absorber or structure). Two objective functions which are more suitable for the optimization of sound absorption properties of multi-layer absorbers within wider frequency ranges are developed, and the optimized results obtained with the two objective functions are compared with each other. It is found that using the two objective functions, especially the second one, may be more helpful in exploiting the sound absorbing properties of absorbers at lower frequencies to the best of their abilities. The calculation and optimization of sound absorption properties of multi-layer-assembled structures are then performed by developing a simulated-annealing genetic algorithm program and using the above-mentioned objective functions. Finally, based on the optimization in this work, a gradient design of the acoustic parameters (the porosity, the tortuosity, the viscous and thermal characteristic lengths, and the thickness of each sample) of porous metals is put forth, and thereby some useful design criteria for the acoustic parameters of each layer of porous fibrous metals are given for applying the multi-layer-assembled compound absorbers in noise control engineering.
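    The plane-wave transfer matrix method named above chains one 2x2 matrix per equivalent-fluid layer to obtain the surface impedance and the normal-incidence absorption coefficient. A minimal sketch (the layer wavenumbers, impedances, and thicknesses below are invented placeholder values, not fitted fibrous-metal parameters):

    ```python
    import numpy as np

    def layer_matrix(k, zc, d):
        """Transfer matrix of one equivalent-fluid layer with complex
        wavenumber k (1/m), characteristic impedance zc, thickness d (m)."""
        kd = k * d
        return np.array([[np.cos(kd), 1j * zc * np.sin(kd)],
                         [1j * np.sin(kd) / zc, np.cos(kd)]])

    def absorption(layers, z0=413.0):
        """Normal-incidence absorption coefficient of a rigidly backed stack.
        `layers` is an iterable of (k, zc, d); z0 is the impedance of air."""
        t = np.eye(2, dtype=complex)
        for k, zc, d in layers:                  # chain front layer to back layer
            t = t @ layer_matrix(k, zc, d)
        zs = t[0, 0] / t[1, 0]                   # rigid backing: Zs = T11 / T21
        r = (zs - z0) / (zs + z0)                # pressure reflection coefficient
        return 1.0 - abs(r) ** 2

    # Two hypothetical lossy layers; the negative imaginary part of k
    # represents viscous/thermal damping in the equivalent fluid.
    layers = [(40.0 - 4.0j, 600.0 + 60.0j, 0.02),
              (60.0 - 8.0j, 900.0 + 90.0j, 0.03)]
    alpha = absorption(layers)
    ```

    An optimizer such as the paper's simulated-annealing genetic algorithm would adjust the per-layer parameters to maximize an objective built from `absorption` evaluated over the target frequency band.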

  9. Electroacoustic control of Rijke tube instability

    NASA Astrophysics Data System (ADS)

    Zhang, Yumin; Huang, Lixi

    2017-11-01

    Unsteady heat release coupled with pressure fluctuation triggers the thermoacoustic instability which may damage a combustion chamber severely. This study demonstrates an electroacoustic control approach for suppressing the thermoacoustic instability in a Rijke tube by altering the wall boundary condition. An electrically shunted loudspeaker driver device is connected as a side-branch to the main tube via a small aperture. Tests in an impedance tube show that this device has a sound absorption coefficient of up to 40% under normal incidence from 100 Hz to 400 Hz, namely over two octaves. Experimental results demonstrate that such a broadband acoustic performance can effectively eliminate the Rijke-tube instability from 94 Hz to 378 Hz (when the tube length varies from 1.8 m to 0.9 m, the first mode frequency for the former is 94 Hz and the second mode frequency for the latter is 378 Hz). Theoretical investigation reveals that the device acts as a damper draining out sound energy through a tiny hole to eliminate the instability. Finally, it is also estimated based on the experimental data that a small amount of sound energy is actually absorbed when the system undergoes a transition from the unstable to the stable state if the control is activated. When the system is actually stabilized, no sound is radiated, so no sound energy needs to be absorbed by the control device.

  10. Low-frequency sound speed and attenuation in sandy seabottom from long-range broadband acoustic measurements.

    PubMed

    Wan, Lin; Zhou, Ji-Xun; Rogers, Peter H

    2010-08-01

    A joint China-U.S. underwater acoustics experiment was conducted in the Yellow Sea with a very flat bottom and a strong and sharp thermocline. Broadband explosive sources were deployed both above and below the thermocline along two radial lines up to 57.2 km and a quarter circle with a radius of 34 km. Two inversion schemes are used to obtain the seabottom sound speed. One is based on extracting normal mode depth functions from the cross-spectral density matrix. The other is based on the best match between the calculated and measured modal arrival times for different frequencies. The inverted seabottom sound speed is used as a constraint condition to extract the seabottom sound attenuation by three methods. The first method involves measuring the attenuation coefficients of normal modes. In the second method, the seabottom sound attenuation is estimated by minimizing the difference between the theoretical and measured modal amplitude ratios. The third method is based on finding the best match between the measured and modeled transmission losses (TLs). The resultant seabottom attenuation, averaged over three independent methods, can be expressed as α = (0.33 ± 0.02) f^(1.86 ± 0.04) dB/m, with f in kHz, over a frequency range of 80-1000 Hz.
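    The reported power law is easy to evaluate directly; a small sketch using only the central values and ignoring the quoted uncertainties:

    ```python
    def seabottom_attenuation(f_khz, a0=0.33, n=1.86):
        """Seabottom attenuation alpha = a0 * f^n in dB/m, with f in kHz
        (central values of the inversion result; uncertainties ignored)."""
        return a0 * f_khz ** n

    # Attenuation at the ends of the measured 80-1000 Hz band.
    alpha_low = seabottom_attenuation(0.08)    # 80 Hz
    alpha_high = seabottom_attenuation(1.0)    # 1 kHz
    ```

    The near-quadratic exponent means attenuation grows steeply across the band: roughly 0.003 dB/m at 80 Hz versus 0.33 dB/m at 1 kHz.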

  11. Active control of sound radiation from a vibrating rectangular panel by sound sources and vibration inputs - An experimental comparison

    NASA Technical Reports Server (NTRS)

    Fuller, C. R.; Hansen, C. H.; Snyder, S. D.

    1991-01-01

    Active control of sound radiation from a rectangular panel by two different methods has been experimentally studied and compared. In the first method a single control force applied directly to the structure is used with a single error microphone located in the radiated acoustic field. Global attenuation of radiated sound was observed to occur by two main mechanisms. For 'on-resonance' excitation, the control force had the effect of increasing the total panel input impedance presented to the noise source, thus reducing all radiated sound. For 'off-resonance' excitation, the control force does not significantly modify the panel's total response amplitude but rather restructures the relative phases of the modes, leading to a more complex vibration pattern and a decrease in radiation efficiency. For acoustic control, the second method, the number of acoustic sources required for global reduction was seen to increase with panel modal order. The mechanism in this case was that the acoustic sources tended to create an inverse pressure distribution at the panel surface and thus 'unload' the panel by reducing the panel radiation impedance. In general, control by structural inputs appears more effective than control by acoustic sources for structurally radiated noise.

  12. The Auditory Anatomy of the Minke Whale (Balaenoptera acutorostrata): A Potential Fatty Sound Reception Pathway in a Baleen Whale

    PubMed Central

    Yamato, Maya; Ketten, Darlene R; Arruda, Julie; Cramer, Scott; Moore, Kathleen

    2012-01-01

    Cetaceans possess highly derived auditory systems adapted for underwater hearing. Odontoceti (toothed whales) are thought to receive sound through specialized fat bodies that contact the tympanoperiotic complex, the bones housing the middle and inner ears. However, sound reception pathways remain unknown in Mysticeti (baleen whales), which have very different cranial anatomies compared to odontocetes. Here, we report a potential fatty sound reception pathway in the minke whale (Balaenoptera acutorostrata), a mysticete of the balaenopterid family. The cephalic anatomy of seven minke whales was investigated using computerized tomography and magnetic resonance imaging, verified through dissections. Findings include a large, well-formed fat body lateral, dorsal, and posterior to the mandibular ramus and lateral to the tympanoperiotic complex. This fat body inserts into the tympanoperiotic complex at the lateral aperture between the tympanic and periotic bones and is in contact with the ossicles. There is also a second, smaller body of fat found within the tympanic bone, which contacts the ossicles as well. This is the first analysis of these fatty tissues' association with the auditory structures in a mysticete, providing anatomical evidence that fatty sound reception pathways may not be a unique feature of odontocete cetaceans. Anat Rec, 2012. © 2012 Wiley Periodicals, Inc. PMID:22488847

  13. Synthesis of walking sounds for alleviating gait disturbances in Parkinson's disease.

    PubMed

    Rodger, Matthew W M; Young, William R; Craig, Cathy M

    2014-05-01

    Managing gait disturbances in people with Parkinson's disease is a pressing challenge, as symptoms can contribute to injury and morbidity through an increased risk of falls. While drug-based interventions have limited efficacy in alleviating gait impairments, certain nonpharmacological methods, such as cueing, can induce transient improvements in gait. The approach adopted here is to use computationally generated sounds to help guide and improve walking actions. The first method described uses recordings of force data taken from the steps of a healthy adult, which in turn were used to synthesize realistic gravel-footstep sounds representing different spatio-temporal parameters of gait, such as step duration and step length. The second is a novel method of sonifying, in real time, the swing phase of gait, using real-time motion-capture data to control a sound synthesis engine. Both approaches explore how simple but rich auditory representations of action-based events can be used by people with Parkinson's to guide and improve the quality of their walking, reducing the risk of falls and injury. Studies with Parkinson's disease patients are reported which show positive results for both techniques in reducing step length variability. Potential future directions for how these sound-based approaches can be used to manage gait disturbances in Parkinson's are also discussed.

  14. Auditory perception of a human walker.

    PubMed

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise them as those of a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity across three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment, participants made discriminations between pairs of the same stimuli in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  15. Reverberation enhances onset dominance in sound localization.

    PubMed

    Stecker, G Christopher; Moore, Travis M

    2018-02-01

    Temporal variation in sensitivity to sound-localization cues was measured in anechoic conditions and in simulated reverberation using the temporal weighting function (TWF) paradigm [Stecker and Hafter (2002). J. Acoust. Soc. Am. 112, 1046-1057]. Listeners judged the locations of Gabor click trains (4 kHz center frequency, 5-ms interclick interval) presented from an array of loudspeakers spanning 360° azimuth. Target locations ranged over ±56.25° across trials. Individual clicks within each train varied by an additional ±11.25° to allow TWF calculation by multiple regression. In separate conditions, sounds were presented directly or in the presence of simulated reverberation: 13 orders of lateral reflection were computed for a 10 m × 10 m room (RT60 ≈ 300 ms) and mapped to the appropriate locations in the loudspeaker array. Results reveal a marked increase in perceptual weight applied to the initial click in reverberation, along with a reduction in the impact of late-arriving sound. In a second experiment, target stimuli were preceded by trains of "conditioner" sounds with or without reverberation. Effects were modest and limited to the first few clicks in a train, suggesting that impacts of reverberant pre-exposure on localization may be limited to the processing of information from early reflections.
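
    The TWF idea above — jitter each click in azimuth and recover per-click perceptual weights by multiple regression — can be sketched on simulated data. The simulated "listener" below (onset-dominant exponential weights plus response noise) is our assumption for illustration, not the study's data.

```python
import numpy as np

# Simulate trials: each click in a 16-click train gets an azimuth jitter
# within +/-11.25 deg, as in the paradigm; the response is a weighted sum.
rng = np.random.default_rng(0)
n_trials, n_clicks = 500, 16
jitter = rng.uniform(-11.25, 11.25, size=(n_trials, n_clicks))  # degrees

true_w = np.exp(-np.arange(n_clicks) / 4.0)   # assumed onset-dominant weights
true_w /= true_w.sum()
response = jitter @ true_w + rng.normal(0.0, 1.0, n_trials)  # judged azimuth

# Multiple regression: least-squares estimate of the per-click weights (TWF)
w_hat, *_ = np.linalg.lstsq(jitter, response, rcond=None)
print(np.round(w_hat, 3))  # largest weight on the first click
```

    With enough trials, the regression recovers the onset dominance that the study reports being enhanced by reverberation.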

  16. Experimental validation of finite element and boundary element methods for predicting structural vibration and radiated noise

    NASA Technical Reports Server (NTRS)

    Seybert, A. F.; Wu, T. W.; Wu, X. F.

    1994-01-01

    This research report is presented in three parts. In the first part, acoustical analyses were performed on modes of vibration of the housing of a transmission of a gear test rig developed by NASA. The modes of vibration of the transmission housing were measured using experimental modal analysis. The boundary element method (BEM) was used to calculate the sound pressure and sound intensity on the surface of the housing and the radiation efficiency of each mode. The radiation efficiency of each of the transmission housing modes was then compared to theoretical results for a finite baffled plate. In the second part, analytical and experimental validation of methods to predict structural vibration and radiated noise are presented. A rectangular box excited by a mechanical shaker was used as a vibrating structure. Combined finite element method (FEM) and boundary element method (BEM) models of the apparatus were used to predict the noise level radiated from the box. The FEM was used to predict the vibration, while the BEM was used to predict the sound intensity and total radiated sound power using surface vibration as the input data. Vibration predicted by the FEM model was validated by experimental modal analysis; noise predicted by the BEM was validated by measurements of sound intensity. Three types of results are presented for the total radiated sound power: sound power predicted by the BEM model using vibration data measured on the surface of the box; sound power predicted by the FEM/BEM model; and sound power measured by an acoustic intensity scan. In the third part, the structure used in part two was modified. A rib was attached to the top plate of the structure. The FEM and BEM were then used to predict structural vibration and radiated noise respectively. The predicted vibration and radiated noise were then validated through experimentation.

  17. Effect of ventriculectomy versus ventriculocordectomy on upper airway noise in draught horses with recurrent laryngeal neuropathy.

    PubMed

    Cramp, P; Derksen, F J; Stick, J A; Nickels, F A; Brown, K E; Robinson, P; Robinson, N E

    2009-11-01

    Little is known about the efficacy of bilateral ventriculectomy (VE) or bilateral ventriculocordectomy (VCE) in draught horses. The objective was to compare the effect of VE and VCE on upper airway noise in draught horses with recurrent laryngeal neuropathy (RLN) by use of quantitative sound analysis techniques, testing the hypothesis that, in competitive draught horses with grade 4 RLN, VE and VCE reduce upper airway noise during exercise, but VCE is more effective. Thirty competitive hitch or pulling draught horses with grade 4 RLN were evaluated for upper airway sound during exercise. Respiratory rate (RR), inspiratory (Ti) and expiratory time (Te), the ratio between Ti and Te (Ti/Te), inspiratory (Sli) and expiratory sound levels (Sle), the ratio between Sli and Sle (Sli/Sle), and peak sound intensity of the second formant (F2) were calculated. Eleven horses were treated with VE and 19 with VCE. After 90 days of voice and physical rest and 30 days of work, the horses returned for postoperative upper airway sound evaluation and resting videoendoscopy. VE significantly reduced Ti/Te, Sli, Sli/Sle and the sound intensity of F2; RR, Ti, Te and Sle were unaffected by VE. VCE significantly reduced Ti/Te, Ti, Te, Sli, Sli/Sle and the sound intensity of F2, while RR and Sle were unaffected. The reduction in sound intensity of F2 following VCE was significantly greater than that following VE. After VE and VCE, 7/11 (64%) and 15/18 (83%) owners, respectively, concluded that the surgery improved upper airway sound in their horses sufficiently for successful competition. VE and VCE significantly reduce upper airway noise and indices of airway obstruction in draught horses with RLN, but VCE is more effective than VE. The procedures have few postoperative complications. VCE is recommended as the preferred treatment for RLN in draught horses. Further studies are required to evaluate the longevity of the procedure's results.

  18. Sounding Right.

    ERIC Educational Resources Information Center

    Burling, Robbins

    Aspects of second language learning and instruction are explored in order to develop a rationale for a comprehension-based approach to language instruction. Eight characteristic pedagogical assumptions are critically examined, including assumptions regarding the role of grammar, age differences in learning ability, the priority given to each of…

  19. The Electric Company Writers' Notebook.

    ERIC Educational Resources Information Center

    Children's Television Workshop, New York, NY.

    This handbook outlines the curriculum objectives for the children's television program, "The Electric Company." The first portion of the text delineates strategies for teaching symbol/sound analysis, including units on blends, letter groups, and word structure. A second section addresses strategies for reading for meaning, including…

  20. Imaging of heart acoustic based on the sub-space methods using a microphone array.

    PubMed

    Moghaddasi, Hanie; Almasganj, Farshad; Zoroufian, Arezoo

    2017-07-01

    Heart disease is one of the leading causes of death around the world. The phonocardiogram (PCG) is an important bio-signal that represents the acoustic activity of the heart, typically without any spatiotemporal information about the involved acoustic sources. The aim of this study is to analyze the PCG using a microphone array, by which the internal sound sources of the heart can also be localized. We propose a modality by which the locations of the active sources in the heart can be investigated over a cardiac cycle. A microphone array with six microphones, placed on the human chest, is employed as the recording setup. The Group Delay MUSIC algorithm, a subspace-based localization method, is then used to estimate the locations of the heart sources in different phases of the PCG. With this algorithm we achieved a 0.14 cm mean error for the sources of a first heart sound (S1) simulator and a 0.21 cm mean error for the sources of a second heart sound (S2) simulator. The acoustical diagrams created for human subjects show distinct patterns in the various phases of the cardiac cycle, such as the first and second heart sounds. Moreover, the evaluated source locations for the heart valves match those obtained via four-dimensional (4D) echocardiography applied to a real human case. Imaging of the heart acoustic map presents a new outlook on the acoustic properties of the cardiovascular system and disorders of the valves and could thereby, in the future, be used as a new diagnostic tool. Copyright © 2017. Published by Elsevier B.V.
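
    The subspace principle behind such localization can be shown in a toy example: the noise subspace of the array covariance matrix is orthogonal to the steering vector of a true source, so the MUSIC pseudospectrum peaks at the source direction. This is plain narrowband MUSIC on a uniform linear array, a simplified stand-in for the Group Delay variant and chest-array geometry used in the paper; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_mics, c, f = 6, 343.0, 1000.0            # mics, sound speed (m/s), Hz
d = 0.05                                    # mic spacing (m)
k = 2 * np.pi * f / c                       # wavenumber

def steering(theta_deg):
    """Array response of a uniform linear array to a plane wave."""
    tau = d * np.arange(n_mics) * np.sin(np.deg2rad(theta_deg))
    return np.exp(-1j * k * tau)

# Simulate snapshots from one source at 20 degrees plus sensor noise
theta_true, n_snap = 20.0, 400
s = rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap)
x = np.outer(steering(theta_true), s)
x += 0.05 * (rng.normal(size=x.shape) + 1j * rng.normal(size=x.shape))

R = x @ x.conj().T / n_snap                 # sample covariance
_, vecs = np.linalg.eigh(R)                 # eigenvalues ascending
En = vecs[:, : n_mics - 1]                  # noise subspace (1 source)

angles = np.arange(-90.0, 90.5, 0.5)
p = [1.0 / np.linalg.norm(En.conj().T @ steering(a)) ** 2 for a in angles]
print(angles[int(np.argmax(p))])            # peak near 20 degrees
```

    The same orthogonality argument carries over to the near-field, multi-source setting of the heart-imaging problem, with steering vectors computed for candidate source positions instead of plane-wave directions.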

  1. Contrast of hemispheric lateralization for oro-facial movements between learned attention-getting sounds and species-typical vocalizations in chimpanzees: Extension in a second colony

    PubMed Central

    Wallez, Catherine; Schaeffer, Jennifer; Meguerditchian, Adrien; Vauclair, Jacques; Schapiro, Steven J.; Hopkins, William D.

    2013-01-01

    Studies of oro-facial asymmetries in nonhuman primates have largely demonstrated a right-hemispheric dominance for communicative signals and the conveyance of emotional information. A recent study on chimpanzees reported the first evidence of a significant left-hemispheric dominance for learned attention-getting sounds and a rightward bias for species-typical vocalizations (Losin, Russell, Freeman, Meguerditchian, Hopkins & Fitch, 2008). The current study sought to extend the findings of Losin et al. (2008) with additional oro-facial assessment in a new colony of chimpanzees. When the two populations were combined, the results indicated a consistent leftward bias for attention-getting sounds and a right lateralization for species-typical vocalizations. Collectively, the results suggest that voluntarily controlled oro-facial and gestural communication might share the same left-hemispheric specialization and might have coevolved into a single integrated system present in a common hominid ancestor. PMID:22867751

  2. Evaluation of cardiac auscultation skills in pediatric residents.

    PubMed

    Kumar, Komal; Thompson, W Reid

    2013-01-01

    Auscultation skills are in decline, but few studies have shown which specific aspects are most difficult for trainees. We evaluated individual aspects of cardiac auscultation among pediatric residents using recorded heart sounds to determine which elements pose the most difficulty. Auscultation proficiency was assessed among 34 trainees following a pediatric cardiology rotation using an open-set format evaluation module, similar to the actual clinical auscultation description process. Diagnostic accuracy for distinguishing normal from abnormal cases was 73%. The findings most often correctly identified included pathological systolic and diastolic murmurs and widely split second heart sounds; those least likely to be identified included continuous murmurs and clicks. Accuracy was low for identifying specific diagnoses. Given the time constraints on clinical skills teaching, this suggests that focusing on distinguishing normal from abnormal heart sounds and murmurs, rather than on making specific diagnoses, may be a more realistic goal for pediatric resident auscultation training.

  3. Factors affecting measured aircraft sound levels in the vicinity of start-of-takeoff roll

    NASA Astrophysics Data System (ADS)

    Horonjeff, Richard; Fleming, Gregg G.; Rickley, Edward J.; Connor, Thomas L.

    This paper presents the findings of a recently conducted measurement and analysis program of jet transport aircraft sound levels in the vicinity of the start-of-takeoff roll. The purpose of the program was two-fold: (1) to evaluate the computational accuracy of the Federal Aviation Administration's Integrated Noise Model (INM) in the vicinity of start-of-takeoff roll with a recently updated database (INM 3.10), and (2) to provide guidance for future model improvements. Focusing on the second of these two goals, this paper examines several factors affecting Sound Exposure Levels (SELs) in the semicircular area behind the aircraft brake release point at the start of takeoff. In addition to the aircraft type itself, these factors included the geometric relationship of the measurement site to the runway, the wind velocity (speed and direction), the aircraft gross weight, and the start-of-roll mode (static or rolling start).

  4. Method for noninvasive determination of acoustic properties of fluids inside pipes

    DOEpatents

    None

    2016-08-02

    A method for determining the composition of fluids flowing through pipes from noninvasive measurements of acoustic properties of the fluid is described. The method includes exciting a first transducer located on the external surface of the pipe through which the fluid under investigation is flowing, to generate an ultrasound chirp signal, as opposed to conventional pulses. The chirp signal is received by a second transducer disposed on the external surface of the pipe opposing the location of the first transducer, from which the transit time through the fluid is determined and the sound speed of the ultrasound in the fluid is calculated. The composition of a fluid is calculated from the sound speed therein. The fluid density may also be derived from measurements of sound attenuation. Several signal processing approaches are described for extracting the transit time information from the data with the effects of the pipe wall having been subtracted.
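
    The transit-time step described above reduces to a simple relation: with transducers on opposite sides of the pipe, the sound speed is the acoustic path length (the inner diameter) divided by the transit time through the fluid, once the wall delays are subtracted. A minimal sketch; the numbers and the helper name are illustrative, not from the patent.

```python
# Sound speed in the fluid from a through-transmission transit time,
# c = d / (t_total - t_walls), with the pipe-wall delay subtracted
# as the patent describes.

def fluid_sound_speed(inner_diameter_m, total_transit_s, wall_delay_s):
    """Return sound speed (m/s) in the fluid inside the pipe."""
    return inner_diameter_m / (total_transit_s - wall_delay_s)

# Example: 0.10 m bore, 71.4 us measured chirp transit, 4.0 us wall delay
c = fluid_sound_speed(0.10, 71.4e-6, 4.0e-6)
print(f"{c:.0f} m/s")  # ~1484 m/s, close to water at room temperature
```

    The composition inference then amounts to inverting a known sound-speed-versus-composition relation for the fluid family of interest.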

  5. Prediction of Sound Waves Propagating Through a Nozzle Without/With a Shock Wave Using the Space-Time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen; Chang, Sin-Chung; Jorgenson, Philip C. E.

    2000-01-01

    The benchmark problems in Category 1 (Internal Propagation) of the Third Computational Aeroacoustics (CAA) Workshop sponsored by NASA Glenn Research Center are solved using the space-time conservation element and solution element (CE/SE) method. The first problem addresses the propagation of sound waves through a nearly choked transonic nozzle. The second concerns shock-sound interaction in a supersonic nozzle. A quasi-one-dimensional CE/SE Euler solver for a nonuniform mesh is developed and employed to solve both problems. Numerical solutions are compared with the analytical solutions for both problems. It is demonstrated that the CE/SE method is capable of solving aeroacoustic problems with or without shock waves in a simple way. Furthermore, the simple nonreflecting boundary condition used in the CE/SE method, which is not based on characteristic theory, works very well.

  6. Second-sound studies of coflow and counterflow of superfluid ⁴He in channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varga, Emil; Skrbek, L.; Babuin, Simone, E-mail: babuin@fzu.cz

    2015-06-15

    We report a comprehensive study of turbulent superfluid ⁴He flow through a channel of square cross section. We study for the first time two distinct flow configurations with the same apparatus: coflow (normal and superfluid components move in the same direction) and counterflow (normal and superfluid components move in opposite directions). We also realise a variation of counterflow with the same relative velocity, but in which the superfluid component moves while there is no net flow of the normal component through the channel, i.e., pure superflow. We use the second-sound attenuation technique to measure the density of quantised vortex lines in the temperature range 1.2 K ≲ T ≲ T_λ ≈ 2.18 K and for flow velocities from about 1 mm/s up to almost 1 m/s in fully developed turbulence. We find that both the steady state and the temporal decay of the turbulence differ significantly among the three flow configurations, yielding an interesting insight into two-fluid hydrodynamics. In both pure superflow and counterflow, the same scaling of vortex line density with counterflow velocity is observed, L ∝ V_cf², with a pronounced temperature dependence; in coflow, instead, the vortex line density scales with velocity as L ∝ V^(3/2) and is temperature independent; we provide theoretical explanations for these observations. Further, we develop a promising new technique that uses different second-sound resonant modes to probe the spatial distribution of quantised vortices in the direction perpendicular to the flow. Preliminary measurements indicate that coflow is less homogeneous than counterflow/superflow, with a denser concentration of vortices between the centre of the channel and its walls.
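
    The two scaling laws reported above are easy to contrast numerically. A sketch with illustrative placeholder prefactors (the paper's fitted, temperature-dependent coefficients are not quoted in the abstract):

```python
# Vortex line density scalings from the abstract:
# counterflow/superflow: L ~ V_cf^2 (temperature-dependent prefactor)
# coflow:                L ~ V^(3/2) (temperature independent)

def line_density_counterflow(v_cf, gamma=100.0):
    """L = (gamma * V_cf)^2; gamma here is a placeholder prefactor."""
    return (gamma * v_cf) ** 2

def line_density_coflow(v, k=1.0e4):
    """L = k * V^(3/2); k here is a placeholder prefactor."""
    return k * v ** 1.5

for v in (0.01, 0.1, 1.0):  # m/s, spanning the mm/s-to-m/s range studied
    print(v, line_density_counterflow(v), line_density_coflow(v))
```

    Doubling the velocity quadruples L in counterflow but only multiplies it by 2^(3/2) ≈ 2.83 in coflow, which is the signature distinguishing the two regimes regardless of the prefactors.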

  7. Automotive Exterior Noise Optimization Using Grey Relational Analysis Coupled with Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Chen, Shuming; Wang, Dengfeng; Liu, Bo

    This paper investigates the design optimization of the thickness of the sound package of a passenger automobile. The major performance characteristic indexes selected to evaluate the process are the SPL of the exterior noise and the weight of the sound package; the corresponding parameters of the sound package are the thickness of the glass wool with aluminum foil for the first layer, the thickness of the glass fiber for the second layer, and the thickness of the PE foam for the third layer. Because the process fundamentally involves multiple performance characteristics, grey relational analysis, which uses the grey relational grade as a performance index, is employed to determine the optimal combination of thicknesses of the different layers of the designed sound package. Additionally, in order to evaluate the weighting values corresponding to the various performance characteristics, principal component analysis is used to establish their relative importance properly and objectively. The results of the confirmation experiments show that grey relational analysis coupled with principal component analysis can successfully be applied to find the optimal combination of thicknesses for each layer of the sound package material. The presented method can therefore be an effective tool to improve vehicle exterior noise and lower the weight of the sound package. In addition, it should also be helpful for other applications in the automotive industry, such as the First Automobile Works in China, Changan Automobile in China, etc.

  8. Psychophysical investigation of an auditory spatial illusion in cats: the precedence effect.

    PubMed

    Tollin, Daniel J; Yin, Tom C T

    2003-10-01

    The precedence effect (PE) describes several spatial perceptual phenomena that occur when similar sounds are presented from two different locations and separated by a delay. The mechanisms that produce the effect are thought to be responsible for the ability to localize sounds in reverberant environments. Although the physiological bases for the PE have been studied, little is known about how these sounds are localized by species other than humans. Here we used the search coil technique to measure the eye positions of cats trained to saccade to the apparent locations of sounds. To study the PE, brief broadband stimuli were presented from two locations, with a delay between their onsets; the delayed sound was meant to simulate a single reflection. Although the cats accurately localized single sources, the apparent locations of the paired sources depended on the delay. First, the cats exhibited summing localization, the perception of a "phantom" sound located between the sources, for delays within ±400 μs for sources positioned in azimuth along the horizontal plane, but not for sources positioned in elevation along the sagittal plane. Second, consistent with localization dominance, for delays from 400 μs to about 10 ms, the cats oriented toward the leading source location only, with little influence of the lagging source, both for horizontally and vertically placed sources. Finally, the echo threshold was reached for delays >10 ms, where the cats first began to orient to the lagging source on some trials. These data reveal that cats experience the PE phenomena similarly to humans.

  9. Illustrations and supporting texts for sound standing waves of air columns in pipes in introductory physics textbooks

    NASA Astrophysics Data System (ADS)

    Zeng, Liang; Smith, Chris; Poelzer, G. Herold; Rodriguez, Jennifer; Corpuz, Edgar; Yanev, George

    2014-12-01

    In our pilot studies, we found that many introductory physics textbook illustrations with supporting text for sound standing waves of air columns in open-open, open-closed, and closed-closed pipes inhibit student understanding of sound standing wave phenomena due to student misunderstanding of how air molecules move within these pipes. Based on the construct of meaningful learning from cognitive psychology and semiotics, a quasiexperimental study was conducted to investigate the comparative effectiveness of two alternative approaches to student understanding: a traditional textbook illustration approach versus a newly designed air molecule motion illustration approach. Thirty volunteer students from introductory physics classes were randomly assigned to two groups of 15 each. Both groups were administered a presurvey. Then, group A read the air molecule motion illustration handout, and group B read a traditional textbook illustration handout; both groups were administered postsurveys. Subsequently, the procedure was reversed: group B read the air molecule motion illustration handout and group A read the traditional textbook illustration handout. This was followed by a second postsurvey along with an exit research questionnaire. The study found that the majority of students experienced meaningful learning and stated that they understood sound standing wave phenomena significantly better using the air molecule motion illustration approach. This finding provides a method for physics education researchers to design illustrations for abstract sound standing wave concepts, for publishers to improve their illustrations with supporting text, and for instructors to facilitate deeper learning in their students on sound standing waves.

  10. Taiwan Space Programs

    NASA Astrophysics Data System (ADS)

    Liu, Jann-Yenq

    Taiwan's space programs consist of FORMOSAT-1, -2, and -3, sounding rockets, and international cooperation. FORMOSAT-1, a low-earth-orbit (LEO) scientific experimental satellite, was launched on January 26, 1999. It circles the Earth every 97 minutes at an altitude of 600 km and an inclination of 35 degrees, transmitting collected data to Taiwan's receiving stations approximately six times a day. The major mission of FORMOSAT-1 included three scientific experiments: measuring the effects of ionospheric plasma and electrodynamics, taking ocean color images, and conducting a Ka-band communication experiment. The FORMOSAT-1 mission ended on June 15, 2004. FORMOSAT-2 was launched on May 21, 2004 into a Sun-synchronous orbit 891 km above the ground. The main mission of FORMOSAT-2 is to conduct remote sensing imaging over Taiwan and over terrestrial and oceanic regions of the entire earth. The images captured by FORMOSAT-2 during daytime can be used for land distribution, natural resources research, environmental protection, disaster prevention and rescue work, etc. When the satellite travels through the eclipsed zone, it observes natural lightning phenomena in the upper atmosphere. FORMOSAT-3 is an international collaboration project between Taiwan and the US to develop advanced technology for the real-time monitoring of the global climate. This project is also named the Constellation Observing System for Meteorology, Ionosphere and Climate, or FORMOSAT-3/COSMIC for short. Six micro-satellites were launched on 15 April 2006 and eventually placed into six different orbits 700-800 kilometers above the earth's surface. These satellites orbit the earth to form a LEO constellation that receives signals transmitted by the 24 US GPS satellites. The satellite observation covers the entire global atmosphere and ionosphere, providing over 2,500 global soundings per day, distributed uniformly over the earth's atmosphere.
The collection and analysis of global climate information can be completed within three hours, and the sounding data are updated every 90 minutes for weather forecasting. In addition, this system can also be used for long-term climate change research, interactive ionosphere monitoring, global space weather forecasting, and earth gravity research. From 1997 to 2003, there were three launches of sounding rockets. To complement the second phase of Taiwan's national space technology long-term development plan, the sounding rocket space exploration project was established. The timeframe of the second-phase sounding rocket project is 15 years, from January 2004 to December 2018, and 10-15 sounding rockets will be launched during this period. In this paper, the current status and results of the programs are presented in detail.

  11. Estimation of Electron Density profile Using the Propagation Characteristics of Radio Waves by S-520-29 Sounding Rocket

    NASA Astrophysics Data System (ADS)

    Itaya, K.; Ishisaka, K.; Ashihara, Y.; Abe, T.; Kumamoto, A.; Kurihara, J.

    2015-12-01

    The S-520-29 sounding rocket experiment was carried out at the Uchinoura Space Center (USC) at 19:10 JST on 17 August 2014. Its purpose was to observe the sporadic E layer that appears in the lower ionosphere near 100 km altitude. Three methods were used to observe the sporadic E layer. The first was optical: imaging the light emitted by metal ions through resonance scattering in the sporadic E layer. The second was observation of the propagation characteristics of LF/MF-band radio waves transmitted from the ground. The third was measurement of the electron density in the vicinity of the rocket using a fast Langmuir probe and an impedance probe. We analyze the propagation characteristics of radio waves in the sporadic E layer from the results of the second method. The rocket was equipped with an LF/MF-band radio receiver to observe LF/MF-band radio waves during flight. The receiver's antenna is a three-axis loop antenna, and the receiver records three radio waves transmitted from the ground: 873 kHz (JOGB), 666 kHz (JOBK), and 60 kHz (JJY). The 873 kHz and 60 kHz waves are transmitted from north of the rocket trajectory, and the 666 kHz waves from the east. The receiver worked properly during the experiment, and the observation of radio-wave intensity was completed. We analyze the observations using Doppler-shift calculations based on frequency analysis. The radio waves received by the rocket carry Doppler shifts influenced by polarization, the direction of rocket spin, and the Earth's magnetic field, so the received waves are first separated into characteristic waves by frequency analysis, and the Doppler shift is then calculated from the separated data. As a result, the 873 kHz and 666 kHz waves were reflected by the ionosphere, while the 60 kHz wave was able to propagate through the ionosphere because its wavelength is longer than the thickness of the sporadic E layer. In this study, we present the results of the LF/MF-band receiver observations and the electron density of the ionosphere obtained by frequency analysis in the S-520-29 sounding rocket experiment.
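    The Doppler-shift step described in the abstract can be sketched as follows. This is an illustrative FFT-peak estimator, not the experiment's wave-separation pipeline, and the sample rate and shift value are made-up numbers:

```python
import numpy as np

def doppler_shift(signal, fs, f_transmit):
    """Estimate the Doppler shift of a narrowband signal as the offset of
    the dominant FFT peak from the transmitted carrier frequency.
    (Illustrative only; all numeric values below are assumed.)"""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[int(np.argmax(spectrum))] - f_transmit

# Synthetic example: the 60 kHz carrier received with a +20 Hz shift
fs = 400_000                     # assumed receiver sample rate (Hz)
t = np.arange(0, 0.5, 1.0 / fs)  # 0.5 s record -> 2 Hz FFT resolution
rx = np.sin(2 * np.pi * (60_000 + 20) * t)
print(doppler_shift(rx, fs, 60_000))  # ≈ 20.0
```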

  12. Seismic and Biological Sources of Ambient Ocean Sound

    NASA Astrophysics Data System (ADS)

    Freeman, Simon Eric

    Sound is the most efficient form of radiation in the ocean. Sounds of seismic and biological origin contain information regarding the underlying processes that created them. A single hydrophone records summary time-frequency information from the volume within acoustic range. Beamforming using a hydrophone array additionally produces azimuthal estimates of sound sources. A two-dimensional array and acoustic focusing produce an unambiguous two-dimensional `image' of sources. This dissertation describes the application of these techniques in three cases. The first utilizes hydrophone arrays to investigate T-phases (water-borne seismic waves) in the Philippine Sea. Ninety T-phases were recorded over a 12-day period, implying that more seismic events occur than are detected by terrestrial seismic monitoring in the region. Observation of an azimuthally migrating T-phase suggests that reverberation of such sounds from bathymetric features can occur over megameter scales. In the second case, single-hydrophone recordings from coral reefs in the Line Islands archipelago reveal that local ambient reef sound is spectrally similar to sounds produced by small, hard-shelled benthic invertebrates in captivity. Time-lapse photography of the reef reveals an increase in benthic invertebrate activity at sundown, consistent with an increase in sound level. The dominant acoustic phenomenon on these reefs may thus originate from the interaction between a large number of small invertebrates and the substrate. Such sounds could be used to take a census of hard-shelled benthic invertebrates that are otherwise extremely difficult to survey. In the third case, a two-dimensional `map' of sound production over a coral reef in the Hawaiian Islands was obtained using a two-dimensional hydrophone array. Heterogeneously distributed bio-acoustic sources were generally co-located with rocky reef areas. Acoustically dominant snapping shrimp were largely restricted to one location within the area surveyed. This distribution of sources could reveal small-scale spatial ecological limitations, such as the availability of food and shelter. While array-based passive acoustic sensing is well established in seismoacoustics, the technique is little utilized in the study of ambient biological sound. With the continuance of Moore's law and advances in battery and memory technology, inferring biological processes from ambient sound may become a more accessible tool in underwater ecological evaluation and monitoring.
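    The beamforming technique mentioned above can be illustrated with a minimal frequency-domain delay-and-sum sketch for a uniform line array. The array geometry, signal frequency, and sound speed below are assumed values for the example, not those used in the dissertation:

```python
import numpy as np

def delay_and_sum_azimuth(signals, fs, spacing, c=1500.0):
    """Scan candidate azimuths with a delay-and-sum beamformer on a
    uniform line array and return the angle (degrees) of maximum output
    power.  Illustrative sketch only: a real system needs windowing,
    calibration, and a finer angle grid."""
    n_sensors, n_samples = signals.shape
    angles = np.linspace(-90.0, 90.0, 181)  # 1-degree grid
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    powers = []
    for theta in angles:
        # relative arrival delay at each sensor for a plane wave at theta
        tau = np.arange(n_sensors) * spacing * np.sin(np.radians(theta)) / c
        steer = np.exp(2j * np.pi * freqs[None, :] * tau[:, None])
        powers.append(np.sum(np.abs(np.sum(spectra * steer, axis=0)) ** 2))
    return float(angles[int(np.argmax(powers))])

# Synthetic test: 1 kHz plane wave from 30 degrees on 8 hydrophones
# spaced at half a wavelength (0.75 m for c = 1500 m/s)
fs, f0, c, d = 8000, 1000.0, 1500.0, 0.75
t = np.arange(0, 0.1, 1.0 / fs)
true_delays = np.arange(8) * d * np.sin(np.radians(30.0)) / c
x = np.stack([np.sin(2 * np.pi * f0 * (t - tau)) for tau in true_delays])
print(delay_and_sum_azimuth(x, fs, d, c))  # ≈ 30.0
```

    Half-wavelength spacing is chosen so the scan has no grating-lobe ambiguity.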

  13. Cortical processing of pitch: Model-based encoding and decoding of auditory fMRI responses to real-life sounds.

    PubMed

    De Angelis, Vittoria; De Martino, Federico; Moerel, Michelle; Santoro, Roberta; Hausfeld, Lars; Formisano, Elia

    2017-11-13

    Pitch is a perceptual attribute related to the fundamental frequency (or periodicity) of a sound. So far, the cortical processing of pitch has been investigated mostly using synthetic sounds. However, the complex harmonic structure of natural sounds may require different mechanisms for the extraction and analysis of pitch. This study investigated the neural representation of pitch in human auditory cortex using model-based encoding and decoding analyses of high-field (7 T) functional magnetic resonance imaging (fMRI) data collected while participants listened to a wide range of real-life sounds. Specifically, we modeled the fMRI responses as a function of the sounds' perceived pitch height and salience (related to the fundamental frequency and the harmonic structure, respectively), which we estimated with a computational algorithm of pitch extraction (de Cheveigné and Kawahara, 2002). First, using single-voxel fMRI encoding, we identified a pitch-coding region in the antero-lateral Heschl's gyrus (HG) and adjacent superior temporal gyrus (STG). In these regions, the pitch representation model combining height and salience predicted the fMRI responses better than other models of acoustic processing and, in the right hemisphere, better than pitch representations based on height or salience alone. Second, using model-based decoding, we showed that multi-voxel response patterns of the identified regions are more informative of perceived pitch than those of the remainder of the auditory cortex. Further multivariate analyses showed that complementing a multi-resolution spectro-temporal sound representation with pitch produces a small but significant improvement in the decoding of complex sounds from fMRI response patterns. In sum, this work extends model-based fMRI encoding and decoding methods - previously employed to examine the representation and processing of acoustic sound features in the human auditory system - to the representation and processing of a relevant perceptual attribute such as pitch. Taken together, the results of our model-based encoding and decoding analyses indicate that the pitch of complex real-life sounds is extracted and processed in lateral HG/STG regions, at locations consistent with those reported in several previous fMRI studies using synthetic sounds. Within these regions, pitch-related sound representations reflect the modulatory combination of the height and the salience of the pitch percept. Copyright © 2017 Elsevier Inc. All rights reserved.
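    The notion of pitch height as fundamental frequency can be illustrated with a toy autocorrelation peak-picker. This is a simple stand-in for illustration, not the de Cheveigné and Kawahara (2002) algorithm the study actually used, and the signal parameters are made up:

```python
import numpy as np

def autocorr_pitch(x, fs, fmin=50.0, fmax=500.0):
    """Crude fundamental-frequency (pitch height) estimate from the
    largest autocorrelation peak within the plausible lag range.
    (Toy method; the cited algorithm is considerably more robust.)"""
    x = x - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags >= 0
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag

# Harmonic complex with a 200 Hz fundamental and decaying harmonics
fs = 16000
t = np.arange(0, 0.2, 1.0 / fs)
x = sum(np.sin(2 * np.pi * 200 * k * t) / k for k in (1, 2, 3))
print(autocorr_pitch(x, fs))  # ≈ 200.0
```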

  14. 75 FR 53961 - Puget Sound Energy, Inc., Notice of Application for Amendment of License and Soliciting Comments...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-02

    ... turbine-generator units in the new powerhouse, (3) install a new 1,500 cubic foot per second (cfs) bypass... turbine-generator unit would have the same 1,500 cfs hydraulic capacity and the same 30 megawatts...

  15. Acoustic Effects in Classical Nucleation Theory

    NASA Technical Reports Server (NTRS)

    Baird, J. K.; Su, C.-H.

    2017-01-01

    The effect of sound wave oscillations on the rate of nucleation in a parent phase can be calculated by expanding the free energy of formation of a nucleus of the second phase in powers of the acoustic pressure. Since the period of sound wave oscillation is much shorter than the time scale for nucleation, the acoustic effect can be calculated as a time average of the free energy of formation of the nucleus. The leading non-zero term in the time average of the free energy is proportional to the square of the acoustic pressure. The Young-Laplace equation for the surface tension of the nucleus can be used to link the time average of the square of the pressure in the parent phase to its time average in the nucleus of the second phase. Due to the surface tension, the pressure in the nuclear phase is higher than the pressure in the parent phase. The effect is to lower the free energy of formation of the nucleus and increase the rate of nucleation.
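    The pressure relations invoked above can be written out explicitly (a sketch consistent with the abstract, where σ is the surface tension and r the nucleus radius):

```latex
p_{\text{nucleus}} - p_{\text{parent}} = \frac{2\sigma}{r}
\qquad \text{(Young--Laplace)},
\qquad
\langle p_a^2 \rangle = \frac{P^2}{2}
\quad \text{for } p_a(t) = P\cos\omega t ,
```

    so the leading non-zero, time-averaged acoustic correction to the free energy of formation scales as P², lowering the barrier and increasing the nucleation rate.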

  16. Calibrating an Ionosonde for Ionospheric Attenuation Measurements.

    PubMed

    Gilli, Lorenzo; Sciacca, Umberto; Zuccheretti, Enrico

    2018-05-15

    Vertical ionospheric soundings have been performed at almost all ionospheric observatories, but with little attention to measuring the attenuation of the signal between transmission and reception. When the absorption has been determined, this has been achieved by comparing the received power after the first and second reflections, but this method has some limitations due to the unknown reflection coefficient of the ground and the non-continuous presence of the second reflection. This paper deals with a different method based on precise calibration of the sounding system, allowing determination of the absolute signal attenuation after a single reflection. This approach is affected by a systematic error due to imperfect calibration of the antennas, but when the focus of interest is measuring a trend over a specified period, it is very accurate. The article describes how the calibration was implemented and the measurement output formats, and finally presents some results from a meaningful set of measurements in order to demonstrate what this method can accomplish.
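    For contrast with the calibration approach, the classic two-reflection comparison mentioned above can be sketched as follows. The second echo suffers one extra ionospheric round trip plus a ground reflection, so, ignoring the differing geometric spreading of the two echoes (one of the limitations the paper notes), the power ratio yields the round-trip absorption. The function name and all values are hypothetical:

```python
import math

def roundtrip_absorption_db(p1, p2, ground_reflectance=1.0):
    """Round-trip ionospheric absorption (dB) inferred from the received
    powers of the first (p1) and second (p2) echoes, assuming identical
    spreading losses.  Illustrative sketch only; the paper's calibrated
    single-reflection method avoids these assumptions."""
    return 10 * math.log10(p1 / p2) + 10 * math.log10(ground_reflectance)

# Example: second echo 100x weaker, ground power reflectance 0.5
print(round(roundtrip_absorption_db(1.0, 0.01, 0.5), 1))  # ≈ 17.0 dB
```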

  17. Student's Second-Language Grade May Depend on Classroom Listening Position.

    PubMed

    Hurtig, Anders; Sörqvist, Patrik; Ljung, Robert; Hygge, Staffan; Rönnberg, Jerker

    2016-01-01

    The purpose of this experiment was to explore whether listening positions (close or distant location from the sound source) in the classroom, and classroom reverberation, influence students' score on a test for second-language (L2) listening comprehension (i.e., comprehension of English in Swedish speaking participants). The listening comprehension test administered was part of a standardized national test of English used in the Swedish school system. A total of 125 high school pupils, 15 years old, participated. Listening position was manipulated within subjects, classroom reverberation between subjects. The results showed that L2 listening comprehension decreased as distance from the sound source increased. The effect of reverberation was qualified by the participants' baseline L2 proficiency. A shorter reverberation was beneficial to participants with high L2 proficiency, while the opposite pattern was found among the participants with low L2 proficiency. The results indicate that listening comprehension scores-and hence students' grade in English-may depend on students' classroom listening position.

  18. Student’s Second-Language Grade May Depend on Classroom Listening Position

    PubMed Central

    Sörqvist, Patrik; Ljung, Robert; Hygge, Staffan; Rönnberg, Jerker

    2016-01-01

    The purpose of this experiment was to explore whether listening positions (close or distant location from the sound source) in the classroom, and classroom reverberation, influence students’ score on a test for second-language (L2) listening comprehension (i.e., comprehension of English in Swedish speaking participants). The listening comprehension test administered was part of a standardized national test of English used in the Swedish school system. A total of 125 high school pupils, 15 years old, participated. Listening position was manipulated within subjects, classroom reverberation between subjects. The results showed that L2 listening comprehension decreased as distance from the sound source increased. The effect of reverberation was qualified by the participants’ baseline L2 proficiency. A shorter reverberation was beneficial to participants with high L2 proficiency, while the opposite pattern was found among the participants with low L2 proficiency. The results indicate that listening comprehension scores—and hence students’ grade in English—may depend on students’ classroom listening position. PMID:27304980

  19. Auditory and visual localization accuracy in young children and adults.

    PubMed

    Martin, Karen; Johnstone, Patti; Hedrick, Mark

    2015-06-01

    This study aimed to measure and compare sound and light source localization ability in young children and adults who have normal hearing and normal/corrected vision in order to determine the extent to which age, type of stimuli, and stimulus order affects sound localization accuracy. Two experiments were conducted. The first involved a group of adults only. The second involved a group of 30 children aged 3 to 5 years. Testing occurred in a sound-treated booth containing a semi-circular array of 15 loudspeakers set at 10° intervals from -70° to 70° azimuth. Each loudspeaker had a tiny light bulb and a small picture fastened underneath. Seven of the loudspeakers were used to randomly test sound and light source identification. The sound stimulus was the word "baseball". The light stimulus was a flashing of a light bulb triggered by the digital signal of the word "baseball". Each participant was asked to face 0° azimuth, and identify the location of the test stimulus upon presentation. Adults used a computer mouse to click on an icon; children responded by verbally naming or walking toward the picture underneath the corresponding loudspeaker or light. A mixed experimental design using repeated measures was used to determine the effect of age and stimulus type on localization accuracy in children and adults. A mixed experimental design was used to compare the effect of stimulus order (light first/last) and varying or fixed intensity sound on localization accuracy in children and adults. Localization accuracy was significantly better for light stimuli than sound stimuli for children and adults. Children, compared to adults, showed significantly greater localization errors for audition. Three-year-old children had significantly greater sound localization errors compared to 4- and 5-year olds. Adults performed better on the sound localization task when the light localization task occurred first. 
Young children can understand and attend to localization tasks, but show poorer localization accuracy than adults in sound localization. This may be a reflection of differences in sensory modality development and/or central processes in young children, compared to adults. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  20. On doing two things at once: dolphin brain and nose coordinate sonar clicks, buzzes and emotional squeals with social sounds during fish capture.

    PubMed

    Ridgway, Sam; Samuelson Dibble, Dianna; Van Alstyne, Kaitlin; Price, DruAnn

    2015-12-01

    Dolphins fishing alone in open waters may whistle without interrupting their sonar clicks as they find and eat or reject fish. Our study is the first to match sound and video from the dolphin with sound and video from near the fish. During search and capture of fish, free-swimming dolphins carried cameras to record video and sound. A hydrophone in the far field near the fish also recorded sound. From these two perspectives, we studied the time course of dolphin sound production during fish capture. Our observations identify the instant of fish capture. There are three consistent acoustic phases: sonar clicks locate the fish; about 0.4 s before capture, the dolphin clicks become more rapid to form a second phase, the terminal buzz; at or just before capture, the buzz turns to an emotional squeal (the victory squeal), which may last 0.2 to 20 s after capture. The squeals are pulse bursts that vary in duration, peak frequency and amplitude. The victory squeal may be a reflection of emotion triggered by brain dopamine release. It may also affect prey to ease capture and/or it may be a way to communicate the presence of food to other dolphins. Dolphins also use whistles as communication or social sounds. Whistling during sonar clicking suggests that dolphins may be adept at doing two things at once. We know that dolphin brain hemispheres may sleep independently. Our results suggest that the two dolphin brain hemispheres may also act independently in communication. © 2015. Published by The Company of Biologists Ltd.

  1. Experiencing Earth's inaudible symphony

    NASA Astrophysics Data System (ADS)

    Marlton, Graeme; Charlton-Perez, Andrew; Harrison, Giles; Robson, Juliet

    2017-04-01

    Every day the human body is exposed to thousands of different sounds: smartphones, music, cars and overhead aircraft, to name a few. There are some sounds, however, which we cannot hear because they are below our range of hearing; sound at this level is known as infrasound and is of very low frequency. Examples of infrasound sources are glaciers and volcanoes, distant mining activities and the ocean. These sounds are emitted constantly all over the world and are recorded at infrasound stations, providing a recording of Earth's inaudible symphony. The aim of this collaboration between artists and scientists is to create a proof-of-concept immersive experience in which members of the public are invited to experience and understand infrasound. Participants will sit in an installation and be shown images of natural infrasound sources whilst their seat is vibrated with an amplitude-modulated version of the original infrasound wave. To further enhance the experience, subwoofers will play the same amplitude-modulated soundwave, giving the feeling of the infrasound wave passing through the installation. Amplitude modulation is performed so that a vibration is played at a frequency that can be felt by the human body while its amplitude varies at the frequency of the infrasound wave. The aim of the project is to see how humans perceive sounds that cannot be heard and that many did not know were there. The second part of the project is educational: the installation can be used to teach the general public about infrasound and its scientific uses. Since transporting the full installation is not possible, a simple demonstration for this session could be the playing of an amplitude-modulated infrasound wave that can be heard rather than felt, together with the associated imagery.
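    The amplitude-modulation scheme described above can be sketched as follows. The carrier frequency, sample rate, and modulation depth are assumed values for illustration, not the installation's actual parameters:

```python
import numpy as np

def am_infrasound(f_infra, f_carrier=80.0, fs=8000, dur=2.0, depth=1.0):
    """Amplitude-modulate a feelable carrier with an infrasound envelope:
    the vibration plays at f_carrier while its amplitude varies at
    f_infra, as the abstract describes.  All parameter values here are
    assumptions, not the installation's settings."""
    t = np.arange(0, dur, 1.0 / fs)
    envelope = 1.0 + depth * np.sin(2 * np.pi * f_infra * t)
    return envelope * np.sin(2 * np.pi * f_carrier * t)

# A 0.5 Hz infrasound wave carried on an 80 Hz vibration
sig = am_infrasound(f_infra=0.5)
```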

  2. Auditory observation of stepping actions can cue both spatial and temporal components of gait in Parkinson's disease patients.

    PubMed

    Young, William R; Rodger, Matthew W M; Craig, Cathy M

    2014-05-01

    A common behavioural symptom of Parkinson's disease (PD) is reduced step length (SL). Whilst sensory cueing strategies can be effective in increasing SL and reducing gait variability, current cueing strategies conveying spatial or temporal information are generally confined to the use of either visual or auditory cue modalities, respectively. We describe a novel cueing strategy using ecologically-valid 'action-related' sounds (footsteps on gravel) that convey both spatial and temporal parameters of a specific action within a single cue. The current study used a real-time imitation task to examine whether PD affects the ability to re-enact changes in spatial characteristics of stepping actions, based solely on auditory information. In a second experimental session, these procedures were repeated using synthesized sounds derived from recordings of the kinetic interactions between the foot and walking surface. A third experimental session examined whether adaptations observed when participants walked to action-sounds were preserved when participants imagined either real recorded or synthesized sounds. Whilst healthy control participants were able to re-enact significant changes in SL in all cue conditions, these adaptations, in conjunction with reduced variability of SL, were only observed in the PD group when walking to, or imagining, the recorded sounds. The findings show that while recordings of stepping sounds convey action information that allows PD patients to re-enact and imagine spatial characteristics of gait, synthesis of sounds purely from gait kinetics is insufficient to evoke similar changes in behaviour, perhaps indicating that PD patients have a higher threshold to cue sensorimotor resonant responses. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. Sonic Kayaks: Environmental monitoring and experimental music by citizens.

    PubMed

    Griffiths, Amber G F; Kemp, Kirsty M; Matthews, Kaffe; Garrett, Joanne K; Griffiths, David J

    2017-11-01

    The Sonic Kayak is a musical instrument used to investigate nature and developed during open hacklab events. The kayaks are rigged with underwater environmental sensors, which allow paddlers to hear real-time water temperature sonifications and underwater sounds, generating live music from the marine world. Sensor data is also logged every second with location, time and date, which allows for fine-scale mapping of water temperatures and underwater noise that was previously unattainable using standard research equipment. The system can be used as a citizen science data collection device, research equipment for professional scientists, or a sound art installation in its own right.
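    The per-second logging scheme described above (value plus location, time and date) might look roughly like this. This is a hypothetical sketch, not the Sonic Kayak project's actual code or file format:

```python
import csv
import io
from datetime import datetime, timezone

def log_reading(writer, temp_c, lat, lon):
    """Append one per-second sensor record: UTC timestamp, position,
    value.  (Hypothetical illustration of the logging scheme described;
    column names and values are made up.)"""
    writer.writerow([datetime.now(timezone.utc).isoformat(), lat, lon, temp_c])

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["utc_time", "lat", "lon", "water_temp_c"])  # header row
log_reading(writer, 14.2, 50.15, -5.07)  # called once per second in practice
```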

  4. Sonic Kayaks: Environmental monitoring and experimental music by citizens

    PubMed Central

    Kemp, Kirsty M.; Matthews, Kaffe; Garrett, Joanne K.; Griffiths, David J.

    2017-01-01

    The Sonic Kayak is a musical instrument used to investigate nature and developed during open hacklab events. The kayaks are rigged with underwater environmental sensors, which allow paddlers to hear real-time water temperature sonifications and underwater sounds, generating live music from the marine world. Sensor data is also logged every second with location, time and date, which allows for fine-scale mapping of water temperatures and underwater noise that was previously unattainable using standard research equipment. The system can be used as a citizen science data collection device, research equipment for professional scientists, or a sound art installation in its own right. PMID:29190283

  5. Propagation of high amplitude higher order sounds in slightly soft rectangular ducts, carrying mean flow

    NASA Technical Reports Server (NTRS)

    Wang, K. S.; Vaidya, P. G.

    1975-01-01

    The resonance expansion method, developed to study the propagation of sound in rigid rectangular ducts, is applied to the case of slightly soft ducts. Expressions for the generation and decay of various harmonics are obtained. The effect of wall admittance is seen through a dissipation function in the system of nonlinear differential equations governing the generation of harmonics. As the wall admittance increases, the resonance is reduced. For a given wall admittance this phenomenon is stronger at higher input intensities. Both the first- and second-order solutions are obtained, and the results are extended to the case of ducts having mean flow.

  6. An Improved Theoretical Aerodynamic Derivatives Computer Program for Sounding Rockets

    NASA Technical Reports Server (NTRS)

    Barrowman, J. S.; Fan, D. N.; Obosu, C. B.; Vira, N. R.; Yang, R. J.

    1979-01-01

    The paper outlines a Theoretical Aerodynamic Derivatives (TAD) computer program for computing the aerodynamics of sounding rockets. TAD outputs include normal force, pitching moment and rolling moment coefficient derivatives as well as center-of-pressure locations as a function of the flight Mach number. TAD is applicable to slender finned axisymmetric vehicles at small angles of attack in subsonic and supersonic flows. TAD improvement efforts include extending Mach number regions of applicability, improving accuracy, and replacement of some numerical integration algorithms with closed-form integrations. Key equations used in TAD are summarized and typical TAD outputs are illustrated for a second-stage Tomahawk configuration.
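    The component-combination step behind such a program can be illustrated Barrowman-style: each component contributes a normal-force coefficient derivative (CNα) and a center of pressure, the CNα values add, and the vehicle center of pressure is the CNα-weighted average. This is the standard textbook combination, not TAD's own code, and the component numbers are made up:

```python
def total_cp(components):
    """Combine per-component normal-force coefficient derivatives (cna,
    per radian) and centers of pressure (xcp, metres from the nose tip)
    into vehicle totals, in the style of the Barrowman method."""
    cna_total = sum(c["cna"] for c in components)
    xcp_total = sum(c["cna"] * c["xcp"] for c in components) / cna_total
    return cna_total, xcp_total

# Hypothetical two-component vehicle (values for illustration only)
parts = [
    {"name": "nose", "cna": 2.0, "xcp": 0.35},  # slender nose cone: CNa = 2
    {"name": "fins", "cna": 8.0, "xcp": 2.10},  # fin set incl. interference
]
cna, xcp = total_cp(parts)
print(cna, round(xcp, 2))  # 10.0 1.75
```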

  7. Second Language Acquisition and Schizophrenia

    ERIC Educational Resources Information Center

    Dugan, James E.

    2014-01-01

    Schizophrenia is a complex mental disorder that results in language-related symptoms at various discourse levels, ranging from semantics (e.g. inventing words and producing nonsensical strands of similar-sounding words) to pragmatics and higher-level functioning (e.g. too little or too much information given to interlocutors, and tangential…

  8. Discipline in the School. Second Edition.

    ERIC Educational Resources Information Center

    Hartwig, Eric P.; Ruesch, Gary M.

    This book is intended to assist in the formulation, implementation, and evaluation of more effective, legally sound disciplinary policies and procedures for all students in school. Case histories, court decisions, literature reviews, and positive education practices regarding the use of properly created intervention plans are available in many…

  9. Bid opening report : Federal-aid highway construction contracts : first six months 1999

    DOT National Transportation Integrated Search

    1996-05-01

    This second volume of the draft final report focuses on specialized travel trends in the Puget Sound panel data from 1989 through 1993. Trips were categorized by purpose and mode, with each trip of each wave characterized by three variables: total tr...

  10. 3. Context view includes Building 78 (second from left) and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. Context view includes Building 78 (second from left) and Building 59 (partially seen at right edge). Camera is pointed WSW along Farragut Avenue. Buildings on left side of street are, from left: Building 38, Building 78 and Building 431. Crane No. 80 is at Drydock No. 1. Buildings on right side of street are, from right: Building 59 (with porch posts) and Building 856 (two sections). - Puget Sound Naval Shipyard, Administration Building, Farragut Avenue, Bremerton, Kitsap County, WA

  11. The Advanced Technology Microwave Sounder (ATMS): First Year On-Orbit

    NASA Technical Reports Server (NTRS)

    Kim, Edward J.

    2012-01-01

    The Advanced Technology Microwave Sounder (ATMS) is a new satellite microwave sounding sensor designed to provide operational weather agencies with atmospheric temperature and moisture profile information for global weather forecasting and climate applications. ATMS will continue the microwave sounding capabilities first provided by its predecessors, the Microwave Sounding Unit (MSU) and Advanced Microwave Sounding Unit (AMSU). The first flight unit was launched a year ago, in October 2011, aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite, part of the new Joint Polar Satellite System (JPSS). Microwave soundings by themselves are the highest-impact input data used by Numerical Weather Prediction models, and ATMS, when combined with the Cross-track Infrared Sounder (CrIS), forms the Cross-track Infrared and Microwave Sounding Suite (CrIMSS). The microwave soundings help meet sounding requirements under cloudy-sky conditions and provide key profile information near the surface. ATMS was designed and built by Aerojet Corporation in Azusa, California (now Northrop Grumman Electronic Systems). It has 22 channels spanning 23-183 GHz, closely following the channel set of the MSU, AMSU-A1/A2, AMSU-B, Microwave Humidity Sounder (MHS), and Humidity Sounder for Brazil (HSB). It continues their cross-track scanning geometry but, for the first time, provides Nyquist sample spacing. All this is accomplished with a fraction of the volume, mass, and power of the three AMSUs. A description will be given of its performance from its first year of operation as determined by post-launch calibration activities. These activities include radiometric calibration using the on-board warm targets and cold-space views, and geolocation determination. Example imagery and zooms of specific weather events will be shown. 
The second ATMS flight model is currently under construction and planned for launch on the J1 satellite of the JPSS program in approximately 2016. Additional units are expected on the J2 and J3 satellites, as well as potentially on future European MetOp satellites.
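    The two-point radiometric calibration mentioned above (on-board warm target plus cold-space view) amounts to a linear mapping from raw counts to scene brightness temperature. This is the textbook form only; the real ATMS calibration adds nonlinearity and antenna corrections, and the counts and warm-target temperature below are made up:

```python
def counts_to_tb(c_scene, c_cold, c_warm, t_cold=2.73, t_warm=300.0):
    """Two-point radiometric calibration: linearly map raw radiometer
    counts to scene brightness temperature (K) using the cold-space view
    (~2.73 K) and the on-board warm calibration target.  Sketch only;
    all count values and the warm-target temperature are assumed."""
    gain = (t_warm - t_cold) / (c_warm - c_cold)
    return t_cold + gain * (c_scene - c_cold)

# Counts halfway between cold and warm map halfway between the two
# reference temperatures
print(counts_to_tb(c_scene=600, c_cold=100, c_warm=1100))  # ≈ 151.365 K
```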

  12. Difficulty in learning similar-sounding words: a developmental stage or a general property of learning?

    PubMed Central

    Pajak, Bozena; Creel, Sarah C.; Levy, Roger

    2016-01-01

    How are languages learned, and to what extent are learning mechanisms similar in infant native-language (L1) and adult second-language (L2) acquisition? In terms of vocabulary acquisition, we know from the infant literature that the ability to discriminate similar-sounding words at a particular age does not guarantee successful word-meaning mapping at that age (Stager & Werker, 1997). However, it is unclear whether this difficulty arises from developmental limitations of young infants (e.g., poorer working memory) or whether it is an intrinsic part of the initial word learning, L1 and L2 alike. Here we show that adults of particular L1 backgrounds—just like young infants—have difficulty learning similar-sounding L2 words that they can nevertheless discriminate perceptually. This suggests that the early stages of word learning, whether L1 or L2, intrinsically involve difficulty in mapping similar-sounding words onto referents. We argue that this is due to an interaction between two main factors: (1) memory limitations that pose particular challenges for highly similar-sounding words, and (2) uncertainty regarding the language's phonetic categories, as these are being learned concurrently with words. Overall, our results show that vocabulary acquisition in infancy and in adulthood share more similarities than previously thought, thus supporting the existence of common learning mechanisms that operate throughout the lifespan. PMID:26962959

  13. Echolocation versus echo suppression in humans

    PubMed Central

    Wallmeier, Ludwig; Geßele, Nikodemus; Wiegrebe, Lutz

    2013-01-01

    Several studies have shown that blind humans can gather spatial information through echolocation. However, when localizing sound sources, the precedence effect suppresses spatial information of echoes, and thereby conflicts with effective echolocation. This study investigates the interaction of echolocation and echo suppression in terms of discrimination suppression in virtual acoustic space. In the ‘Listening’ experiment, sighted subjects discriminated between positions of a single sound source, the leading or the lagging of two sources, respectively. In the ‘Echolocation’ experiment, the sources were replaced by reflectors. Here, the same subjects evaluated echoes generated in real time from self-produced vocalizations and thereby discriminated between positions of a single reflector, the leading or the lagging of two reflectors, respectively. Two key results were observed. First, sighted subjects can learn to discriminate positions of reflective surfaces echo-acoustically with accuracy comparable to sound source discrimination. Second, in the Listening experiment, the presence of the leading source affected discrimination of lagging sources much more than vice versa. In the Echolocation experiment, however, the presence of both the lead and the lag strongly affected discrimination. These data show that the classically described asymmetry in the perception of leading and lagging sounds is strongly diminished in an echolocation task. Additional control experiments showed that the effect is owing to both the direct sound of the vocalization that precedes the echoes and owing to the fact that the subjects actively vocalize in the echolocation task. PMID:23986105

  14. Observationally constrained modeling of sound in curved ocean internal waves: examination of deep ducting and surface ducting at short range.

    PubMed

    Duda, Timothy F; Lin, Ying-Tsong; Reeder, D Benjamin

    2011-09-01

    A study of 400 Hz sound focusing and ducting effects in a packet of curved nonlinear internal waves in shallow water is presented. Sound propagation roughly along the crests of the waves is simulated with a three-dimensional parabolic equation computational code, and the results are compared to measured propagation along fixed 3 and 6 km source/receiver paths. The measurements were made on the shelf of the South China Sea northeast of Tung-Sha Island. Construction of the time-varying three-dimensional sound-speed fields used in the modeling simulations was guided by environmental data collected concurrently with the acoustic data. Computed three-dimensional propagation results compare well with field observations. The simulations allow identification of time-dependent sound forward scattering and ducting processes within the curved internal gravity waves. Strong acoustic intensity enhancement was observed during passage of high-amplitude nonlinear waves over the source/receiver paths, and is replicated in the model. The waves were typical of the region (35 m vertical displacement). Two types of ducting are found in the model, which occur asynchronously. One type is three-dimensional modal trapping in deep ducts within the wave crests (shallow thermocline zones). The second type is surface ducting within the wave troughs (deep thermocline zones). © 2011 Acoustical Society of America

  15. Dimensions Underlying the Perceived Similarity of Acoustic Environments

    PubMed Central

    Aletta, Francesco; Axelsson, Östen; Kang, Jian

    2017-01-01

    Scientific research on how people perceive, experience, and understand the acoustic environment as a whole (i.e., soundscape) is still in development. In order to predict how people would perceive an acoustic environment, it is central to identify its underlying acoustic properties. This was the purpose of the present study. Three successive experiments were conducted. With the aid of 30 university students, the first experiment mapped the underlying dimensions of perceived similarity among 50 acoustic environments, using a visual sorting task of their spectrograms. Three dimensions were identified: (1) Distinguishable–Indistinguishable sound sources, (2) Background–Foreground sounds, and (3) Intrusive–Smooth sound sources. The second experiment aimed to validate the results of Experiment 1 in a listening experiment. However, a majority of the 10 expert listeners involved in Experiment 2 used a qualitatively different approach from the 30 university students in Experiment 1. A third experiment was conducted in which 10 more expert listeners performed the same task as in Experiment 2, with spliced audio signals. Nevertheless, Experiment 3 provided a statistically significantly worse result than Experiment 2. These results suggest that information about the meaning of the recorded sounds could be retrieved in the spectrograms, and that the meaning of the sounds may be captured with the aid of holistic features of the acoustic environment, but such features are still unexplored and further in-depth research is needed in this field. PMID:28747894

  16. Changes in room acoustics elicit a Mismatch Negativity in the absence of overall interaural intensity differences.

    PubMed

    Frey, Johannes Daniel; Wendt, Mike; Löw, Andreas; Möller, Stephan; Zölzer, Udo; Jacobsen, Thomas

    2017-02-15

    Changes in room acoustics provide important clues about the environment of sound source-perceiver systems, for example, by indicating changes in the reflecting characteristics of surrounding objects. To study the detection of auditory irregularities brought about by a change in room acoustics, a passive oddball protocol with participants watching a movie was applied in this study. Acoustic stimuli were presented via headphones. Standards and deviants were created by modelling rooms of different sizes, keeping the values of the basic acoustic dimensions (e.g., frequency, duration, sound pressure, and sound source location) as constant as possible. In the first experiment, each standard and deviant stimulus consisted of sequences of three short sounds derived from sinusoidal tones, resulting in three onsets during each stimulus. Deviant stimuli elicited a Mismatch Negativity (MMN) as well as two additional negative deflections corresponding to the three onset peaks. In the second experiment, only one sound was used; the stimuli were otherwise identical to the ones used in the first experiment. Again, an MMN was observed, followed by an additional negative deflection. These results provide further support for the hypothesis of automatic detection of unattended changes in room acoustics, extending previous work by demonstrating the elicitation of an MMN by changes in room acoustics. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. Acquired word deafness, and the temporal grain of sound representation in the primary auditory cortex.

    PubMed

    Phillips, D P; Farmer, M E

    1990-11-15

    This paper explores the nature of the processing disorder which underlies the speech discrimination deficit in the syndrome of acquired word deafness following from pathology to the primary auditory cortex. A critical examination of the evidence on this disorder revealed the following. First, the most profound forms of the condition are expressed not only in an isolation of the cerebral linguistic processor from auditory input, but in a failure of even the perceptual elaboration of the relevant sounds. Second, in agreement with earlier studies, we conclude that the perceptual dimension disturbed in word deafness is a temporal one. We argue, however, that it is not a generalized disorder of auditory temporal processing, but one which is largely restricted to the processing of sounds with temporal content in the milliseconds to tens-of-milliseconds time frame. The perceptual elaboration of sounds with temporal content outside that range, in either direction, may survive the disorder. Third, we present neurophysiological evidence that the primary auditory cortex has a special role in the representation of auditory events in that time frame, but not in the representation of auditory events with temporal grains outside that range.

  18. Acoustic detection of pneumothorax

    NASA Astrophysics Data System (ADS)

    Mansy, Hansen A.; Royston, Thomas J.; Balk, Robert A.; Sandler, Richard H.

    2003-04-01

    This study aims at investigating the feasibility of using low-frequency (<2000 Hz) acoustic methods for medical diagnosis. Several candidate methods of pneumothorax detection were tested in dogs. In the first approach, broadband acoustic signals were introduced into the trachea during end-expiration and transmitted waves were measured at the chest surface. Pneumothorax was found to consistently decrease pulmonary acoustic transmission in the 200-1200-Hz frequency band, while less change was observed at lower frequencies (p<0.0001). The ratio of acoustic energy between low (<220 Hz) and mid (550-770 Hz) frequency bands was significantly different in the control (healthy) and pneumothorax states (p<0.0001). The second approach measured breath sounds in the absence of an external acoustic input. Pneumothorax was found to be associated with a preferential reduction of sound amplitude in the 200- to 700-Hz range, and a decrease of sound amplitude variation (in the 300- to 600-Hz band) during the respiration cycle (p<0.01 for each). Finally, chest percussion was implemented. Pneumothorax changed the frequency and decay rate of percussive sounds. These results imply that certain medical conditions may be reliably detected using appropriate acoustic measurements and analysis. [Work supported by NIH/NHLBI #R44HL61108.]

  19. Phonotactic flight of the parasitoid fly Emblemasoma auditrix (Diptera: Sarcophagidae).

    PubMed

    Tron, Nanina; Lakes-Harlan, Reinhard

    2017-01-01

    The parasitoid fly Emblemasoma auditrix locates its hosts using acoustic cues from sound-producing males of the cicada Okanagana rimosa. Here, we experimentally analysed the flight path of the phonotaxis from a landmark to the target, a hidden loudspeaker in the field. During flight, the fly showed only small lateral deviations. The vertical flight direction angles were initially negative (directed downwards relative to starting position), grew positive (directed upwards) in the second half of the flight, and finally flattened (directed horizontally or slightly upwards), typically resulting in a landing above the loudspeaker. This phonotactic flight pattern was largely independent of sound pressure level and target distance, but depended on the elevation of the sound source. The flight velocity was partially influenced by sound pressure level and distance, but also by elevation: the more elevated the target, the lower the speed. Both flight accuracy and landing precision increased with target elevation. The minimal vertical angle difference eliciting differences in behaviour was 10°. By changing the elevation of the acoustic target after take-off, we showed that the fly is able to orientate acoustically while flying.

  20. Observations of shallow water marine ambient sound: the low frequency underwater soundscape of the central Oregon coast.

    PubMed

    Haxel, Joseph H; Dziak, Robert P; Matsumoto, Haru

    2013-05-01

    A year-long experiment (March 2010 to April 2011) measuring ambient sound at a shallow water site (50 m) on the central Oregon coast near the Port of Newport provides important baseline information for comparisons with future measurements associated with resource development along the inner continental shelf of the Pacific Northwest. Ambient levels in frequencies affected by surf-generated noise (f < 100 Hz) characterize the site as a high-energy end member within the spectrum of shallow water coastal areas influenced by breaking waves. Dominant sound sources include locally generated ship noise (66% of total hours contain local ship noise), breaking surf, wind-induced wave breaking and baleen whale vocalizations. Additionally, an increase in spectral levels for frequencies ranging from 35 to 100 Hz is attributed to noise radiated from distant commercial shipping. One-second root mean square (rms) sound pressure level (SPLrms) estimates calculated across the 10-840 Hz frequency band for the entire year-long deployment show minimum, mean, and maximum values of 84 dB, 101 dB, and 152 dB re 1 μPa.
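    The SPLrms figures quoted above follow the standard underwater-acoustics convention SPL = 20·log10(p_rms / p_ref) with p_ref = 1 μPa. The sketch below shows how such one-second estimates can be formed from a calibrated pressure time series; the windowing details are illustrative assumptions, not the authors' actual processing chain.

```python
import numpy as np

def spl_rms(pressure_pa, fs, p_ref=1e-6):
    """One-second rms sound pressure levels in dB re 1 uPa.

    pressure_pa: calibrated pressure time series in pascals
    fs: sampling rate in Hz (so one window = fs samples)
    """
    n = int(fs)                       # samples per one-second window
    n_windows = len(pressure_pa) // n
    levels = []
    for i in range(n_windows):
        seg = pressure_pa[i * n:(i + 1) * n]
        p_rms = np.sqrt(np.mean(seg ** 2))
        levels.append(20.0 * np.log10(p_rms / p_ref))
    return np.array(levels)

# sanity check: a sinusoid with 1 Pa rms amplitude is 120 dB re 1 uPa
fs = 1000
t = np.arange(fs) / fs
tone = np.sqrt(2) * np.sin(2 * np.pi * 100 * t)  # rms = 1 Pa
print(spl_rms(tone, fs))  # -> [120.]
```

    In practice the series would first be band-limited to the analysis band (10-840 Hz in the paper) before windowing.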

  1. Intelligibility of speech in a virtual 3-D environment.

    PubMed

    MacDonald, Justin A; Balakrishnan, J D; Orosz, Michael D; Karplus, Walter J

    2002-01-01

    In a simulated air traffic control task, improvement in the detection of auditory warnings when using virtual 3-D audio depended on the spatial configuration of the sounds. Performance improved substantially when two of four sources were placed to the left and the remaining two were placed to the right of the participant. Surprisingly, little or no benefits were observed for configurations involving the elevation or transverse (front/back) dimensions of virtual space, suggesting that position on the interaural (left/right) axis is the crucial factor to consider in auditory display design. The relative importance of interaural spacing effects was corroborated in a second, free-field (real space) experiment. Two additional experiments showed that (a) positioning signals to the side of the listener is superior to placing them in front even when two sounds are presented in the same location, and (b) the optimal distance on the interaural axis varies with the amplitude of the sounds. These results are well predicted by the behavior of an ideal observer under the different display conditions. This suggests that guidelines for auditory display design that allow for effective perception of speech information can be developed from an analysis of the physical sound patterns.

  2. The extended Fourier pseudospectral time-domain method for atmospheric sound propagation.

    PubMed

    Hornikx, Maarten; Waxler, Roger; Forssén, Jens

    2010-10-01

    An extended Fourier pseudospectral time-domain (PSTD) method is presented to model atmospheric sound propagation by solving the linearized Euler equations. In this method, evaluation of spatial derivatives is based on an eigenfunction expansion. Evaluation on a spatial grid requires only two spatial points per wavelength. Time iteration is done using a low-storage optimized six-stage Runge-Kutta method. This method is applied to two-dimensional non-moving media models, one with screens and one for an urban canyon, with generally high accuracy in both amplitude and phase. For a moving atmosphere, accurate results have been obtained in models with both a uniform and a logarithmic wind velocity profile over a rigid ground surface and in the presence of a screen. The method has also been validated for three-dimensional sound propagation over a screen. For that application, the developed method is on the order of 100 times faster than the second-order-accurate FDTD solution to the linearized Euler equations. The method is found to be well suited for atmospheric sound propagation simulations where effects of complex meteorology and straight rigid boundary surfaces are to be investigated.
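    The two-points-per-wavelength property comes from evaluating spatial derivatives in the Fourier domain, where differentiation is exact up to the Nyquist wavenumber. A minimal sketch of such a spectral derivative is shown below; this is a generic illustration of the principle, not the paper's eigenfunction-expansion implementation.

```python
import numpy as np

def fourier_derivative(f, dx):
    """Spectral first derivative of a periodic, band-limited signal.

    Differentiation becomes multiplication by i*k in wavenumber space,
    so the derivative is exact for all resolved wavenumbers.
    """
    n = len(f)
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)  # wavenumber grid
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

# d/dx sin(x) = cos(x), recovered to machine precision on a coarse grid
n, length = 64, 2 * np.pi
x = np.arange(n) * length / n
err = np.max(np.abs(fourier_derivative(np.sin(x), length / n) - np.cos(x)))
print(err < 1e-10)  # -> True
```

    A finite-difference scheme of fixed order would need many more points per wavelength to reach comparable accuracy, which is the source of the speed-up reported above.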

  3. Sound transmission through lightweight double-leaf partitions: theoretical modelling

    NASA Astrophysics Data System (ADS)

    Wang, J.; Lu, T. J.; Woodhouse, J.; Langley, R. S.; Evans, J.

    2005-09-01

    This paper presents theoretical modelling of the sound transmission loss through double-leaf lightweight partitions stiffened with periodically placed studs. First, by assuming that the effect of the studs can be replaced with elastic springs uniformly distributed between the sheathing panels, a simple smeared model is established. Second, periodic structure theory is used to develop a more accurate model taking account of the discrete placing of the studs. Both models treat incident sound waves in the horizontal plane only, for simplicity. The predictions of the two models are compared, to reveal the physical mechanisms determining sound transmission. The smeared model predicts relatively simple behaviour, in which the only conspicuous features are associated with coincidence effects with the two types of structural wave allowed by the partition model, and internal resonances of the air between the panels. In the periodic model, many more features are evident, associated with the structure of pass- and stop-bands for structural waves in the partition. The models are used to explain the effects of incidence angle and of the various system parameters. The predictions are compared with existing test data for steel plates with wooden stiffeners, and good agreement is obtained.

  4. Simplified method to solve sound transmission through structures lined with elastic porous material.

    PubMed

    Lee, J H; Kim, J

    2001-11-01

    An approximate analysis method is developed to calculate sound transmission through structures lined with porous material. Because the porous material has both a solid phase and a fluid phase, three wave components exist in the material, which makes the related analysis very complicated. The main idea in developing the approximate method is very simple: modeling the porous material using only the strongest of the three waves, which in effect idealizes the material as an equivalent fluid. The analysis procedure has to be conducted in two steps. In the first step, sound transmission through a flat double panel with a porous liner of infinite extent, which has the same cross-sectional construction as the actual structure, is solved based on the full theory and the strongest wave component is identified. In the second step, sound transmission through the actual structure is solved modeling the porous material as an equivalent fluid while using the actual geometry of the structure. The development and validation of the method are discussed in detail. As an application example, the transmission loss through double walled cylindrical shells with a porous core is calculated utilizing the simplified method.

  5. Light-weight low-frequency loudspeaker

    NASA Astrophysics Data System (ADS)

    Corsaro, Robert; Tressler, James

    2002-05-01

    In an aerospace application, we require a very low-mass sound generator with good performance at low audio frequencies (i.e., 30-400 Hz). A number of device configurations have been explored using various actuation technologies. Two particularly interesting devices have been developed, both using "Thunder" transducers (Face Intl. Corp.) as the actuation component. One of these devices has the advantage of high sound output but a complex phase spectrum, while the other has somewhat lower output but a highly uniform phase. The former is particularly novel in that the actuator is coupled to a flat, compliant diaphragm supported on the edges by an inflatable tube. This results in a radiating surface with very high modal complexity. Sound pressure levels measured in the far field (25 cm) using only 200-V peak drive (one-third of its rating) were nominally 74 ± 6 dB over the band from 38 to 330 Hz. The second device essentially operates as a stiff low-mass piston, and is more suitable for our particular application, which is exploring the use of actively controlled surface covers for reducing sound levels in payload fairing regions. [Work supported by NRL/ONR Smart Blanket program.]

  6. Vortex/Body Interaction and Sound Generation in Low-Speed Flow

    NASA Technical Reports Server (NTRS)

    Kao, Hsiao C.

    1998-01-01

    The problem of sound generation by vortices interacting with an arbitrary body in a low-speed flow has been investigated by the method of matched asymptotic expansions. For the purpose of this report, it is convenient to divide the problem into three parts. In the first part the mechanism of the vortex/body interaction, which is essentially the inner solution in the inner region, is examined. The trajectories for a system of vortices rotating about their centroid are found to undergo enormous changes after interaction; from this, some interesting properties emerged. In the second part, the problem is formulated, the outer solution is found, matching is implemented, and solutions for acoustic pressure are obtained. In the third part, Fourier integrals are evaluated and predicted results are presented. An examination of these results reveals the following: (a) the background noise can be either augmented or attenuated by a body after interaction, (b) sound generated by vortex/body interaction obeys a scaling factor, (c) sound intensity can be reduced substantially by positioning the vortex system on the "favorable" side of the body instead of the "unfavorable" side, and (d) acoustic radiation from vortex/bluff-body interaction is less than that from vortex/airfoil interaction under most circumstances.

  7. A Robust Sound Source Localization Approach for Microphone Array with Model Errors

    NASA Astrophysics Data System (ADS)

    Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong

    In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements. It can be used with arbitrary planar array geometries. Second, a subspace model-errors estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-errors estimation algorithm estimates the unknown parameters of the array model, i.e., gain, phase perturbations, and element positions, with high accuracy. The performance of this algorithm improves with increasing SNR or number of snapshots. The W2D-MUSIC algorithm, based on the improved array model, is used to locate sound sources. Together, these two algorithms constitute the robust sound source localization approach. More accurate steering vectors can then be provided for further processing such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
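    The core idea behind any MUSIC variant is to project candidate steering vectors onto the noise subspace of the sample covariance matrix; directions whose steering vectors are nearly orthogonal to that subspace produce sharp peaks in the reciprocal projection. The sketch below is a deliberately simplified narrowband, far-field, uniform-linear-array version, not the broad-band near-field W2D-MUSIC of the paper, and the scenario values are illustrative.

```python
import numpy as np

def music_spectrum(X, n_sources, spacing_wl, angles_deg):
    """Narrowband MUSIC pseudospectrum for a uniform linear array.

    X: (n_sensors, n_snapshots) complex snapshot matrix
    spacing_wl: element spacing in wavelengths
    """
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    eigval, eigvec = np.linalg.eigh(R)         # eigenvalues ascending
    En = eigvec[:, :n_sensors - n_sources]     # noise subspace
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * spacing_wl
                   * np.arange(n_sensors) * np.sin(theta))
        spectrum.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(spectrum)

# one narrowband source at 20 degrees, 8-element half-wavelength array
rng = np.random.default_rng(0)
n_sensors, n_snapshots = 8, 200
a0 = np.exp(-2j * np.pi * 0.5 * np.arange(n_sensors)
            * np.sin(np.deg2rad(20.0)))
s = rng.standard_normal(n_snapshots) + 1j * rng.standard_normal(n_snapshots)
noise = 0.05 * (rng.standard_normal((n_sensors, n_snapshots))
                + 1j * rng.standard_normal((n_sensors, n_snapshots)))
X = np.outer(a0, s) + noise
angles = np.arange(-90, 91)
estimate = angles[np.argmax(music_spectrum(X, 1, 0.5, angles))]
print(estimate)  # close to 20 degrees
```

    The paper's contribution is that gain/phase perturbations and element-position errors distort the steering vectors used here, so they are estimated jointly before the MUSIC scan.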

  8. Sound insulation and energy harvesting based on acoustic metamaterial plate

    NASA Astrophysics Data System (ADS)

    Assouar, Badreddine; Oudich, Mourad; Zhou, Xiaoming

    2015-03-01

    The emergence of artificially designed sub-wavelength acoustic materials, denoted acoustic metamaterials (AMM), has significantly broadened the range of materials responses found in nature. These engineered materials can indeed manipulate sound/vibration in surprising ways, which include vibration/sound insulation, focusing, cloaking, and acoustic energy harvesting. In this work, we report both on the analysis of the airborne sound transmission loss (STL) through a thin metamaterial plate and on the possibility of acoustic energy harvesting. We first provide a theoretical study of the airborne STL and compare it with the structure-borne dispersion of a metamaterial plate. Second, we propose to investigate the acoustic energy harvesting capability of the plate-type AMM. We have developed semi-analytical and numerical methods to investigate the STL performances of a plate-type AMM with an airborne sound excitation having different incident angles. The AMM is made of silicone rubber stubs squarely arranged in a thin aluminum plate, and the STL is calculated over a low-frequency range (100 Hz to 3 kHz) for an incoming incident sound pressure wave. The analytical and numerical STL results present very good agreement, confirming the reliability of the developed approaches. A comparison between the computed STL and the band structure of the considered AMM shows excellent agreement and gives a physical understanding of the observed behavior. On the other hand, acoustic energy confinement in AMMs with defects of suitable geometry was investigated. These first results give a general view for assessing the acoustic energy harvesting performance of AMMs.

  9. 25. View of South Section from caisson when drydock is ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    25. View of South Section from caisson when drydock is almost fully flooded in preparation for docking of a second submarine. Other views of the flooding procedure were not permitted for this HAER report. - Puget Sound Naval Shipyard, Drydock No. 3, Farragut Avenue, Bremerton, Kitsap County, WA

  10. The Handbook of Canadian Film. Second Edition.

    ERIC Educational Resources Information Center

    Beattie, Eleanor

    The core of this book consists of 131 short entries on individual Canadian filmmakers, arranged in alphabetical order, with filmographies and suggestions for further reading. The majority of the filmmakers who are described are directors; other members of the film community--producers, sound engineers, camera operators, and so on--are discussed in…

  11. Environmentally Sound Small-Scale Water Projects. Guidelines for Planning.

    ERIC Educational Resources Information Center

    Tillman, Gus

    This manual is the second volume in a series of publications on community development programs. Guidelines are suggested for small-scale water projects that would benefit segments of the world's urban or rural poor. Strategies in project planning, implementation and evaluation are presented that emphasize environmental conservation and promote…

  12. Morphophonemic Transfer in English Second Language Learners

    ERIC Educational Resources Information Center

    Ping, Sze Wei; Rickard Liow, Susan J.

    2011-01-01

    Malay (Rumi) is alphabetic and has a transparent, agglutinative system of affixation. We manipulated language-specific junctural phonetics in Malay and English to investigate whether morphophonemic L1-knowledge influences L2-processing. A morpheme decision task, "Does this "nonword" sound like a mono- or bi-morphemic English word?", was developed…

  13. Alternatives To Dissection. Second Edition.

    ERIC Educational Resources Information Center

    DeRosa, Bill, Ed.; Winiarskyj, Lesia, Ed.

    This packet attempts to provide educationally sound alternatives to dissection in the classroom, thereby making it possible for teachers to eliminate dissection from the curriculum. This packet can also be used by educators who include dissection in their curricula but consider it important to respect the expression of students' ethical, moral, or…

  14. 46 CFR 193.15-30 - Alarms.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... board while the vessel is being navigated which are protected by a carbon dioxide extinguishing system... when the carbon dioxide is admitted to the space. The alarm shall be conspicuously and centrally... as to sound during the 20-second delay period prior to the discharge of carbon dioxide into the space...

  15. Vertical Hegelianism and Beyond: Digital Cinema Editing.

    ERIC Educational Resources Information Center

    Wyatt, Roger B.

    Cinema as an art and communication form is entering its second century of development. Sergei Eisenstein conceived of editing in horizontal and vertical terms. He saw vertical editing patterns primarily as the synchronization of simultaneous image and sound elements, particularly music, to create cinematic meaning by means of the relationship…

  16. Evaluating the Effects of a "Student Buddy" Initiative on Student Engagement and Motivation

    ERIC Educational Resources Information Center

    Motzo, Anna

    2016-01-01

    Motivation is one of the most important factors which influences second language learning (Dörnyei, 1998; Gardner & Lambert, 1972). A support mechanism which reinforces student motivation through encouragement, social interaction, feedback, sound learning environments and good teaching is crucial for ensuring successful learning. This is…

  17. The Principal and Fiscal Management. Elementary Principal Series No. 6.

    ERIC Educational Resources Information Center

    Walters, James K.; Marconnit, George D.

    The sixth of six volumes in the "Elementary Principal Series," this booklet is designed to help principals develop sound fiscal management strategies at the building level. The first section reviews Indiana statutory provisions for handling extracurricular and booster group funds. The second section presents guidelines for managing…

  18. Combustion performance and heat transfer characterization of LOX/hydrocarbon type propellants

    NASA Technical Reports Server (NTRS)

    Gross, R. S.

    1980-01-01

    A sound data base was established by analytically and experimentally generating basic regenerative cooling, combustion performance, combustion stability, and combustion chamber heat transfer parameters for LOX/HC propellants, with specific application to second generation orbit maneuvering and reaction control systems (OMS/RCS) for the Space Shuttle Orbiter.

  19. Making Sense of Phonics: The Hows and Whys. Second Edition

    ERIC Educational Resources Information Center

    Beck, Isabel L.; Beck, Mark E.

    2013-01-01

    This bestselling book provides indispensable tools and strategies for explicit, systematic phonics instruction in K-3. Teachers learn effective ways to build students' decoding skills by teaching letter-sound relationships, blending, word building, multisyllabic decoding, fluency, and more. The volume is packed with engaging classroom activities,…

  20. Digital Stories: Overview

    ERIC Educational Resources Information Center

    Oskoz, Ana; Elola, Idoia

    2016-01-01

    This article provides an overview of how digital stories (DSs)--storylines that integrate text, images, and sound--have been used in second-language (L2) contexts. The article first reviews the methodical and planned, albeit non-linear, steps required for successful implementation of DSs in the L2 classroom and then assesses the observed…

  1. The Sound of Violets: The Ethnographic Potency of Poetry?

    ERIC Educational Resources Information Center

    Phipps, Alison; Saunders, Lesley

    2009-01-01

    This paper takes the form of a dialogue between the two authors, and is in two halves, the first half discursive and propositional, and the second half exemplifying the rhetorical, epistemological and metaphysical affordances of poetry in critically scrutinising the rhetoric, epistemology and metaphysics of educational management discourse. The…

  2. Acoustic calibration apparatus for calibrating plethysmographic acoustic pressure sensors

    NASA Technical Reports Server (NTRS)

    Zuckerwar, Allan J. (Inventor); Davis, David C. (Inventor)

    1995-01-01

    An apparatus for calibrating an acoustic sensor is described. The apparatus includes a transmission material having an acoustic impedance approximately matching the acoustic impedance of the actual acoustic medium existing when the acoustic sensor is applied in actual in-service conditions. An elastic container holds the transmission material. A first sensor is coupled to the container at a first location on the container and a second sensor coupled to the container at a second location on the container, the second location being different from the first location. A sound producing device is coupled to the container and transmits acoustic signals inside the container.

  3. Acoustic calibration apparatus for calibrating plethysmographic acoustic pressure sensors

    NASA Technical Reports Server (NTRS)

    Zuckerwar, Allan J. (Inventor); Davis, David C. (Inventor)

    1994-01-01

    An apparatus for calibrating an acoustic sensor is described. The apparatus includes a transmission material having an acoustic impedance approximately matching the acoustic impedance of the actual acoustic medium existing when the acoustic sensor is applied in actual in-service conditions. An elastic container holds the transmission material. A first sensor is coupled to the container at a first location on the container and a second sensor coupled to the container at a second location on the container, the second location being different from the first location. A sound producing device is coupled to the container and transmits acoustic signals inside the container.

  4. Measurement of attenuation coefficients of the fundamental and second harmonic waves in water

    NASA Astrophysics Data System (ADS)

    Zhang, Shuzeng; Jeong, Hyunjo; Cho, Sungjong; Li, Xiongbing

    2016-02-01

    Attenuation corrections in nonlinear acoustics play an important role in the study of nonlinear fluids, biomedical imaging, or solid material characterization. The measurement of attenuation coefficients in a nonlinear regime is not easy because they depend on the source pressure and require accurate diffraction corrections. In this work, the attenuation coefficients of the fundamental and second harmonic waves, which arise from the absorption of water, are measured in nonlinear ultrasonic experiments. Based on the quasilinear theory of the KZK equation, the nonlinear sound field equations are derived and the diffraction correction terms are extracted. The measured sound pressure amplitudes are first adjusted for diffraction corrections in order to reduce the impact of diffraction on the measurement of attenuation coefficients. The attenuation coefficients of the fundamental and second harmonics are calculated precisely from a nonlinear least squares curve-fitting process on the experimental data. The results show that attenuation coefficients in a nonlinear condition depend on both frequency and source pressure, quite unlike the linear regime. At relatively low drive pressures, the attenuation coefficients increase linearly with frequency. However, they grow nonlinearly at high drive pressures. As the diffraction corrections are obtained based on the quasilinear theory, it is important to use an appropriate source pressure for accurate attenuation measurements.
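    The final fitting step can be illustrated with a toy version: after diffraction corrections, the corrected amplitude of each harmonic is assumed to decay as A(z) = A0·exp(−αz), and α is recovered by nonlinear least squares. The model, parameter values, and data below are illustrative assumptions, not the paper's actual measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(z, a0, alpha):
    """Plane-wave amplitude decay; alpha is the attenuation in Np/m."""
    return a0 * np.exp(-alpha * z)

# synthetic diffraction-corrected amplitudes with a known alpha
z = np.linspace(0.05, 0.5, 10)       # propagation distances in metres
alpha_true = 0.025                   # illustrative value, in Np/m
amp = decay(z, 1.0, alpha_true)
(a0_fit, alpha_fit), _ = curve_fit(decay, z, amp, p0=[1.0, 0.01])
print(round(alpha_fit, 6))  # -> 0.025
```

    With real nonlinear-regime data the fitted α would be repeated at each source pressure, since the paper finds the coefficient itself is pressure dependent.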

  5. Water quality and bed sediment quality in the Albemarle Sound, North Carolina, 2012–14

    USGS Publications Warehouse

    Moorman, Michelle C.; Fitzgerald, Sharon A.; Gurley, Laura N.; Rhoni-Aref, Ahmed; Loftin, Keith A.

    2017-01-23

    The Albemarle Sound region was selected in 2012 as one of two demonstration sites in the Nation to test and improve the design of the National Water Quality Monitoring Council’s National Monitoring Network (NMN) for U.S. Coastal Waters and Tributaries. The goal of the NMN for U.S. Coastal Waters and Tributaries is to provide information about the health of our oceans, coastal ecosystems, and inland influences on coastal waters for improved resource management. The NMN is an integrated, multidisciplinary, and multi-organizational program using multiple sources of data and information to augment current monitoring programs. This report presents and summarizes selected water-quality and bed sediment-quality data collected as part of the demonstration project, which was conducted in two phases. The first phase was an occurrence and distribution study to assess nutrients, metals, pesticides, cyanotoxins, and phytoplankton communities in the Albemarle Sound during the summer of 2012 at 34 sites in Albemarle Sound, nearby sounds, and various tributaries. The second phase consisted of monthly sampling over a year (March 2013 through February 2014) to assess seasonality in a more limited set of constituents, including nutrients, cyanotoxins, and phytoplankton communities, at a subset (eight) of the sites sampled in the first phase. During the summer of 2012, few constituent concentrations exceeded published water-quality thresholds; however, elevated levels of chlorophyll a and pH were observed in the northern embayments and in Currituck Sound. Chlorophyll a and metals (copper, iron, and zinc) were detected above a water-quality threshold. The World Health Organization provisional guideline based on cyanobacterial density for high recreational risk was exceeded in approximately 50 percent of water samples collected during the summer of 2012. Cyanobacteria capable of producing toxins were present, but only low levels of cyanotoxins, below human health benchmarks, were detected.
Finally, 12 metals in surficial bed sediments were detected at levels above a published sediment-quality threshold. These metals included chromium, mercury, copper, lead, arsenic, nickel, and cadmium. Sites with several metal concentrations above the respective thresholds had relatively high concentrations of organic carbon or fine sediment (silt plus clay), or both, and were predominantly located in the western and northwestern parts of the Albemarle Sound. Results from the second phase were generally similar to those of the first in that relatively few constituents exceeded a water-quality threshold, both pH and chlorophyll a were detected above the respective water-quality thresholds, and many of these elevated concentrations occurred in the northern embayments and in Currituck Sound. In contrast to the results from phase one, the cyanotoxin microcystin was detected at more than 10 times the water-quality threshold during a phytoplankton bloom on the Chowan River at Mount Gould, North Carolina, in August 2013. This was the only cyanotoxin concentration measured during the entire study that exceeded a respective water-quality threshold. The information presented in this report can be used to improve understanding of water-quality conditions in the Albemarle Sound, particularly when evaluating causal and response variables that are indicators of eutrophication. In particular, this information can be used by State agencies to help develop water-quality criteria for nutrients and to understand factors, such as cyanotoxins, that may affect fisheries and recreation in the Albemarle Sound region.

  6. 3-D inversion of airborne electromagnetic data parallelized and accelerated by local mesh and adaptive soundings

    NASA Astrophysics Data System (ADS)

    Yang, Dikun; Oldenburg, Douglas W.; Haber, Eldad

    2014-03-01

    Airborne electromagnetic (AEM) methods are highly efficient tools for assessing the Earth's conductivity structures in a large area at low cost. However, the configuration of AEM measurements, which typically have widely distributed transmitter-receiver pairs, makes the rigorous modelling and interpretation extremely time-consuming in 3-D. Excessive overcomputing can occur when working on a large mesh covering the entire survey area and inverting all soundings in the data set. We propose two improvements. The first is to use a locally optimized mesh for each AEM sounding for the forward modelling and calculation of sensitivity. This dedicated local mesh is small with fine cells near the sounding location and coarse cells far away in accordance with EM diffusion and the geometric decay of the signals. Once the forward problem is solved on the local meshes, the sensitivity for the inversion on the global mesh is available through quick interpolation. Using local meshes for AEM forward modelling avoids unnecessary computing on fine cells on a global mesh that are far away from the sounding location. Since local meshes are highly independent, the forward modelling can be efficiently parallelized over an array of processors. The second improvement is random and dynamic down-sampling of the soundings. Each inversion iteration only uses a random subset of the soundings, and the subset is reselected for every iteration. The number of soundings in the random subset, determined by an adaptive algorithm, is tied to the degree of model regularization. This minimizes the overcomputing caused by working with redundant soundings. Our methods are compared against conventional methods and tested with a synthetic example. We also invert a field data set that was previously considered to be too large to be practically inverted in 3-D. 
These examples show that our methodology can dramatically reduce the processing time of 3-D inversion to a practical level without losing resolution. Any existing modelling technique can be included into our framework of mesh decoupling and adaptive sampling to accelerate large-scale 3-D EM inversions.
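The random, dynamic down-sampling idea described in this record can be sketched in a few lines. The subset-size rule below is an illustrative placeholder (the paper ties the count to the degree of model regularization through its own adaptive algorithm, whose details are not given in the abstract); only the re-drawing of a fresh random subset each iteration is taken from the text.

```python
import numpy as np

def select_soundings(n_total, beta, beta_max, n_min=50, rng=None):
    """Pick a random subset of soundings for one inversion iteration.

    Illustrative rule: the subset grows as the regularization parameter
    beta is cooled from beta_max toward zero, so early (heavily
    regularized) iterations use few soundings and later ones use most.
    """
    rng = rng or np.random.default_rng()
    frac = n_min / n_total + (1.0 - n_min / n_total) * (1.0 - beta / beta_max)
    n_sub = max(n_min, int(round(frac * n_total)))
    return rng.choice(n_total, size=n_sub, replace=False)

# Re-draw the subset every iteration so all soundings are eventually used.
subset = select_soundings(n_total=10000, beta=1e2, beta_max=1e3)
```

Because each iteration sees a different random subset, no sounding is permanently discarded, which is what distinguishes this scheme from a one-time decimation of the data set.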

  7. Respiratory Sound Analysis for Flow Estimation During Wakefulness and Sleep, and its Applications for Sleep Apnea Detection and Monitoring

    NASA Astrophysics Data System (ADS)

    Yadollahi, Azadeh

    Tracheal respiratory sound analysis has been investigated as a non-invasive method to estimate respiratory flow and upper airway obstruction. However, the flow-sound relationship is highly variable among subjects, which makes it challenging to estimate flow in general applications; consequently, no robust model for acoustical flow estimation in a large group of individuals previously existed. A major application of acoustical flow estimation is detecting flow limitation in patients with obstructive sleep apnea (OSA) during sleep, yet the flow-sound relationship had previously been investigated only during wakefulness in healthy individuals. It was therefore necessary to examine the flow-sound relationship during sleep in OSA patients. This thesis addresses these challenges. First, a modified linear flow-sound model was proposed to estimate respiratory flow from tracheal sounds. To remove the individual-based calibration process, the statistical correlation between the model parameters and anthropometric features of 93 healthy volunteers was investigated. The results show that gender, height, and smoking are the most significant factors affecting the model parameters. Hence, a general acoustical flow estimation model was proposed for people of similar height and gender. Second, the flow-sound relationship during sleep and wakefulness was studied in 13 OSA patients. The results show that during both sleep and wakefulness the flow-sound relationship follows a power law, but with different parameters. Therefore, for acoustical flow estimation during sleep, the model parameters should be extracted from sleep data to keep errors small. The results confirm the reliability of acoustical flow estimation for investigating flow variations during both sleep and wakefulness. 
Finally, a new method for sleep apnea detection and monitoring was developed that requires recording only the tracheal sounds and blood oxygen saturation (SaO2). It automatically classifies sound segments into breath, snore, and noise. A weighted average of features extracted from the sound segments and the SaO2 signal was used to detect apnea and hypopnea events. The performance of the proposed approach was evaluated on data from 66 patients. The results show a high correlation (0.96, p < 0.0001) between the outcomes of the system and those of polysomnography. The sensitivity and specificity of the proposed method in differentiating simple snorers from OSA patients were both above 91%. These results are superior or comparable to those of existing commercial portable sleep apnea monitors.
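The power-law flow-sound relationship reported in this thesis can be illustrated with a minimal sketch. The functional form E = c * flow**alpha, the parameter names, and the log-log fitting step are our assumptions for illustration; the thesis's actual model and fitted values are not given in the abstract.

```python
import numpy as np

def fit_power_law(flow, energy):
    """Fit E = c * flow**alpha by linear regression in log-log space."""
    slope, intercept = np.polyfit(np.log(flow), np.log(energy), 1)
    return np.exp(intercept), slope  # c, alpha

def estimate_flow(energy, c, alpha):
    """Invert the power-law model to recover flow from sound energy."""
    return (np.asarray(energy) / c) ** (1.0 / alpha)

# Synthetic calibration data (arbitrary units, purely illustrative).
flow = np.array([0.2, 0.5, 1.0, 2.0])
energy = 3.0 * flow ** 2.2
c, alpha = fit_power_law(flow, energy)
est = estimate_flow(energy, c, alpha)  # recovers the calibration flow
```

The abstract's point that sleep and wakefulness yield different parameters corresponds here to fitting separate (c, alpha) pairs on sleep and wake recordings.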

  8. Device and method for generating a beam of acoustic energy from a borehole, and applications thereof

    DOEpatents

    Vu, Cung Khac; Sinha, Dipen N.; Pantea, Cristian; Nihei, Kurt T.; Schmitt, Denis P.; Skelt, Christopher

    2013-10-15

    In some aspects of the invention, a method of generating a beam of acoustic energy in a borehole is disclosed. The method includes generating a first acoustic wave at a first frequency; generating a second acoustic wave at a second frequency different from the first frequency, wherein the first and second acoustic waves are generated by at least one transducer carried by a tool located within the borehole; and transmitting the first and second acoustic waves into an acoustically non-linear medium, wherein the composition of the non-linear medium produces a collimated beam by non-linear mixing of the first and second acoustic waves, wherein the collimated beam has a frequency based upon the difference between the first frequency and the second frequency, and wherein the non-linear medium has a velocity of sound between 100 m/s and 800 m/s.
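The difference-frequency mixing behind this patent claim follows from the product-to-sum identity: multiplying two tones yields components at the sum and difference frequencies. The sketch below demonstrates that identity numerically; the frequency values are arbitrary illustrations, not parameters from the patent.

```python
import numpy as np

# Nonlinear (quadratic) mixing of two primary tones produces sum- and
# difference-frequency components:
#   2*cos(w1*t)*cos(w2*t) = cos((w1-w2)*t) + cos((w1+w2)*t)
fs = 1_000_000                       # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)       # 10 ms of signal
f1, f2 = 120_000, 100_000            # primary frequencies, Hz (illustrative)
mixed = np.cos(2 * np.pi * f1 * t) * np.cos(2 * np.pi * f2 * t)

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peaks = freqs[spectrum > 0.25 * spectrum.max()]
# Spectral peaks appear at |f1 - f2| = 20 kHz and f1 + f2 = 220 kHz;
# the patent's collimated beam is at the difference frequency.
```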

  9. Comparing headphone and speaker effects on simulated driving.

    PubMed

    Nelson, T M; Nilsson, T H

    1990-12-01

    Twelve persons drove for three hours in an automobile simulator while listening to music at a sound level of 63 dB, over stereo headphones during one session and from a dashboard speaker during another. They were required to steer a mountain highway, maintain a certain indicated speed, shift gears, and respond to occasional hazards. Steering and speed control were dependent on visual cues. The need to shift and the hazards were indicated by sound and vibration effects. With the headphones, the driver's average reaction time for the most complex task presented (shifting gears) was about one-third of a second longer than with the speaker. The use of headphones did not delay the development of subjective fatigue.

  10. Sputtered SiO2 as low acoustic impedance material for Bragg mirror fabrication in BAW resonators.

    PubMed

    Olivares, Jimena; Wegmann, Enrique; Capilla, José; Iborra, Enrique; Clement, Marta; Vergara, Lucía; Aigner, Robert

    2010-01-01

    In this paper we describe a procedure for sputtering SiO2 films to be used as the low acoustic impedance layer in Bragg mirrors for BAW resonators. The composition and structure of the material are assessed through infrared absorption spectroscopy. The acoustic properties of the films (mass density and sound velocity) are assessed through X-ray reflectometry and picosecond acoustic spectroscopy. A second measurement of the sound velocity is obtained through analysis of the longitudinal lambda/2 resonance that appears in these silicon oxide films when they are used as the uppermost layer of an acoustic reflector placed under an AlN-based resonator.
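The lambda/2 resonance mentioned here gives the sound velocity directly: at the half-wave thickness resonance the film thickness equals half a wavelength, so v = 2 * thickness * f_res. The numbers below are illustrative, not measurements from the paper.

```python
def sound_velocity_from_halfwave(thickness_m, f_res_hz):
    """Longitudinal sound velocity from the lambda/2 thickness resonance.

    At resonance, thickness = lambda / 2, hence v = lambda * f = 2 * d * f.
    """
    return 2.0 * thickness_m * f_res_hz

# e.g. a hypothetical 1.5-um SiO2 film resonating at 1.9 GHz:
v = sound_velocity_from_halfwave(1.5e-6, 1.9e9)  # 5700 m/s
```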

  11. A program of high resolution X-ray astronomy using sounding rockets

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Two Aerobee 170 sounding rocket payloads were flown at the White Sands Missile Range: (1) a focusing X-ray collector on 31 March 1972; and (2) a high resolution telescope on 4 August 1972. Data have been reduced from each of these flights. In the first flight both the rocket and the experiment instrumentation performed adequately, and it is clear that at least the minimum scientific objectives were attained. In the second flight the attitude control system failed to point the telescope at the target for a sufficient length of time. However, examination of final preflight checkout data and some flight data indicates that the instrumentation for this rocket payload was functioning according to expectations.

  12. RAWINPROC: Computer program for decommutating, interpreting, and interpolating Rawinsonde meteorological balloon sounding data

    NASA Technical Reports Server (NTRS)

    Staffanson, F. L.

    1981-01-01

    The FORTRAN computer program RAWINPROC accepts output from NASA Wallops computer program METPASS1 and produces input for NASA computer program 3.0.0700 (ECC-PRD). The three parts together form a software system for the completely automatic reduction of standard RAWINSONDE sounding data. RAWINPROC pre-edits the 0.1-second data, including time-of-day, azimuth, elevation, and sonde-modulated tone frequency; condenses the data according to successive dwells of the tone frequency; decommutates the condensed data into the proper channels (temperature, relative humidity, high and low references); determines the running baroswitch contact number and computes the associated pressure altitudes; and interpolates the data appropriate for input to ECC-PRD.

  13. Resonant behaviour of MHD waves on magnetic flux tubes. I - Connection formulae at the resonant surfaces. II - Absorption of sound waves by sunspots

    NASA Technical Reports Server (NTRS)

    Sakurai, Takashi; Goossens, Marcel; Hollweg, Joseph V.

    1991-01-01

    The present method of addressing the resonance problems that emerge in such MHD phenomena as the resonant absorption of waves at the Alfven resonance point avoids solving the fourth-order differential equation of dissipative MHD by recourse to connection formulae across the dissipation layer. In the second part of this investigation, the absorption of solar 5-min oscillations by sunspots is interpreted as the resonant absorption of sound by a magnetic cylinder. The absorption coefficient is evaluated (1) analytically, under certain simplifying assumptions, and (2) numerically, under more general conditions. The observed magnitude of the absorption coefficient is explained over suitable parameter ranges.

  14. Simple reaction time to the onset of time-varying sounds.

    PubMed

    Schlittenlacher, Josef; Ellermeier, Wolfgang

    2015-10-01

    Although auditory simple reaction time (RT) is usually defined as the time elapsing between the onset of a stimulus and a recorded reaction, a sound cannot be specified by a single point in time. Therefore, the present work investigates how the period of time immediately after onset affects RT. By varying the stimulus duration between 10 and 500 msec, this critical duration was determined to fall between 32 and 40 msec for a 1-kHz pure tone at 70 dB SPL. In a second experiment, the role of the buildup was further investigated by varying the rise time and its shape. The increment in RT for extending the rise time by a factor of ten was about 7 to 8 msec. There was no statistically significant difference in RT between a Gaussian and a linear rise shape. A third experiment varied the modulation frequency and point of onset of amplitude-modulated tones, producing onsets at different initial levels with differently rapid increases or decreases immediately afterwards. The results of all three experiments were explained very well by a straightforward extension of the parallel grains model (Miller & Ulrich, Cogn. Psychol. 46, 101-151, 2003), a probabilistic race model employing many parallel channels. The extension of the model to time-varying sounds made the activation of each grain depend on intensity as a function of time rather than on a constant level. A second approach, based on mechanisms known from loudness modeling, produced less accurate predictions.
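The parallel-grains extension described above can be sketched as a toy simulation: many independent channels ("grains") fire with a probability that tracks the momentary intensity envelope, and a response is triggered once a criterion number have fired. All parameter values and the specific firing rule are our illustrative assumptions, not the fitted model of the paper.

```python
import numpy as np

def simulate_rt(intensity, dt=0.001, n_grains=500, k=5, gain=20.0):
    """Toy parallel-grains race model for time-varying sounds.

    intensity: envelope samples in [0, 1], one per time step dt.
    Each not-yet-fired grain fires in a step with probability
    proportional to the momentary intensity; detection occurs
    once k grains have fired. Returns latency in seconds.
    """
    rng = np.random.default_rng(0)
    fired = 0
    for i, level in enumerate(intensity):
        p = min(1.0, gain * level * dt)          # per-grain firing probability
        fired += rng.binomial(n_grains - fired, p)
        if fired >= k:
            return i * dt
    return None

# A slow linear rise reaches criterion later than an abrupt onset,
# mirroring the experimental effect of rise time on RT.
abrupt = simulate_rt(np.ones(500))
ramped = simulate_rt(np.linspace(0.0, 1.0, 500))
```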

  15. A study of the prediction of cruise noise and laminar flow control noise criteria for subsonic air transports

    NASA Technical Reports Server (NTRS)

    Swift, G.; Mungur, P.

    1979-01-01

    General procedures for the prediction of component noise levels incident upon airframe surfaces during cruise are developed. Contributing noise sources are those associated with the propulsion system, the airframe and the laminar flow control (LFC) system. Transformation procedures from the best prediction base of each noise source to the transonic cruise condition are established. Two approaches to LFC/acoustic criteria are developed. The first is a semi-empirical extension of the X-21 LFC/acoustic criteria to include sensitivity to the spectrum and directionality of the sound field. In the second, the more fundamental problem of how sound excites boundary layer disturbances is analyzed by deriving and solving an inhomogeneous Orr-Sommerfeld equation in which the source terms are proportional to the production and dissipation of sound induced fluctuating vorticity. Numerical solutions are obtained and compared with corresponding measurements. Recommendations are made to improve and validate both the cruise noise prediction methods and the LFC/acoustic criteria.

  16. Numerical simulation of the processes in the normal incidence tube for high acoustic pressure levels

    NASA Astrophysics Data System (ADS)

    Fedotov, E. S.; Khramtsov, I. V.; Kustov, O. Yu.

    2016-10-01

    Numerical simulation of the acoustic processes in an impedance tube at high acoustic pressure levels is one way to address the problem of noise suppression by liners. These studies used a liner specimen consisting of a single cylindrical Helmholtz resonator. The real and imaginary parts of the liner acoustic impedance and the sound absorption coefficient were evaluated for sound pressure levels of 130, 140, and 150 dB. The numerical simulation used experimental data obtained in an impedance tube with normal-incidence waves. In the first stage of the numerical simulation, the linearized Navier-Stokes equations were used; these describe the imaginary part of the liner impedance well regardless of the sound pressure level. The equations were solved by the finite element method in the COMSOL Multiphysics program in an axisymmetric formulation. In the second stage, the complete Navier-Stokes equations were solved by direct numerical simulation in ANSYS CFX in an axisymmetric formulation. As a result, acceptable agreement between numerical simulation and experiment was obtained.

  17. Aperture size, materiality of the secondary room, and listener location: Impact on the simulated impulse response of a coupled-volume concert hall

    NASA Astrophysics Data System (ADS)

    Ermann, Michael; Johnson, Marty E.; Harrison, Byron W.

    2002-11-01

    By adding a second room to a concert hall, and designing doors to control the sonic transparency between the two rooms, designers can create a new, coupled acoustic. Concert halls use coupling to achieve a variable, longer, and distinct reverberant quality for their musicians and listeners. For this study, a coupled-volume concert hall based on an existing performing arts center is conceived and computer modeled. It has a fixed geometric volume, form, and primary-room sound absorption. Ray-tracing software simulates impulse responses, varying both aperture size and secondary-room sound-absorption level, across a grid of receiver (listener) locations. The results are compared with statistical analysis that suggests a highly sensitive relationship between the double-sloped condition and the architecture of the space. This line of study aims to quantitatively and spatially correlate the double-sloped condition with (1) aperture size exposing the chamber, (2) sound absorptance in the coupled volume, and (3) listener location.

  18. Aperture size, materiality of the secondary room and listener location: Impact on the simulated impulse response of a coupled-volume concert hall

    NASA Astrophysics Data System (ADS)

    Ermann, Michael; Johnson, Marty E.; Harrison, Byron W.

    2003-04-01

    By adding a second room to a concert hall, and designing doors to control the sonic transparency between the two rooms, designers can create a new, coupled acoustic. Concert halls use coupling to achieve a variable, longer and distinct reverberant quality for their musicians and listeners. For this study, a coupled-volume concert hall based on an existing performing arts center is conceived and computer-modeled. It has a fixed geometric volume, form and primary-room sound absorption. Ray-tracing software simulates impulse responses, varying both aperture size and secondary-room sound absorption level, across a grid of receiver (listener) locations. The results are compared with statistical analysis that suggests a highly sensitive relationship between the double-sloped condition and the architecture of the space. This line of study aims to quantitatively and spatially correlate the double-sloped condition with (1) aperture size exposing the chamber, (2) sound absorptance in the coupled volume, and (3) listener location.

  19. A High-Order Immersed Boundary Method for Acoustic Wave Scattering and Low-Mach Number Flow-Induced Sound in Complex Geometries

    PubMed Central

    Seo, Jung Hee; Mittal, Rajat

    2010-01-01

    A new sharp-interface immersed boundary method based approach for the computation of low-Mach number flow-induced sound around complex geometries is described. The underlying approach is based on a hydrodynamic/acoustic splitting technique where the incompressible flow is first computed using a second-order accurate immersed boundary solver. This is followed by the computation of sound using the linearized perturbed compressible equations (LPCE). The primary contribution of the current work is the development of a versatile, high-order accurate immersed boundary method for solving the LPCE in complex domains. This new method applies the boundary condition on the immersed boundary to high order by combining the ghost-cell approach with a weighted least-squares error method based on a high-order approximating polynomial. The method is validated for canonical acoustic wave scattering and flow-induced noise problems. Applications of this technique to relatively complex cases of practical interest are also presented. PMID:21318129

  20. Visual Feedback of Tongue Movement for Novel Speech Sound Learning

    PubMed Central

    Katz, William F.; Mehta, Sonya

    2015-01-01

    Pronunciation training studies have yielded important information concerning the processing of audiovisual (AV) information. Second language (L2) learners show increased reliance on bottom-up, multimodal input for speech perception (compared to monolingual individuals). However, little is known about the role of viewing one's own speech articulation processes during speech training. The current study investigated whether real-time, visual feedback for tongue movement can improve a speaker's learning of non-native speech sounds. An interactive 3D tongue visualization system based on electromagnetic articulography (EMA) was used in a speech training experiment. Native speakers of American English produced a novel speech sound (/ɖ/; a voiced, coronal, palatal stop) before, during, and after trials in which they viewed their own speech movements using the 3D model. Talkers' productions were evaluated using kinematic (tongue-tip spatial positioning) and acoustic (burst spectra) measures. The results indicated a rapid gain in accuracy associated with visual feedback training. The findings are discussed with respect to neural models for multimodal speech processing. PMID:26635571

  1. Deriving a dosage-response relationship for community response to high-energy impulsive noise

    NASA Technical Reports Server (NTRS)

    Fidell, Sanford; Pearsons, Karl S.

    1994-01-01

    The inability to systematically predict community response to exposure to sonic booms (and other high energy impulsive sounds) is a major impediment to credible analyses of the environmental effects of supersonic flight operations. Efforts to assess community response to high energy impulsive sounds are limited in at least two important ways. First, a paucity of appropriate empirical data makes it difficult to infer a dosage-response relationship by means similar to those used in the case of general transportation noise. Second, it is unclear how well the 'equal energy hypothesis' (the notion that duration, number, and level of individual events are directly interchangeable determinants of annoyance) applies to some forms of impulsive noise exposure. Some of the issues currently under consideration by a CHABA working group addressing these problems are discussed. These include means for applying information gained in controlled exposure studies about different rates of growth of annoyance with impulsive and non-impulsive sound exposure levels, and strategies for developing a dosage-response relationship in a data-poor area.

  2. Sound waves and flexural mode dynamics in two-dimensional crystals

    NASA Astrophysics Data System (ADS)

    Michel, K. H.; Scuracchio, P.; Peeters, F. M.

    2017-09-01

    Starting from a Hamiltonian with anharmonic coupling between in-plane acoustic displacements and out-of-plane (flexural) modes, we derive coupled equations of motion for in-plane displacement correlations and flexural mode density fluctuations. Linear response theory and time-dependent thermal Green's function techniques are applied in order to obtain different response functions. As external perturbations we allow for stresses and thermal heat sources. The displacement correlations are described by a Dyson equation where the flexural density distribution enters as an additional perturbation. The flexural density distribution satisfies a kinetic equation where the in-plane lattice displacements act as a perturbation. In the hydrodynamic limit this system of coupled equations is at the basis of a unified description of elastic and thermal phenomena, such as isothermal versus adiabatic sound motion and thermal conductivity versus second sound. The general theory is formulated in view of application to graphene, two-dimensional h-BN, and 2H-transition metal dichalcogenides and oxides.

  3. Analysis of sound absorption performance of an electroacoustic absorber using a vented enclosure

    NASA Astrophysics Data System (ADS)

    Cho, Youngeun; Wang, Semyung; Hyun, Jaeyub; Oh, Seungjae; Goo, Seongyeol

    2018-03-01

    The sound absorption performance of an electroacoustic absorber (EA) is primarily influenced by the dynamic characteristics of the loudspeaker that acts as the actuator of the EA system. Therefore, the sound absorption performance of the EA is maximal at the resonance frequency of the loudspeaker and tends to degrade in the low-frequency and high-frequency bands around this resonance frequency. In this study, to adjust the sound absorption performance of the EA system in the low-frequency band of approximately 20-80 Hz, an EA system using a vented enclosure, which has previously been used to enhance the radiated sound pressure of a loudspeaker in the low-frequency band, is proposed. To verify the usefulness of the proposed system, two acoustic environments are considered. In the first acoustic environment, the vent of the vented enclosure is connected to an external sound field that is distinct from the sound field coupled to the EA. In this case, the acoustic effect of the vented enclosure on the performance of the EA is analyzed through an analytical approach using dynamic equations and an impedance-based equivalent circuit. It is then verified through numerical and experimental approaches. In the second acoustic environment, the vent is connected to the same external sound field as the EA. In this case, the effect of the vented enclosure on the EA is investigated through an analytical approach and verified through a numerical approach. As a result, it is confirmed that the sound absorption performance of the proposed EA system using the vented enclosure differs between the two acoustic environments in the low-frequency band of approximately 20-80 Hz. Furthermore, several case studies on how the performance of the EA using the vented enclosure changes with the critical design factors or the number of vents are also investigated. 
In the future, even if the proposed EA system using a vented enclosure is extended to the large number of array elements required for 3D sound field control, it is expected to be an attractive solution that can contribute to improved low-frequency noise reduction without introducing economic or system-complexity problems.

  4. Accent, Identity, and a Fear of Loss? ESL Students' Perspectives

    ERIC Educational Resources Information Center

    McCrocklin, Shannon; Link, Stephanie

    2016-01-01

    Because many theorists propose a connection between accent and identity, some theorists have justifiably been concerned about the ethical ramifications of L2 pronunciation teaching. However, English-as-a-second-language (ESL) students often state a desire to sound like native speakers. With little research into ESL students' perceptions of links…

  5. Film: An Introduction.

    ERIC Educational Resources Information Center

    Fell, John L.

    "Understanding Film," the opening section of this book, discusses perceptions of and responses to film and the way in which experiences with and knowledge of other media affect film viewing. The second section, "Film Elements," analyzes the basic elements of film: the use of space and time, the impact of editing, sound and color, and the effects…

  6. Responsive Teaching from the Inside Out: Teaching Base Ten to Young Children

    ERIC Educational Resources Information Center

    Empson, Susan B.

    2014-01-01

    Decision making during instruction that is responsive to children's mathematical thinking is examined reflexively by the researcher in the context of teaching second graders. Focus is on exploring how the research base on learning informs teaching decisions that are oriented to building on children's sound conceptions. The development of four…

  7. Achieving Universal Primary Education by 2015: A Chance for Every Child.

    ERIC Educational Resources Information Center

    Bruns, Barbara; Mingat, Alain; Rakotomalala, Ramahatra

    Achievement of the second of the Millennium Development Goals (MDG)--universal primary education by 2015--is crucial, as education is one of the most powerful instruments for reducing poverty and inequality and for laying the foundation for sustained economic growth, effective institutions, and sound governance. This study assesses whether…

  8. Sounds of Science

    ERIC Educational Resources Information Center

    Lott, Kimberly; Lott, Alan; Ence, Hannah

    2018-01-01

    Inquiry-based active learning in science is helpful to all students but especially to those who have a hearing loss. For many deaf or hard of hearing students, the English language may be their second language, with American Sign Language (ASL) being their primary language. Therefore, many of the accommodations for the deaf are similar to those…

  9. Perceptual Judgments of Accented Speech by Listeners from Different First Language Backgrounds

    ERIC Educational Resources Information Center

    Kang, Okim; Vo, Son Ca Thanh; Moran, Meghan Kerry

    2016-01-01

    Research in second language speech has often focused on listeners' accent judgment and factors that affect their perception. However, the topic of listeners' application of specific sound categories in their own perceptual judgments has not been widely investigated. The current study explored how listeners from diverse language backgrounds weighed…

  10. An Alternative Approach to Identifying a Dimension in Second Language Proficiency.

    ERIC Educational Resources Information Center

    Griffin, Patrick E.; And Others

    Current practice in language testing has not yet integrated classical test theory with assessment of language skills. In addition, language testing needs to be part of theory development. Lack of sound testing procedures can lead to problems in research design and ultimately, inappropriate theory development. The debate over dimensionality of…

  11. Phonological Awareness in Mandarin of Chinese and Americans

    ERIC Educational Resources Information Center

    Hu, Min

    2009-01-01

    Phonological awareness (PA) is the ability to analyze spoken language into its component sounds and to manipulate these smaller units. Literature review related to PA shows that a variety of factor groups play a role in PA in Mandarin such as linguistic experience (spoken language, alphabetic literacy, and second language learning), item type,…

  12. The Impact of Orthographic Consistency on German Spoken Word Identification

    ERIC Educational Resources Information Center

    Beyermann, Sandra; Penke, Martina

    2014-01-01

    An auditory lexical decision experiment was conducted to find out whether sound-to-spelling consistency has an impact on German spoken word processing, and whether such an impact is different at different stages of reading development. Four groups of readers (school children in the second, third and fifth grades, and university students)…

  13. Gifts of the Spirit: Multiple Intelligences in Religious Education. Second Edition.

    ERIC Educational Resources Information Center

    Nuzzi, Ronald

    This booklet provides practical direction for religious educators that they might effectively teach heterogeneous groups of learners by employing a broad range of teaching/learning approaches while keeping in the forefront the importance of basing practice on sound theory. The booklet begins with a clear explication of the essential attributes of…

  14. The Poetry Cafe Is Open! Teaching Literary Devices of Sound in Poetry Writing

    ERIC Educational Resources Information Center

    Kovalcik, Beth; Certo, Janine L.

    2007-01-01

    A six-week-long intervention that introduced second graders to poetry writing is described in this article, culminating in a classroom "poetry cafe" event. The article details the established classroom "writing workshop" structure and environment and the perceptions and observations of how students responded to the instruction. Four poetry…

  15. Effects of Lips and Hands on Auditory Learning of Second-Language Speech Sounds

    ERIC Educational Resources Information Center

    Hirata, Yukari; Kelly, Spencer D.

    2010-01-01

    Purpose: Previous research has found that auditory training helps native English speakers to perceive phonemic vowel length contrasts in Japanese, but their performance did not reach native levels after training. Given that multimodal information, such as lip movement and hand gesture, influences many aspects of native language processing, the…

  16. Distinctive Features in the Pluralization Rules of English Speakers.

    ERIC Educational Resources Information Center

    Anisfeld, Moshe; And Others

    First and second graders, given "CVC" singular nonsense words (e.g., nar) orally and asked to choose between two plurals (narf-nark), preferred final sounds sharing with /z/ (the most common shape of the plural morpheme in English) the stridency or continuance features. This suggests that their pluralization rules are formulated in terms of…

  17. Breech Babies: What Can I Do If My Baby Is Breech?

    MedlinePlus

    ... uterus. One option is to rest in the child’s pose for 10 to 15 minutes. A second option is to gently rock back and forth on your hands and knees. You also can make circles with your pelvis to promote activity. Music: Certain sounds may appeal to your baby. Place ...

  18. "Pour nos petits Manitobains," Exposure Package for Grades K-1 Conversational French Program.

    ERIC Educational Resources Information Center

    Manitoba Dept. of Education, Winnipeg. Bureau of French Education.

    This guide outlines the Manitoba Department of Education's conversational French-as-a-second-language curriculum for kindergarten and first grade. The program is designed to introduce young children to the French language and culture through the learning of French sounds, vocabulary, and some sentence patterns. An introductory section explains the…

  19. Décodage de la chaîne parlée et apprentissage des langues (Speech Decoding and Language Learning).

    ERIC Educational Resources Information Center

    Companys, Emmanuel

    This paper, written in French, presents a hypothesis concerning the decoding of speech in second language learning. The theoretical background of the discussion consists of widely accepted linguistic concepts such as the phoneme, distinctive features, neutralization, linguistic levels, form and substance, expression and content, sounds, phonemes,…

  20. Speech Discrimination in 11-Month-Old Bilingual and Monolingual Infants: A Magnetoencephalography Study

    ERIC Educational Resources Information Center

    Ferjan Ramírez, Naja; Ramírez, Rey R.; Clarke, Maggie; Taulu, Samu; Kuhl, Patricia K.

    2017-01-01

    Language experience shapes infants' abilities to process speech sounds, with universal phonetic discrimination abilities narrowing in the second half of the first year. Brain measures reveal a corresponding change in neural discrimination as the infant brain becomes selectively sensitive to its native language(s). Whether and how bilingual…

  1. 76 FR 45515 - Second Notice of Intent To Prepare an Environmental Impact Statement Related to Two Joint State...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-29

    ... Resource Management Plans for Puget Sound Hatchery Programs and Reopening of Comment Period AGENCY... prepare an Environmental Impact Statement (EIS) for two hatchery Resource Management Plans and appended Hatchery and Genetic Management Plans (HGMPs) jointly proposed by the Washington Department of Fish and...

  2. 14 CFR Appendix A to Part 150 - Noise Exposure Maps

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... series of n events in time period T, in seconds. Note: When T is one hour, LT is referred to as one-hour... sound attenuation into the design and construction of a structure may be necessary to achieve..., noise exposure maps prepared in connection with studies which were either Federally funded or Federally...

  3. 14 CFR Appendix A to Part 150 - Noise Exposure Maps

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... series of n events in time period T, in seconds. Note: When T is one hour, LT is referred to as one-hour... sound attenuation into the design and construction of a structure may be necessary to achieve..., noise exposure maps prepared in connection with studies which were either Federally funded or Federally...

  4. 14 CFR Appendix A to Part 150 - Noise Exposure Maps

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... series of n events in time period T, in seconds. Note: When T is one hour, LT is referred to as one-hour... sound attenuation into the design and construction of a structure may be necessary to achieve..., noise exposure maps prepared in connection with studies which were either Federally funded or Federally...

  5. 14 CFR Appendix A to Part 150 - Noise Exposure Maps

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... series of n events in time period T, in seconds. Note: When T is one hour, LT is referred to as one-hour... sound attenuation into the design and construction of a structure may be necessary to achieve..., noise exposure maps prepared in connection with studies which were either Federally funded or Federally...

  6. 14 CFR Appendix A to Part 150 - Noise Exposure Maps

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... series of n events in time period T, in seconds. Note: When T is one hour, LT is referred to as one-hour... sound attenuation into the design and construction of a structure may be necessary to achieve..., noise exposure maps prepared in connection with studies which were either Federally funded or Federally...

  7. 46 CFR 95.15-30 - Alarms.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... navigated, other than paint and lamp lockers and similar small spaces, shall be fitted with an approved... only for systems required to be fitted with a delayed discharge. Such alarms shall be so arranged as to sound during the 20 second delay period prior to the discharge of carbon dioxide into the space, and the...

  8. 46 CFR 76.15-30 - Alarms.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... than paint and lamp lockers and similar small spaces, shall be fitted with an approved audible alarm in... required to be fitted with a delayed discharge. Such alarms shall be so arranged as to sound during the 20 second delay period prior to the discharge of carbon dioxide into the space, and the alarm shall depend...

  9. The Sound of Mute Vowels in Auditory Word-Stem Completion

    ERIC Educational Resources Information Center

    Beland, Renee; Prunet, Jean-Francois; Peretz, Isabelle

    2009-01-01

    Some studies have argued that orthography can influence speakers when they perform oral language tasks. Words containing a mute vowel provide well-suited stimuli to investigate this phenomenon because mute vowels, such as the second "e" in "vegetable", are present orthographically but absent phonetically. Using an auditory word-stem completion…

  10. Sequential bilateral cochlear implantation improves working performance, quality of life, and quality of hearing.

    PubMed

    Härkönen, Kati; Kivekäs, Ilkka; Rautiainen, Markus; Kotti, Voitto; Sivonen, Ville; Vasama, Juha-Pekka

    2015-05-01

    This prospective study shows that working performance, quality of life (QoL), and quality of hearing (QoH) are better with two cochlear implants (CIs) than with a single one. The impact of the second CI on the patient's QoL is as significant as that of the first. To evaluate the benefits of sequential bilateral cochlear implantation for work, QoL, and QoH, we studied working performance, work-related stress, QoL, and QoH with specific questionnaires in 15 patients with a unilateral CI scheduled for sequential implantation of the other ear. Sound localization performance and speech perception in noise were measured with specific tests. All questionnaires and tests were administered before the second CI surgery and 6 and 12 months after its activation. Bilateral CIs increased patients' working performance, and their work-related stress and fatigue decreased. Communication with co-workers was easier, and patients were more active in their working environment. Sequential bilateral cochlear implantation statistically significantly improved QoL, QoH, sound localization, and speech perception in noise.

  11. Automated segmentation of linear time-frequency representations of marine-mammal sounds.

    PubMed

    Dadouchi, Florian; Gervaise, Cedric; Ioana, Cornel; Huillery, Julien; Mars, Jérôme I

    2013-09-01

    Many marine mammals produce highly nonlinear frequency modulations. Determining the time-frequency support of these sounds offers various applications, including recognition, localization, and density estimation. This study introduces a low-parameter automated spectrogram segmentation method based on a theoretical probabilistic framework. In the first step, the background noise in the spectrogram is fitted with a Chi-squared distribution and thresholded using a Neyman-Pearson approach. In the second step, the number of false detections in time-frequency regions is modeled as a binomial distribution, and then, through a Neyman-Pearson strategy, the time-frequency bins are gathered into regions of interest. The proposed method is validated on real data comprising large sequences of whistles from common dolphins, collected in the Bay of Biscay (France). It is also compared with two alternative approaches: the first smooths and thresholds the spectrogram; the second thresholds the spectrogram and then uses morphological operators to gather the time-frequency bins and remove false positives. The proposed method is shown to increase the probability of detection at the same probability of false alarm.
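    The first detection step described in this record (fitting the spectrogram background with a chi-squared distribution and thresholding via a Neyman-Pearson criterion) can be sketched as follows. The median-based noise estimate and the per-bin false-alarm rate `pfa` are assumptions of this sketch, not the authors' code:

```python
import numpy as np
from scipy.stats import chi2

def detect_bins(S, pfa=1e-3):
    """Neyman-Pearson thresholding of a power spectrogram.

    S is a power spectrogram (frequency x time). Background noise in
    each bin is modelled, as in the record above, by a scaled
    chi-squared distribution with 2 degrees of freedom (Gaussian
    noise). `pfa` is the accepted false-alarm probability per bin.
    """
    # The median over time tracks the noise level even when sparse
    # whistle energy is present; rescale it to the distribution mean.
    noise = 2.0 * np.median(S, axis=1, keepdims=True) / chi2.median(2)
    # Threshold at the (1 - pfa) quantile of the fitted noise model.
    threshold = (noise / 2.0) * chi2.ppf(1.0 - pfa, 2)
    return S > threshold
```

The second step of the paper (binomial modelling of false detections over regions) would then operate on the boolean mask this function returns.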

  12. Using Eye Movement Analysis to Study Auditory Effects on Visual Memory Recall

    PubMed Central

    Marandi, Ramtin Zargari; Sabzpoushan, Seyed Hojjat

    2014-01-01

    Recent studies in affective computing are focused on sensing human cognitive context using biosignals. In this study, electrooculography (EOG) was utilized to investigate memory recall accessibility via eye movement patterns. Twelve subjects participated in our experiment, in which pictures from four categories were presented. Each category contained nine pictures, of which three were presented twice and the rest only once. Each picture was presented for five seconds, with a three-second interval between presentations. The task was then repeated with new pictures accompanied by related sounds. Viewing was free, and participants were not informed of the task's purpose. Using pattern recognition techniques, participants' EOG signals in response to repeated and non-repeated pictures were classified for the stages with and without sound. The method was validated with eight different participants. The recognition rate in the "with sound" stage was significantly lower than in the "without sound" stage. The results demonstrate that the familiarity of visual-auditory stimuli can be detected from EOG signals and that auditory input potentially improves the visual recall process. PMID:25436085

  13. Hearing visuo-tactile synchrony - Sound-induced proprioceptive drift in the invisible hand illusion.

    PubMed

    Darnai, Gergely; Szolcsányi, Tibor; Hegedüs, Gábor; Kincses, Péter; Kállai, János; Kovács, Márton; Simon, Eszter; Nagy, Zsófia; Janszky, József

    2017-02-01

    The rubber hand illusion (RHI) and its variant the invisible hand illusion (IHI) are useful for investigating multisensory aspects of bodily self-consciousness. Here, we explored whether auditory conditioning during an RHI could enhance the trisensory visuo-tactile-proprioceptive interaction underlying the IHI. Our paradigm comprised an IHI session that was followed by an RHI session and another IHI session. The IHI sessions had two parts presented in counterbalanced order. One part was conducted in silence, whereas the other part was conducted against the backdrop of metronome beats that occurred in synchrony with the brush movements used for the induction of the illusion. In a first experiment, the RHI session also involved metronome beats and was aimed at creating an associative memory between the brush stroking of a rubber hand and the sounds. An analysis of IHI sessions showed that the participants' perceived hand position drifted more towards the body-midline in the metronome relative to the silent condition without any sound-related session differences. Thus, the sounds, but not the auditory RHI conditioning, influenced the IHI. In a second experiment, the RHI session was conducted without metronome beats. This confirmed the conditioning-independent presence of sound-induced proprioceptive drift in the IHI. Together, these findings show that the influence of visuo-tactile integration on proprioceptive updating is modifiable by irrelevant auditory cues merely through the temporal correspondence between the visuo-tactile and auditory events. © 2016 The British Psychological Society.

  14. Postnatal development of echolocation abilities in a bottlenose dolphin (Tursiops truncatus): temporal organization.

    PubMed

    Favaro, Livio; Gnone, Guido; Pessani, Daniela

    2013-03-01

    In spite of all the information available on adult bottlenose dolphin (Tursiops truncatus) biosonar, the ontogeny of its echolocation abilities has been investigated very little. Earlier studies have reported that neonatal dolphins can produce both whistles and burst-pulsed sounds just after birth and that early-pulsed sounds are probably a precursor of echolocation click trains. The aim of this research is to investigate the development of echolocation signals in a captive calf, born in the facilities of the Acquario di Genova. A set of 81 impulsive sounds were collected from birth to the seventh postnatal week and six additional echolocation click trains were recorded when the dolphin was 1 year old. Moreover, behavioral observations, concurring with sound production, were carried out by means of a video camera. For each sound we measured five acoustic parameters: click train duration (CTD), number of clicks per train, minimum, maximum, and mean click repetition rate (CRR). CTD and number of clicks per train were found to increase with age. Maximum and mean CRR followed a decreasing trend with dolphin growth starting from the second postnatal week. The calf's first head scanning movement was recorded 21 days after birth. Our data suggest that in the bottlenose dolphin the early postnatal weeks are essential for the development of echolocation abilities and that the temporal features of the echolocation click trains remain relatively stable from the seventh postnatal week up to the first year of life. © 2013 Wiley Periodicals, Inc.

  15. Formation of artificial plasma disturbances in the lower ionosphere

    NASA Astrophysics Data System (ADS)

    Bakhmet'eva, N. V.; Frolov, V. L.; Vyakhirev, V. D.; Kalinina, E. E.; Bolotin, I. A.; Akchurin, A. D.; Zykov, E. Yu.

    2012-06-01

    We present the results of experiments on sounding the disturbed ionospheric region produced by the high-power RF radiation of the "Sura" heating facility, performed simultaneously at two observation points. One point is located on the territory of the heating facility, and the other at the observatory of Kazan State University (the "Observatory" point), 170 km to the east of the facility. The experiments were aimed at studying the mechanism of formation of artificial disturbances in the lower ionosphere in the case of reflection of a high-power wave in the F region, and at determining the parameters of the signals backscattered from artificial electron density irregularities which are formed as a result of ionospheric perturbations. The ionosphere was modified by a high-power RF O-mode wave, emitted by the transmitters of the "Sura" facility in sessions several seconds or minutes long. The disturbed region was sounded using the vertical-sounding technique at the "Vasil'sursk" laboratory by the partial-reflection facility at a frequency of 2.95 MHz, and by the modified ionospheric station "Tsiklon" at ten frequencies ranging from 2 to 6.5 MHz at the "Observatory" point. At the same time, vertical-sounding ionograms were recorded in the usual regime. At the reception points, simultaneous changes in the amplitudes of the vertical-sounding signals and the aspect backscattering signals were recorded; these records correlate with the periods of operation of the heating facility. The characteristics and dynamics of the signals are discussed.

  16. Least-squares Legendre spectral element solutions to sound propagation problems.

    PubMed

    Lin, W H

    2001-02-01

    This paper presents a novel algorithm and numerical results of sound wave propagation. The method is based on a least-squares Legendre spectral element approach for spatial discretization and the Crank-Nicolson [Proc. Cambridge Philos. Soc. 43, 50-67 (1947)] and Adams-Bashforth [D. Gottlieb and S. A. Orszag, Numerical Analysis of Spectral Methods: Theory and Applications (CBMS-NSF Monograph, Siam 1977)] schemes for temporal discretization to solve the linearized acoustic field equations for sound propagation. Two types of NASA Computational Aeroacoustics (CAA) Workshop benchmark problems [ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics, edited by J. C. Hardin, J. R. Ristorcelli, and C. K. W. Tam, NASA Conference Publication 3300, 1995a] are considered: a narrow Gaussian sound wave propagating in a one-dimensional space without flows, and the reflection of a two-dimensional acoustic pulse off a rigid wall in the presence of a uniform flow of Mach 0.5 in a semi-infinite space. The first problem was used to examine the numerical dispersion and dissipation characteristics of the proposed algorithm. The second problem was to demonstrate the capability of the algorithm in treating sound propagation in a flow. Comparisons were made of the computed results with analytical results and results obtained by other methods. It is shown that all results computed by the present method are in good agreement with the analytical solutions and results of the first problem agree very well with those predicted by other schemes.
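    As a minimal illustration of the Crank-Nicolson temporal discretization named in this record, here is a sketch applied to the simplest 1-D analogue, linear advection of a pulse on a periodic grid with central differences in space. This is an assumption-laden toy, not the paper's least-squares Legendre spectral element method or its linearized acoustic equations:

```python
import numpy as np

def crank_nicolson_advection(u0, c, dx, dt, steps):
    """Advect a 1-D pulse u_t + c u_x = 0 with Crank-Nicolson stepping.

    Periodic grid, second-order central differences in space. A toy
    analogue of the temporal scheme in the record above, not the
    paper's spatial discretization.
    """
    n = len(u0)
    k = c * dt / (4.0 * dx)
    # Periodic central-difference operator: (u[i+1] - u[i-1]).
    D = np.zeros((n, n))
    for i in range(n):
        D[i, (i + 1) % n] = 1.0
        D[i, (i - 1) % n] = -1.0
    A = np.eye(n) + k * D  # implicit half-step
    B = np.eye(n) - k * D  # explicit half-step
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(steps):
        u = np.linalg.solve(A, B @ u)
    return u
```

Because D is antisymmetric, the update is a Cayley transform and hence orthogonal: the scheme is non-dissipative (it conserves the discrete norm exactly), which is why dispersion rather than dissipation dominates the error for a narrow Gaussian pulse like the first benchmark problem.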

  17. Learning about the dynamic Sun through sounds

    NASA Astrophysics Data System (ADS)

    Peticolas, L. M.; Quinn, M.; MacCallum, J.; Luhmann, J.

    2007-12-01

    Can we hear the Sun or its solar wind? Not in the sense that they make sound. But we can take the particle, magnetic field, electric field, and image data and turn it into sound to demonstrate what the data tell us. We will present work on turning data from the two-satellite NASA mission called STEREO (Solar TErrestrial RElations Observatory) into sounds and music (sonification). STEREO has two satellites orbiting the Sun near Earth's orbit to study the dynamic eruptions of mass from the outermost atmosphere of the Sun, the corona. These eruptions are called coronal mass ejections (CMEs). One sonification project aims to inspire musicians, museum patrons, and the public to learn more about CMEs by downloading STEREO data and using it in the software to make music. We will demonstrate the software and discuss the way in which it was developed. A second project aims to produce a museum exhibit using STEREO imagery and sounds from STEREO data. We will discuss a "walk across the Sun" created for this exhibit so people can hear the features on solar images. For example, we will show how pixel intensity translates into pitches from selectable scales with selectable musical scale size and octave locations. We will also share our successes and lessons learned. These two projects stem from the STEREO-IMPACT (In-situ Measurements of Particles and CME Transients) E/PO program and a grant from the Initiative to Develop Education through Astronomy and Space Science (IDEAS) Grant Program.
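    The intensity-to-pitch mapping described in this record (pixel intensity to notes of a selectable scale) could be sketched as below. The particular scale, octave count, and linear binning are illustrative assumptions; the record does not specify the exhibit's actual mapping:

```python
MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of a major scale

def intensity_to_midi(intensity, scale=MAJOR, n_octaves=2, base=60):
    """Map an 8-bit pixel intensity (0-255) to a MIDI note number.

    `scale`, `n_octaves`, and `base` stand in for the exhibit's
    "selectable musical scale size and octave locations"; the linear
    binning here is purely illustrative.
    """
    # Enumerate the allowed notes: each octave of the chosen scale.
    notes = [base + 12 * octave + step
             for octave in range(n_octaves) for step in scale]
    index = int(intensity / 256 * len(notes))  # linear binning
    return notes[min(index, len(notes) - 1)]
```

For example, intensity 0 maps to the base note (middle C, MIDI 60, with the defaults) and 255 to the top note of the highest octave.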

  18. An investigation into the effect of playback environment on perception of sonic booms when heard indoors

    NASA Astrophysics Data System (ADS)

    Carr, Daniel; Davies, Patricia

    2015-10-01

    Aircraft manufacturers are interested in designing and building a new generation of supersonic aircraft that produce shaped sonic booms of lower peak amplitude than booms created by current supersonic aircraft. To determine if the noise exposure from these "low" booms is more acceptable to communities, new laboratory testing to evaluate people's responses must occur. To guide supersonic aircraft design, objective measures that predict human response to modified sonic boom waveforms and other impulsive sounds are needed. The present research phase is focused on understanding people's reactions to booms when heard inside, and therefore includes consideration of the effects of house type and the indoor acoustic environment. A test was conducted in NASA Langley's Interior Effects Room (IER), with the collaboration of NASA Langley engineers. This test was focused on the effects of low-frequency content and of vibration, and subjects sat in a small living room environment. A second test was conducted in a sound booth at Purdue University, using similar sounds played back over earphones. The sounds in this test contained less very-low-frequency energy due to limitations in the playback, and the laboratory setting is a less natural environment. For the purpose of comparison, and to improve the robustness of the model, both sonic booms and other more familiar transient sounds were used in the tests. The design of the tests and the signals are briefly described, and the results of both tests will be presented.

  19. The softest sound levels of the human voice in normal subjects.

    PubMed

    Šrámková, Hana; Granqvist, Svante; Herbst, Christian T; Švec, Jan G

    2015-01-01

    Accurate measurement of the softest sound levels of phonation presents technical and methodological challenges. This study aimed at (1) reliably obtaining normative data on sustained softest sound levels for the vowel [a:] at comfortable pitch; (2) comparing the results for different frequency and time weighting methods; and (3) refining the Union of European Phoniatricians' recommendation on allowed background noise levels for scientific and equipment manufacturers' purposes. Eighty healthy untrained participants (40 females, 40 males) were investigated in quiet rooms using a head-mounted microphone and a sound level meter at 30 cm distance. The one-second-equivalent sound levels were more stable and more representative for evaluating the softest sustained phonations than the fast-time-weighted levels. At 30 cm, these levels were in the range of 48-61 dB(C)/41-53 dB(A) for females and 49-64 dB(C)/35-53 dB(A) for males (5% to 95% quantile range). These ranges may serve as reference data in evaluating vocal normality. In order to reach a signal-to-noise ratio of at least 10 dB for more than 95% of the normal population, the background noise should be below 25 dB(A) and 38 dB(C), respectively, for the softest phonation measurements at 30 cm distance. For the A-weighting, this is 15 dB lower than the previously recommended value.
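    The one-second-equivalent sound levels used in this record follow the standard Leq definition, which can be sketched as below. This is an unweighted sketch under the assumption of a calibrated pressure signal; the study's dB(A) and dB(C) values additionally apply frequency weighting, omitted here:

```python
import numpy as np

P_REF = 20e-6  # reference sound pressure, 20 micropascals

def leq(p, fs, t=1.0):
    """Equivalent continuous sound level(s), in dB re 20 uPa.

    `p` is a calibrated sound-pressure signal in pascals sampled at
    `fs` Hz; one level is returned per window of `t` seconds
    (t=1.0 gives the one-second-equivalent levels of the record).
    """
    n = int(round(fs * t))          # samples per averaging window
    n_win = len(p) // n
    windows = np.asarray(p)[: n_win * n].reshape(n_win, n)
    # Leq = 10 log10 of the mean squared pressure over the window.
    return 10.0 * np.log10(np.mean(windows ** 2, axis=1) / P_REF ** 2)
```

As a sanity check, a pure tone with an RMS pressure of 0.02 Pa should come out at 20·log10(0.02/20e-6) = 60 dB.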

  20. Electromagnetic soundings to detect groundwater contamination produced by intensive livestock farming

    NASA Astrophysics Data System (ADS)

    Sainato, C. M.; Losinno, B. N.; Márquez Molina, J. J.; Espada, R. A.

    2018-07-01

    Feedlots, a set of corrals where livestock is gathered to be fattened for market, are widely spreading in Buenos Aires Province, Argentina. However, the impact of manure produced by this activity on soil organic matter mineralisation and groundwater is still to be explored. Although previous studies have described contamination in sandy soil environments, there is still little evidence on the effect of leachates in soils with a finer texture. The objective of this work was to assess contamination at a pen and its surroundings by means of the modelling of electromagnetic induction (EMI) soundings carried out annually during two years of feedlot activity. A multifrequency conductivity meter was used for frequencies from 2 kHz to 16 kHz. For the 1D inversion of the experimental data, the quadrature component of the secondary H-field normalized by the primary field, expressed in ppm, was used. The models of each measurement site were joined, and 2D sections were obtained along transects in the pen and its surroundings. Groundwater chemical analysis was also performed annually during four years of feedlot activity. Model resistivity decreased with soil depth, reaching values between 6 and 8 Ω m in the unsaturated and saturated zones. This decline indicated that the leachates from animal manure had increased soil salinity. In the second year of soundings, the layers below the pen showed an important decrease of resistivity. On the other hand, the concentrations of nitrates, chlorides, and sulfates remained the same in both the phreatic and the deep well across the four years of groundwater analysis. The concentrations of sulfates and nitrates peaked in the second and third years after the beginning of animal confinement in the pen; the following year, with increased precipitation, these concentrations decreased. Thus, the modelling of electromagnetic soundings proved to be a useful tool to determine the effect of leachate contamination in feedlot pens.

  1. Characterization of Cardiac Time Intervals in Healthy Bonnet Macaques (Macaca radiata) by Using an Electronic Stethoscope

    PubMed Central

    Kamran, Haroon; Salciccioli, Louis; Pushilin, Sergei; Kumar, Paraag; Carter, John; Kuo, John; Novotney, Carol; Lazar, Jason M

    2011-01-01

    Nonhuman primates are used frequently in cardiovascular research. Cardiac time intervals derived by phonocardiography have long been used to assess left ventricular function. Electronic stethoscopes are simple low-cost systems that display heart sound signals. We assessed the use of an electronic stethoscope to measure cardiac time intervals in 48 healthy bonnet macaques (age, 8 ± 5 y) based on recorded heart sounds. Technically adequate recordings were obtained from all animals and required 1.5 ± 1.3 min. The following cardiac time intervals were determined by simultaneously recording acoustic and single-lead electrocardiographic data: electromechanical activation time (QS1), electromechanical systole (QS2), the time interval between the first and second heart sounds (S1S2), and the time interval between the second and first sounds (S2S1). QS2 was correlated with heart rate, mean arterial pressure, diastolic blood pressure, and left ventricular ejection time determined by using echocardiography. S1S2 correlated with heart rate, mean arterial pressure, diastolic blood pressure, left ventricular ejection time, and age. S2S1 correlated with heart rate, mean arterial pressure, diastolic blood pressure, systolic blood pressure, and left ventricular ejection time. QS1 did not correlate with any anthropometric or echocardiographic parameter. The ratio S1S2/S2S1 correlated with systolic blood pressure. On multivariate analyses, heart rate was the only independent predictor of QS2, S1S2, and S2S1. In conclusion, determination of cardiac time intervals is feasible and reproducible by using an electronic stethoscope in nonhuman primates. Heart rate is a major determinant of QS2, S1S2, and S2S1 but not QS1; regression equations for reference values for cardiac time intervals in bonnet macaques are provided. PMID:21439218
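    Given event times for the Q-wave onset and the two heart sounds, the intervals named in this record reduce to simple differences. The sketch below assumes the events have already been detected and beat-aligned; in the study they come from simultaneous single-lead ECG and electronic-stethoscope recordings:

```python
import numpy as np

def cardiac_time_intervals(q, s1, s2):
    """Per-beat cardiac time intervals, in seconds.

    `q`, `s1`, `s2` are equal-length arrays of Q-wave onset, first
    heart sound, and second heart sound times for the same
    consecutive beats (hypothetical, pre-detected inputs).
    """
    q, s1, s2 = (np.asarray(a, dtype=float) for a in (q, s1, s2))
    qs1 = s1 - q               # electromechanical activation time (QS1)
    qs2 = s2 - q               # electromechanical systole (QS2)
    s1s2 = s2 - s1             # interval between S1 and S2
    s2s1 = s1[1:] - s2[:-1]    # S2 to the next beat's S1
    return qs1, qs2, s1s2, s2s1
```

Note that S2S1 spans adjacent beats, so it has one fewer element than the other intervals.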

  2. Investigation on flow oscillation modes and aero-acoustics generation mechanism in cavity

    NASA Astrophysics Data System (ADS)

    Yang, Dang-Guo; Lu, Bo; Cai, Jin-Sheng; Wu, Jun-Qiang; Qu, Kun; Liu, Jun

    2018-05-01

    Unsteady flow and multi-scale vortex transformation inside a cavity of L/D = 6 (length-to-depth ratio) at Ma = 0.9 and 1.5 were studied in this paper using a modified delayed detached eddy simulation (DDES) method. Aero-acoustic characteristics of the cavity at the same flow conditions were obtained both numerically and in 0.6 m by 0.6 m transonic and supersonic wind-tunnel experiments. Analysis of the computational and experimental results indicates that vortices generated by flow separation in the shear layer over the cavity convect downstream and impinge on the cavity rear wall. Sound waves then propagate upstream to the cavity fore wall, inducing new vortex generation; these vortices in turn shed, convect, and impinge on the rear wall, producing new sound waves. The results indicate that this acoustic feedback between the shedding vortices and the rear cavity face drives the self-sustained flow oscillations and noise generation inside the cavity, and that the aero-acoustic characteristics inside the cavity can be analyzed on this basis. The simulated self-sustained flow-oscillation modes and the peak sound pressures at characteristic frequencies agree well with Rossiter's and Heller's predictions. Moreover, the peak sound pressure occurs in the first and second flow-oscillation modes, and most of the sound energy is concentrated in the low-frequency region. Compared with the subsonic case (Ma = 0.9), the aerodynamic noise at Ma = 1.5 is more intense, being induced by compression or shock waves near the fore and rear cavity faces.

  3. Sounding Rocket Launches Successfully from Alaska

    NASA Image and Video Library

    2015-01-28

    Caption: Time-lapse photo of the NASA Oriole IV sounding rocket with the Auroral Spatial Structures Probe as an aurora dances over Alaska. All four stages of the rocket are visible in this image. Credit: NASA/Jamie Adkins More info: On count day number 15, the Auroral Spatial Structures Probe, or ASSP, was successfully launched on a NASA Oriole IV sounding rocket at 5:41 a.m. EST on Jan. 28, 2015, from the Poker Flat Research Range in Alaska. Preliminary data show that all aspects of the payload worked as designed, and the principal investigator, Charles Swenson of Utah State University, described the mission as a "raging success." "This is likely the most complicated mission the sounding rocket program has ever undertaken and it was not easy by any stretch," said John Hickman, operations manager of the NASA sounding rocket program office at the Wallops Flight Facility, Virginia. "It was technically challenging every step of the way." "The payload deployed all six sub-payloads in formation as planned and all appeared to function as planned. Quite an amazing feat to maneuver and align the main payload, maintain the proper attitude while deploying all six 7.3-pound sub-payloads at about 40 meters per second," said Hickman. Read more: www.nasa.gov/content/assp-sounding-rocket-launches-succes... NASA image use policy. NASA Goddard Space Flight Center enables NASA's mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA's accomplishments by contributing compelling scientific knowledge to advance the Agency's mission.

  4. Sounding Rocket Launches Successfully from Alaska

    NASA Image and Video Library

    2015-01-28

    A NASA Oriole IV sounding rocket with the Auroral Spatial Structures Probe leaves the launch pad on Jan. 28, 2015, from the Poker Flat Research Range in Alaska. Credit: NASA/Lee Wingfield

  5. Experimental Investigation of Propagation and Reflection Phenomena in Finite Amplitude Sound Beams.

    NASA Astrophysics Data System (ADS)

    Averkiou, Michalakis Andrea

    Measurements of finite amplitude sound beams are compared with theoretical predictions based on the KZK equation. Attention is devoted to harmonic generation and shock formation related to a variety of propagation and reflection phenomena. Both focused and unfocused piston sources were used in the experiments. The nominal source parameters are piston radii of 6-25 mm, frequencies of 1-5 MHz, and focal lengths of 10-20 cm. The research may be divided into two parts: propagation and reflection of continuous-wave focused sound beams, and propagation of pulsed sound beams. In the first part, measurements of propagation curves and beam patterns of focused pistons in water, both in the free field and following reflection from curved targets, are presented. The measurements are compared with predictions from a computer model that solves the KZK equation in the frequency domain. A novel method for using focused beams to measure target curvature is developed. In the second part, measurements of pulsed sound beams from plane pistons in both water and glycerin are presented. Very short pulses (less than 2 cycles), tone bursts (5-30 cycles), and frequency modulated (FM) pulses (10-30 cycles) were measured. Acoustic saturation of pulse propagation in water is investigated. Self-demodulation of tone bursts and FM pulses was measured in glycerin, both in the near and far fields, on and off axis. All pulse measurements are compared with numerical results from a computer code that solves the KZK equation in the time domain. A quasilinear analytical solution for the entire axial field of a self-demodulating pulse is derived in the limit of strong absorption. Taken as a whole, the measurements provide a broad data base for sound beams of finite amplitude. Overall, outstanding agreement is obtained between theory and experiment.
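    For reference, both computer models mentioned above solve the KZK (Khokhlov-Zabolotskaya-Kuznetsov) parabolic wave equation, which in its standard form (written in the retarded time frame τ = t − z/c₀, as commonly given in the nonlinear acoustics literature) reads:

```latex
\frac{\partial^{2} p}{\partial z\,\partial \tau}
  = \frac{c_{0}}{2}\,\nabla_{\perp}^{2} p
  + \frac{\delta}{2 c_{0}^{3}}\,\frac{\partial^{3} p}{\partial \tau^{3}}
  + \frac{\beta}{2 \rho_{0} c_{0}^{3}}\,\frac{\partial^{2} p^{2}}{\partial \tau^{2}}
```

    The three right-hand terms account, respectively, for diffraction, thermoviscous absorption (δ is the diffusivity of sound) and quadratic nonlinearity (β is the coefficient of nonlinearity; ρ₀ and c₀ are the ambient density and sound speed).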

  6. Precision of working memory for speech sounds.

    PubMed

    Joseph, Sabine; Iverson, Paul; Manohar, Sanjay; Fox, Zoe; Scott, Sophie K; Husain, Masud

    2015-01-01

    Memory for speech sounds is a key component of models of verbal working memory (WM). But how good is verbal WM? Most investigations assess this using binary report measures to derive a fixed number of items that can be stored. However, recent findings in visual WM have challenged such "quantized" views by employing measures of recall precision with an analogue response scale. WM for speech sounds might rely on both continuous and categorical storage mechanisms. Using a novel speech matching paradigm, we measured WM recall precision for phonemes. Vowel qualities were sampled from a formant space continuum. A probe vowel had to be adjusted to match the vowel quality of a target on a continuous, analogue response scale. Crucially, this provided an index of the variability of a memory representation around its true value and thus allowed us to estimate how memories were distorted from the original sounds. Memory load affected the quality of speech sound recall in two ways. First, there was a gradual decline in recall precision with increasing number of items, consistent with the view that WM representations of speech sounds become noisier with an increase in the number of items held in memory, just as for vision. Based on multidimensional scaling (MDS), the level of noise appeared to be reflected in distortions of the formant space. Second, as memory load increased, there was evidence of greater clustering of participants' responses around particular vowels. A mixture model captured both continuous and categorical responses, demonstrating a shift from continuous to categorical memory with increasing WM load. This suggests that direct acoustic storage can be used for single items, but when more items must be stored, categorical representations must be used.

  7. Novel underwater soundscape: acoustic repertoire of plainfin midshipman fish.

    PubMed

    McIver, Eileen L; Marchaterre, Margaret A; Rice, Aaron N; Bass, Andrew H

    2014-07-01

    Toadfishes are among the best-known groups of sound-producing (vocal) fishes and include species commonly known as toadfish and midshipman. Although midshipman have been the subject of extensive investigation of the neural mechanisms of vocalization, this is the first comprehensive, quantitative analysis of the spectro-temporal characters of their acoustic signals and one of the few for fishes in general. Field recordings of territorial, nest-guarding male midshipman during the breeding season identified a diverse vocal repertoire composed of three basic sound types that varied widely in duration, harmonic structure and degree of amplitude modulation (AM): 'hum', 'grunt' and 'growl'. Hum duration varied nearly 1000-fold, lasting for minutes at a time, with stable harmonic stacks and little envelope modulation throughout the sound. By contrast, grunts were brief, ~30-140 ms, broadband signals produced both in isolation and repetitively as a train of up to 200 at intervals of ~0.5-1.0 s. Growls were also produced alone or repetitively, but at variable intervals of the order of seconds with durations between those of grunts and hums, ranging 60-fold from ~200 ms to 12 s. Growls exhibited prominent harmonics with sudden shifts in pulse repetition rate and highly variable AM patterns, unlike the nearly constant AM of grunt trains and flat envelope of hums. Behavioral and neurophysiological studies support the hypothesis that each sound type's unique acoustic signature contributes to signal recognition mechanisms. Nocturnal production of these sounds against a background chorus dominated constantly for hours by a single sound type, the multi-harmonic hum, reveals a novel underwater soundscape for fish. © 2014. Published by The Company of Biologists Ltd.

  8. Developing the STS sound pollution unit for enhancing students' applying knowledge among science technology engineering and mathematics

    NASA Astrophysics Data System (ADS)

    Jumpatong, Sutthaya; Yuenyong, Chokchai

    2018-01-01

    STEM education suggests that students should learn science through the integration of Science, Technology, Engineering and Mathematics. To help Thai students make sense of the relationships among Science, Technology, Engineering and Mathematics, this paper presents the learning activities of an STS Sound Pollution unit. The development of the STS Sound Pollution unit is part of research that aimed to enhance students' perception of the relationship between Science, Technology, Engineering and Mathematics. This paper discusses how the Sound Pollution unit was developed through the STS approach in the framework of Yuenyong (2006), in which learning activities are provided in 5 stages: (1) identification of social issues, (2) identification of potential solutions, (3) need for knowledge, (4) decision-making, and (5) socialization. The learning activities can be highlighted as follows. In the first stage, we use a video clip on the problems sound pollution causes for people. In the second stage, students identify potential solutions by designing a Home/Factory free of noise; the scientific and other knowledge needed for the various alternative solutions is then identified. In the third stage, students gain scientific knowledge through laboratory work and demonstrations of sound waves. In the fourth stage, students decide on the best solution for designing a safe Home/Factory based on their scientific knowledge and other considerations (e.g., mathematics, economics, art, and values). Finally, students present and share their safe Home/Factory designs with society (e.g., via social media or an exhibition) in order to validate their ideas and redesign them. The paper then discusses how these activities allow students to apply knowledge of science, technology, engineering, mathematics and other domains (art, culture and values) to possible solutions of the STS issues.

  9. FOXSI-2: Upgrades of the Focusing Optics X-ray Solar Imager for its Second Flight

    NASA Astrophysics Data System (ADS)

    Christe, Steven; Glesener, Lindsay; Buitrago-Casas, Camilo; Ishikawa, Shin-Nosuke; Ramsey, Brian; Gubarev, Mikhail; Kilaru, Kiranmayee; Kolodziejczak, Jeffery J.; Watanabe, Shin; Takahashi, Tadayuki; Tajima, Hiroyasu; Turin, Paul; Shourt, Van; Foster, Natalie; Krucker, Sam

    2016-03-01

    The Focusing Optics X-ray Solar Imager (FOXSI) sounding rocket payload flew for the second time on 2014 December 11. To enable direct Hard X-Ray (HXR) imaging spectroscopy, FOXSI makes use of grazing-incidence replicated focusing optics combined with fine-pitch solid-state detectors. FOXSI’s first flight provided the first HXR focused images of the Sun. For FOXSI’s second flight, several updates were made to the instrument, including updated optics and detectors and a new Solar Aspect and Alignment System (SAAS). This paper provides an overview of these updates as well as a discussion of their measured performance.

  10. Acoustic wave propagation in high-pressure system.

    PubMed

    Foldyna, Josef; Sitek, Libor; Habán, Vladimír

    2006-12-22

    Recently, substantial attention has been paid to developing methods of generating pulsations in high-pressure systems to produce pulsating high-speed water jets. The reason is that introducing pulsations into a water jet increases its cutting efficiency, because the impact pressure (the so-called water-hammer pressure) generated by the impact of a slug of water on the target material is considerably higher than the stagnation pressure generated by the corresponding continuous jet. A special method of pulsating-jet generation was developed and tested extensively under laboratory conditions at the Institute of Geonics in Ostrava. The method is based on the action of an acoustic transducer on the pressurized liquid and the transmission of the generated acoustic waves through the pressure system to the nozzle. The purpose of the paper is to present results of research on the determination of acoustic wave propagation in a high-pressure system. The final objective of the research is to transmit acoustic waves through high-pressure water so as to generate a pulsating jet effectively even at larger distances from the acoustic source. To simulate acoustic wave propagation in the system numerically, it is necessary, among other things, to determine how the speed of sound and the second kinematic viscosity depend on the operating pressure. A method was developed for determining the second kinematic viscosity and the speed of sound in a liquid using modal analysis of the response of a liquid-filled tube to an impact, with the response measured by pressure sensors placed at both ends of the tube. The results presented in the paper indicate good agreement between the experimental data and the values of the speed of sound calculated from the so-called "UNESCO equation". They also show that the value of the second kinematic viscosity of water depends on the pressure.
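    The modal-analysis idea can be reduced to its simplest form (a rigid tube closed at both ends is assumed, and dissipation, which the paper uses to extract the second viscosity, is ignored): the n-th longitudinal resonance of a liquid column of length L is f_n = n c / (2L), so any measured modal frequency yields the speed of sound.

```python
# Back-of-the-envelope sketch of the modal method (assumed geometry, rigid
# tube closed at both ends; damping is ignored here): the n-th longitudinal
# resonance satisfies f_n = n * c / (2 * L), so c = 2 * L * f_n / n.

def speed_of_sound(f_n, n, L):
    """Speed of sound from the n-th resonance f_n (Hz) of a liquid column of length L (m)."""
    return 2.0 * L * f_n / n

# Hypothetical example: a 2 m tube whose first mode is measured at 371 Hz.
c = speed_of_sound(f_n=371.0, n=1, L=2.0)
print(f"c = {c:.0f} m/s")   # 1484 m/s, close to water at room temperature
```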

  11. The effect of wing flexibility on sound generation of flapping wings.

    PubMed

    Geng, Biao; Xue, Qian; Zheng, Xudong; Liu, Geng; Ren, Yan; Dong, Haibo

    2017-12-13

    In this study, the unsteady flow and acoustic characteristics of a three-dimensional (3D) flapping wing model of a Tibicen linnei cicada in forward flight are numerically investigated. A single cicada wing is modelled as a membrane with a prescribed motion reconstructed from high-speed videos of a live insect. The numerical solution takes a hydrodynamic/acoustic splitting approach: the flow field is solved with an incompressible Navier-Stokes flow solver based on an immersed boundary method, and the acoustic field is solved with the linearized perturbed compressible equations. The 3D simulation allows for the examination of both the directivity and the frequency composition of the flapping-wing sound in full space. Along with the flexible wing model, a rigid wing model extracted from the real motion is also simulated to investigate the effects of wing flexibility. The simulation results show that the flapping sound is directional and that the dominant frequency varies around the wing. The first and second frequency harmonics show different radiation patterns in the rigid and flexible wing cases, which are demonstrated to be highly associated with the wing kinematics and loadings. Furthermore, the rotation and deformation of the flexible wing are found to help lower the sound strength in all directions.

  12. The effect of 10% carbamide peroxide bleaching material on microhardness of sound and demineralized enamel and dentin in situ.

    PubMed

    Basting, R T; Rodrigues Júnior, A L; Serra, M C

    2001-01-01

    This in situ study evaluated the microhardness of sound and demineralized enamel and dentin submitted to treatment with 10% carbamide peroxide for three weeks. A 10% carbamide peroxide bleaching agent--Opalescence/Ultradent (OPA)--was evaluated against a placebo agent (PLA). Two hundred and forty dental fragments--60 sound enamel fragments (SE), 60 demineralized enamel fragments (DE), 60 sound dentin fragments (SD) and 60 demineralized dentin fragments (DD)--were randomly fixed on the vestibular surface of the first superior molars and second superior premolars of 30 volunteers. The volunteers were divided into two groups that received the bleaching or the placebo agent in different sequences and periods in a double-blind 2 x 2 crossover study with a wash-out period of two weeks. Microhardness tests were performed on the enamel and dentin surfaces. The SE and DE submitted to treatment with OPA showed lower microhardness values than the SE and DE submitted to treatment with PLA. There were no statistical differences in microhardness values for SD and DD submitted to treatment with OPA and PLA. The results suggest that treatment with a 10% carbamide peroxide bleaching material for three weeks alters enamel microhardness, although it does not seem to alter dentin microhardness.

  13. The Role of Temporal Disparity on Audiovisual Integration in Low-Vision Individuals.

    PubMed

    Targher, Stefano; Micciolo, Rocco; Occelli, Valeria; Zampini, Massimiliano

    2017-12-01

    Recent findings have shown that sounds improve visual detection in low-vision individuals when the audiovisual pairs of stimuli are presented simultaneously and from the same spatial position. The present study investigates the temporal aspects of the audiovisual enhancement effect previously reported. Low-vision participants were asked to detect the presence of a visual stimulus (yes/no task) presented either alone or together with an auditory stimulus at different stimulus onset asynchronies (SOAs). In the first experiment, the sound was presented either simultaneously with or before the visual stimulus (i.e., SOAs of 0, 100, 250 and 400 ms). The results show that the presence of a task-irrelevant auditory stimulus produced a significant visual detection enhancement in all conditions. In the second experiment, the sound was either synchronized with the visual stimulus or randomly preceded/lagged behind it (i.e., SOAs of 0, ± 250 and ± 400 ms). The visual detection enhancement was reduced in magnitude and limited to the synchronous condition and to the condition in which the sound was presented 250 ms before the visual stimulus. Taken together, the evidence of the present study suggests that audiovisual interaction in low-vision individuals is strongly modulated by top-down mechanisms.

  14. Enhanced auditory spatial localization in blind echolocators.

    PubMed

    Vercillo, Tiziana; Milne, Jennifer L; Gori, Monica; Goodale, Melvyn A

    2015-01-01

    Echolocation is the extraordinary ability to represent the external environment by using reflected sound waves from self-generated auditory pulses. Blind human expert echolocators show extremely precise spatial acuity and high accuracy in determining the shape and motion of objects by using echoes. In the current study, we investigated whether or not the use of echolocation would improve the representation of auditory space, which is severely compromised in congenitally blind individuals (Gori et al., 2014). The performance of three blind expert echolocators was compared to that of 6 blind non-echolocators and 11 sighted participants. Two tasks were performed: (1) a space bisection task in which participants judged whether the second of a sequence of three sounds was closer in space to the first or the third sound and (2) a minimum audible angle task in which participants reported which of two sounds presented successively was located more to the right. The blind non-echolocating group showed a severe impairment only in the space bisection task compared to the sighted group. Remarkably, the three blind expert echolocators performed both spatial tasks with similar or even better precision and accuracy than the sighted group. These results suggest that echolocation may improve the general sense of auditory space, most likely through a process of sensory calibration. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. An FPGA-Based Rapid Wheezing Detection System

    PubMed Central

    Lin, Bor-Shing; Yen, Tian-Shiue

    2014-01-01

    Wheezing is often treated as a crucial indicator in the diagnosis of obstructive pulmonary diseases. A rapid wheezing detection system may help physicians to monitor patients over the long-term. In this study, a portable wheezing detection system based on a field-programmable gate array (FPGA) is proposed. This system accelerates wheezing detection, and can be used as either a single-process system, or as an integrated part of another biomedical signal detection system. The system segments sound signals into 2-second units. A short-time Fourier transform was used to determine the relationship between the time and frequency components of wheezing sound data. A spectrogram was processed using 2D bilateral filtering, edge detection, multithreshold image segmentation, morphological image processing, and image labeling, to extract wheezing features according to computerized respiratory sound analysis (CORSA) standards. These features were then used to train the support vector machine (SVM) and build the classification models. The trained model was used to analyze sound data to detect wheezing. The system runs on a Xilinx Virtex-6 FPGA ML605 platform. The experimental results revealed that the system offered excellent wheezing recognition performance (0.912). The detection process can be used with a clock frequency of 51.97 MHz, and is able to perform rapid wheezing classification. PMID:24481034
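    The processing chain described above (2-second segmentation, short-time Fourier transform, spectrogram feature extraction, classification) can be mimicked in a few lines of software. The sketch below is not the FPGA design: it replaces the image-processing and SVM stages with a crude single-peak heuristic, and its signals and thresholds are made up.

```python
# Illustrative sketch (not the authors' FPGA implementation): take one
# 2-second unit of sound, compute a short-time Fourier transform, and flag
# frames whose spectrum is dominated by a narrow sustained peak -- the kind
# of spectrogram ridge that wheezing produces. Thresholds are arbitrary.

import numpy as np

def stft_mag(x, fs, win=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time Fourier transform."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)), np.fft.rfftfreq(win, 1 / fs)

def wheeze_score(x, fs):
    """Fraction of frames dominated by a single narrowband component."""
    mag, _ = stft_mag(x, fs)
    peak = mag.max(axis=1)
    mean = mag.mean(axis=1)
    return np.mean(peak > 10 * mean)   # tonal frames show a sharp spectral peak

fs = 4000
t = np.arange(0, 2.0, 1 / fs)                      # one 2-second unit
rng = np.random.default_rng(0)
noise = rng.normal(0, 1, t.size)                   # breath-like broadband noise
wheeze = noise + 5 * np.sin(2 * np.pi * 400 * t)   # add a 400 Hz tonal ridge

print(wheeze_score(noise, fs), wheeze_score(wheeze, fs))
```

    A real detector would, as in the paper, post-process the spectrogram (filtering, edge detection, segmentation) and feed the extracted features to a trained SVM rather than threshold a single ratio.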

  16. Experimental localization of an acoustic sound source in a wind-tunnel flow by using a numerical time-reversal technique.

    PubMed

    Padois, Thomas; Prax, Christian; Valeau, Vincent; Marx, David

    2012-10-01

    The possibility of using the time-reversal technique to localize acoustic sources in a wind-tunnel flow is investigated. While the technique is widespread, it has scarcely been used in aeroacoustics up to now. The proposed method consists of two steps: in a first experimental step, the acoustic pressure fluctuations are recorded over a linear array of microphones; in a second numerical step, the experimental data are time-reversed and used as input data for a numerical code solving the linearized Euler equations. The simulation achieves the back-propagation of the waves from the array to the source and takes into account the effect of the mean flow on sound propagation. The ability of the method to localize a sound source in a typical wind-tunnel flow is first demonstrated using simulated data. A generic experiment is then set up in an anechoic wind tunnel to validate the proposed method with a flow at Mach number 0.11. Monopolar sources are first considered that are either monochromatic or have a narrow or wide-band frequency content. The source position estimation is well-achieved with an error inferior to the wavelength. An application to a dipolar sound source shows that this type of source is also very satisfactorily characterized.
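    The two-step procedure can be illustrated with a deliberately simplified toy model: free field, no mean flow, and simple delay arithmetic in place of the linearized-Euler back-propagation used in the paper. All geometry and numbers below are invented for illustration.

```python
# Toy time-reversal localization: signals recorded on a linear microphone
# array are time-reversed and re-advanced to candidate points; the reversed
# wavefronts add up coherently only at the true source position.
# (Free field, no flow; the actual study back-propagates with a
# linearized Euler solver to include the mean-flow effect.)

import numpy as np

c = 343.0                       # speed of sound in air (m/s)
fs = 50_000                     # sampling rate (Hz)
t = np.arange(0, 0.02, 1 / fs)  # 20 ms record

def pulse(tau):
    """Gaussian pulse arriving at time tau (s)."""
    return np.exp(-0.5 * ((t - tau) / 1e-4) ** 2)

src = np.array([0.30, 0.50])    # true source position (m), assumed
mics = np.array([[x, 0.0] for x in np.linspace(-0.5, 0.5, 11)])  # linear array

# Step 1 (the "experimental" step): record the direct arrival on each mic.
records = [pulse(np.linalg.norm(src - m) / c) for m in mics]

# Step 2 (the "numerical" step): time-reverse each record and delay it by
# the propagation time to a candidate point; sum and take the peak.
def focus_energy(p):
    buf = np.zeros(2 * t.size)
    for rec, m in zip(records, mics):
        d = int(round(np.linalg.norm(p - m) / c * fs))
        buf[d:d + t.size] += rec[::-1]
    return buf.max()

xs = np.linspace(-0.5, 0.5, 41)
best = xs[np.argmax([focus_energy(np.array([x, 0.5])) for x in xs])]
print(f"estimated source x = {best:.3f} m")
```

    The focused energy peaks at the true source abscissa, mirroring the sub-wavelength localization error reported in the paper.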

  17. Cuffless and Continuous Blood Pressure Estimation from the Heart Sound Signals

    PubMed Central

    Peng, Rong-Chao; Yan, Wen-Rong; Zhang, Ning-Ling; Lin, Wan-Hua; Zhou, Xiao-Lin; Zhang, Yuan-Ting

    2015-01-01

    Cardiovascular disease, like hypertension, is one of the top killers of human life and early detection of cardiovascular disease is of great importance. However, traditional medical devices are often bulky and expensive, and unsuitable for home healthcare. In this paper, we proposed an easy and inexpensive technique to estimate continuous blood pressure from the heart sound signals acquired by the microphone of a smartphone. A cold-pressor experiment was performed in 32 healthy subjects, with a smartphone to acquire heart sound signals and with a commercial device to measure continuous blood pressure. The Fourier spectrum of the second heart sound and the blood pressure were regressed using a support vector machine, and the accuracy of the regression was evaluated using 10-fold cross-validation. Statistical analysis showed that the mean correlation coefficients between the predicted values from the regression model and the measured values from the commercial device were 0.707, 0.712, and 0.748 for systolic, diastolic, and mean blood pressure, respectively, and that the mean errors were less than 5 mmHg, with standard deviations less than 8 mmHg. These results suggest that this technique is of potential use for cuffless and continuous blood pressure monitoring and it has promising application in home healthcare services. PMID:26393591
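    The regression step can be sketched on synthetic data as follows. The S2-spectrum/blood-pressure relationship below is invented, and ridge regression stands in for the paper's support vector machine so that no ML library is needed; note also that the paper evaluated accuracy with 10-fold cross-validation, whereas this toy reports only the in-sample fit.

```python
# Schematic of the regression step only: blood pressure regressed on the
# Fourier spectrum of the second heart sound (S2). Synthetic spectra with
# a made-up BP-dependent peak; ridge regression replaces the paper's SVM.

import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_bins = 32, 64

bp_true = rng.uniform(70, 130, n_subjects)            # mean BP, mmHg
centers = 20 + 0.2 * bp_true                          # peak bin drifts with BP
bins = np.arange(n_bins)
spectra = np.exp(-0.5 * ((bins - centers[:, None]) / 6.0) ** 2)
spectra += 0.05 * rng.normal(size=spectra.shape)      # measurement noise

# Ridge regression: w = (X^T X + lambda I)^(-1) X^T y
X = np.hstack([spectra, np.ones((n_subjects, 1))])    # add intercept column
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ bp_true)
bp_pred = X @ w

r = np.corrcoef(bp_true, bp_pred)[0, 1]               # in-sample fit only
print(f"in-sample correlation: {r:.3f}")
```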

  18. Cuffless and Continuous Blood Pressure Estimation from the Heart Sound Signals.

    PubMed

    Peng, Rong-Chao; Yan, Wen-Rong; Zhang, Ning-Ling; Lin, Wan-Hua; Zhou, Xiao-Lin; Zhang, Yuan-Ting

    2015-09-17

    Cardiovascular disease, like hypertension, is one of the top killers of human life and early detection of cardiovascular disease is of great importance. However, traditional medical devices are often bulky and expensive, and unsuitable for home healthcare. In this paper, we proposed an easy and inexpensive technique to estimate continuous blood pressure from the heart sound signals acquired by the microphone of a smartphone. A cold-pressor experiment was performed in 32 healthy subjects, with a smartphone to acquire heart sound signals and with a commercial device to measure continuous blood pressure. The Fourier spectrum of the second heart sound and the blood pressure were regressed using a support vector machine, and the accuracy of the regression was evaluated using 10-fold cross-validation. Statistical analysis showed that the mean correlation coefficients between the predicted values from the regression model and the measured values from the commercial device were 0.707, 0.712, and 0.748 for systolic, diastolic, and mean blood pressure, respectively, and that the mean errors were less than 5 mmHg, with standard deviations less than 8 mmHg. These results suggest that this technique is of potential use for cuffless and continuous blood pressure monitoring and it has promising application in home healthcare services.

  19. Evaluation of hearing protection used by police officers in the shooting range.

    PubMed

    Guida, Heraldo Lorena; Taxini, Carla Linhares; Gonçalves, Claudia Giglio de Oliveira; Valenti, Vitor Engrácia

    2014-01-01

    Impact noise is characterized by acoustic energy peaks that last less than a second, at intervals of more than 1s. To quantify the levels of impact noise to which police officers are exposed during activities at the shooting range and to evaluate the attenuation of the hearing protector. Measurements were performed in the shooting range of a military police department. An SV 102 audiodosimeter (Svantek) was used to measure sound pressure levels. Two microphones were used simultaneously: one external and one insertion type; the firearm used was a 0.40 Taurus® rimless pistol. The values obtained with the external microphone were 146 dBC (peak), and a maximum sound level of 129.4 dBC (fast). The results obtained with the insertion microphone were 138.7 dBC (peak), and a maximum sound level of 121.6 dBC (fast). The findings showed high levels of sound pressure in the shooting range, which exceeded the maximum recommended noise (120 dBC), even when measured through the insertion microphone. Therefore, alternatives to improve the performance of hearing protection should be considered. Copyright © 2014 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
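    The attenuation actually achieved by the protector follows from simple arithmetic on the two microphone readings reported above: the difference between the external level and the level measured under the protector by the insertion microphone.

```python
# Attenuation of the hearing protector = external microphone level minus
# insertion (under-protector) microphone level, using the values from the
# abstract above.

peak_ext, peak_in = 146.0, 138.7    # dBC, peak
fast_ext, fast_in = 129.4, 121.6    # dBC, maximum "fast" level

print(f"peak attenuation: {peak_ext - peak_in:.1f} dB")   # 7.3 dB
print(f"fast attenuation: {fast_ext - fast_in:.1f} dB")   # 7.8 dB
```

    Both differences are under 8 dB, which is why the protected level still exceeds the 120 dBC limit cited in the study.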

  20. Application of a Musical Whistling Certificate Examination System as a Group Examination

    NASA Astrophysics Data System (ADS)

    Mori, Mikio; Ogihara, Mitsuhiro; Sugahara, Shin-Ichi; Taniguchi, Shuji; Kato, Shozo; Araki, Chikahiro

    Recently, some professional whistlers have set up music schools to teach musical whistling. However, so far there has been no licensed examination for musical whistling. In this paper, we propose an examination system for evaluating musical whistling. The system conducts the examination on a personal computer (PC) and can award four grades, from the second to the fifth, designed according to the standards adopted by the school for musical whistling established by the Japanese professional whistler Moku-San. The group examination is expected to be held in the examination centers where other general certification examinations take place, so the influence of the whistle sound on the PC microphone normally used must be considered. For this purpose, we examined the feasibility of using a bone-conductive microphone for the musical whistling certificate examination system. This paper shows that the proposed system, in which bone-transmitted sounds are considered, gives good performance in a noisy environment, as demonstrated in a group examination of musical whistling using bone-transmitted sounds. One remaining issue is that candidates' whistling timing tends not to match, because the applause sound output from the PC was inaudible to persons older than 60 years.

  1. Blind separation of incoherent and spatially disjoint sound sources

    NASA Astrophysics Data System (ADS)

    Dong, Bin; Antoni, Jérôme; Pereira, Antonio; Kellermann, Walter

    2016-11-01

    Blind separation of sound sources aims at reconstructing the individual sources which contribute to the overall radiation of an acoustical field. The challenge is to reach this goal using distant measurements when all sources are operating concurrently. The working assumption is usually that the sources of interest are incoherent - i.e. statistically orthogonal - so that their separation can be approached by decorrelating a set of simultaneous measurements, which amounts to diagonalizing the cross-spectral matrix. Principal Component Analysis (PCA) is traditionally used to this end. This paper reports two new findings in this context. First, a sufficient condition is established under which "virtual" sources returned by PCA coincide with true sources; it stipulates that the sources of interest should be not only incoherent but also spatially orthogonal. A particular case of this instance is met by spatially disjoint sources - i.e. with non-overlapping support sets. Second, based on this finding, a criterion that enforces both statistical and spatial orthogonality is proposed to blindly separate incoherent sound sources which radiate from disjoint domains. This criterion can be easily incorporated into acoustic imaging algorithms such as beamforming or acoustical holography to identify sound sources of different origins. The proposed methodology is validated on laboratory experiments. In particular, the separation of aeroacoustic sources is demonstrated in a wind tunnel.
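    A minimal numerical check of the first finding (array geometry and source signals invented for illustration): when incoherent sources also have spatially disjoint, hence spatially orthogonal, mixing vectors, diagonalizing the cross-spectral matrix by eigendecomposition returns the true mixing vectors as the principal components.

```python
# Minimal sketch of the decorrelation (PCA) step described above, for two
# incoherent sources radiating to disjoint halves of a 6-microphone array,
# i.e. spatially orthogonal mixing vectors. All values are invented.

import numpy as np

rng = np.random.default_rng(0)
n = 20_000

s1 = rng.normal(size=n)                          # incoherent source signals
s2 = 2.0 * rng.normal(size=n)
a1 = np.array([1.0, 0.8, 0.6, 0.0, 0.0, 0.0])    # support: mics 0-2
a2 = np.array([0.0, 0.0, 0.0, 0.5, 0.9, 1.0])    # support: mics 3-5
X = np.outer(a1, s1) + np.outer(a2, s2)          # 6 x n measurement matrix

# PCA step: diagonalize the (real-valued analogue of the) cross-spectral matrix.
C = X @ X.T / n
eigval, eigvec = np.linalg.eigh(C)               # eigenvalues in ascending order
u1, u2 = eigvec[:, -1], eigvec[:, -2]            # two dominant "virtual sources"

def align(u, a):
    """|cosine| between a principal direction and a true mixing vector."""
    return abs(u @ a) / np.linalg.norm(a)

# The stronger source (s2) dominates the first component, s1 the second;
# both principal directions match the true mixing vectors up to sign.
print(align(u1, a2), align(u2, a1))
```

    Dropping the spatial-orthogonality assumption (overlapping supports) breaks this correspondence, which is exactly the point of the sufficient condition established in the paper.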

  2. Large Eddy Simulation of Sound Generation by Turbulent Reacting and Nonreacting Shear Flows

    NASA Astrophysics Data System (ADS)

    Najafi-Yazdi, Alireza

    The objective of the present study was to investigate the mechanisms of sound generation by subsonic jets. Large eddy simulations were performed along with bandpass filtering of the flow and sound in order to gain further insight into the role of coherent structures in subsonic jet noise generation. A sixth-order compact scheme was used for spatial discretization of the fully compressible Navier-Stokes equations. Time integration was performed through the use of the standard fourth-order, explicit Runge-Kutta scheme. An implicit low dispersion, low dissipation Runge-Kutta (ILDDRK) method was developed and implemented for simulations involving sources of stiffness such as flows near solid boundaries, or combustion. A surface integral acoustic analogy formulation, called Formulation 1C, was developed for farfield sound pressure calculations. Formulation 1C was derived based on the convective wave equation in order to take into account the presence of a mean flow. The formulation was derived to be easy to implement as a numerical post-processing tool for CFD codes. Sound radiation from an unheated, Mach 0.9 jet at Reynolds number 400,000 was considered. The effect of mesh size on the accuracy of the nearfield flow and farfield sound results was studied. It was observed that insufficient grid resolution in the shear layer results in unphysical laminar vortex pairing, and increased sound pressure levels in the farfield. Careful examination of the bandpass filtered pressure field suggested that there are two mechanisms of sound radiation in unheated subsonic jets that can occur in all scales of turbulence. The first mechanism is the stretching and the distortion of coherent vortical structures, especially close to the termination of the potential core. As eddies are bent or stretched, a portion of their kinetic energy is radiated. This mechanism is quadrupolar in nature, and is responsible for strong sound radiation at aft angles.
The second sound generation mechanism appears to be associated with the transverse vibration of the shear-layer interface within the ambient quiescent flow, and has dipolar characteristics. This mechanism is believed to be responsible for sound radiation along the sideline directions. Jet noise suppression through the use of microjets was studied. The microjet injection induced secondary instabilities in the shear layer which triggered the transition to turbulence, and suppressed laminar vortex pairing. This in turn resulted in a reduction of OASPL at almost all observer locations. In all cases, the bandpass filtering of the nearfield flow and the associated sound provides revealing details of the sound radiation process. The results suggest that circumferential modes are significant and need to be included in future wavepacket models for jet noise prediction. Numerical simulations of sound radiation from nonpremixed flames were also performed. The simulations featured the solution of the fully compressible Navier-Stokes equations. Therefore, sound generation and radiation were directly captured in the simulations. A thickened flamelet model was proposed for nonpremixed flames. The model yields artificially thickened flames which can be better resolved on the computational grid, while retaining the physically correct values of the total heat released into the flow. Combustion noise has monopolar characteristics for low frequencies. For high frequencies, the sound field is no longer omni-directional. Major sources of sound appear to be located in the jet shear layer within one potential core length from the jet nozzle.

  3. Vertical resolving power of a satellite temperature sounding system

    NASA Technical Reports Server (NTRS)

    Thompson, O. E.

    1979-01-01

    The paper examines the vertical resolving power of satellite temperature retrieval systems. Attention is given to the sounding instrument proposed by Kaplan et al. (1977), which was conceived to have greatly improved vertical resolving capabilities. Two types of tests are reported. The first, based on the work of Conrath (1972), involves a theoretical assessment of the manner in which the ambient temperature profile is averaged over height in order to produce an estimate of temperature at a given level. The second test is empirical, involving the actual retrieval of temperature signals superimposed on a standard atmosphere, with an emphasis on determining the minimum separation of the signals for which the sounder system is still capable of distinguishing individual signals.

  4. Intermittent large amplitude internal waves observed in Port Susan, Puget Sound

    NASA Astrophysics Data System (ADS)

    Harris, J. C.; Decker, L.

    2017-07-01

    A previously unreported internal tidal bore, which evolves into solitary internal wave packets, was observed in Port Susan, Puget Sound, and the timing, speed, and amplitude of the waves were measured by CTD and visual observation. Acoustic Doppler current profiler (ADCP) measurements were attempted but were unsuccessful. The waves appear to be generated with the ebb flow along the tidal flats of the Stillaguamish River, and the speed and width of the resulting waves can be predicted from second-order KdV theory. Their eventual dissipation may contribute significantly to surface mixing locally, particularly in comparison with the local dissipation due to the tides. Visually the waves appear in fair weather as a strong foam front, which becomes less visible the farther the waves propagate.
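
    The abstract's KdV prediction can be illustrated with a minimal sketch. This is the standard first-order (weakly nonlinear) two-layer Boussinesq KdV soliton, not the second-order theory the study actually uses, and the layer depths, density contrast, and amplitude below are hypothetical values chosen only for illustration:

    ```python
    import math

    def kdv_soliton(h1, h2, drho_rho, A, g=9.81):
        """First-order KdV interfacial solitary wave in a two-layer fluid.

        Returns the linear long-wave speed c0, the nonlinear soliton speed c,
        and the width L of eta(x, t) = A * sech^2((x - c*t) / L), using the
        standard Boussinesq coefficients alpha and beta.
        """
        gp = g * drho_rho                           # reduced gravity g'
        c0 = math.sqrt(gp * h1 * h2 / (h1 + h2))    # linear long-wave speed
        alpha = 1.5 * c0 * (h1 - h2) / (h1 * h2)    # nonlinear coefficient
        beta = c0 * h1 * h2 / 6.0                   # dispersive coefficient
        c = c0 + alpha * A / 3.0                    # soliton speed
        L = math.sqrt(12.0 * beta / (alpha * A))    # width (real if alpha*A > 0)
        return c0, c, L

    # A shallow upper layer over a deeper lower layer (alpha < 0) supports
    # waves of depression (A < 0), which travel faster than c0.
    c0, c, L = kdv_soliton(h1=5.0, h2=50.0, drho_rho=3e-3, A=-2.0)
    ```

    With these illustrative parameters the soliton travels a few tens of percent faster than the linear speed, consistent with the supercritical propagation typical of observed internal bores.
    
    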

  5. The lung exam.

    PubMed

    Loudon, R G

    1987-06-01

    Accurate diagnosis is essential for effective treatment. After history-taking, the physical examination is second in importance in assessing a pulmonary patient. The time-honored sequence of inspection, palpation, percussion, and auscultation is appropriate. Diagnostic tests are becoming more complex, more expensive, and more inclined to separate the patient and physician. The stethoscope is still the most commonly used diagnostic medical instrument, but it is not always used to best advantage. It is familiar, harmless, portable, and inexpensive. Its appropriate use improves medical practice and reduces costs. Improvements in sound recording and analysis techniques have spurred a renewed interest in lung sounds and their meaning. This is likely to lead to better understanding of what we hear, and perhaps to the development of new noninvasive diagnostic and monitoring techniques.

  6. Peripheral and central auditory specialization in a gliding marsupial, the feathertail glider, Acrobates pygmaeus.

    PubMed

    Aitkin, L M; Nelson, J E

    1989-01-01

    Two specialized features are described in the auditory system of Acrobates pygmaeus, a small gliding marsupial. Firstly, the ear canal includes a transverse disk of bone that partly occludes the canal near the eardrum. The resultant narrow-necked chamber above the eardrum appears to attenuate sound across a broad frequency range, except at 27-29 kHz at which a net gain of sound pressure occurs. Secondly, the lateral medulla is hypertrophied at the level of the cochlear nucleus, forming a massive lateral lobe comprised of multipolar cells and granule cells. This lobe has connections with the auditory nerve and the cerebellum. Speculations are advanced about the functions of these structures in gliding behaviour and predator avoidance.

  7. Comet Kohoutek - Ultraviolet images and spectrograms

    NASA Technical Reports Server (NTRS)

    Opal, C. B.; Carruthers, G. R.; Prinz, D. K.; Meier, R. R.

    1974-01-01

    Emissions of atomic oxygen (1304 A), atomic carbon (1657 A), and atomic hydrogen (1216 A) from Comet Kohoutek were observed with ultraviolet cameras carried on a sounding rocket on Jan. 8, 1974. Analysis of the Lyman alpha halo at 1216 A gave an atomic hydrogen production rate of 4.5 x 10^29 atoms per second.

  8. Language, Literacy, Children's Literature: The Link to Communicative Competency for ESOL Adults.

    ERIC Educational Resources Information Center

    Flickinger, Gayle Glidden

    Developing literacy in adults who speak English as a second language (ESL) means more than rote memorization of letters and sounds, because literacy implies a familiarity with the language and culture sufficient for comfortable interaction and communication of ideas to others. Literacy can be defined as (1) a matter of language, (2) having many…

  9. The Influence of Gujarati and Tamil L1s on Indian English: A Preliminary Study

    ERIC Educational Resources Information Center

    Wiltshire, Caroline R.; Harnsberger, James D.

    2006-01-01

    English as spoken as a second language in India has developed distinct sound patterns in terms of both segmental and prosodic characteristics. We investigate the differences between two groups varying in native language (Gujarati, Tamil) to evaluate to what extent Indian English (IE) accents are based on a single target phonological-phonetic…

  10. The Western Civilization Videodisc (Second Edition), CD-ROM, and Master Guide [Multimedia.

    ERIC Educational Resources Information Center

    1996

    This resource represents a virtual library of still and moving images, documents, maps, sound clips and text which make up the history of Western Civilization from prehistoric times to the early 1990s. The interdisciplinary range of materials included is compatible with standard textbooks in middle and high school social science, social studies,…

  11. 2014-2015 Puget Sound Regional Travel Study | Transportation Secure Data

    Science.gov Websites

    A college-population travel survey was conducted in fall 2014. In spring 2015, a second household data collection gathered a longitudinal sample from households that completed the 2014 survey. Survey records include a total of 794 participants. More information is available from the PSRC Household Travel Survey.

  12. Word Recognition Error Analysis: Comparing Isolated Word List and Oral Passage Reading

    ERIC Educational Resources Information Center

    Flynn, Lindsay J.; Hosp, John L.; Hosp, Michelle K.; Robbins, Kelly P.

    2011-01-01

    The purpose of this study was to determine the relation between word recognition errors made at a letter-sound pattern level on a word list and on a curriculum-based measurement oral reading fluency measure (CBM-ORF) for typical and struggling elementary readers. The participants were second, third, and fourth grade typical and struggling readers…

  13. Post-Copenhagen: the 'new' math, legal 'additionality' and climate warming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrey, Steven

    Control of carbon emissions to the atmosphere is the environmental issue of this decade - perhaps of this entire generation. Its importance has been equated to the survival of the planet. It may all come down to the novel legal concept that sounds like second-grade math: ''additionality.'' However, it is the ''new'' math. (author)

  14. Developing a Multimedia Package for University Teaching and Learning--Lessons Learnt

    ERIC Educational Resources Information Center

    Maheshwari, B.

    2011-01-01

    A team of staff at the University of Western Sydney (UWS) were involved in developing a multimedia package, called Sustainable Water Use in Agriculture (SWAG), to assist the first and second year students to learn about the use, management and conservation of water in agriculture. A range of media techniques including text, sound, diagrams,…

  15. Nonsymbolic, Approximate Arithmetic in Children: Abstract Addition Prior to Instruction

    ERIC Educational Resources Information Center

    Barth, Hilary; Beckmann, Lacey; Spelke, Elizabeth S.

    2008-01-01

    Do children draw upon abstract representations of number when they perform approximate arithmetic operations? In this study, kindergarten children viewed animations suggesting addition of a sequence of sounds to an array of dots, and they compared the sum to a second dot array that differed from the sum by 1 of 3 ratios. Children performed this…

  16. A Comparative Study of Video Presentation Modes in Relation to L2 Listening Success

    ERIC Educational Resources Information Center

    Li, Chen-Hong

    2016-01-01

    Video comprehension involves interpreting both sounds and images. Research has shown that processing an aural text with relevant pictorial information effectively enhances second/foreign language (L2) listening comprehension. A hypothesis underlying this mixed-methods study is that a visual-only silent film used as an advance organiser to activate…

  17. Lexical Errors in Second Language Scientific Writing: Some Conceptual Implications

    ERIC Educational Resources Information Center

    Carrió Pastor, María Luisa; Mestre-Mestre, Eva María

    2014-01-01

    Nowadays, scientific writers are required not only a thorough knowledge of their subject field, but also a sound command of English as a lingua franca. In this paper, the lexical errors produced in scientific texts written in English by non-native researchers are identified to propose a classification of the categories they contain. This study…

  18. Research study: STS-1 Orbiter Descent

    NASA Technical Reports Server (NTRS)

    Hickey, J. S.

    1981-01-01

    The conversion of STS-1 orbiter descent data from AVE-SESAME contact programs to the REEDA system and the reduction of raw radiosonde data is summarized. A first difference program, contact data program, plot data program, and 30 second data program were developed. Six radiosonde soundings were taken. An example of the outputs of each of the programs is presented.

  19. A Comparative Psychobiography of Hillary Clinton and Condoleezza Rice

    ERIC Educational Resources Information Center

    Fitch, Trey; Marshall, Jennifer

    2008-01-01

    The purpose of this study was to apply psychobiography to the lives of Hillary Clinton and Condoleezza Rice. Psychobiography can be applied as a method of teaching personality theory and it can also be used as a research method. Most political personas are crafted through 20-second sound bites from the radio, Internet, and television. However, a…

  20. The Keyword Method of Vocabulary Acquisition: An Experimental Evaluation.

    ERIC Educational Resources Information Center

    Griffith, Douglas

    The keyword method of vocabulary acquisition is a two-step mnemonic technique for learning vocabulary terms. The first step, the acoustic link, generates a keyword based on the sound of the foreign word. The second step, the imagery link, ties the keyword to the meaning of the item to be learned, via an interactive visual image or other…
