Sample records for audio frequency

  1. Ultrasonic speech translator and communications system

    DOEpatents

    Akerman, M.A.; Ayers, C.W.; Haynes, H.D.

    1996-07-23

    A wireless communication system undetectable by radio frequency methods for converting audio signals, including human voice, to electronic signals in the ultrasonic frequency range, transmitting the ultrasonic signal by way of acoustical pressure waves across a carrier medium, including gases, liquids, or solids, and reconverting the ultrasonic acoustical pressure waves back to the original audio signal. The ultrasonic speech translator and communication system includes an ultrasonic transmitting device and an ultrasonic receiving device. The ultrasonic transmitting device accepts as input an audio signal such as human voice input from a microphone or tape deck. The ultrasonic transmitting device frequency modulates an ultrasonic carrier signal with the audio signal producing a frequency modulated ultrasonic carrier signal, which is transmitted via acoustical pressure waves across a carrier medium such as gases, liquids or solids. The ultrasonic receiving device converts the frequency modulated ultrasonic acoustical pressure waves to a frequency modulated electronic signal, demodulates the audio signal from the ultrasonic carrier signal, and conditions the demodulated audio signal to reproduce the original audio signal at its output. 7 figs.
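
    The FM scheme the abstract describes (modulate audio onto an ultrasonic carrier, then recover it by frequency demodulation) can be sketched numerically. The 40 kHz carrier, 3 kHz deviation, and sample rate below are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform (even-length input assumed)."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = h[len(x) // 2] = 1.0
    h[1:len(x) // 2] = 2.0
    return np.fft.ifft(X * h)

def fm_modulate(audio, fs, fc=40_000.0, deviation=3_000.0):
    """FM-modulate audio onto an ultrasonic carrier.

    fc and deviation are illustrative choices, not the patent's values.
    """
    t = np.arange(len(audio)) / fs
    # Instantaneous phase = carrier phase + 2*pi*deviation * integral of audio.
    phase = 2 * np.pi * fc * t + 2 * np.pi * deviation * np.cumsum(audio) / fs
    return np.cos(phase)

def fm_demodulate(sig, fs, fc=40_000.0, deviation=3_000.0):
    """Recover the audio as the deviation of the instantaneous frequency."""
    inst_phase = np.unwrap(np.angle(analytic_signal(sig)))
    inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)
    return (inst_freq - fc) / deviation
```

    The sample rate must exceed twice the highest carrier sideband; away from the edges, the demodulated waveform tracks the original audio closely.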

  2. Ultrasonic speech translator and communications system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akerman, M.A.; Ayers, C.W.; Haynes, H.D.

    1996-07-23

    A wireless communication system undetectable by radio frequency methods for converting audio signals, including human voice, to electronic signals in the ultrasonic frequency range, transmitting the ultrasonic signal by way of acoustical pressure waves across a carrier medium, including gases, liquids, or solids, and reconverting the ultrasonic acoustical pressure waves back to the original audio signal. The ultrasonic speech translator and communication system includes an ultrasonic transmitting device and an ultrasonic receiving device. The ultrasonic transmitting device accepts as input an audio signal such as human voice input from a microphone or tape deck. The ultrasonic transmitting device frequency modulates an ultrasonic carrier signal with the audio signal producing a frequency modulated ultrasonic carrier signal, which is transmitted via acoustical pressure waves across a carrier medium such as gases, liquids or solids. The ultrasonic receiving device converts the frequency modulated ultrasonic acoustical pressure waves to a frequency modulated electronic signal, demodulates the audio signal from the ultrasonic carrier signal, and conditions the demodulated audio signal to reproduce the original audio signal at its output. 7 figs.

  3. Ultrasonic speech translator and communications system

    DOEpatents

    Akerman, M. Alfred; Ayers, Curtis W.; Haynes, Howard D.

    1996-01-01

    A wireless communication system undetectable by radio frequency methods for converting audio signals, including human voice, to electronic signals in the ultrasonic frequency range, transmitting the ultrasonic signal by way of acoustical pressure waves across a carrier medium, including gases, liquids, or solids, and reconverting the ultrasonic acoustical pressure waves back to the original audio signal. The ultrasonic speech translator and communication system (20) includes an ultrasonic transmitting device (100) and an ultrasonic receiving device (200). The ultrasonic transmitting device (100) accepts as input (115) an audio signal such as human voice input from a microphone (114) or tape deck. The ultrasonic transmitting device (100) frequency modulates an ultrasonic carrier signal with the audio signal producing a frequency modulated ultrasonic carrier signal, which is transmitted via acoustical pressure waves across a carrier medium such as gases, liquids or solids. The ultrasonic receiving device (200) converts the frequency modulated ultrasonic acoustical pressure waves to a frequency modulated electronic signal, demodulates the audio signal from the ultrasonic carrier signal, and conditions the demodulated audio signal to reproduce the original audio signal at its output (250).

  4. 47 CFR 95.637 - Modulation standards.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... frequency deviation of plus or minus 2.5 kHz, and the audio frequency response must not exceed 3.125 kHz..., must automatically prevent a greater than normal audio level from causing overmodulation. The transmitter also must include audio frequency low pass filtering, unless it complies with the applicable...

  5. 47 CFR 95.637 - Modulation standards.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... frequency deviation of plus or minus 2.5 kHz, and the audio frequency response must not exceed 3.125 kHz..., must automatically prevent a greater than normal audio level from causing overmodulation. The transmitter also must include audio frequency low pass filtering, unless it complies with the applicable...

  6. 47 CFR 95.637 - Modulation standards.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... frequency deviation of plus or minus 2.5 kHz, and the audio frequency response must not exceed 3.125 kHz..., must automatically prevent a greater than normal audio level from causing overmodulation. The transmitter also must include audio frequency low pass filtering, unless it complies with the applicable...

  7. 47 CFR 95.637 - Modulation standards.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... frequency deviation of plus or minus 2.5 kHz, and the audio frequency response must not exceed 3.125 kHz..., must automatically prevent a greater than normal audio level from causing overmodulation. The transmitter also must include audio frequency low pass filtering, unless it complies with the applicable...

  8. Audio Frequency Analysis in Mobile Phones

    ERIC Educational Resources Information Center

    Aguilar, Horacio Munguía

    2016-01-01

    A new experiment using mobile phones is proposed in which the phone's audio frequency response is analyzed using the audio port to input an external signal and obtain a measurable output. The experiment shows how the limited audio bandwidth used in mobile telephony is the main cause of the poor speech quality in this service. A brief discussion is…

  9. 47 CFR 73.756 - System specifications for double-sideband (DBS) modulated emissions in the HF broadcasting service.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... Nominal carrier frequencies shall be integral multiples of 5 kHz. (2) Audio-frequency band. The upper limit of the audio-frequency band (at −3 dB) of the transmitter shall not exceed 4.5 kHz and the lower... processing. If audio-frequency signal processing is used, the dynamic range of the modulating signal shall be...

  10. 47 CFR 73.756 - System specifications for double-sideband (DBS) modulated emissions in the HF broadcasting service.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... Nominal carrier frequencies shall be integral multiples of 5 kHz. (2) Audio-frequency band. The upper limit of the audio-frequency band (at −3 dB) of the transmitter shall not exceed 4.5 kHz and the lower... processing. If audio-frequency signal processing is used, the dynamic range of the modulating signal shall be...

  11. 47 CFR 73.756 - System specifications for double-sideband (DBS) modulated emissions in the HF broadcasting service.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    .... Nominal carrier frequencies shall be integral multiples of 5 kHz. (2) Audio-frequency band. The upper limit of the audio-frequency band (at −3 dB) of the transmitter shall not exceed 4.5 kHz and the lower... processing. If audio-frequency signal processing is used, the dynamic range of the modulating signal shall be...

  12. 47 CFR 73.756 - System specifications for double-sideband (DBS) modulated emissions in the HF broadcasting service.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... Nominal carrier frequencies shall be integral multiples of 5 kHz. (2) Audio-frequency band. The upper limit of the audio-frequency band (at −3 dB) of the transmitter shall not exceed 4.5 kHz and the lower... processing. If audio-frequency signal processing is used, the dynamic range of the modulating signal shall be...

  13. 47 CFR 73.757 - System specifications for single-sideband (SSB) modulated emissions in the HF broadcasting service.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... emission is one giving the same audio-frequency signal-to-noise ratio at the receiver output as the... is equally valid for both DSB and SSB emissions. (3) Audio-frequency band. The upper limit of the audio-frequency band (at −3 dB) of the transmitter shall not exceed 4.5 kHz with a further slope of...

  14. 47 CFR 73.757 - System specifications for single-sideband (SSB) modulated emissions in the HF broadcasting service.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... emission is one giving the same audio-frequency signal-to-noise ratio at the receiver output as the... is equally valid for both DSB and SSB emissions. (3) Audio-frequency band. The upper limit of the audio-frequency band (at −3 dB) of the transmitter shall not exceed 4.5 kHz with a further slope of...

  15. 47 CFR 73.757 - System specifications for single-sideband (SSB) modulated emissions in the HF broadcasting service.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... emission is one giving the same audio-frequency signal-to-noise ratio at the receiver output as the... is equally valid for both DSB and SSB emissions. (3) Audio-frequency band. The upper limit of the audio-frequency band (at −3 dB) of the transmitter shall not exceed 4.5 kHz with a further slope of...

  16. 47 CFR 80.74 - Public coast station facilities for a telephony busy signal.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...), must consist of the transmission of a single audio frequency regularly interrupted, as follows: (a) Audio frequency. Not less than 100 nor more than 1100 Hertz, provided the frequency used for this...

  17. 47 CFR 80.74 - Public coast station facilities for a telephony busy signal.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...), must consist of the transmission of a single audio frequency regularly interrupted, as follows: (a) Audio frequency. Not less than 100 nor more than 1100 Hertz, provided the frequency used for this...

  18. 47 CFR 80.74 - Public coast station facilities for a telephony busy signal.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...), must consist of the transmission of a single audio frequency regularly interrupted, as follows: (a) Audio frequency. Not less than 100 nor more than 1100 Hertz, provided the frequency used for this...

  19. 47 CFR 80.74 - Public coast station facilities for a telephony busy signal.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...), must consist of the transmission of a single audio frequency regularly interrupted, as follows: (a) Audio frequency. Not less than 100 nor more than 1100 Hertz, provided the frequency used for this...

  20. 47 CFR 80.74 - Public coast station facilities for a telephony busy signal.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...), must consist of the transmission of a single audio frequency regularly interrupted, as follows: (a) Audio frequency. Not less than 100 nor more than 1100 Hertz, provided the frequency used for this...

  1. High-Resolution Audio with Inaudible High-Frequency Components Induces a Relaxed Attentional State without Conscious Awareness.

    PubMed

    Kuribayashi, Ryuma; Nittono, Hiroshi

    2017-01-01

    High-resolution audio has a higher sampling frequency and a greater bit depth than conventional low-resolution audio such as compact disks. The higher sampling frequency enables inaudible sound components (above 20 kHz) that are cut off in low-resolution audio to be reproduced. Previous studies of high-resolution audio have mainly focused on the effect of such high-frequency components. It is known that alpha-band power in a human electroencephalogram (EEG) is larger when the inaudible high-frequency components are present than when they are absent. Traditionally, alpha-band EEG activity has been associated with arousal level. However, no previous studies have explored whether sound sources with high-frequency components affect the arousal level of listeners. The present study examined this possibility by having 22 participants listen to two types of a 400-s musical excerpt of French Suite No. 5 by J. S. Bach (on cembalo, 24-bit quantization, 192 kHz A/D sampling), with or without inaudible high-frequency components, while performing a visual vigilance task. High-alpha (10.5-13 Hz) and low-beta (13-20 Hz) EEG powers were larger for the excerpt with high-frequency components than for the excerpt without them. Reaction times and error rates did not change during the task and were not different between the excerpts. The amplitude of the P3 component elicited by target stimuli in the vigilance task increased in the second half of the listening period for the excerpt with high-frequency components, whereas no such P3 amplitude change was observed for the other excerpt without them. The participants did not distinguish between these excerpts in terms of sound quality. Only a subjective rating of inactive pleasantness after listening was higher for the excerpt with high-frequency components than for the other excerpt. 
The present study shows that high-resolution audio that retains high-frequency components has an advantage over similar and indistinguishable digital sound sources in which such components are artificially cut off, suggesting that high-resolution audio with inaudible high-frequency components induces a relaxed attentional state without conscious awareness.

  2. Acoustic signal recovery by thermal demodulation

    NASA Astrophysics Data System (ADS)

    Boullosa, R. R.; Santillán, Arturo O.

    2006-10-01

    One operating mode of recently developed thermoacoustic transducers is as an audio speaker that uses an input superimposed on a direct current; as a result, the audio signal occurs at the same frequency as the input signal. To extend the potential applications of these kinds of sources, the authors propose an alternative driving mode in which a simple thermoacoustic device, consisting of a metal film over a substrate and a heat sink, is excited with a high frequency sinusoid that is amplitude modulated by a lower frequency signal. They show that the modulating signal is recovered in the radiated waves due to a mechanism that is inherent to this type of thermoacoustic process. If the frequency of the carrier is higher than 30 kHz and any modulating signal (the one of interest) is in the audio frequency range, only this signal will be heard. Thus, the thermoacoustic device operates as an audio-band, self-demodulating speaker.
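
    The self-demodulation the authors describe can be illustrated numerically under one common assumption: that the film's heating follows the square of the drive (Joule heating goes as the drive squared), so squaring an amplitude-modulated carrier regenerates the modulating frequency at baseband. All numeric values below are illustrative, not taken from the paper.

```python
import numpy as np

fs = 1_000_000                 # simulation sample rate, Hz (illustrative)
f_c, f_m = 40_000, 1_000       # ultrasonic carrier and audio modulation, Hz
t = np.arange(int(0.01 * fs)) / fs

# Amplitude-modulated drive: it has no spectral energy at the audio rate f_m.
drive = (1 + 0.5 * np.cos(2 * np.pi * f_m * t)) * np.cos(2 * np.pi * f_c * t)

# Quadratic nonlinearity (heating ~ drive^2) demodulates the envelope.
heating = drive ** 2

freqs = np.fft.rfftfreq(len(t), 1 / fs)
idx = np.argmin(np.abs(freqs - f_m))
drive_at_fm = np.abs(np.fft.rfft(drive))[idx]
heat_at_fm = np.abs(np.fft.rfft(heating))[idx]
# heat_at_fm is large while drive_at_fm is essentially zero: the audio tone
# appears only after the squaring, i.e. self-demodulation.
```

    The squaring also produces a harmonic at 2·f_m, so this simple model predicts some distortion alongside the recovered audio.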

  3. Modified DCTNet for audio signals classification

    NASA Astrophysics Data System (ADS)

    Xian, Yin; Pu, Yunchen; Gan, Zhe; Lu, Liang; Thompson, Andrew

    2016-10-01

    In this paper, we investigate DCTNet for audio signal classification. Its output feature is related to Cohen's class of time-frequency distributions. We introduce the use of adaptive DCTNet (A-DCTNet) for audio signals feature extraction. The A-DCTNet applies the idea of constant-Q transform, with its center frequencies of filterbanks geometrically spaced. The A-DCTNet is adaptive to different acoustic scales, and it can better capture low frequency acoustic information that is sensitive to human audio perception than features such as Mel-frequency spectral coefficients (MFSC). We use features extracted by the A-DCTNet as input for classifiers. Experimental results show that the A-DCTNet and Recurrent Neural Networks (RNN) achieve state-of-the-art performance in bird song classification rate, and improve artist identification accuracy in music data. They demonstrate A-DCTNet's applicability to signal processing problems.
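
    The geometric spacing that makes a transform "constant-Q" is easy to state concretely; the starting frequency and bin counts below are illustrative choices, not the paper's parameters.

```python
import numpy as np

def constant_q_centers(f_min=55.0, bins_per_octave=12, n_bins=48):
    """Center frequencies spaced geometrically, so the ratio between
    neighbors (and hence Q = center / bandwidth) is the same for every
    filter in the bank."""
    return f_min * 2.0 ** (np.arange(n_bins) / bins_per_octave)
```

    Adjacent centers always differ by the same factor (here 2**(1/12) ≈ 1.059), which is what gives fine frequency resolution at low frequencies and coarse resolution at high ones.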

  4. TECHNICAL NOTE: Portable audio electronics for impedance-based measurements in microfluidics

    NASA Astrophysics Data System (ADS)

    Wood, Paul; Sinton, David

    2010-08-01

    We demonstrate the use of audio electronics-based signals to perform on-chip electrochemical measurements. Cell phones and portable music players are examples of consumer electronics that are easily operated and are ubiquitous worldwide. Audio output (play) and input (record) signals are voltage based and contain frequency and amplitude information. A cell phone, laptop soundcard and two compact audio players are compared with respect to frequency response; the laptop soundcard provides the most uniform frequency response, while the cell phone performance is found to be insufficient. The audio signals in the common portable music players and laptop soundcard operate in the range of 20 Hz to 20 kHz and are found to be applicable, as voltage input and output signals, to impedance-based electrochemical measurements in microfluidic systems. Validated impedance-based measurements of concentration (0.1-50 mM), flow rate (2-120 µL min-1) and particle detection (32 µm diameter) are demonstrated. The prevailing, lossless, wave audio file format is found to be suitable for data transmission to and from external sources, such as a centralized lab, and the cost of all hardware (in addition to audio devices) is ~10 USD. The utility demonstrated here, in combination with the ubiquitous nature of portable audio electronics, presents new opportunities for impedance-based measurements in portable microfluidic systems.
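
    The core idea of using audio output and input voltages for impedance measurement can be sketched with a synthetic voltage divider; the tone frequency, reference resistor, and simulated waveforms below are illustrative assumptions, not the authors' circuit.

```python
import numpy as np

def estimate_amplitude(signal, f, fs):
    """Lock-in style amplitude estimate of the component at frequency f."""
    t = np.arange(len(signal)) / fs
    i = 2 * np.mean(signal * np.cos(2 * np.pi * f * t))
    q = 2 * np.mean(signal * np.sin(2 * np.pi * f * t))
    return np.hypot(i, q)

# Voltage divider: V_out = V_in * Z / (Z + R_ref) for a purely resistive Z.
fs, f = 44_100, 1_000            # audio-device sample rate and test tone
R_ref, Z_true = 10_000.0, 4_700.0
t = np.arange(fs) / fs           # one second of signal
v_in = np.sin(2 * np.pi * f * t)            # tone played out the audio port
v_out = v_in * Z_true / (Z_true + R_ref)    # divider output at the input port

ratio = estimate_amplitude(v_out, f, fs) / estimate_amplitude(v_in, f, fs)
Z_est = R_ref * ratio / (1 - ratio)         # invert the divider equation
```

    With a reactive (complex) load, the same in-phase/quadrature estimate also yields the phase, which is what makes full impedance spectroscopy over the 20 Hz to 20 kHz audio band possible.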

  5. Worldwide survey of direct-to-listener digital audio delivery systems development since WARC-1992

    NASA Technical Reports Server (NTRS)

    Messer, Dion D.

    1993-01-01

    Each country was allocated frequency band(s) for direct-to-listener digital audio broadcasting at WARC-92. These allocations were near 1500, 2300, and 2600 MHz. In addition, some countries are encouraging the development of digital audio broadcasting services for terrestrial delivery only in the VHF bands (at frequencies from roughly 50 to 300 MHz) and in the medium-wave broadcasting band (AM band) (from roughly 0.5 to 1.7 MHz). Development activity has since increased explosively. Current development as of February 1993, as known to the author, is summarized. The information given includes the following characteristics, as appropriate, for each planned system: coverage areas, audio quality, number of audio channels, delivery via satellite, terrestrial, or both, carrier frequency bands, modulation methods, source coding, and channel coding. Most proponents claim that they will be operational in 3 or 4 years.

  6. Flow control using audio tones in resonant microfluidic networks: towards cell-phone controlled lab-on-a-chip devices.

    PubMed

    Phillips, Reid H; Jain, Rahil; Browning, Yoni; Shah, Rachana; Kauffman, Peter; Dinh, Doan; Lutz, Barry R

    2016-08-16

    Fluid control remains a challenge in development of portable lab-on-a-chip devices. Here, we show that microfluidic networks driven by single-frequency audio tones create resonant oscillating flow that is predicted by equivalent electrical circuit models. We fabricated microfluidic devices with fluidic resistors (R), inductors (L), and capacitors (C) to create RLC networks with band-pass resonance in the audible frequency range available on portable audio devices. Microfluidic devices were fabricated from laser-cut adhesive plastic, and a "buzzer" was glued to a diaphragm (capacitor) to integrate the actuator on the device. The AC flowrate magnitude was measured by imaging oscillation of bead tracers to allow direct comparison to the RLC circuit model across the frequency range. We present a systematic build-up from single-channel systems to multi-channel (3-channel) networks, and show that RLC circuit models predict complex frequency-dependent interactions within multi-channel networks. Finally, we show that adding flow rectifying valves to the network creates pumps that can be driven by amplified and non-amplified audio tones from common audio devices (iPod and iPhone). This work shows that RLC circuit models predict resonant flow responses in multi-channel fluidic networks as a step towards microfluidic devices controlled by audio tones.
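
    The band-pass resonance the equivalent-circuit model predicts follows from the series-RLC current response; the component values below are illustrative only, chosen so the resonance lands in the audible band, and are not the paper's fluidic values.

```python
import numpy as np

def rlc_current_gain(f, R, L, C):
    """|I/V| of a series RLC network: current (here, oscillating flow) peaks
    when the inductive and capacitive reactances cancel at resonance."""
    w = 2 * np.pi * f
    return 1.0 / np.abs(R + 1j * (w * L - 1.0 / (w * C)))

R, L, C = 1.0, 1e-3, 1e-6                  # illustrative equivalent values
f0 = 1.0 / (2 * np.pi * np.sqrt(L * C))    # resonant frequency, ~5 kHz
```

    Driving such a network with a single-frequency audio tone near f0 maximizes the oscillating flow, which is the frequency-selective behavior the devices exploit to address individual channels with different tones.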

  7. Comparisons of Audio and Audiovisual Measures of Stuttering Frequency and Severity in Preschool-Age Children

    ERIC Educational Resources Information Center

    Rousseau, Isabelle; Onslow, Mark; Packman, Ann; Jones, Mark

    2008-01-01

    Purpose: To determine whether measures of stuttering frequency and measures of overall stuttering severity in preschoolers differ when made from audio-only recordings compared with audiovisual recordings. Method: Four blinded speech-language pathologists who had extensive experience with preschoolers who stutter measured stuttering frequency and…

  8. Perceptual Audio Hashing Functions

    NASA Astrophysics Data System (ADS)

    Özer, Hamza; Sankur, Bülent; Memon, Nasir; Anarım, Emin

    2005-12-01

    Perceptual hash functions provide a tool for fast and reliable identification of content. We present new audio hash functions based on summarization of the time-frequency spectral characteristics of an audio document. The proposed hash functions are based on the periodicity series of the fundamental frequency and on singular-value description of the cepstral frequencies. They are found, on one hand, to perform very satisfactorily in identification and verification tests, and on the other hand, to be very resilient to a large variety of attacks. Moreover, we address the issue of security of hashes and propose a keying technique, and thereby a key-dependent hash function.

  9. Characteristics of audio and sub-audio telluric signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Telford, W.M.

    1977-06-01

    Telluric current measurements in the audio and sub-audio frequency range, made in various parts of Canada and South America over the past four years, indicate that the signal amplitude is relatively uniform over 6 to 8 midday hours (LMT) except in Chile and that the signal anisotropy is reasonably constant in azimuth.

  10. Harmonic Characteristics of Rectifier Substations and Their Impact on Audio Frequency Track Circuits

    DOT National Transportation Integrated Search

    1982-05-01

    This report describes the basic operation of substation rectifier equipment and the modes of possible interference with audio frequency track circuits used for train detection, cab signalling, and vehicle speed control. It also includes methods of es...

  11. 76 FR 57923 - Establishment of Rules and Policies for the Satellite Digital Audio Radio Service in the 2310...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-19

    ... Rules and Policies for the Satellite Digital Audio Radio Service in the 2310-2360 MHz Frequency Band... Digital Audio Radio Service (SDARS) Second Report and Order. The information collection requirements were... of these rule sections. See Satellite Digital Audio Radio Service (SDARS) Second Report and Order...

  12. 47 CFR Figure 2 to Subpart N of... - Typical Audio Wave

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Typical Audio Wave 2 Figure 2 to Subpart N of Part 2 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL FREQUENCY ALLOCATIONS AND RADIO... Audio Wave EC03JN91.006 ...

  13. Versatile Experimental Kevlar Array Hydrophones: USRD Type H78

    DTIC Science & Technology

    1979-04-05

    the design of a small deep-submergence noise-measuring hydrophone for the infrasonic and low-audio frequency range, three hydrophone...Henriquez and L.E. Ivey, "Standard Hydrophone for the Infrasonic and Audio-Frequency Range at Hydrostatic Pressure to 10,000 psig," J. Acoust. Soc. Am...Piezoelectric Ceramic Hydrophone for Infrasonic and Audio Frequencies USRD Type H48," NRL Report 7260, 15 Mar. 1971. 9. S.W. Meeks and R.W. Timme, "Effects

  14. Detection and characterization of lightning-based sources using continuous wavelet transform: application to audio-magnetotellurics

    NASA Astrophysics Data System (ADS)

    Larnier, H.; Sailhac, P.; Chambodut, A.

    2018-01-01

    Atmospheric electromagnetic waves created by global lightning activity contain information about electrical processes of the inner and the outer Earth. Large signal-to-noise ratio events are particularly interesting because they convey information about electromagnetic properties along their path. We introduce a new methodology to automatically detect and characterize lightning-based waves using a time-frequency decomposition obtained through the application of continuous wavelet transform. We focus specifically on three types of sources, namely, atmospherics, slow tails and whistlers, that cover the frequency range 10 Hz to 10 kHz. Each wave has distinguishable characteristics in the time-frequency domain due to source shape and dispersion processes. Our methodology allows automatic detection of each type of event in the time-frequency decomposition thanks to their specific signature. Horizontal polarization attributes are also recovered in the time-frequency domain. This procedure is first applied to synthetic extremely low frequency time-series with different signal-to-noise ratios to test for robustness. We then apply it to real data: three stations of audio-magnetotelluric data acquired in Guadeloupe, an overseas French territory. Most of the analysed atmospherics and slow tails display linear polarization, whereas analysed whistlers are elliptically polarized. The diversity of lightning activity is finally analysed in an audio-magnetotelluric data processing framework, as used in subsurface prospecting, through estimation of the impedance response functions. We show that audio-magnetotelluric processing results depend mainly on the frequency content of electromagnetic waves observed in processed time-series, with an emphasis on the difference between morning and afternoon acquisition.
Our new methodology based on the time-frequency signature of lightning-induced electromagnetic waves allows automatic detection and characterization of events in audio-magnetotelluric time-series, providing the means to assess quality of response functions obtained through processing.

  15. Tape recorder effects on jitter and shimmer extraction.

    PubMed

    Doherty, E T; Shipp, T

    1988-09-01

    To test for possible contamination of acoustic analyses by record/reproduce systems, five sine waves of fixed frequency and amplitude were sampled directly by a computer and recorded simultaneously on four different tape formats (audio and FM reel-to-reel, audio cassette, and video cassette using pulse code modulation). Recordings were digitized on playback and, together with the direct samples, analyzed for fundamental frequency, amplitude, jitter, and shimmer using a zero-crossing interpolation scheme. Distortion introduced by any of the data acquisition systems is negligible when extracting average fundamental frequency or average amplitude. For jitter and shimmer estimation, direct sampling or the use of a video cassette recorder with pulse code modulation is clearly superior. FM recorders, although not quite as accurate, provide a satisfactory alternative to those methods. Audio reel-to-reel recordings are marginally adequate for jitter analysis, whereas audio cassette recorders can introduce jitter and shimmer values greater than some reported values for normal talkers.
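
    Jitter and shimmer are cycle-to-cycle variation measures. One common local definition (not necessarily the exact formula the authors used) can be sketched as:

```python
import numpy as np

def jitter_shimmer(periods, amplitudes):
    """Local jitter and shimmer: mean absolute cycle-to-cycle change,
    normalized by the mean period and mean amplitude respectively."""
    periods = np.asarray(periods, float)
    amplitudes = np.asarray(amplitudes, float)
    jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)
    shimmer = np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)
    return jitter, shimmer
```

    A perfectly regular tone gives zero for both; tape-induced wow and flutter perturb the cycle periods and amplitudes, which is exactly what inflates these measures on cassette recordings.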

  16. Using an Acoustic System to Estimate the Timing and Magnitude of Ebullition Release from Wetland Ecosystems

    NASA Astrophysics Data System (ADS)

    Varner, R. K.; Palace, M. W.; Lennartz, J. M.; Crill, P. M.; Wik, M.; Amante, J.; Dorich, C.; Harden, J. W.; Ewing, S. A.; Turetsky, M. R.

    2011-12-01

    Knowledge of the magnitude and frequency of methane release through ebullition (bubbling) in water-saturated ecosystems such as bogs, fens and lakes is important to both the atmospheric and ecosystems science community. The controls on episodic bubble releases must be identified in order to understand the response of these ecosystems to future climate forcing. We have developed and field tested an inexpensive array of sampling/monitoring instruments to identify the frequency and magnitude of bubbling events, which allows us to correlate bubble data with potential drivers such as changes in hydrostatic pressure, wind and temperature. A prototype ebullition sensor has been developed and field tested at Sallie's Fen in New Hampshire, USA. The instrument consists of a nested, inverted funnel design with a hydrophone that detects bubbles rising through the peat as they hit the microphone. The design also offers a way to sample the gases collected from the funnels to determine the concentration of CH4. Laboratory calibration of the instrument resulted in an equation that relates the frequency of bubbles hitting the microphone to bubble volume. After calibration in the laboratory, the prototype was deployed in Sallie's Fen in late August 2010. An additional four instruments were deployed the following month. Audio data were recorded continuously using a digital audio recorder attached to two ebullition sensors. Audio was recorded as an mp3 compressed audio file at a sample rate of 160 kbits/sec. Using this format and stereo input, allowing two sensors to be recorded with each device, we were able to record continuously for 20 days. Audio was converted to uncompressed audio files for speed in computation. Audio data were processed using MATLAB, searching in 0.5 second incremental sections for specific fundamental frequencies that are related to our calibrated audio events.
Time, fundamental frequency, and estimated bubble size were output to a text file for analysis in statistical software. In addition, each event was cut out of the longer audio file and placed in a directory with number of ebullition event, sensor number, and time, allowing for manual interpretation of the ebullition event. After successful laboratory and local field testing, our instruments were deployed in summer 2011 at a temperate fen (Sallie's Fen, NH, USA), a subarctic mire and lake (Stordalen, Abisko, Sweden) and two locations in subarctic Alaska (APEX Research Site, Fairbanks, AK and Innoko National Wildlife Refuge). Ebullition occurred at regular intervals. Our results indicate that this is a useful method for monitoring CH4 ebullitive flux at high temporal frequencies.

  17. Audio-vocal responses of vocal fundamental frequency and formant during sustained vowel vocalizations in different noises.

    PubMed

    Lee, Shao-Hsuan; Hsiao, Tzu-Yu; Lee, Guo-She

    2015-06-01

    Sustained vocalizations of vowels [a], [i], and syllable [mə] were collected in twenty normal-hearing individuals. On vocalizations, five conditions of different audio-vocal feedback were introduced separately to the speakers including no masking, wearing supra-aural headphones only, speech-noise masking, high-pass noise masking, and broad-band-noise masking. Power spectral analysis of vocal fundamental frequency (F0) was used to evaluate the modulations of F0 and linear-predictive-coding was used to acquire first two formants. The results showed that while the formant frequencies were not significantly shifted, low-frequency modulations (<3 Hz) of F0 significantly increased with reduced audio-vocal feedback across speech sounds and were significantly correlated with auditory awareness of speakers' own voices. For sustained speech production, the motor speech controls on F0 may depend on a feedback mechanism while articulation should rely more on a feedforward mechanism. Power spectral analysis of F0 might be applied to evaluate audio-vocal control for various hearing and neurological disorders in the future. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Audio fingerprint extraction for content identification

    NASA Astrophysics Data System (ADS)

    Shiu, Yu; Yeh, Chia-Hung; Kuo, C. C. J.

    2003-11-01

In this work, we present an audio content identification system that identifies unknown audio material by comparing its fingerprint with fingerprints extracted off-line and saved in a music database. We describe in detail the procedure for extracting audio fingerprints and demonstrate that they are robust to noise and content-preserving manipulations. The main feature in the proposed system is the zero-crossing rate extracted with an octave-band filter bank. The zero-crossing rate describes the dominant frequency in each subband at very low computational cost. The audio fingerprint is small and can be stored efficiently alongside the compressed files in the database. It is also robust to many modifications such as tempo change and time-alignment distortion. In addition, the octave-band filter bank enhances robustness to distortion, especially distortions localized in particular frequency regions.
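The per-subband zero-crossing-rate feature can be sketched as below. Band edges, the number of bands, and the brick-wall FFT filtering are illustrative assumptions; the paper uses a proper octave-band filter bank.

```python
import numpy as np

def octave_band_zcr(x, fs, low=125.0, n_bands=5):
    """Zero-crossing rate per octave band (brick-wall FFT filters for simplicity).

    For a band dominated by a single tone, the ZCR approximates that tone's
    frequency, since a sinusoid at f Hz crosses zero 2f times per second.
    Empty bands contain only numerical noise, so only energetic bands are
    meaningful as fingerprint features.
    """
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    rates = []
    lo = low
    for _ in range(n_bands):
        hi = 2 * lo                                   # octave: upper edge = 2x lower
        Xb = np.where((freqs >= lo) & (freqs < hi), X, 0)
        xb = np.fft.irfft(Xb, n=len(x))
        crossings = np.count_nonzero(np.diff(np.signbit(xb)))
        rates.append(crossings * fs / (2 * len(x)))   # ~dominant frequency in band
        lo = hi
    return np.array(rates)
```

With `low=125`, the bands are 125-250, 250-500, 500-1000, 1000-2000, and 2000-4000 Hz, so a 700 Hz tone registers its frequency in the third band.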

  19. 47 CFR 11.32 - EAS Encoder.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... used for audio messages and at least one input port used for data messages. (3) Outputs. The encoder shall have at least one audio output port and at least one data output port. (4) Calibration. EAS... that complies with the following: (i) Tone Frequencies. The audio tones shall have fundamental...

  20. 47 CFR 11.32 - EAS Encoder.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... used for audio messages and at least one input port used for data messages. (3) Outputs. The encoder shall have at least one audio output port and at least one data output port. (4) Calibration. EAS... that complies with the following: (i) Tone Frequencies. The audio tones shall have fundamental...

  1. 47 CFR 11.32 - EAS Encoder.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... used for audio messages and at least one input port used for data messages. (3) Outputs. The encoder shall have at least one audio output port and at least one data output port. (4) Calibration. EAS... that complies with the following: (i) Tone Frequencies. The audio tones shall have fundamental...

  2. 47 CFR 73.128 - AM stereophonic broadcasting.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... channel reversed. (iii) Left and Right Channel only, under all conditions of modulation for the... (NRSC-1). (2) The left and right channel audio signals shall conform to frequency response limitations...)=audio signal left channel, R(t)=audio signal right channel, m=modulation factor, and mpeak(L(t)+R(t))=1...

  3. Automatic Detection and Classification of Audio Events for Road Surveillance Applications.

    PubMed

    Almaadeed, Noor; Asim, Muhammad; Al-Maadeed, Somaya; Bouridane, Ahmed; Beghdadi, Azeddine

    2018-06-06

This work investigates the problem of detecting hazardous events on roads by designing an audio surveillance system that automatically detects perilous situations such as car crashes and tire skidding. In recent years, several visual surveillance systems have been proposed for road monitoring to detect accidents, with the aim of improving safety procedures in emergency cases. However, visual information alone cannot detect certain events such as car crashes and tire skidding, especially under adverse and visually cluttered weather conditions such as snowfall, rain, and fog. Consequently, the incorporation of microphones and audio event detectors based on audio processing can significantly enhance the detection accuracy of such surveillance systems. This paper proposes to combine time-domain, frequency-domain, and joint time-frequency features extracted from a class of quadratic time-frequency distributions (QTFDs) to detect events on roads through audio analysis and processing. Experiments were carried out using a publicly available dataset. The experimental results confirm the effectiveness of the proposed approach for detecting hazardous events on roads, as demonstrated by a 7% improvement in accuracy over methods that use individual temporal and spectral features.

  4. 47 CFR 73.757 - System specifications for single-sideband (SSB) modulated emissions in the HF broadcasting service.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... dB per octave. (4) Modulation processing. If audio-frequency signal processing is used, the dynamic... broadcasting service. (a) System parameters—(1) Channel spacing. In a mixed DSB, SSB and digital environment... emission is one giving the same audio-frequency signal-to-noise ratio at the receiver output as the...

  5. 47 CFR 73.128 - AM stereophonic broadcasting.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... negative peaks of 100%. (ii) Stereophonic (L−R) modulated with audio tones of the same amplitude at the... characteristics: (1) The audio response of the main (L+R) channel shall conform to the requirements of the ANSI... (NRSC-1). (2) The left and right channel audio signals shall conform to frequency response limitations...

  6. INSPIRE

    NASA Technical Reports Server (NTRS)

    Taylor, Bill; Pine, Bill

    2003-01-01

    INSPIRE (Interactive NASA Space Physics Ionosphere Radio Experiment - http://image.gsfc.nasa.gov/poetry/inspire) is a non-profit scientific, educational organization whose objective is to bring the excitement of observing natural and manmade radio waves in the audio region to high school students and others. The project consists of building an audio frequency radio receiver kit, making observations of natural and manmade radio waves and analyzing the data. Students also learn about NASA and our natural environment through the study of lightning, the source of many of the audio frequency waves, the atmosphere, the ionosphere, and the magnetosphere where the waves travel.

  7. Imaging of conductivity distributions using audio-frequency electromagnetic data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Ki Ha; Morrison, H.F.

    1990-10-01

    The objective of this study has been to develop mathematical methods for mapping conductivity distributions between boreholes using low frequency electromagnetic (em) data. In relation to this objective this paper presents two recent developments in high-resolution crosshole em imaging techniques. These are (1) audio-frequency diffusion tomography, and (2) a transform method in which low frequency data is first transformed into a wave-like field. The idea in the second approach is that we can then treat the transformed field using conventional techniques designed for wave field analysis.

  8. Noncontact modal analysis of a pipe organ reed using airborne ultrasound stimulated vibrometry

    NASA Astrophysics Data System (ADS)

    Huber, Thomas M.; Fatemi, Mostafa; Kinnick, Randall R.; Greenleaf, James F.

    2004-05-01

The goal of this experiment was to excite and measure, in a noncontact manner, the vibrational modes of the reed from a reed organ pipe. To perform ultrasound stimulated excitation, two ultrasound beams in air of different frequencies were directed at the reed; the audio-range beat frequency between these ultrasound beams induced vibrations. The resulting vibrational deflection shapes were measured with a scanning vibrometer. The modes of any relatively small object can be studied in air using this technique. For a 36 mm by 7 mm clamped brass reed cantilever, displacements and velocities of 5 μm and 4 mm/s could be imparted at the fundamental frequency of 145 Hz. Using the same ultrasound transducer, excitation across the entire range of audio frequencies was obtained, which was not possible using audio excitation with a speaker. Since the beam was focused on the reed, ultrasound stimulated excitation eliminated background effects observed during mechanical shaker excitation, such as vibrations of clamps and supports. We will discuss the results obtained using single, dual, and confocal ultrasound transducers in AM and unmodulated CW modes, along with results obtained using a mechanical shaker and audio excitation using a speaker.
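The beat-frequency mechanism above can be demonstrated numerically: the radiation force on the reed follows the squared pressure of the two superposed beams, whose low-frequency content sits at the difference frequency |f1 - f2|. The carrier frequencies below are arbitrary illustrative values chosen so the difference equals the reed's 145 Hz fundamental; they are not taken from the paper.

```python
import numpy as np

# Two ultrasound tones whose difference is the reed's fundamental (145 Hz)
fs = 200_000
t = np.arange(0, 0.2, 1 / fs)
f1, f2 = 50_000.0, 50_145.0
p = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# The driving force tracks the squared pressure; its audio-range content
# is a single component at the beat frequency |f1 - f2|.
force = p ** 2
spectrum = np.abs(np.fft.rfft(force - force.mean()))
freqs = np.fft.rfftfreq(len(t), d=1 / fs)
beat = freqs[np.argmax(spectrum[freqs < 1000])]
```

The trigonometric identity 2 sin(a) sin(b) = cos(a-b) - cos(a+b) shows why only the difference term survives below ultrasound frequencies.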

  9. A first demonstration of audio-frequency optical coherence elastography of tissue

    NASA Astrophysics Data System (ADS)

    Adie, Steven G.; Alexandrov, Sergey A.; Armstrong, Julian J.; Kennedy, Brendan F.; Sampson, David D.

    2008-12-01

    Optical elastography is aimed at using the visco-elastic properties of soft tissue as a contrast mechanism, and could be particularly suitable for high-resolution differentiation of tumour from surrounding normal tissue. We present a new approach to measure the effect of an applied stimulus in the kilohertz frequency range that is based on optical coherence tomography. We describe the approach and present the first in vivo optical coherence elastography measurements in human skin at audio excitation frequencies.

  10. DWT-Based High Capacity Audio Watermarking

    NASA Astrophysics Data System (ADS)

    Fallahpour, Mehdi; Megías, David

This letter suggests a novel high-capacity robust audio watermarking algorithm using the high-frequency band of the wavelet decomposition, to whose alteration the human auditory system (HAS) is not very sensitive. The main idea is to divide the high-frequency band into frames and then, for embedding, change the wavelet samples based on the average of the relevant frame. The experimental results show that the method has very high capacity (about 5.5 kbps) without significant perceptual distortion (ODG in [-1, 0] and SNR about 33 dB), and provides robustness against common audio signal processing such as added noise, filtering, echo and MPEG compression (MP3).
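The frame-average embedding idea can be sketched as below. This is a simplified illustration, not the letter's algorithm: it uses a one-level Haar transform instead of the paper's decomposition, an additive shift proportional to the frame's mean magnitude instead of the exact embedding rule, and non-blind extraction (the paper's method does not require the original signal).

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar wavelet transform: approximation and detail bands."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-frequency (approximation) band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-frequency (detail) band
    return a, d

def haar_idwt(a, d):
    """Inverse one-level Haar transform (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def embed_bits(x, bits, frame_len=8, strength=0.1):
    """Embed one bit per detail-band frame by shifting the frame by
    +/- strength times its mean magnitude (illustrative scheme)."""
    a, d = haar_dwt(x)
    d = d.copy()
    for i, bit in enumerate(bits):
        frame = d[i * frame_len:(i + 1) * frame_len]
        ref = np.abs(frame).mean() + 1e-12
        frame += strength * ref if bit else -strength * ref
        d[i * frame_len:(i + 1) * frame_len] = frame
    return haar_idwt(a, d)

def extract_bits(x_watermarked, x_original, n_bits, frame_len=8):
    """Non-blind extraction: sign of the per-frame detail-band difference."""
    _, dw = haar_dwt(x_watermarked)
    _, do = haar_dwt(x_original)
    diff = dw - do
    return [int(diff[i * frame_len:(i + 1) * frame_len].mean() > 0)
            for i in range(n_bits)]
```

Scaling the shift by the frame average is what keeps the distortion perceptually small: louder frames can absorb larger changes.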

  11. 47 CFR 2.1047 - Measurements required: Modulation characteristics.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... equipment. A curve or equivalent data showing the frequency response of the audio modulating circuit over a range of 100 to 5000 Hz shall be submitted. For equipment required to have an audio low-pass filter, a...

  12. 47 CFR 2.1047 - Measurements required: Modulation characteristics.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... equipment. A curve or equivalent data showing the frequency response of the audio modulating circuit over a range of 100 to 5000 Hz shall be submitted. For equipment required to have an audio low-pass filter, a...

  13. Juno Listens to Jupiters Auroras

    NASA Image and Video Library

    2016-09-01

    During Juno's close flyby of Jupiter on August 27, 2016, the Waves instrument received radio signals associated with the giant planet's intense auroras. Animation and audio display the signals after they have been shifted into the audio frequency range.

  14. 47 CFR 73.758 - System specifications for digitally modulated emissions in the HF broadcasting service.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... digital audio broadcasting and datacasting are authorized. The RF requirements for the DRM system are... tolerance. The frequency tolerance shall be 10 Hz. See Section 73.757(b)(2), notes 1 and 2. (3) Audio... performance of a speech codec (of the order of 3 kHz). The choice of audio quality is connected to the needs...

  15. Audio frequency in vivo optical coherence elastography

    NASA Astrophysics Data System (ADS)

    Adie, Steven G.; Kennedy, Brendan F.; Armstrong, Julian J.; Alexandrov, Sergey A.; Sampson, David D.

    2009-05-01

    We present a new approach to optical coherence elastography (OCE), which probes the local elastic properties of tissue by using optical coherence tomography to measure the effect of an applied stimulus in the audio frequency range. We describe the approach, based on analysis of the Bessel frequency spectrum of the interferometric signal detected from scatterers undergoing periodic motion in response to an applied stimulus. We present quantitative results of sub-micron excitation at 820 Hz in a layered phantom and the first such measurements in human skin in vivo.

  16. Optimal Window and Lattice in Gabor Transform. Application to Audio Analysis.

    PubMed

    Lachambre, Helene; Ricaud, Benjamin; Stempfel, Guillaume; Torrésani, Bruno; Wiesmeyr, Christoph; Onchis-Moaca, Darian

    2015-01-01

This article deals with the use of an optimal lattice and an optimal window in Discrete Gabor Transform computation. In the case of a generalized Gaussian window, extending earlier contributions, we introduce an additional local window adaptation technique for non-stationary signals. We illustrate our approach and the earlier one by addressing three time-frequency analysis problems that show the improvements achieved by the use of the optimal lattice and window: distinguishing close frequencies, frequency estimation, and SNR estimation. The results are presented, when possible, with real-world audio signals.
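The Discrete Gabor Transform the article optimizes can be sketched as a Gaussian-windowed short-time Fourier transform over a regular time-frequency lattice. The window length, hop size, and Gaussian width below are arbitrary illustrative choices; picking them well is exactly the optimization problem the article studies.

```python
import numpy as np

def gabor_transform(x, fs, win_len=256, hop=64, sigma_frac=0.15):
    """Gaussian-windowed STFT on a (hop x FFT-bin) lattice.

    Returns the coefficient matrix (time frames x frequency bins) and the
    bin frequencies in Hz. sigma_frac sets the Gaussian width relative to
    the window length.
    """
    n = np.arange(win_len)
    sigma = sigma_frac * win_len
    window = np.exp(-0.5 * ((n - win_len / 2) / sigma) ** 2)  # Gaussian window
    frames = []
    for start in range(0, len(x) - win_len + 1, hop):
        seg = x[start:start + win_len] * window
        frames.append(np.fft.rfft(seg))
    coeffs = np.array(frames)                      # shape: (time, frequency)
    freqs = np.fft.rfftfreq(win_len, d=1 / fs)
    return coeffs, freqs
```

A longer window (finer frequency lattice) separates close frequencies better but smears transients; the article's contribution is choosing this trade-off, and the window itself, optimally.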

  17. Ultrasonic Leak Detection System

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C. (Inventor); Moerk, J. Steven (Inventor)

    1998-01-01

A system for detecting ultrasonic vibrations, such as those generated by a small leak in a pressurized container, vessel, pipe, or the like, comprises an ultrasonic transducer assembly and a processing circuit for converting transducer signals into an audio frequency range signal. The audio frequency range signal can be used to drive a pair of headphones worn by an operator. A diode-rectifier-based mixing circuit provides a simple, inexpensive way to mix the transducer signal with a square wave signal generated by an oscillator, and thereby generate the audio frequency signal. The sensitivity of the system is greatly increased through proper selection and matching of the system components, and the use of noise rejection filters and elements. In addition, a parabolic collecting horn is preferably employed, mounted on the transducer assembly housing. The collecting horn increases sensitivity of the system by amplifying the received signals, and provides directionality which facilitates easier location of an ultrasonic vibration source.
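The heterodyne step — mixing the ultrasonic signal with a square-wave oscillator so the difference frequency lands in the audio range — can be demonstrated numerically. The 40 kHz leak frequency and 39 kHz oscillator below are illustrative assumptions, not values from the patent, and the FFT brick-wall low-pass stands in for the analog filtering.

```python
import numpy as np

# Mix a 40 kHz "leak" signal with a 39 kHz square-wave local oscillator;
# the 1 kHz difference frequency is audible in headphones.
fs = 400_000
t = np.arange(0, 0.02, 1 / fs)
ultrasound = np.sin(2 * np.pi * 40_000 * t)
oscillator = np.sign(np.sin(2 * np.pi * 39_000 * t))   # square-wave LO
mixed = ultrasound * oscillator

# Crude low-pass: zero FFT bins above 5 kHz (a real design uses an analog filter)
X = np.fft.rfft(mixed)
freqs = np.fft.rfftfreq(len(t), d=1 / fs)
X[freqs > 5_000] = 0
audio = np.fft.irfft(X, n=len(t))

peak = freqs[np.argmax(np.abs(np.fft.rfft(audio)))]
```

The square wave's odd harmonics also produce mixing products, but they all fall well above the audio band here, which is why a simple diode mixer followed by a low-pass filter suffices.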

  18. High-performance combination method of electric network frequency and phase for audio forgery detection in battery-powered devices.

    PubMed

    Savari, Maryam; Abdul Wahab, Ainuddin Wahid; Anuar, Nor Badrul

    2016-09-01

Audio forgery is any act of tampering with, illegally copying, or faking the quality of audio for criminal purposes. In the last decade there has been increasing attention to audio forgery detection due to a significant increase in the number of forgeries across different types of audio. Among the available detection methods, electric network frequency (ENF) analysis is one of the most accurate. Despite its suitable accuracy for most plug-in powered devices, the weak accuracy of ENF for audio forgery detection in battery-powered devices, especially laptops and mobile phones, is one of its main obstacles. To solve this accuracy problem in battery-powered devices, a method combining ENF and a phase feature is proposed. In the experiments conducted, ENF alone gives 50% and 60% accuracy for forgery detection on mobile phones and laptops, respectively, while the proposed method achieves 88% and 92% accuracy, respectively, for forgery detection on battery-powered devices. The results demonstrate higher accuracy for forgery detection with the combination of ENF and the phase feature.
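The ENF idea — the mains hum embedded in a recording carries a slowly varying frequency trace whose discontinuities betray edits — can be sketched as below. The 50 Hz nominal frequency, 1 Hz band, and 1 s frames are assumptions for illustration; the paper's method additionally uses the hum's phase, which this sketch omits.

```python
import numpy as np

def enf_track(x, fs, nominal=50.0, bw=1.0, frame=1.0):
    """Estimate the electric-network-frequency trace of a recording.

    Narrow-band filter around the nominal mains frequency (brick-wall FFT
    filter for simplicity), then measure the dominant frequency in
    successive frames. A forgery shows up as a jump or gap in the trace.
    """
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    X[(freqs < nominal - bw) | (freqs > nominal + bw)] = 0
    hum = np.fft.irfft(X, n=len(x))      # isolated mains hum
    n = int(frame * fs)
    track = []
    for start in range(0, len(hum) - n + 1, n):
        seg = hum[start:start + n]
        S = np.abs(np.fft.rfft(seg))
        f = np.fft.rfftfreq(n, d=1 / fs)
        track.append(f[np.argmax(S)])    # dominant hum frequency in this frame
    return np.array(track)
```

The battery-powered-device problem the paper addresses is precisely that such recordings may pick up little or no mains hum, which is why ENF alone performs poorly there.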

  19. Investigating Perceptual Biases, Data Reliability, and Data Discovery in a Methodology for Collecting Speech Errors From Audio Recordings.

    PubMed

    Alderete, John; Davies, Monica

    2018-04-01

    This work describes a methodology of collecting speech errors from audio recordings and investigates how some of its assumptions affect data quality and composition. Speech errors of all types (sound, lexical, syntactic, etc.) were collected by eight data collectors from audio recordings of unscripted English speech. Analysis of these errors showed that: (i) different listeners find different errors in the same audio recordings, but (ii) the frequencies of error patterns are similar across listeners; (iii) errors collected "online" using on the spot observational techniques are more likely to be affected by perceptual biases than "offline" errors collected from audio recordings; and (iv) datasets built from audio recordings can be explored and extended in a number of ways that traditional corpus studies cannot be.

  20. Instrumental Landing Using Audio Indication

    NASA Astrophysics Data System (ADS)

    Burlak, E. A.; Nabatchikov, A. M.; Korsun, O. N.

    2018-02-01

The paper proposes an audio indication method for presenting to a pilot information about the relative position of an aircraft in precision piloting tasks. The implementation of the method is presented, and the use of audio signal parameters such as loudness, frequency, and modulation is discussed. To confirm the operability of the audio indication channel, experiments were carried out using a modern aircraft simulation facility. The pilots performed instrument landings in the simulator using the proposed audio method to indicate the aircraft's deviations from the glide path. The results proved comparable with simulated instrument landings using the traditional glideslope pointers. This encourages further development of the method for other precision piloting tasks.

  1. Robot Command Interface Using an Audio-Visual Speech Recognition System

    NASA Astrophysics Data System (ADS)

    Ceballos, Alexánder; Gómez, Juan; Prieto, Flavio; Redarce, Tanneguy

In recent years audio-visual speech recognition has emerged as an active field of research thanks to advances in pattern recognition, signal processing and machine vision. Its ultimate goal is to allow human-computer communication using voice, taking into account the visual information contained in the audio-visual speech signal. This document presents an automatic command recognition system using audio-visual information. The system is expected to control the da Vinci laparoscopic robot. The audio signal is processed using the Mel Frequency Cepstral Coefficients parametrization method. In addition, features based on the points that define the mouth's outer contour according to the MPEG-4 standard are used to extract the visual speech information.

  2. Preferred Tempo and Low-Audio-Frequency Bias Emerge From Simulated Sub-cortical Processing of Sounds With a Musical Beat

    PubMed Central

    Zuk, Nathaniel J.; Carney, Laurel H.; Lalor, Edmund C.

    2018-01-01

    Prior research has shown that musical beats are salient at the level of the cortex in humans. Yet below the cortex there is considerable sub-cortical processing that could influence beat perception. Some biases, such as a tempo preference and an audio frequency bias for beat timing, could result from sub-cortical processing. Here, we used models of the auditory-nerve and midbrain-level amplitude modulation filtering to simulate sub-cortical neural activity to various beat-inducing stimuli, and we used the simulated activity to determine the tempo or beat frequency of the music. First, irrespective of the stimulus being presented, the preferred tempo was around 100 beats per minute, which is within the range of tempi where tempo discrimination and tapping accuracy are optimal. Second, sub-cortical processing predicted a stronger influence of lower audio frequencies on beat perception. However, the tempo identification algorithm that was optimized for simple stimuli often failed for recordings of music. For music, the most highly synchronized model activity occurred at a multiple of the beat frequency. Using bottom-up processes alone is insufficient to produce beat-locked activity. Instead, a learned and possibly top-down mechanism that scales the synchronization frequency to derive the beat frequency greatly improves the performance of tempo identification. PMID:29896080

  3. Orbital component extraction by time-variant sinusoidal modeling.

    NASA Astrophysics Data System (ADS)

    Sinnesael, Matthias; Zivanovic, Miroslav; De Vleeschouwer, David; Claeys, Philippe; Schoukens, Johan

    2016-04-01

Accurately deciphering periodic variations in paleoclimate proxy signals is essential for cyclostratigraphy. Classical spectral analysis often relies on methods based on the (Fast) Fourier Transform. This technique has no unique solution separating variations in amplitude from variations in frequency. This makes it difficult to correctly interpret a proxy's power spectrum or to accurately evaluate simultaneous changes in amplitude and frequency in evolutionary analyses. Here, we circumvent this drawback by using a polynomial approach to estimate instantaneous amplitude and frequency in orbital components. This approach has proven useful for characterizing audio signals (music and speech), which are non-stationary in nature (Zivanovic and Schoukens, 2010, 2012). Paleoclimate proxy signals and audio signals have similar dynamics; the only difference is the frequency relationship between the components, which is harmonic in audio signals and non-harmonic in paleoclimate signals. This difference, however, is irrelevant for the problem at hand. Using a sliding-window approach, the model captures time variations of an orbital component by modulating a stationary sinusoid centered at its mean frequency with a single polynomial. Hence, the parameters that determine the model are the mean frequency of the orbital component and the polynomial coefficients. The first parameter depends on geologic interpretation, whereas the latter are estimated by means of linear least squares. As an output, the model provides the orbital component waveform, either in the depth or time domain. Furthermore, it allows for a unique decomposition of the signal into its instantaneous amplitude and frequency. Frequency modulation patterns can be used to reconstruct changes in accumulation rate, whereas amplitude modulation can be used to reconstruct e.g. eccentricity-modulated precession.
The time-variant sinusoidal model is applied to well-established Pleistocene benthic isotope records to evaluate its performance. Zivanovic, M. and Schoukens, J. (2010). On the polynomial approximation for time-variant harmonic signal modeling. IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 3, pp. 458-467. doi: 10.1109/TASL.2010.2049673. Zivanovic, M. and Schoukens, J. (2012). Single and piecewise polynomials for modeling of pitched sounds. IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 4, pp. 1270-1281. doi: 10.1109/TASL.2011.2174228.
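The core of the approach described above — a stationary sinusoid at the component's mean frequency, modulated by a polynomial whose coefficients are found by linear least squares — can be sketched as follows. This is a simplified single-window, single-component illustration; the sliding window and the instantaneous amplitude/frequency extraction are omitted.

```python
import numpy as np

def fit_polynomial_sinusoid(x, t, f_mean, degree=3):
    """Least-squares fit of a polynomial-modulated sinusoid at mean frequency
    f_mean:  x(t) ~ sum_k t^k * (a_k cos(2 pi f t) + b_k sin(2 pi f t)).

    The model is linear in the coefficients a_k, b_k, so ordinary least
    squares applies. Returns the reconstructed component waveform.
    """
    c = np.cos(2 * np.pi * f_mean * t)
    s = np.sin(2 * np.pi * f_mean * t)
    cols = [t ** k * c for k in range(degree + 1)] + \
           [t ** k * s for k in range(degree + 1)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
    return A @ coeffs

# Hypothetical example: a component whose amplitude grows linearly in time,
# as eccentricity-modulated precession might over a short window
t = np.linspace(0, 10, 2000)
target = (1 + 0.2 * t) * np.cos(2 * np.pi * 1.0 * t)
component = fit_polynomial_sinusoid(target, t, f_mean=1.0)
```

Because the polynomial multiplies both a cosine and a sine at the mean frequency, it can encode slow amplitude *and* phase (hence frequency) drift, which is how the method separates the two.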

  4. 76 FR 67070 - Operation of Wireless Communications Services in the 2.3 GHz Band; Establishment of Rules and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-31

    ... 2.3 GHz Band; Establishment of Rules and Policies for the Digital Audio Radio Satellite Service in...; Establishment of Rules and Policies for the Digital Audio Radio Satellite Service in the 2310-2360 MHz Frequency...

  5. Data reduction for cough studies using distribution of audio frequency content

    PubMed Central

    2012-01-01

Background Recent studies suggest that objectively quantifying coughing in audio recordings offers a novel means to understand coughing and assess treatments. Currently, manual cough counting is the most accurate method for quantifying coughing. However, the demand of manually counting cough records is substantial, demonstrating a need to reduce record lengths prior to counting whilst preserving the coughs within them. This study tested the performance of an algorithm developed for this purpose. Methods 20 subjects were recruited (5 healthy smokers and non-smokers, 5 chronic cough, 5 chronic obstructive pulmonary disease and 5 asthma), fitted with an ambulatory recording system and recorded for 24 hours. The recordings produced were divided into 15 min segments and counted. Periods of inactive audio in each segment were removed using the median frequency and power of the audio signal and the resulting files re-counted. Results The median resultant segment length was 13.9 s (IQR 56.4 s) and the median reduced length of a 24 h recording was 62.4 min (IQR 100.4). A median of 0.0 coughs/h (IQR 0.0-0.2) was erroneously removed, and the variability in the resultant cough counts was comparable to that between manual cough counts. The largest error was seen in asthmatic patients, but still only 1.0% of coughs/h were missed. Conclusions These data show that a system which measures signal activity using the median audio frequency can substantially reduce record lengths without significantly compromising the coughs contained within them. PMID:23231789
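The inactive-audio removal step can be sketched as below: a window is kept only when both its power and its spectral median frequency look cough-like. The window length, thresholds, and median-frequency range are illustrative assumptions; the study tuned its criteria on annotated recordings.

```python
import numpy as np

def active_segments(x, fs, win=0.1, power_thresh=1e-4, medfreq_range=(300, 4000)):
    """Flag fixed-length windows as 'active' (worth keeping for cough counting).

    A window is active when its mean power exceeds power_thresh and its
    spectral median frequency (the frequency below which half the spectral
    power lies) falls in medfreq_range. Returns one boolean per window.
    """
    n = int(win * fs)
    flags = []
    for start in range(0, len(x) - n + 1, n):
        seg = x[start:start + n]
        power = np.mean(seg ** 2)
        spec = np.abs(np.fft.rfft(seg)) ** 2
        freqs = np.fft.rfftfreq(n, d=1 / fs)
        cum = np.cumsum(spec)
        medfreq = freqs[np.searchsorted(cum, cum[-1] / 2)]
        flags.append(power > power_thresh and
                     medfreq_range[0] < medfreq < medfreq_range[1])
    return np.array(flags)
```

Concatenating only the flagged windows is what shrinks a 24 h recording to roughly an hour of audio for manual counting.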

  6. Design of an audio advertisement dataset

    NASA Astrophysics Data System (ADS)

    Fu, Yutao; Liu, Jihong; Zhang, Qi; Geng, Yuting

    2015-12-01

As more and more advertisements swarm into radio broadcasts, it is necessary to establish an audio advertising dataset that can be used to analyze and classify advertisements. This paper presents a method for establishing a complete audio advertising dataset. The dataset is divided into four kinds of advertisements. Each advertisement sample is given in *.wav format and annotated with a txt file containing its file name, sampling frequency, channel number, broadcast time, and class. The rationality of the classification of advertisements in this dataset is demonstrated by clustering the different advertisements using Principal Component Analysis (PCA). The experimental results show that this audio advertisement dataset offers a reliable set of samples for related audio advertisement studies.

  7. Detection of emetic activity in the cat by monitoring venous pressure and audio signals

    NASA Technical Reports Server (NTRS)

    Nagahara, A.; Fox, Robert A.; Daunton, Nancy G.; Elfar, S.

    1991-01-01

To investigate the use of audio signals as a simple, noninvasive measure of emetic activity, the relationship between the somatic events and sounds associated with retching and vomiting was studied. Thoracic venous pressure obtained from an implanted external jugular catheter was shown to provide a precise measure of the somatic events associated with retching and vomiting. Changes in thoracic venous pressure, monitored through an indwelling external jugular catheter, were compared with audio signals obtained from a microphone located above the animal in a test chamber. In addition, two independent observers visually monitored emetic episodes. Retching and vomiting were induced by injection of xylazine (0.66 mg/kg s.c.), or by motion. A unique audio signal at a frequency of approximately 250 Hz is produced at the time of the negative thoracic venous pressure change associated with retching. Sounds with higher frequencies (around 2500 Hz) occur in conjunction with the positive pressure changes associated with vomiting. These specific signals could be discriminated reliably by individuals reviewing the audio recordings of the sessions. Retching and those emetic episodes associated with positive venous pressure changes were detected accurately by audio monitoring, with 90 percent of retches and 100 percent of emetic episodes correctly identified. Retching was detected more accurately (p < .05) by audio monitoring than by direct visual observation. However, with visual observation a few incidents in which stomach contents were expelled in the absence of positive pressure changes or detectable sounds were identified. These data suggest that in emetic situations, the expulsion of stomach contents may be accomplished by more than one neuromuscular system and that audio signals can be used to detect emetic episodes associated with thoracic venous pressure changes.

  8. Design and implementation of an audio indicator

    NASA Astrophysics Data System (ADS)

    Zheng, Shiyong; Li, Zhao; Li, Biqing

    2017-04-01

This paper proposes an audio level indicator built around a C9014 transistor amplifier stage, an LED level display, and a CD4017 decade counter/distributor, which can control neon and holiday lights in time with an audio signal. The input audio signal is power-amplified by the C9014-based amplifier; an adjustment potentiometer taps the amplified signal to drive the CD4017 counter, whose outputs light LEDs that display the running state of the circuit. This simple audio indicator uses only one counter IC and drives two-color LEDs in a chasing sequence that follows the audio signal, so the LED display reflects the variation in the amplitude and frequency of the signal. The lights can jump, fade, chase, or stay lit, and the circuit can be used in homes, hotels, discos, theaters, advertising, and many other settings of modern life.

  9. Measurement of the dynamic input impedance of a dc superconducting quantum interference device at audio frequencies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falferi, P.; Mezzena, R.; Vitale, S.

    1997-08-01

The coupling effects of a commercial dc superconducting quantum interference device (SQUID) on an electrical LC resonator operating at audio frequencies (≈1 kHz) with quality factors Q ≈ 10^6 are presented. The variations of the resonance frequency of the resonator as a function of the flux applied to the SQUID are due to the SQUID dynamic inductance, in good agreement with the predictions of a model. The variations of the quality factor point to a feedback mechanism between the output of the SQUID and the input circuit. © 1997 American Institute of Physics.

  10. Comparison of three orientation and mobility aids for individuals with blindness: Verbal description, audio-tactile map and audio-haptic map.

    PubMed

    Papadopoulos, Konstantinos; Koustriava, Eleni; Koukourikos, Panagiotis; Kartasidou, Lefkothea; Barouti, Marialena; Varveris, Asimis; Misiou, Marina; Zacharogeorga, Timoclia; Anastasiadis, Theocharis

    2017-01-01

Disorientation and wayfinding failures are frequent for individuals with visual impairments when travelling through novel environments. Orientation and mobility aids can provide important tools for preparing a more secure, cognitively mapped journey. The aim of the present study was to examine whether the spatial knowledge that an individual with blindness acquires by studying a map of an urban area, delivered through a verbal description, an audio-tactile map, or an audio-haptic map, can be used to locate specific points of interest in that area. The effectiveness of the three aids relative to one another was also examined. The results of the present study highlight the effectiveness of the audio-tactile and audio-haptic maps as orientation and mobility aids, especially when compared to verbal descriptions.

  11. A high efficiency PWM CMOS class-D audio power amplifier

    NASA Astrophysics Data System (ADS)

    Zhangming, Zhu; Lianxi, Liu; Yintang, Yang; Han, Lei

    2009-02-01

Based on a differential closed-loop feedback technique and a differential pre-amplifier, a high-efficiency PWM CMOS class-D audio power amplifier is proposed. A rail-to-rail PWM comparator with a window function is embedded in the class-D audio power amplifier. Design results based on the CSMC 0.5 μm CMOS process show that the maximum efficiency is 90%, the PSRR is -75 dB, the power supply voltage range is 2.5-5.5 V, the THD+N at a 1 kHz input frequency is less than 0.20%, the quiescent current with no load is 2.8 mA, and the shutdown current is 0.5 μA. The active area of the class-D audio power amplifier is about 1.47 × 1.52 mm2. With this performance, the class-D audio power amplifier can be applied to a variety of audio power systems.

  12. Audio-frequency analysis of inductive voltage dividers based on structural models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avramov, S.; Oldham, N.M.; Koffman, A.D.

    1994-12-31

    A Binary Inductive Voltage Divider (BIVD) is compared with a Decade Inductive Voltage Divider (DIVD) in an automatic IVD bridge. New detection and injection circuitry was designed and used to evaluate the IVDs with either the input or output tied to ground potential. In the audio frequency range the DIVD and BIVD error patterns are characterized for both in-phase and quadrature components. Differences between results obtained using a new error decomposition scheme based on structural modeling, and measurements using conventional IVD standards are reported.

  13. 47 CFR Figure 2 to Subpart N of... - Typical Audio Wave

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Typical Audio Wave 2 Figure 2 to Subpart N of Part 2 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL FREQUENCY ALLOCATIONS AND RADIO... Position Indicating Radiobeacons (EPIRBs) Pt. 2, Subpt. N, Fig. 2 Figure 2 to Subpart N of Part 2—Typical...

  14. Home telecare system using cable television plants--an experimental field trial.

    PubMed

    Lee, R G; Chen, H S; Lin, C C; Chang, K C; Chen, J H

    2000-03-01

    To spare chronically ill and handicapped patients the inconvenience of routine travel to care facilities, this paper proposes a platform based on a hybrid fiber coaxial (HFC) network in Taiwan designed to make a home telecare system feasible. The aim of this home telecare system is to combine biomedical data, including three-channel electrocardiogram (ECG) and blood pressure (BP), video, and audio into a National Television Standard Committee (NTSC) channel for communication between the patient and healthcare provider. Digitized biomedical data and output from medical devices can be further modulated onto a second audio program (SAP) subchannel, which is normally used for second-language audio in NTSC television signals. For long-distance transmission, we translate the digital biomedical data into the frequency domain using frequency shift keying (FSK) and insert this signal into an SAP band. The whole system has been implemented and tested. The results obtained using this system clearly demonstrated that real-time video, audio, and biomedical data transmission are very clear with a carrier-to-noise ratio up to 43 dB.
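    The FSK step described above maps each data bit to one of two tones. A minimal sketch with illustrative frequencies and baud rate (not the system's actual SAP-band parameters):

    ```python
    import numpy as np

    def fsk_modulate(bits, f0=1200.0, f1=2200.0, fs=8000, baud=100):
        """Map each bit to a burst of tone f0 (bit 0) or f1 (bit 1)."""
        n = fs // baud                          # samples per bit
        t = np.arange(n) / fs
        return np.concatenate(
            [np.sin(2 * np.pi * (f1 if b else f0) * t) for b in bits])

    def fsk_demodulate(sig, f0=1200.0, f1=2200.0, fs=8000, baud=100):
        """Correlate each bit interval against both tones; pick the stronger."""
        n = fs // baud
        t = np.arange(n) / fs
        ref0, ref1 = np.sin(2 * np.pi * f0 * t), np.sin(2 * np.pi * f1 * t)
        return [1 if abs(sig[k*n:(k+1)*n] @ ref1) > abs(sig[k*n:(k+1)*n] @ ref0)
                else 0 for k in range(len(sig) // n)]

    bits = [1, 0, 1, 1, 0]
    recovered = fsk_demodulate(fsk_modulate(bits))
    ```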

  15. Direct broadcast satellite-audio, portable and mobile reception tradeoffs

    NASA Technical Reports Server (NTRS)

    Golshan, Nasser

    1992-01-01

    This paper reports on the findings of a systems tradeoffs study on direct broadcast satellite-radio (DBS-R). Based on emerging advanced subband and transform audio coding systems, four ranges of bit rates: 16-32 kbps, 48-64 kbps, 96-128 kbps and 196-256 kbps are identified for DBS-R. The corresponding grades of audio quality will be subjectively comparable to AM broadcasting, monophonic FM, stereophonic FM, and CD quality audio, respectively. The satellite EIRP's needed for mobile DBS-R reception in suburban areas are sufficient for portable reception in most single family houses when allowance is made for the higher G/T of portable table-top receivers. As an example, the variation of the space segment cost as a function of frequency, audio quality, coverage capacity, and beam size is explored for a typical DBS-R system.

  16. Detection and volume estimation of embolic air in the middle cerebral artery using transcranial Doppler sonography.

    PubMed

    Bunegin, L; Wahl, D; Albin, M S

    1994-03-01

    Cerebral embolism has been implicated in the development of cognitive and neurological deficits following bypass surgery. This study proposes methodology for estimating cerebral air embolus volume using transcranial Doppler sonography. Transcranial Doppler audio signals of air bubbles in the middle cerebral artery obtained from in vivo experiments were subjected to a fast-Fourier transform analysis. Audio segments when no air was present, as well as artifact resulting from electrocautery and sensor movement, were also subjected to fast-Fourier transform analysis. Spectra were compared, and frequency and power differences were noted and used for development of audio band-pass filters for isolation of frequencies associated with air emboli. In a bench model of the middle cerebral artery circulation, repetitive injections of various air volumes between 0.5 and 500 microL were made. Transcranial Doppler audio output was band-pass filtered, acquired digitally, then subjected to a fast-Fourier transform power spectrum analysis and power spectrum integration. A linear least-squares correlation was performed on the data. Fast-Fourier transform analysis of audio segments indicated that frequencies between 250 and 500 Hz are consistently dominant in the spectrum when air emboli are present. Background frequencies appear to be below 240 Hz, and artifact resulting from sensor movement and electrocautery appears to be below 300 Hz. Data from the middle cerebral artery model filtered through a 307- to 450-Hz band-pass filter yielded a linear relation between embolus volume and the integrated value of the power spectrum for volumes up to approximately 40 microL. Detection of emboli less than 0.5 microL was inconsistent, and embolus volumes greater than 40 microL were indistinguishable from one another. The preliminary technique described in this study may represent a starting point from which automated detection and volume estimation of cerebral emboli might be approached.
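    The band-limited power-spectrum integration described above can be sketched as follows, using the paper's 307-450 Hz band on a synthetic Doppler signal (the tone frequencies and amplitudes are illustrative, not measured data):

    ```python
    import numpy as np

    fs = 4000
    t = np.arange(fs) / fs                         # one second of Doppler audio
    background = 0.5 * np.sin(2 * np.pi * 120 * t)  # flow components below 240 Hz
    embolus = 0.8 * np.sin(2 * np.pi * 380 * t)     # embolic signature, 250-500 Hz

    def band_power(sig, lo=307.0, hi=450.0, fs=4000):
        """Integrate the FFT power spectrum over the embolus band-pass range."""
        spec = np.abs(np.fft.rfft(sig)) ** 2
        freqs = np.fft.rfftfreq(len(sig), d=1 / fs)
        mask = (freqs >= lo) & (freqs <= hi)
        return spec[mask].sum()

    p_clean = band_power(background)               # essentially zero
    p_embolic = band_power(background + embolus)   # dominated by the 380 Hz tone
    ```

    In the study, this integrated band power served as the quantity correlated (linearly, up to ~40 μL) with embolus volume.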

  17. Incorporating Auditory Models in Speech/Audio Applications

    NASA Astrophysics Data System (ADS)

    Krishnamoorthi, Harish

    2011-12-01

    Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly/indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of auditory model for evaluation of different candidate solutions. In this dissertation, a frequency pruning and a detector pruning algorithm is developed that efficiently implements the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90 % reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals and employs the proposed auditory pattern combining technique together with a look-up table to store representative auditory patterns. 
The second problem obtains an estimate of the auditory representation that minimizes a perceptual objective function and transforms the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors in minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages that ensures obtaining a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.

  18. Multiple Frequency Audio Signal Communication as a Mechanism for Neurophysiology and Video Data Synchronization

    PubMed Central

    Topper, Nicholas C.; Burke, S.N.; Maurer, A.P.

    2014-01-01

    BACKGROUND Current methods for aligning neurophysiology and video data are either prepackaged, requiring the additional purchase of a software suite, or use a blinking LED with a stationary pulse-width and frequency. These methods lack significant user interface for adaptation, are expensive, or risk a misalignment of the two data streams. NEW METHOD A cost-effective means to obtain high-precision alignment of behavioral and neurophysiological data is obtained by generating an audio-pulse embedded with two domains of information, a low-frequency binary-counting signal and a high, randomly changing frequency. This enabled the derivation of temporal information while maintaining enough entropy in the system for algorithmic alignment. RESULTS The sample-to-frame index constructed using the audio input correlation method described in this paper enables video and data acquisition to be aligned at a sub-frame level of precision. COMPARISONS WITH EXISTING METHOD Traditionally, a synchrony pulse is recorded on-screen via a flashing diode. The higher sampling rate of the audio input of the camcorder enables the timing of an event to be detected with greater precision. CONCLUSIONS While on-line analysis and synchronization using specialized equipment may be the ideal situation in some cases, the method presented in the current paper presents a viable, low-cost alternative, and gives the flexibility to interface with custom off-line analysis tools. Moreover, the ease of constructing and implementing this set-up makes it applicable to a wide variety of applications that require video recording. PMID:25256648
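    The two-domain audio pulse can be sketched as a binary-counting low tone plus a randomly chosen high tone. All frequencies, slot lengths, and thresholds below are illustrative assumptions, not the paper's parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 8000
    slot = fs // 100                    # 10 ms of audio per counter bit

    def encode_pulse(counter, n_bits=8, f_low=500.0):
        """One sync pulse: on/off bursts of a low tone encode the frame
        counter (MSB first); a random high tone adds entropy for alignment."""
        t = np.arange(slot) / fs
        low = np.concatenate([
            np.sin(2 * np.pi * f_low * t)
            if (counter >> (n_bits - 1 - i)) & 1 else np.zeros(slot)
            for i in range(n_bits)])
        f_high = rng.uniform(2000.0, 3500.0)     # randomly changing component
        high = 0.3 * np.sin(2 * np.pi * f_high * np.arange(len(low)) / fs)
        return low + high

    def decode_counter(pulse, n_bits=8, f_low=500.0):
        """Recover the counter by correlating each slot with the low tone."""
        t = np.arange(slot) / fs
        ref = np.sin(2 * np.pi * f_low * t)
        counter = 0
        for i in range(n_bits):
            chunk = pulse[i * slot:(i + 1) * slot]
            bit = 1 if abs(chunk @ ref) > 0.25 * slot else 0
            counter = (counter << 1) | bit
        return counter

    ok = all(decode_counter(encode_pulse(c)) == c for c in (0, 5, 170, 255))
    ```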

  19. Multiple frequency audio signal communication as a mechanism for neurophysiology and video data synchronization.

    PubMed

    Topper, Nicholas C; Burke, Sara N; Maurer, Andrew Porter

    2014-12-30

    Current methods for aligning neurophysiology and video data are either prepackaged, requiring the additional purchase of a software suite, or use a blinking LED with a stationary pulse-width and frequency. These methods lack significant user interface for adaptation, are expensive, or risk a misalignment of the two data streams. A cost-effective means to obtain high-precision alignment of behavioral and neurophysiological data is obtained by generating an audio-pulse embedded with two domains of information, a low-frequency binary-counting signal and a high, randomly changing frequency. This enabled the derivation of temporal information while maintaining enough entropy in the system for algorithmic alignment. The sample-to-frame index constructed using the audio input correlation method described in this paper enables video and data acquisition to be aligned at a sub-frame level of precision. Traditionally, a synchrony pulse is recorded on-screen via a flashing diode. The higher sampling rate of the audio input of the camcorder enables the timing of an event to be detected with greater precision. While on-line analysis and synchronization using specialized equipment may be the ideal situation in some cases, the method presented in the current paper presents a viable, low-cost alternative, and gives the flexibility to interface with custom off-line analysis tools. Moreover, the ease of constructing and implementing this set-up makes it applicable to a wide variety of applications that require video recording. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Audio-visual synchrony and spatial attention enhance processing of dynamic visual stimulation independently and in parallel: A frequency-tagging study.

    PubMed

    Covic, Amra; Keitel, Christian; Porcu, Emanuele; Schröger, Erich; Müller, Matthias M

    2017-11-01

    The neural processing of a visual stimulus can be facilitated by attending to its position or by a co-occurring auditory tone. Using frequency-tagging, we investigated whether facilitation by spatial attention and audio-visual synchrony rely on similar neural processes. Participants attended to one of two flickering Gabor patches (14.17 and 17 Hz) located in opposite lower visual fields. Gabor patches further "pulsed" (i.e. showed smooth spatial frequency variations) at distinct rates (3.14 and 3.63 Hz). Frequency-modulating an auditory stimulus at the pulse-rate of one of the visual stimuli established audio-visual synchrony. Flicker and pulsed stimulation elicited stimulus-locked rhythmic electrophysiological brain responses that allowed tracking the neural processing of simultaneously presented Gabor patches. These steady-state responses (SSRs) were quantified in the spectral domain to examine visual stimulus processing under conditions of synchronous vs. asynchronous tone presentation and when respective stimulus positions were attended vs. unattended. Strikingly, unique patterns of effects on pulse- and flicker driven SSRs indicated that spatial attention and audiovisual synchrony facilitated early visual processing in parallel and via different cortical processes. We found attention effects to resemble the classical top-down gain effect facilitating both, flicker and pulse-driven SSRs. Audio-visual synchrony, in turn, only amplified synchrony-producing stimulus aspects (i.e. pulse-driven SSRs) possibly highlighting the role of temporally co-occurring sights and sounds in bottom-up multisensory integration. Copyright © 2017 Elsevier Inc. All rights reserved.
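    Quantifying SSRs "in the spectral domain" amounts to reading out the response amplitude at each tagging frequency. A sketch on simulated data, using the study's tag rates but arbitrary amplitudes and coherent demodulation as an assumed readout method:

    ```python
    import numpy as np

    fs, dur = 512, 8.0
    n = int(fs * dur)
    t = np.arange(n) / fs
    # simulated EEG: both flicker-tagged stimuli drive stimulus-locked
    # responses (amplitudes 1.0 and 0.6 chosen arbitrarily), plus noise
    eeg = (1.0 * np.sin(2 * np.pi * 14.17 * t)
           + 0.6 * np.sin(2 * np.pi * 17.0 * t)
           + 0.2 * np.random.default_rng(1).standard_normal(n))

    def ssr_amplitude(x, f, fs=512):
        """Coherent demodulation: project the signal onto exp(-j*2*pi*f*t)
        to estimate the amplitude of the response at tag rate f."""
        t = np.arange(len(x)) / fs
        return 2.0 * abs(np.dot(x, np.exp(-2j * np.pi * f * t))) / len(x)

    a_flicker1 = ssr_amplitude(eeg, 14.17)   # ~1.0
    a_flicker2 = ssr_amplitude(eeg, 17.0)    # ~0.6
    ```

    Comparing such amplitudes across attended/unattended and in-sync/out-of-sync conditions is what yields the gain effects the abstract reports.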

  1. Spatial domain entertainment audio decompression/compression

    NASA Astrophysics Data System (ADS)

    Chan, Y. K.; Tam, Ka Him K.

    2014-02-01

    The ARM7 NEON processor with 128-bit SIMD hardware accelerator requires a peak performance of 13.99 Mega Cycles per Second for MP3 stereo entertainment quality decoding. For similar compression bit rate, OGG and AAC are preferred over MP3. The Patent Cooperation Treaty Application dated 28/August/2012 describes an audio decompression scheme producing a sequence of interleaving "min to Max" and "Max to min" rising and falling segments. The number of interior audio samples bound by "min to Max" or "Max to min" can be {0|1|…|N} audio samples. The magnitudes of samples, including the bounding min and Max, are distributed as normalized constants within the 0 and 1 of the bounding magnitudes. The decompressed audio is then a "sequence of static segments" on a frame by frame basis. Some of these frames need to be post-processed to elevate high frequency. The post-processing is compression-efficiency neutral and the additional decoding complexity is only a small fraction of the overall decoding complexity, without the need of extra hardware. Compression efficiency can be expected to be very high, as the source audio has been decimated and converted to a set of data with only "segment length and corresponding segment magnitude" attributes. The PCT describes how these two attributes are efficiently coded by the PCT's innovative coding scheme. The PCT decoding efficiency is very high and decoding latency is essentially zero. Both hardware requirement and run time are at least an order of magnitude better than MP3 variants. The side benefit is ultra-low power consumption on mobile devices. The acid test of whether such a simplistic waveform representation can reproduce authentic decompressed quality is benchmarked versus OGG (aoTuv Beta 6.03) using three pairs of stereo audio frames and one broadcast-like voice audio frame, each frame consisting of 2,028 samples at 44,100 Hz sampling frequency.

  2. Noncontact modal analysis of a pipe organ reed using airborne ultrasound stimulated vibrometry.

    PubMed

    Huber, Thomas M; Fatemi, Mostafa; Kinnick, Randy; Greenleaf, James

    2006-04-01

    The goal of this study was to excite and measure, in a noncontact manner, the vibrational modes of the reed from a reed organ pipe. To perform ultrasound stimulated excitation, the audio-range difference frequency between a pair of ultrasound beams produced a radiation force that induced vibrations. The resulting vibrational deflection shapes were measured with a scanning laser vibrometer. The resonances of any relatively small object can be studied in air using this technique. For a 36 mm x 6 mm brass reed, displacements and velocities in excess of 5 microm and 4 mm/s could be imparted at the fundamental frequency of 145 Hz. Using the same ultrasound transducer, excitation across the entire range of audio frequencies was obtained. Since the beam was focused on the reed, ultrasound stimulated excitation eliminated background effects observed during mechanical shaker excitation, such as vibrations of clamps and supports. The results obtained using single, dual and confocal ultrasound transducers in AM and two-beam modes, along with results obtained using a mechanical shaker and audio excitation using a speaker are discussed.

  3. Robust Audio Watermarking by Using Low-Frequency Histogram

    NASA Astrophysics Data System (ADS)

    Xiang, Shijun

    In continuation of earlier work [1], where the problem of time-scale modification (TSM) was addressed by modifying the shape of the audio time-domain histogram, here we consider the additional ingredient of resisting additive noise-like operations, such as Gaussian noise, lossy compression and low-pass filtering. In other words, we study the problem of making the watermark robust against both TSM and additive noise. To this end, in this paper we extract the histogram from a Gaussian-filtered low-frequency component for audio watermarking. The watermark is inserted by shaping the histogram: two consecutive bins are used as a group to hide one bit by reassigning their populations. The watermarked signals are perceptibly similar to the original. Compared with the previous time-domain watermarking scheme [1], the proposed watermarking method is more robust against additive noise, MP3 compression, low-pass filtering, etc.
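    The bin-pair reassignment idea can be sketched as follows: nudge samples across a shared bin edge until the population ratio of the two bins encodes the bit. Bin edges, the embedding ratio, and the plain (unfiltered) histogram are simplifying assumptions, not the paper's parameters:

    ```python
    import numpy as np

    def embed_bit(samples, bit, edges=(-1.0, 0.0, 1.0), ratio=1.5):
        """Hide one bit in two consecutive histogram bins (a: [lo, mid),
        b: [mid, hi)) by moving edge-adjacent samples across the shared
        edge until a >= ratio*b encodes 1 (or b >= ratio*a encodes 0)."""
        x = np.asarray(samples, dtype=float).copy()
        lo, mid, hi = edges
        while True:
            in_a = (x >= lo) & (x < mid)
            in_b = (x >= mid) & (x < hi)
            a, b = in_a.sum(), in_b.sum()
            if (bit == 1 and a >= ratio * b) or (bit == 0 and b >= ratio * a):
                return x
            if bit == 1:                  # move the bin-b sample nearest the edge
                idx = np.where(in_b)[0]
                i = idx[np.argmin(x[idx])]
                x[i] = mid - 1e-6
            else:                         # move the bin-a sample nearest the edge
                idx = np.where(in_a)[0]
                i = idx[np.argmax(x[idx])]
                x[i] = mid

    def extract_bit(x, edges=(-1.0, 0.0, 1.0)):
        """Read the bit back by comparing the two bin populations."""
        lo, mid, hi = edges
        a = np.sum((x >= lo) & (x < mid))
        b = np.sum((x >= mid) & (x < hi))
        return 1 if a > b else 0

    rng = np.random.default_rng(7)
    audio = rng.uniform(-1.0, 1.0, 2000)
    bit_one = extract_bit(embed_bit(audio, 1))
    bit_zero = extract_bit(embed_bit(audio, 0))
    ```

    Because only samples nearest the bin edge are moved, each sample changes by at most roughly the occupied fraction of a bin, which is what keeps the watermarked signal perceptually close to the original.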

  4. Tensorial dynamic time warping with articulation index representation for efficient audio-template learning.

    PubMed

    Le, Long N; Jones, Douglas L

    2018-03-01

    Audio classification techniques often depend on the availability of a large labeled training dataset for successful performance. However, in many application domains of audio classification (e.g., wildlife monitoring), obtaining labeled data is still a costly and laborious process. Motivated by this observation, a technique is proposed to efficiently learn a clean template from a few labeled, but likely corrupted (by noise and interferences), data samples. This learning can be done efficiently via tensorial dynamic time warping on the articulation index-based time-frequency representations of audio data. The learned template can then be used in audio classification following the standard template-based approach. Experimental results show that the proposed approach outperforms both (1) the recurrent neural network approach and (2) the state-of-the-art in the template-based approach on a wildlife detection application with few training samples.
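    The dynamic-programming core of dynamic time warping can be sketched in one dimension; the paper's tensorial variant generalizes this to time-frequency (articulation-index) frames, which this sketch omits:

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Classic O(len(a)*len(b)) dynamic time warping with an
        absolute-difference local cost and unit step pattern."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    template = [0, 1, 2, 3, 2, 1, 0]
    stretched = [0, 1, 1, 2, 3, 3, 2, 1, 0, 0]   # time-warped copy
    d_same = dtw_distance(template, template)
    d_warped = dtw_distance(template, stretched)  # warping absorbs the stretch
    d_other = dtw_distance(template, [4] * 7)
    ```

    It is this warping invariance that lets a clean template be learned from a few differently-timed, corrupted examples.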

  5. Inexpensive Audio Activities: Earbud-based Sound Experiments

    NASA Astrophysics Data System (ADS)

    Allen, Joshua; Boucher, Alex; Meggison, Dean; Hruby, Kate; Vesenka, James

    2016-11-01

    Inexpensive alternatives to a number of classic introductory physics sound laboratories are presented including interference phenomena, resonance conditions, and frequency shifts. These can be created using earbuds, economical supplies such as Giant Pixie Stix® wrappers, and free software available for PCs and mobile devices. We describe two interference laboratories (beat frequency and two-speaker interference) and two resonance laboratories (quarter- and half-wavelength). Lastly, a Doppler laboratory using rotating earbuds is explained. The audio signal captured by all experiments is analyzed on free spectral analysis software and many of the experiments incorporate the unifying theme of measuring the speed of sound in air.
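    The beat-frequency laboratory rests on the fact that two tones at f1 and f2 produce an amplitude envelope oscillating at |f1 - f2|. A sketch of recovering the beat rate from the mixed signal, as the free spectral-analysis software would (tone frequencies chosen for illustration):

    ```python
    import numpy as np

    fs = 8000
    t = np.arange(2 * fs) / fs                   # two seconds of audio
    f1, f2 = 440.0, 444.0                        # the two earbud tones
    mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

    # rectify and smooth to recover the slow amplitude envelope
    kernel = np.ones(200) / 200                  # ~25 ms moving average
    envelope = np.convolve(np.abs(mix), kernel, mode='same')

    # the strongest spectral line of the mean-removed envelope sits at the
    # beat frequency |f1 - f2|
    spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
    freqs = np.fft.rfftfreq(len(envelope), d=1 / fs)
    beat_hz = freqs[np.argmax(spec)]
    ```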

  6. Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

    PubMed

    Keitel, Christian; Müller, Matthias M

    2016-05-01

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both, attending to the colour of a stimulus and its synchrony with the tone, enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.

  7. Pre-patterned ZnO nanoribbons on soft substrates for stretchable energy harvesting applications

    NASA Astrophysics Data System (ADS)

    Ma, Teng; Wang, Yong; Tang, Rui; Yu, Hongyu; Jiang, Hanqing

    2013-05-01

    Three pre-patterned ZnO nanoribbons in different configurations were studied in this paper, including (a) straight ZnO nanoribbons uniformly bonded on soft substrates that form sinusoidal buckles, (b) straight ZnO nanoribbons selectively bonded on soft substrates that form pop-up buckles, and (c) serpentine ZnO nanoribbons bonded on soft substrates via anchors. The nonlinear dynamics and random analysis were conducted to obtain the fundamental frequencies and to evaluate their performance in energy harvesting applications. We found that pop-up buckles and overhanging serpentine structures are suitable for audio frequency energy harvesting applications. Remarkably, almost unchanged fundamental natural frequency upon strain is achieved by properly patterning ZnO nanoribbons, which initiates a new and exciting direction of stretchable energy harvesting using nano-scale materials in audio frequency range.

  8. Improved Audio Reproduction System

    NASA Technical Reports Server (NTRS)

    Chang, C. S.

    1972-01-01

    Circuitry utilizing electrical feedback of instantaneous speaker coil velocity compensates for loudspeaker resonance, transient peaks and frequency drop-off so that sounds of widely varying frequencies and amplitudes can be reproduced accurately from high fidelity recordings of any variety.

  9. Volterra model of the parametric array loudspeaker operating at ultrasonic frequencies.

    PubMed

    Shi, Chuang; Kajikawa, Yoshinobu

    2016-11-01

    The parametric array loudspeaker (PAL) is an application of the parametric acoustic array in air, which can be applied to transmit a narrow audio beam from an ultrasonic emitter. However, nonlinear distortion is very perceptible in the audio beam. Modulation methods to reduce the nonlinear distortion are available for on-axis far-field applications. For other applications, preprocessing techniques are still lacking. In order to develop a preprocessing technique with general applicability to a wide range of operating conditions, the Volterra filter is investigated as a nonlinear model of the PAL in this paper. Limitations of the standard audio-to-audio Volterra filter are elaborated. An improved ultrasound-to-ultrasound Volterra filter is proposed and empirically demonstrated to be a more generic Volterra model of the PAL.

  10. Audio-guided audiovisual data segmentation, indexing, and retrieval

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1998-12-01

    While current approaches for video segmentation and indexing are mostly focused on visual information, audio signals may actually play a primary role in video content parsing. In this paper, we present an approach for automatic segmentation, indexing, and retrieval of audiovisual data, based on audio content analysis. The accompanying audio signal of audiovisual data is first segmented and classified into basic types, i.e., speech, music, environmental sound, and silence. This coarse-level segmentation and indexing step is based upon morphological and statistical analysis of several short-term features of the audio signals. Then, environmental sounds are classified into finer classes, such as applause, explosions, bird sounds, etc. This fine-level classification and indexing step is based upon time-frequency analysis of audio signals and the use of the hidden Markov model as the classifier. On top of this archiving scheme, an audiovisual data retrieval system is proposed. Experimental results show that the proposed approach has an accuracy rate higher than 90 percent for the coarse-level classification, and higher than 85 percent for the fine-level classification. Examples of audiovisual data segmentation and retrieval are also provided.
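    Two of the short-term features commonly used for this kind of coarse classification are frame energy and zero-crossing rate. A toy sketch (the thresholds and decision rule are illustrative assumptions, not the paper's classifier):

    ```python
    import numpy as np

    def short_term_features(x, frame=400):
        """Frame-wise RMS energy and zero-crossing rate."""
        frames = x[:len(x) // frame * frame].reshape(-1, frame)
        energy = np.sqrt((frames ** 2).mean(axis=1))
        zcr = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)
        return energy, zcr

    def coarse_label(x, frame=400):
        """Toy rule: very low energy -> silence; high zero-crossing rate
        -> noise-like (e.g. unvoiced/environmental); otherwise tonal."""
        energy, zcr = short_term_features(x, frame)
        if energy.mean() < 0.01:
            return "silence"
        return "noise-like" if zcr.mean() > 0.3 else "tonal"

    t = np.arange(8000) / 8000
    label_silence = coarse_label(np.zeros(8000))
    label_tone = coarse_label(np.sin(2 * np.pi * 300 * t))
    label_noise = coarse_label(np.random.default_rng(0).standard_normal(8000))
    ```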

  11. Acoustic analysis of speech under stress.

    PubMed

    Sondhi, Savita; Khan, Munna; Vijay, Ritu; Salhan, Ashok K; Chouhan, Satish

    2015-01-01

    When a person is emotionally charged, stress can be discerned in his voice. This paper presents a simplified and non-invasive approach to detect psycho-physiological stress by monitoring the acoustic modifications during a stressful conversation. The voice database consists of audio clips from eight different popular FM broadcasts wherein the host of the show vexes the subjects, who are otherwise unaware of the charade. The audio clips are obtained from real-life stressful conversations (no simulated emotions). Analysis is done using PRAAT software to evaluate mean fundamental frequency (F0) and formant frequencies (F1, F2, F3, F4) in both the neutral and stressed states. Results suggest that F0 increases with stress, whereas formant frequencies decrease with stress. Comparison of Fourier and chirp spectra of short vowel segments shows that for relaxed speech the two spectra are similar, but for stressed speech they differ in the high frequency range due to increased pitch modulation.
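    A simple stand-in for the F0 measurement is an autocorrelation pitch estimate searched over a speech-typical range (this is a simplified sketch, not PRAAT's actual algorithm):

    ```python
    import numpy as np

    def estimate_f0(x, fs, fmin=75.0, fmax=500.0):
        """Pick the autocorrelation peak whose lag corresponds to a
        fundamental frequency between fmin and fmax."""
        x = x - x.mean()
        ac = np.correlate(x, x, mode='full')[len(x) - 1:]
        lo, hi = int(fs / fmax), int(fs / fmin)
        lag = lo + np.argmax(ac[lo:hi])
        return fs / lag

    # sanity check on a synthetic 200 Hz "voiced" tone
    fs = 8000
    t = np.arange(fs) / fs
    f0_hat = estimate_f0(np.sin(2 * np.pi * 200 * t), fs)
    ```

    Comparing such F0 estimates between the neutral and vexed portions of a clip is the kind of measurement the study automates with PRAAT.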

  12. 47 CFR 11.31 - EAS protocol.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... End Of Message (EOM) Codes. (1) The Preamble and EAS Codes must use Audio Frequency Shift Keying at a rate of 520.83 bits per second to transmit the codes. Mark frequency is 2083.3 Hz and space frequency... Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL EMERGENCY ALERT SYSTEM (EAS) Equipment Requirements § 11...
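    The AFSK parameters quoted in the rule (520.83 bits per second, mark frequency 2083.3 Hz; the space frequency, truncated in this excerpt, is 1562.5 Hz in the full text of 47 CFR 11.31) can be sketched as a phase-continuous modulator and correlation demodulator. The sample rate and detector are this sketch's assumptions:

    ```python
    import numpy as np

    FS = 25000          # sketch sample rate (arbitrary choice)
    BAUD = 520.83       # bits per second, per the rule
    MARK = 2083.3       # Hz, per the rule (exactly 4 cycles per bit)
    SPACE = 1562.5      # Hz, per the full rule text (3 cycles per bit)

    def afsk_encode(bits):
        """Phase-continuous AFSK: integrate the instantaneous frequency so
        the phase never jumps at bit boundaries."""
        n = round(FS / BAUD)                         # ~48 samples per bit
        freq = np.repeat([MARK if b else SPACE for b in bits], n)
        phase = 2 * np.pi * np.cumsum(freq) / FS
        return np.sin(phase)

    def afsk_decode(sig):
        """Quadrature correlation per bit interval: compare mark vs space power."""
        n = round(FS / BAUD)
        t = np.arange(n) / FS
        refs = {f: (np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t))
                for f in (MARK, SPACE)}
        bits = []
        for k in range(len(sig) // n):
            chunk = sig[k * n:(k + 1) * n]
            power = {f: (chunk @ c) ** 2 + (chunk @ s) ** 2
                     for f, (c, s) in refs.items()}
            bits.append(1 if power[MARK] > power[SPACE] else 0)
        return bits

    bits = [1, 0, 1, 1, 0, 0, 1]
    recovered = afsk_decode(afsk_encode(bits))
    ```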

  13. 47 CFR 11.31 - EAS protocol.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... End Of Message (EOM) Codes. (1) The Preamble and EAS Codes must use Audio Frequency Shift Keying at a rate of 520.83 bits per second to transmit the codes. Mark frequency is 2083.3 Hz and space frequency... Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL EMERGENCY ALERT SYSTEM (EAS) Equipment Requirements § 11...

  14. 47 CFR 25.144 - Licensing provisions for the 2.3 GHz satellite digital audio radio service.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... (CONTINUED) COMMON CARRIER SERVICES SATELLITE COMMUNICATIONS Applications and Licenses Space Stations § 25... frequencies and emission designators of such communications, and the frequencies and emission designators used... repeaters will communicate, the frequencies and emission designators of such communications, and the...

  15. 47 CFR 25.144 - Licensing provisions for the 2.3 GHz satellite digital audio radio service.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... (CONTINUED) COMMON CARRIER SERVICES SATELLITE COMMUNICATIONS Applications and Licenses Space Stations § 25... of such communications, and the frequencies and emission designators used by the repeaters to re..., the frequencies and emission designators of such communications, and the frequencies and emission...

  16. 47 CFR 25.144 - Licensing provisions for the 2.3 GHz satellite digital audio radio service.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... (CONTINUED) COMMON CARRIER SERVICES SATELLITE COMMUNICATIONS Applications and Licenses Space Stations § 25... frequencies and emission designators of such communications, and the frequencies and emission designators used... repeaters will communicate, the frequencies and emission designators of such communications, and the...

  17. 47 CFR 11.31 - EAS protocol.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... End Of Message (EOM) Codes. (1) The Preamble and EAS Codes must use Audio Frequency Shift Keying at a rate of 520.83 bits per second to transmit the codes. Mark frequency is 2083.3 Hz and space frequency... Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL EMERGENCY ALERT SYSTEM (EAS) Equipment Requirements § 11...

  18. 47 CFR 11.31 - EAS protocol.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... End Of Message (EOM) Codes. (1) The Preamble and EAS Codes must use Audio Frequency Shift Keying at a rate of 520.83 bits per second to transmit the codes. Mark frequency is 2083.3 Hz and space frequency... Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL EMERGENCY ALERT SYSTEM (EAS) Equipment Requirements § 11...

  19. 47 CFR 25.144 - Licensing provisions for the 2.3 GHz satellite digital audio radio service.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... (CONTINUED) COMMON CARRIER SERVICES SATELLITE COMMUNICATIONS Applications and Licenses Space Stations § 25... frequencies and emission designators of such communications, and the frequencies and emission designators used... repeaters will communicate, the frequencies and emission designators of such communications, and the...

  20. 47 CFR 11.31 - EAS protocol.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... End Of Message (EOM) Codes. (1) The Preamble and EAS Codes must use Audio Frequency Shift Keying at a rate of 520.83 bits per second to transmit the codes. Mark frequency is 2083.3 Hz and space frequency... Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL EMERGENCY ALERT SYSTEM (EAS) Equipment Requirements § 11...

  1. 47 CFR 25.144 - Licensing provisions for the 2.3 GHz satellite digital audio radio service.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... (CONTINUED) COMMON CARRIER SERVICES SATELLITE COMMUNICATIONS Applications and Licenses Space Stations § 25... frequencies and emission designators of such communications, and the frequencies and emission designators used... repeaters will communicate, the frequencies and emission designators of such communications, and the...

  2. Challenges to the successful implementation of 3-D sound

    NASA Astrophysics Data System (ADS)

    Begault, Durand R.

    1991-11-01

    The major challenges for the successful implementation of 3-D audio systems involve minimizing reversals, intracranially heard sound, and localization error for listeners. Designers of 3-D audio systems are faced with additional challenges in data reduction and low-frequency response characteristics. The relationship of the head-related transfer function (HRTF) to these challenges is shown, along with some preliminary psychoacoustic results gathered at NASA-Ames.

  3. An Efficient Audio Watermarking Algorithm in Frequency Domain for Copyright Protection

    NASA Astrophysics Data System (ADS)

    Dhar, Pranab Kumar; Khan, Mohammad Ibrahim; Kim, Cheol-Hong; Kim, Jong-Myon

    Digital watermarking plays an important role in copyright protection of multimedia data. This paper proposes a new frequency-domain watermarking system for copyright protection of digital audio. In the proposed system, the original audio is segmented into non-overlapping frames, and watermarks are embedded into selected prominent peaks in the magnitude spectrum of each frame. Watermarks are extracted by performing the inverse of the embedding process. Simulation results indicate that the proposed watermarking system is highly robust against various kinds of attacks such as noise addition, cropping, re-sampling, re-quantization, MP3 compression, and low-pass filtering. The proposed system outperforms Cox's method in terms of imperceptibility while keeping comparable robustness, achieving SNR (signal-to-noise ratio) values ranging from 20 dB to 28 dB, in contrast to Cox's method's 14 dB to 23 dB.
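
    A minimal sketch of the peak-based embedding idea, not the authors' exact algorithm: a naive DFT locates the prominent spectral peak of one frame and scales its magnitude. The frame size, scaling factor, and peak-selection rule below are illustrative assumptions:

```python
import cmath, math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def embed_bit(frame, bit, alpha=0.2):
    """Scale the magnitude of the frame's prominent spectral peak up (bit 1)
    or down (bit 0). alpha and the peak rule are illustrative assumptions."""
    X = dft(frame)
    N = len(X)
    k = max(range(1, N // 2), key=lambda i: abs(X[i]))  # skip DC and Nyquist
    scale = 1 + alpha if bit else 1 - alpha
    X[k] *= scale
    X[N - k] *= scale        # mirror bin too, so the frame stays real-valued
    return idft(X), k
```

    Extraction would compare the watermarked peak magnitude against the original, the inverse of this operation.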

  4. Elastic Characterization of Concrete Materials

    NASA Astrophysics Data System (ADS)

    Guerra-Vela, Claudio; Ruiz, Abraham; Zypman, Fredy R.

    2001-03-01

    Many geographical locations share a common problem of high environmental humidity. It is thus desirable to build houses that can withstand strong water loading. In this work we study the evolution of High Performance Concrete as a function of hardening stage. The technique we use is based on the propagation of resonant audio-frequency modes of oscillation along the long axis of homemade HPC cylindrical samples. A piezoelectric transducer at one end of the rod, driven by an audio generator, excites vibrations in the sample. Off resonance, these vibrations do not propagate away from the piezoelectric site; when a resonance is reached, the vibration extends over the whole bar. A second piezoelectric is placed at the other end of the cylinder. We measure three parameters: the resonant frequency, the speed of sound, and the loss factor. To measure the resonant frequency, we connect the two piezos to an oscilloscope in x-y mode; at resonance the oscilloscope displays an ellipse and the audio generator reports the frequency. To measure the speed of sound, we excite the first piezo with a pulse and measure the delay time at the second piezo. The loss factor is extracted from the ratio of the exciting pulse to the measured one. From these parameters we calculate the Young's modulus, the area moment of inertia, and the effective density of the HPC. These quantities are measured twice a day during the 28-day hardening period.
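
    The measured quantities combine in a short calculation. The dimensions, transit time, and density below are hypothetical illustrations, and E = ρv² is the standard thin-rod longitudinal-wave relation rather than a formula quoted from the abstract:

```python
def bar_properties(length_m, delay_s, density_kg_m3):
    """Speed of sound from the pulse transit time, then the thin-rod
    relations E = rho * v**2 and f1 = v / (2L) for a free-free bar."""
    v = length_m / delay_s
    E = density_kg_m3 * v * v
    f1 = v / (2 * length_m)
    return v, E, f1

# hypothetical numbers: 0.30 m cylinder, 75 us transit, 2400 kg/m^3 concrete
v, E, f1 = bar_properties(0.30, 75e-6, 2400.0)
```

    With these assumed values the fundamental lands near 6.7 kHz, i.e. in the audio band the authors excite.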

  5. Audio-Enhanced Tablet Computers to Assess Children's Food Frequency From Migrant Farmworker Mothers.

    PubMed

    Kilanowski, Jill F; Trapl, Erika S; Kofron, Ryan M

    2013-06-01

    This study sought to improve data collection in children's food frequency surveys for non-English speaking immigrant/migrant farmworker mothers using audio-enhanced tablet computers (ATCs). We hypothesized that by using technological adaptations, we would be able to improve data capture and therefore reduce lost surveys. This Food Frequency Questionnaire (FFQ), a paper-based dietary assessment tool, was adapted for ATCs and assessed consumption of 66 food items, asking 3 questions for each food item: frequency, quantity of consumption, and serving size. The tablet-based survey was audio enhanced with each question "read" to participants, accompanied by food item images, together with an embedded short instructional video. Results indicated that respondents were able to complete the 198 questions from the 66-item FFQ on ATCs in approximately 23 minutes. Compared with paper-based FFQs, ATC-based FFQs had less missing data. Despite overall reductions in missing data by use of ATCs, respondents still appeared to have difficulty with question 2 of the FFQ. Ability to score the FFQ depended on the sections in which missing data were located. Unlike the paper-based FFQs, no ATC-based FFQs were left unscored due to the amount or location of missing data. An ATC-based FFQ was feasible and increased the ability to score this survey on children's food patterns from migrant farmworker mothers. This adapted technology may serve as an exemplar for other non-English speaking immigrant populations.

  6. ''1/f noise'' in music: Music from 1/f noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voss, R.F.; Clarke, J.

    1978-01-01

    The spectral density of fluctuations in the audio power of many musical selections and of English speech varies approximately as 1/f (f is the frequency) down to a frequency of 5 x 10^-4 Hz. This result implies that the audio-power fluctuations are correlated over all times in the same manner as ''1/f noise'' in electronic components. The frequency fluctuations of music also have a 1/f spectral density at frequencies down to the inverse of the length of the piece of music. The frequency fluctuations of English speech have a quite different behavior, with a single characteristic time of about 0.1 s, the average length of a syllable. The observations on music suggest that 1/f noise is a good choice for stochastic composition. Compositions in which the frequency and duration of each note were determined by 1/f noise sources sounded pleasing. Those generated by white-noise sources sounded too random, while those generated by 1/f^2 noise sounded too correlated.
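
    The stochastic-composition idea can be sketched with a Voss-McCartney-style 1/f generator (a later popularization of this approach, not the authors' exact procedure); the scale mapping is an illustrative assumption:

```python
import random

def voss_pink(n, octaves=8, seed=42):
    """Voss-McCartney-style 1/f sequence: row k of `octaves` random rows is
    refreshed every 2**k samples; the rows' sum has a roughly 1/f spectrum."""
    rng = random.Random(seed)
    rows = [rng.uniform(-1, 1) for _ in range(octaves)]
    out = []
    for i in range(n):
        for k in range(octaves):
            if i % (1 << k) == 0:           # slower rows change less often
                rows[k] = rng.uniform(-1, 1)
        out.append(sum(rows) / octaves)
    return out

# map the 1/f values onto scale degrees, as in stochastic composition
scale = [60, 62, 64, 65, 67, 69, 71, 72]    # C major, MIDI note numbers
melody = [scale[min(7, int((v + 1) * 4))] for v in voss_pink(32)]
```

    A white-noise source in place of `voss_pink` gives the "too random" melodies the authors describe; heavily smoothed noise gives the "too correlated" ones.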

  7. Acoustic Calibration of the Exterior Effects Room at the NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Faller, Kenneth J., II; Rizzi, Stephen A.; Klos, Jacob; Chapin, William L.; Surucu, Fahri; Aumann, Aric R.

    2010-01-01

    The Exterior Effects Room (EER) at the NASA Langley Research Center is a 39-seat auditorium built for psychoacoustic studies of aircraft community noise. The original reproduction system employed monaural playback and hence lacked sound localization capability. In an effort to more closely recreate field test conditions, a significant upgrade was undertaken to allow simulation of a three-dimensional audio and visual environment. The 3D audio system consists of 27 mid- and high-frequency satellite speakers and 4 subwoofers, driven by a real-time audio server running an implementation of Vector Base Amplitude Panning. The audio server is part of a larger simulation system, which controls the audio and visual presentation of recorded and synthesized aircraft flyovers. The focus of this work is on the calibration of the 3D audio system, including gains used in the amplitude panning algorithm, speaker equalization, and absolute gain control. Because the speakers are installed in an irregularly shaped room, the speaker equalization includes time delay and gain compensation due to different mounting distances from the focal point, filtering for color compensation due to different installations (half space, corner, baffled/unbaffled), and crossover filtering.
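
    For a single 2-D speaker pair, Vector Base Amplitude Panning reduces to inverting the speaker-direction matrix; this is a textbook sketch of the principle, not NASA's implementation:

```python
import math

def vbap2d(src_deg, spk1_deg, spk2_deg):
    """2-D VBAP for one speaker pair: solve p = g1*l1 + g2*l2 for the gains
    by inverting the 2x2 matrix of speaker unit vectors, then normalize."""
    p = (math.cos(math.radians(src_deg)), math.sin(math.radians(src_deg)))
    l1 = (math.cos(math.radians(spk1_deg)), math.sin(math.radians(spk1_deg)))
    l2 = (math.cos(math.radians(spk2_deg)), math.sin(math.radians(spk2_deg)))
    det = l1[0] * l2[1] - l1[1] * l2[0]
    g1 = (p[0] * l2[1] - p[1] * l2[0]) / det    # Cramer's rule
    g2 = (l1[0] * p[1] - l1[1] * p[0]) / det
    norm = math.hypot(g1, g2)                   # constant-power normalization
    return g1 / norm, g2 / norm
```

    A source midway between two speakers gets equal gains of 1/√2; a source at a speaker direction gets that speaker alone, which is what makes the panning "vector base".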

  8. Sharing the 620-790 MHz band allocated to terrestrial television with an audio-bandwidth social service satellite system

    NASA Technical Reports Server (NTRS)

    Smith, E. K.; Reinhart, E. E.

    1977-01-01

    A study was carried out to identify the optimum uplink and downlink frequencies for audio-bandwidth channels for use by a satellite system distributing social services. The study considered functional-user-need models for five types of social services and identified a general baseline system that is appropriate for most of them. Technical aspects and costs of this system and of the frequency bands that it might use were reviewed, leading to the identification of the 620-790 MHz band as a preferred candidate for both uplink and downlink transmissions for nonmobile applications. The study also led to some ideas as to how to configure the satellite system.

  9. A practical, low-noise coil system for magnetotellurics

    USGS Publications Warehouse

    Stanley, William D.; Tinkler, Richard D.

    1983-01-01

    Magnetotellurics is a geophysical technique that was developed by Cagniard (1953) and Tikhonov (1950) and later refined by other scientists worldwide. The technique is a method of electromagnetic sounding of the Earth and is based upon the skin depth effect in conductive media. The electric and magnetic fields arising from natural sources are measured at the surface of the earth over broad frequency bands. An excellent review of the technique is provided in the paper by Vozoff (1972). The sources of the natural fields are found in two basic mechanisms. At frequencies above a few hertz, most of the energy arises from lightning in thunderstorm belts around the equatorial regions. This energy is propagated in a waveguide formed by the earth-ionospheric cavity. Energy levels are higher at fundamental modes for this cavity, but sufficient energy exists over most of the audio range to be useful for sounding at these frequencies, in which case the technique is generally referred to as audio-magnetotellurics or AMT. At frequencies lower than audio, and in general below 1 Hz, the source of naturally occurring electromagnetic energy is found in ionospheric currents. Current systems flowing in the ionosphere generate EM waves which can be used in sounding of the earth. These fields generate a relatively complete spectrum of electromagnetic energy that extends from around 1 Hz to periods of one day. Figure 1 shows an amplitude spectrum characteristic of both the ionospheric and lightning sources, covering a frequency range from 0.0001 Hz to 1000 Hz. It can be seen that there is a minimum in signal levels that occurs at about 1 Hz, in the gap between the two sources, and that signal level increases with a decrease in frequency.
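
    The skin-depth effect underlying the sounding can be written down directly; this is the standard relation for a uniform conductive half-space, background rather than a formula from the paper:

```python
import math

MU0 = 4e-7 * math.pi                 # vacuum permeability, H/m

def skin_depth_m(resistivity_ohm_m, freq_hz):
    """EM skin depth in a uniform conductive half-space:
    delta = sqrt(2 * rho / (mu0 * omega)), approx. 503 * sqrt(rho / f) m."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 * resistivity_ohm_m / (MU0 * omega))
```

    For 100 ohm-m ground the depth is about 5 km at 1 Hz but only about 160 m at 1 kHz, which is why AMT probes shallower structure than long-period MT.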

  10. There's More to Groove than Bass in Electronic Dance Music: Why Some People Won't Dance to Techno.

    PubMed

    Wesolowski, Brian C; Hofmann, Alex

    2016-01-01

    The purpose of this study was to explore the relationship between audio descriptors for groove-based electronic dance music (EDM) and raters' perceived cognitive, affective, and psychomotor responses. From 198 musical excerpts (length: 15 sec.) representing 11 subgenres of EDM, 19 low-level audio feature descriptors were extracted. A principal component analysis of the feature vectors indicated that the musical excerpts could effectively be classified using five complex measures, describing the rhythmical properties of: (a) the high-frequency band, (b) the mid-frequency band, and (c) the low-frequency band, as well as overall fluctuations in (d) dynamics, and (e) timbres. Using these five complex audio measures, four meaningful clusters of the EDM excerpts emerged with distinct musical attributes comprising music with: (a) isochronous bass and static timbres, (b) isochronous bass with fluctuating dynamics and rhythmical variations in the mid-frequency range, (c) non-isochronous bass and fluctuating timbres, and (d) non-isochronous bass with rhythmical variations in the high frequencies. Raters (N = 99) were each asked to respond to four musical excerpts using a four-point Likert-type scale consisting of items representing cognitive (n = 9), affective (n = 9), and psychomotor (n = 3) domains. Musical excerpts falling under the cluster of "non-isochronous bass with rhythmical variations in the high frequencies" demonstrated the overall highest composite scores as evaluated by the raters. Musical samples falling under the cluster of "isochronous bass with static timbres" demonstrated the overall lowest composite scores as evaluated by the raters. Moreover, music preference was shown to significantly affect the systematic patterning of raters' responses for those with a musical preference for "contemporary" music, "sophisticated" music, and "intense" music.

  11. Aeronautical audio broadcasting via satellite

    NASA Technical Reports Server (NTRS)

    Tzeng, Forrest F.

    1993-01-01

    A system design for aeronautical audio broadcasting, with C-band uplink and L-band downlink, via Inmarsat space segments is presented. Near-transparent-quality compression of 5-kHz bandwidth audio at 20.5 kbit/s is achieved based on a hybrid technique employing linear predictive modeling and transform-domain residual quantization. Concatenated Reed-Solomon/convolutional codes with quadrature phase shift keying are selected for bandwidth and power efficiency. An RF bandwidth of 25 kHz per channel and a decoded bit error rate of 10^-6 at an Eb/N0 of 3.75 dB are obtained. An interleaver, scrambler, modem synchronization, and frame format were designed, and frequency-division multiple access was selected over code-division multiple access. A link budget computation based on a worst-case scenario indicates sufficient system power margins. Transponder occupancy analysis for 72 audio channels demonstrates ample remaining capacity to accommodate emerging aeronautical services.

  12. The roar of Yasur: Handheld audio recorder monitoring of Vanuatu volcanic vent activity

    NASA Astrophysics Data System (ADS)

    Lorenz, Ralph D.; Turtle, Elizabeth P.; Howell, Robert; Radebaugh, Jani; Lopes, Rosaly M. C.

    2016-08-01

    We describe how near-field audio recording using a pocket digital sound recorder can usefully document volcanic activity, demonstrating the approach at Yasur, Vanuatu in May 2014. Prominent emissions peak at 263 Hz, interpreted as an organ-pipe mode. High-pass filtering was found to usefully discriminate volcano vent noise from wind noise, and autocorrelation of the high pass acoustic power reveals a prominent peak in exhalation intervals of 2.5, 4 and 8 s, with a number of larger explosive events at 200 s intervals. We suggest that this compact and inexpensive audio instrumentation can usefully supplement other field monitoring such as seismic or infrasound. A simple estimate of acoustic power interpreted with a dipole jet noise model yielded vent velocities too low to be compatible with pyroclast emission, suggesting difficulties with this approach at audio frequencies (perhaps due to acoustic absorption by volcanic gases).
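
    The high-pass-then-autocorrelate processing can be sketched on a synthetic acoustic-power envelope; the sampling rate and puff pattern below are illustrative stand-ins, not the Yasur data:

```python
def autocorr_peak_lag(x, min_lag, max_lag):
    """Lag of the autocorrelation maximum over [min_lag, max_lag]."""
    mean = sum(x) / len(x)
    xc = [v - mean for v in x]
    def r(lag):
        return sum(xc[i] * xc[i + lag] for i in range(len(xc) - lag))
    return max(range(min_lag, max_lag + 1), key=r)

# synthetic acoustic-power envelope at 10 samples/s: a "puff" every 2.5 s,
# mimicking the shortest exhalation interval reported above
fs = 10
env = [1.0 if i % 25 == 0 else 0.0 for i in range(400)]
lag = autocorr_peak_lag(env, 5, 100)
interval_s = lag / fs
```

    On real recordings the envelope would first be high-pass filtered to suppress wind noise, as the authors describe, before the autocorrelation step.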

  13. Fall Detection Using Smartphone Audio Features.

    PubMed

    Cheffena, Michael

    2016-07-01

    An automated fall detection system based on smartphone audio features is developed. The spectrogram, mel-frequency cepstral coefficients (MFCCs), linear predictive coding (LPC), and matching pursuit (MP) features of different fall and no-fall sound events are extracted from experimental data. Based on the extracted audio features, four machine learning classifiers (k-nearest neighbor (k-NN), support vector machine (SVM), least squares method (LSM), and artificial neural network (ANN)) are investigated for distinguishing between fall and no-fall events. For each audio feature, the performance of each classifier in terms of sensitivity, specificity, accuracy, and computational complexity is evaluated. The best performance is achieved using spectrogram features with the ANN classifier, with sensitivity, specificity, and accuracy all above 98%. The classifier also has acceptable computational requirements for training and testing. The system is applicable in home environments where the phone is placed in the vicinity of the user.

  14. Methodology for rheological testing of engineered biomaterials at low audio frequencies

    NASA Astrophysics Data System (ADS)

    Titze, Ingo R.; Klemuk, Sarah A.; Gray, Steven

    2004-01-01

    A commercial rheometer (Bohlin CVO120) was used to mechanically test materials that approximate vocal-fold tissues. Application is to frequencies in the low audio range (20-150 Hz). Because commercial rheometers are not specifically designed for this frequency range, a primary problem is maintaining accuracy up to (and beyond) the mechanical resonance frequency of the rotating shaft assembly. A standard viscoelastic material (NIST SRM 2490) has been used to calibrate the rheometric system for an expanded frequency range. Mathematically predicted response curves are compared to measured response curves, and an error analysis is conducted to determine the accuracy to which the elastic modulus and the shear modulus can be determined in the 20-150-Hz region. Results indicate that the inertia of the rotating assembly and the gap between the plates need to be known (or determined empirically) to a high precision when the measurement frequency exceeds the resonant frequency. In addition, a phase correction is needed to account for the magnetic inertia (inductance) of the drag cup motor. Uncorrected, the measured phase can go below the theoretical limit of -π. This can produce large errors in the viscous modulus near and above the resonance frequency. With appropriate inertia and phase corrections, +/-10% accuracy can be obtained up to twice the resonance frequency.

  15. Applications of ENF criterion in forensic audio, video, computer and telecommunication analysis.

    PubMed

    Grigoras, Catalin

    2007-04-11

    This article reports on the electric network frequency (ENF) criterion as a means of assessing the integrity of digital audio/video evidence in forensic IT and telecommunication analysis. A brief description is given of the different ENF types and of the phenomena that determine ENF variations. In most situations, visual inspection of spectrograms and comparison with an ENF database are enough to reach a non-authenticity opinion. A more detailed investigation, in the time domain, requires short-time-window measurements and analyses. The stability of the ENF over geographical distances has been established by comparison of synchronized recordings made at different locations on the same network. Real cases are presented in which the ENF criterion was used to investigate audio and video files created with secret surveillance systems, a digitized audio/video recording, and a TV broadcast report. By applying the ENF criterion in forensic audio/video analysis, one can determine whether and where a digital recording has been edited, establish whether it was made at the time claimed, and identify the time and date of the recording operation.
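
    A minimal illustration of ENF measurement on a synthetic mains hum. Zero-crossing estimation is one simple time-domain approach; the article's spectrogram-based workflow is more elaborate:

```python
import math

def zc_frequency(x, fs):
    """Frequency of a near-sinusoidal signal from positive-going zero
    crossings, with linear interpolation between samples."""
    crossings = []
    for i in range(len(x) - 1):
        if x[i] <= 0.0 < x[i + 1]:
            frac = -x[i] / (x[i + 1] - x[i])      # sub-sample crossing point
            crossings.append((i + frac) / fs)
    periods = [b - a for a, b in zip(crossings, crossings[1:])]
    return len(periods) / sum(periods)            # mean frequency over the window

# synthetic mains hum drifting slightly above a nominal 50 Hz
fs = 8000
hum = [math.sin(2 * math.pi * 50.03 * n / fs) for n in range(2 * fs)]
enf = zc_frequency(hum, fs)
```

    Tracking such estimates over successive windows and comparing the trace against a reference ENF database is the core of the criterion.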

  16. Audio-Enhanced Tablet Computers to Assess Children’s Food Frequency From Migrant Farmworker Mothers

    PubMed Central

    Kilanowski, Jill F.; Trapl, Erika S.; Kofron, Ryan M.

    2014-01-01

    This study sought to improve data collection in children's food frequency surveys for non-English speaking immigrant/migrant farmworker mothers using audio-enhanced tablet computers (ATCs). We hypothesized that by using technological adaptations, we would be able to improve data capture and therefore reduce lost surveys. This Food Frequency Questionnaire (FFQ), a paper-based dietary assessment tool, was adapted for ATCs and assessed consumption of 66 food items, asking 3 questions for each food item: frequency, quantity of consumption, and serving size. The tablet-based survey was audio enhanced with each question "read" to participants, accompanied by food item images, together with an embedded short instructional video. Results indicated that respondents were able to complete the 198 questions from the 66-item FFQ on ATCs in approximately 23 minutes. Compared with paper-based FFQs, ATC-based FFQs had less missing data. Despite overall reductions in missing data by use of ATCs, respondents still appeared to have difficulty with question 2 of the FFQ. Ability to score the FFQ depended on the sections in which missing data were located. Unlike the paper-based FFQs, no ATC-based FFQs were left unscored due to the amount or location of missing data. An ATC-based FFQ was feasible and increased the ability to score this survey on children's food patterns from migrant farmworker mothers. This adapted technology may serve as an exemplar for other non-English speaking immigrant populations. PMID:25343004

  17. Spatial filtering of audible sound with acoustic landscapes

    NASA Astrophysics Data System (ADS)

    Wang, Shuping; Tao, Jiancheng; Qiu, Xiaojun; Cheng, Jianchun

    2017-07-01

    Acoustic metasurfaces manipulate waves with specially designed structures and achieve properties that natural materials cannot offer. Similar surfaces work in the audio frequency range as well and lead to marvelous acoustic phenomena that can be perceived by human ears. Intrigued by the famous Maoshan Bugle phenomenon, we investigate large-scale metasurfaces consisting of periodic steps of sizes comparable to audio wavelengths, in both the time and space domains. We propose a theoretical method to calculate the scattered sound field and find that periodic corrugated surfaces act as spatial filters whose frequency-selective character can only be observed on the same side as the incident wave. The Maoshan Bugle phenomenon is well explained by this method. Finally, we demonstrate that the proposed method can be used to design acoustic landscapes, which transform impulsive sound into famous trumpet solos or other melodious sounds.

  18. Subjective evaluation and electroacoustic theoretical validation of a new approach to audio upmixing

    NASA Astrophysics Data System (ADS)

    Usher, John S.

    Audio signal processing systems for converting two-channel (stereo) recordings to four or five channels are increasingly relevant. These audio upmixers can be used with conventional stereo sound recordings and reproduced with multichannel home theatre or automotive loudspeaker audio systems to create a more engaging and natural-sounding listening experience. This dissertation discusses existing approaches to audio upmixing for recordings of musical performances and presents specific design criteria for a system to enhance spatial sound quality. A new upmixing system is proposed and evaluated according to these criteria, and a theoretical model of its behavior is validated using empirical measurements. The new system removes short-term correlated components from two electronic audio signals using a pair of adaptive filters, updated according to a frequency-domain implementation of the normalized least-mean-squares (NLMS) algorithm. The major difference between the new system and all extant audio upmixers is that unsupervised time alignment of the input signals (typically by up to +/-10 ms) as a function of frequency (typically using a 1024-band equalizer) is accomplished by the non-minimum-phase adaptive filter. Two new signals are created from the weighted difference of the inputs, and are then radiated with two loudspeakers behind the listener. According to the consensus in the literature on the effect of interaural correlation on auditory image formation, the self-orthogonalizing properties of the algorithm ensure minimal distortion of the frontal source imagery and natural-sounding, enveloping reverberance (ambiance) imagery. Performance evaluation of the new upmix system was accomplished in two ways: first, using empirical electroacoustic measurements that validate a theoretical model of the system; and second, with formal listening tests that investigated auditory spatial imagery with a graphical mapping tool and a preference experiment.
Both electroacoustic and subjective methods investigated system performance with a variety of test stimuli for solo musical performances reproduced using a loudspeaker in an orchestral concert-hall and recorded using different microphone techniques. The objective and subjective evaluations combined with a comparative study with two commercial systems demonstrate that the proposed system provides a new, computationally practical, high sound quality solution to upmixing.
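
    A time-domain NLMS sketch of the correlated-component removal described above (the dissertation uses a frequency-domain, non-minimum-phase variant; the filter length, step size, and toy signals below are assumptions):

```python
import random

def nlms(x, d, taps=8, mu=0.5, eps=1e-8):
    """Time-domain NLMS: adapt w so that w * x predicts d; the residual
    e = d - w * x is the decorrelated part (the 'ambience' signal)."""
    w = [0.0] * taps
    buf = [0.0] * taps
    residual = []
    for n in range(len(x)):
        buf = [x[n]] + buf[:-1]                       # newest sample first
        y = sum(wi * bi for wi, bi in zip(w, buf))    # filter output
        e = d[n] - y                                  # prediction error
        norm = sum(b * b for b in buf) + eps          # input-power normalization
        w = [wi + mu * e * bi / norm for wi, bi in zip(w, buf)]
        residual.append(e)
    return w, residual

# toy channel pair: d is a short FIR filtering of x, so it is fully
# predictable and the residual should shrink toward zero
rng = random.Random(0)
x = [rng.uniform(-1, 1) for _ in range(4000)]
d = [0.6 * x[n] + 0.3 * (x[n - 1] if n else 0.0) for n in range(len(x))]
w, e = nlms(x, d)
```

    In the upmixer, the residuals of the two channels are weighted and routed to the rear loudspeakers as the ambience signals.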

  19. Low-cost mm-wave Doppler/FMCW transceivers for ground surveillance applications

    NASA Astrophysics Data System (ADS)

    Hansen, H. J.; Lindop, R. W.; Majstorovic, D.

    2005-12-01

    A 35 GHz Doppler CW/FMCW transceiver (equivalent radiated power ERP = 30 dBm) has been assembled and its operation described. Both instantaneous beat signals (relating to range in FMCW mode) and Doppler signals (relating to targets moving at ~1.5 m/s) fall at audio frequencies. Consequently, the radar processing is performed by a laptop PC using its built-in video-audio media system with appropriate MathWorks software. The implications of radar-on-chip developments are addressed.
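
    The claim that the beat and Doppler signals fall at audio frequencies follows from the standard radar relations. The FMCW sweep parameters below are hypothetical, since the abstract gives only the carrier frequency and target speed:

```python
C = 3.0e8                                    # speed of light, m/s

def doppler_hz(v_ms, f_carrier_hz):
    """Two-way Doppler shift for a closing target."""
    return 2.0 * v_ms * f_carrier_hz / C

def fmcw_beat_hz(range_m, sweep_bw_hz, sweep_time_s):
    """FMCW beat frequency: round-trip delay times the sweep slope."""
    return 2.0 * range_m * sweep_bw_hz / (C * sweep_time_s)

fd = doppler_hz(1.5, 35e9)                   # walking-pace target at 35 GHz
fb = fmcw_beat_hz(150.0, 50e6, 0.01)         # assumed 50 MHz sweep in 10 ms
```

    Both results (350 Hz and a few kHz) sit comfortably in the laptop sound card's passband, which is what makes the audio-input processing trick work.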

  20. External unit for a semi-implantable middle ear hearing device.

    PubMed

    Garverick, S L; Kane, M; Ko, W H; Maniglia, A J

    1997-06-01

    A miniaturized, low-power external unit has been developed for the clinical trials of a semi-implantable middle ear electromagnetic hearing device (SIMEHD) which uses radio-frequency telemetry to couple sound signals to the internal unit. The external unit is based on a commercial hearing aid which provides proven audio amplification and compression. Its receiver is replaced by an application-specific integrated circuit (ASIC) which: 1) adjusts the direct-current bias of the audio input according to its peak value; 2) converts the audio signal to a one-bit digital form using sigma-delta modulation; 3) modulates the sigma-delta output with a radio-frequency (RF) oscillator; and 4) drives the external RF coil and tuning capacitor using a field-effect transistor operated in class D. The external unit functions as expected and has been used to run bench-top tests of the SIMEHD. Measured current consumption is 1.65-2.15 mA, which projects to a battery lifetime of about 15 days. Bandwidth is 6 kHz and harmonic distortion is about 2%.
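
    Step 2 above, sigma-delta modulation, can be illustrated with a first-order software model (the ASIC's actual modulator order and clock rates are not given in the abstract):

```python
def sigma_delta(x):
    """First-order sigma-delta modulator: integrate the error between the
    input and the fed-back 1-bit output; the density of 1s tracks the input."""
    integ, fb = 0.0, 0.0
    bits = []
    for v in x:                      # v expected in [-1, 1]
        integ += v - fb              # accumulate quantization error
        bit = 1 if integ >= 0 else 0
        fb = 1.0 if bit else -1.0    # 1-bit DAC feedback
        bits.append(bit)
    return bits

bits = sigma_delta([0.5] * 1000)     # DC input 0.5 -> about 75% ones
```

    The one-bit stream is what the RF oscillator then keys onto the telemetry link; the internal unit recovers the audio by low-pass filtering the bit density.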

  1. Authenticity examination of compressed audio recordings using detection of multiple compression and encoders' identification.

    PubMed

    Korycki, Rafal

    2014-05-01

    Since the appearance of digital audio recordings, audio authentication has been becoming increasingly difficult. The currently available technologies and free editing software allow a forger to cut or paste any single word without audible artifacts. Nowadays, the only method for digital audio files commonly approved by forensic experts is the ENF criterion. It consists of fluctuation analysis of the mains frequency induced in the electronic circuits of recording devices. Therefore, its effectiveness depends strictly on the presence of the mains signal in the recording, which is a rare occurrence. Recently, much attention has been paid to authenticity analysis of compressed multimedia files, and several solutions have been proposed for detection of double compression in both digital video and digital audio. This paper addresses the problem of tampering detection in compressed audio files and discusses new methods that can be used for authenticity analysis of digital recordings. The presented approaches consist of evaluating statistical features extracted from the MDCT coefficients as well as other parameters that may be obtained from compressed audio files. Calculated feature vectors are used to train selected machine learning algorithms. The detection of multiple compression covers tampering activities as well as identification of traces of montage in digital audio recordings. To enhance the methods' robustness, an encoder identification algorithm was developed and applied, based on analysis of inherent compression parameters. The effectiveness of the tampering detection algorithms is tested on a predefined large music database consisting of nearly one million compressed audio files. The influence of compression algorithms' parameters on classification performance is discussed based on the results of the current study. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. Effect of tape recording on perturbation measures.

    PubMed

    Jiang, J; Lin, E; Hanson, D G

    1998-10-01

    Tape recorders have been shown to affect measures of voice perturbation. Few studies, however, have been conducted to quantitatively justify the use or exclusion of certain types of recorders in voice perturbation studies. This study used sinusoidal and triangular waves and synthesized vowels to compare perturbation measures extracted from directly digitized signals with those recorded and played back through various tape recorders, including 3 models of digital audio tape recorders, 2 models of analog audio cassette tape recorders, and 2 models of video tape recorders. Signal contamination for frequency perturbation values was found to be consistently minimal with digital recorders (percent jitter = 0.01%-0.02%), mildly increased with video recorders (0.05%-0.10%), moderately increased with a high-quality analog audio cassette tape recorder (0.15%), and most prominent with a low-quality analog audio cassette tape recorder (0.24%). Recorder effect on amplitude perturbation measures was lowest in digital recorders (percent shimmer = 0.09%-0.20%), mildly to moderately increased in video recorders and a high-quality analog audio cassette tape recorder (0.25%-0.45%), and most prominent in a low-quality analog audio cassette tape recorder (0.98%). The effects of cassette tape material, length of spooled tape, and duration of analysis were also tested and are discussed.
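
    Percent jitter and shimmer figures like those quoted can be reproduced from one common definition (mean absolute consecutive difference over the mean); published perturbation measures vary between analysis packages, so treat this as illustrative:

```python
def percent_jitter(periods_s):
    """Mean absolute difference of consecutive cycle periods, as a
    percentage of the mean period (one common 'local jitter' definition)."""
    diffs = [abs(a - b) for a, b in zip(periods_s, periods_s[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(periods_s) / len(periods_s))

def percent_shimmer(amplitudes):
    """The same measure applied to cycle peak amplitudes."""
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))

j = percent_jitter([0.010, 0.0101, 0.010, 0.0101])   # alternating 10.0/10.1 ms cycles
s = percent_shimmer([1.0, 0.9, 1.0, 0.9])            # alternating peak amplitudes
```

    A recorder that adds 0.1% jitter thus perturbs a 10 ms glottal period by about 10 microseconds per cycle, which is why low-noise digital recorders matter for these measures.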

  3. Astronomical component estimation (ACE v.1) by time-variant sinusoidal modeling

    NASA Astrophysics Data System (ADS)

    Sinnesael, Matthias; Zivanovic, Miroslav; De Vleeschouwer, David; Claeys, Philippe; Schoukens, Johan

    2016-09-01

    Accurately deciphering periodic variations in paleoclimate proxy signals is essential for cyclostratigraphy. Classical spectral analysis often relies on methods based on (fast) Fourier transformation. This technique has no unique solution separating variations in amplitude and frequency. This characteristic can make it difficult to correctly interpret a proxy's power spectrum or to accurately evaluate simultaneous changes in amplitude and frequency in evolutionary analyses. This drawback is circumvented by using a polynomial approach to estimate instantaneous amplitude and frequency in orbital components. This approach was proven useful to characterize audio signals (music and speech), which are non-stationary in nature. Paleoclimate proxy signals and audio signals share similar dynamics; the only difference is the frequency relationship between the different components. A harmonic-frequency relationship exists in audio signals, whereas this relation is non-harmonic in paleoclimate signals. However, this difference is irrelevant for the problem of separating simultaneous changes in amplitude and frequency. Using an approach with overlapping analysis frames, the model (Astronomical Component Estimation, version 1: ACE v.1) captures time variations of an orbital component by modulating a stationary sinusoid centered at its mean frequency, with a single polynomial. Hence, the parameters that determine the model are the mean frequency of the orbital component and the polynomial coefficients. The first parameter depends on geologic interpretations, whereas the latter are estimated by means of linear least-squares. As output, the model provides the orbital component waveform, either in the depth or time domain. Uncertainty analyses of the model estimates are performed using Monte Carlo simulations. Furthermore, it allows for a unique decomposition of the signal into its instantaneous amplitude and frequency. 
Frequency modulation patterns reconstruct changes in accumulation rate, whereas amplitude modulation identifies eccentricity-modulated precession. The functioning of the time-variant sinusoidal model is illustrated and validated using a synthetic insolation signal. The new modeling approach is tested on two case studies: (1) a Pliocene-Pleistocene benthic δ18O record from Ocean Drilling Program (ODP) Site 846 and (2) a Danian magnetic susceptibility record from the Contessa Highway section, Gubbio, Italy.
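The core of the approach above, a stationary sinusoid at a fixed mean frequency modulated by a polynomial whose coefficients are found by linear least squares, can be sketched in a few lines. This is an illustrative reconstruction, not the published ACE v.1 code; the synthetic signal, the frequency value, and the function name are invented for the example.

```python
import numpy as np

def fit_polynomial_sinusoid(t, x, f0, degree=3):
    """Fit x(t) ~ sum_k t**k * (a_k cos(2*pi*f0*t) + b_k sin(2*pi*f0*t)).
    The polynomial coefficients enter linearly, so ordinary least squares applies."""
    cols = []
    for k in range(degree + 1):
        cols.append(t**k * np.cos(2 * np.pi * f0 * t))
        cols.append(t**k * np.sin(2 * np.pi * f0 * t))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return A @ coef

# Synthetic "orbital component": slow linear amplitude drift around f0 = 0.1 cycles/kyr
t = np.linspace(0.0, 100.0, 2000)
x = (1.0 + 0.005 * t) * np.cos(2 * np.pi * 0.1 * t + 0.3)
model = fit_polynomial_sinusoid(t, x, f0=0.1)
residual = np.max(np.abs(model - x))
```

In the paper this fit is applied in overlapping frames, and the instantaneous amplitude and frequency then follow analytically from the fitted polynomial envelope.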

  4. Hierarchical vs non-hierarchical audio indexation and classification for video genres

    NASA Astrophysics Data System (ADS)

    Dammak, Nouha; BenAyed, Yassine

    2018-04-01

In this paper, Support Vector Machines (SVMs) are used for segmenting and indexing video genres based solely on audio features extracted at block level, an approach whose prominent asset is its capture of local temporal information. The main contribution of our study is to show the strong effect of a hierarchical categorization structure, based on the Mel Frequency Cepstral Coefficient (MFCC) audio descriptor, on classification accuracy. The classification covers three common video genres: sports videos, music clips and news scenes. The sub-classification may divide each genre into several multi-speaker and multi-dialect sub-genres. The validation of this approach, carried out on over 360 minutes of video, yielded a classification accuracy of over 99%.
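The hierarchical scheme described here, a top-level genre classifier followed by one sub-genre classifier per genre, can be sketched independently of the specific learner. The classifier below is a toy nearest-centroid stand-in for the paper's SVMs, and all feature vectors and labels are invented.

```python
import numpy as np

class NearestCentroid:
    """Stand-in for the SVMs in the paper: any classifier with fit/predict works."""
    def fit(self, X, y):
        self.labels = sorted(set(y))
        self.centroids = {c: np.mean([xi for xi, yi in zip(X, y) if yi == c], axis=0)
                          for c in self.labels}
        return self

    def predict(self, x):
        return min(self.labels,
                   key=lambda c: np.linalg.norm(np.asarray(x) - self.centroids[c]))

# Toy block-level feature vectors (e.g. mean MFCCs per block); values are invented
X = [[0.0, 1.0], [0.2, 0.9], [5.0, 5.0], [5.2, 4.8], [9.0, 0.0], [9.1, 0.2]]
genres = ["news", "news", "music", "music", "sports", "sports"]

top = NearestCentroid().fit(X, genres)
# One sub-classifier per genre, trained on finer (multi-speaker/dialect) labels
subs = {"news": NearestCentroid().fit([[0.0, 1.0], [0.2, 0.9]], ["anchor", "field"]),
        "music": NearestCentroid().fit([[5.0, 5.0], [5.2, 4.8]], ["clip_a", "clip_b"]),
        "sports": NearestCentroid().fit([[9.0, 0.0], [9.1, 0.2]], ["indoor", "outdoor"])}

def classify(x):
    """Hierarchical decision: pick the genre first, then its sub-genre."""
    genre = top.predict(x)
    return genre, subs[genre].predict(x)
```

The design point is that each sub-classifier only ever sees examples of its own genre, which is what the paper credits for the accuracy gain over a flat multi-class setup.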

  5. Robust High-Capacity Audio Watermarking Based on FFT Amplitude Modification

    NASA Astrophysics Data System (ADS)

    Fallahpour, Mehdi; Megías, David

This paper proposes a novel robust audio watermarking algorithm to embed data and extract it in a bit-exact manner based on changing the magnitudes of the FFT spectrum. The key point is selecting a frequency band for embedding based on the comparison between the original and the MP3 compressed/decompressed signal and on a suitable scaling factor. The experimental results show that the method has a very high capacity (about 5 kbps), without significant perceptual distortion (ODG about -0.25), and provides robustness against common audio signal processing such as added noise, filtering and MPEG compression (MP3). Furthermore, the proposed method has a larger capacity (ratio of embedded bits to host bits) than recent image data hiding methods.
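The general idea of carrying bits in FFT magnitudes can be sketched with a simple parity-quantization rule. This is not the authors' exact embedding rule (their band selection and scaling factor come from the MP3 comparison); the quantization step `delta` and the band indices below are arbitrary illustration values.

```python
import numpy as np

def embed(signal, bits, band, delta=0.5):
    """Embed one bit per FFT bin in `band` by quantizing the magnitude so that
    its quantization index has the bit's parity (phase is left untouched)."""
    X = np.fft.rfft(signal)
    for i, bit in zip(band, bits):
        mag, ph = np.abs(X[i]), np.angle(X[i])
        q = int(np.floor(mag / delta))
        if q % 2 != bit:          # even index encodes 0, odd encodes 1
            q += 1
        X[i] = (q * delta + delta / 2) * np.exp(1j * ph)  # centre of the bin
    return np.fft.irfft(X, n=len(signal))

def extract(signal, band, delta=0.5):
    """Recover the bits from the parity of the quantized magnitudes."""
    X = np.fft.rfft(signal)
    return [int(np.floor(np.abs(X[i]) / delta)) % 2 for i in band]

rng = np.random.default_rng(0)
host = rng.standard_normal(4096)
band = list(range(200, 216))
payload = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
marked = embed(host, payload, band)
```

Placing the modified magnitude at the centre of its quantization bin is what makes blind, bit-exact extraction possible and gives a `delta/2` margin against small processing distortions.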

  6. There’s More to Groove than Bass in Electronic Dance Music: Why Some People Won’t Dance to Techno

    PubMed Central

    2016-01-01

The purpose of this study was to explore the relationship between audio descriptors for groove-based electronic dance music (EDM) and raters’ perceived cognitive, affective, and psychomotor responses. From 198 musical excerpts (length: 15 sec.) representing 11 subgenres of EDM, 19 low-level audio feature descriptors were extracted. A principal component analysis of the feature vectors indicated that the musical excerpts could effectively be classified using five complex measures, describing the rhythmical properties of (a) the high-frequency band, (b) the mid-frequency band, and (c) the low-frequency band, as well as overall fluctuations in (d) dynamics and (e) timbres. Using these five complex audio measures, four meaningful clusters of the EDM excerpts emerged with distinct musical attributes comprising music with: (a) isochronous bass and static timbres, (b) isochronous bass with fluctuating dynamics and rhythmical variations in the mid-frequency range, (c) non-isochronous bass and fluctuating timbres, and (d) non-isochronous bass with rhythmical variations in the high frequencies. Raters (N = 99) were each asked to respond to four musical excerpts using a four-point Likert-type scale consisting of items representing the cognitive (n = 9), affective (n = 9), and psychomotor (n = 3) domains. Musical excerpts falling under the cluster of “non-isochronous bass with rhythmical variations in the high frequencies” received the overall highest composite scores from the raters, whereas excerpts falling under the cluster of “isochronous bass and static timbres” received the overall lowest. Moreover, music preference was shown to significantly affect the systematic patterning of raters’ responses for those with a musical preference for “contemporary”, “sophisticated”, and “intense” music. PMID:27798645

  7. Two-tone suppression in the cricket, Eunemobius carolinus (Gryllidae, Nemobiinae)

    NASA Astrophysics Data System (ADS)

    Farris, Hamilton E.; Hoy, Ronald R.

    2002-03-01

    Sounds with frequencies >15 kHz elicit an acoustic startle response (ASR) in flying crickets (Eunemobius carolinus). Although frequencies <15 kHz do not elicit the ASR when presented alone, when presented with ultrasound (40 kHz), low-frequency stimuli suppress the ultrasound-induced startle. Thus, using methods similar to those in masking experiments, we used two-tone suppression to assay sensitivity to frequencies in the audio band. Startle suppression was tuned to frequencies near 5 kHz, the frequency range of male calling songs. Similar to equal loudness contours measured in humans, however, equal suppression contours were not parallel, as the equivalent rectangular bandwidth of suppression tuning changed with increases in ultrasound intensity. Temporal integration of suppressor stimuli was measured using nonsimultaneous presentations of 5-ms pulses of 6 and 40 kHz. We found that no suppression occurs when the suppressing tone is >2 ms after and >5 ms before the ultrasound stimulus, suggesting that stimulus overlap is a requirement for suppression. When considered together with our finding that the intensity of low-frequency stimuli required for suppression is greater than that produced by singing males, the overlap requirement suggests that two-tone suppression functions to limit the ASR to sounds containing only ultrasound and not to broadband sounds that span the audio and ultrasound range.

  8. The effects of 5.1 sound presentations on the perception of stereoscopic imagery in video games

    NASA Astrophysics Data System (ADS)

    Cullen, Brian; Galperin, Daniel; Collins, Karen; Hogue, Andrew; Kapralos, Bill

    2013-03-01

Stereoscopic 3D (S3D) content in games, film and other audio-visual media has increased steadily in recent years. However, there are still open, fundamental questions regarding its implementation, particularly as it relates to a multi-modal experience that involves sound and haptics. Research has shown that sound has considerable impact on our perception of 2D phenomena, but very little research has considered how sound may influence stereoscopic 3D. Here we present the results of an experiment that examined the effects of 5.1 surround sound (5.1) and stereo loudspeaker setups on depth perception in relation to S3D imagery within a video game environment. Our aim was to answer the question: "can 5.1 surround sound enhance the participant's perception of depth in the stereoscopic field when compared to traditional stereo sound presentations?" In addition, our study examined how the presence or absence of Doppler frequency shift and frequency fall-off audio effects can also influence depth judgment under these conditions. Results suggest that 5.1 surround sound presentations enhance the apparent depth of stereoscopic imagery when compared to stereo presentations. Results also suggest that the addition of audio effects such as Doppler shift and frequency fall-off filters can influence the apparent depth of S3D objects.
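The two depth-cue effects named above are simple to state numerically. The sketch below shows the standard Doppler formula for a source moving toward a stationary listener, plus a toy distance-dependent low-pass cutoff standing in for a "frequency fall-off" filter; the 1/d law and all parameter values are assumptions for illustration, not the study's implementation.

```python
import math

def doppler_shift(f_source, v_source, c=343.0):
    """Observed frequency for a source moving toward (+v) or away (-v) from a
    stationary listener; c is the speed of sound in m/s."""
    return f_source * c / (c - v_source)

def falloff_cutoff(distance_m, f_max=20000.0, ref_m=1.0):
    """Toy frequency fall-off: low-pass cutoff shrinking as 1/distance,
    so distant objects sound duller (an assumed, not measured, law)."""
    return f_max * ref_m / max(distance_m, ref_m)
```

For example, a 440 Hz source approaching at 20 m/s is heard at about 467 Hz, a cue that can reinforce the visual impression of an object moving toward the viewer in depth.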

  9. Fundamentals of dielectric properties measurements and agricultural applications.

    PubMed

    Nelson, Stuart O

    2010-01-01

Dielectrics and dielectric properties are defined generally, and dielectric measurement methods and equipment are described for frequency ranges from audio frequencies through microwave frequencies. These include impedance and admittance bridges; resonant-frequency, transmission-line, and free-space methods in the frequency domain; and time-domain and broadband techniques. Many references are cited that describe methods in detail and give sources of dielectric properties data. Finally, a few applications for such data are presented, and sources of tabulated dielectric properties data are identified.
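At audio frequencies, a bridge measurement of a material-filled parallel-plate cell reduces to two textbook relations, sketched below. The formulas (neglecting fringing fields) are standard; the function names are ours.

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def relative_permittivity(capacitance_f, area_m2, gap_m):
    """Real relative permittivity from a parallel-plate capacitance measurement:
    eps_r' = C * d / (eps0 * A). Fringing fields are neglected."""
    return capacitance_f * gap_m / (EPS0 * area_m2)

def loss_factor(conductance_s, freq_hz, area_m2, gap_m):
    """Dielectric loss factor from the measured parallel conductance G:
    eps_r'' = G * d / (2 * pi * f * eps0 * A)."""
    return conductance_s * gap_m / (2 * math.pi * freq_hz * EPS0 * area_m2)
```

The ratio of the two, eps''/eps', is the loss tangent usually quoted alongside tabulated dielectric data.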

  10. Vocalisation sound pattern identification in young broiler chickens.

    PubMed

    Fontana, I; Tullo, E; Scrase, A; Butterworth, A

    2016-09-01

In this study, we describe the monitoring of young broiler chicken vocalisation, with sound recorded and assessed at regular intervals throughout the life of the birds from day 1 to day 38, with a focus on the first week of life. We assess whether there are recognisable, and even predictable, vocalisation patterns based on frequency and sound spectrum analysis, which can be observed in birds at different ages and stages of growth within the relatively short life of the birds in commercial broiler production cycles. The experimental trials were carried out on a farm where the broilers were reared indoors, with audio recording procedures carried out over 38 days. The recordings were made using two microphones connected to a digital recorder, and the sonic data were collected without disturbance of the animals beyond that created by the routine activities of the farmer. Digital files of 1 h duration were cut into short files of 10 min duration, and these sound recordings were analysed and labelled using audio analysis software. Analysis of these short sound files showed that the key vocalisation frequency and patterns changed in relation to increasing age and the weight of the broilers. Statistical analysis showed a significant correlation (P<0.001) between the frequency of vocalisation and the age of the birds. Based on the identification of specific frequencies of the sounds emitted, in relation to age and weight, it is proposed that there is potential for audio monitoring and comparison with 'anticipated' sound patterns to be used to evaluate the status of farmed broiler chickens.

  11. Apparatus for providing sensory substitution of force feedback

    NASA Technical Reports Server (NTRS)

    Massimino, Michael J. (Inventor); Sheridan, Thomas B. (Inventor)

    1995-01-01

    A feedback apparatus for an operator to control an effector that is remote from the operator to interact with a remote environment has a local input device to be manipulated by the operator. Sensors in the effector's environment are capable of sensing the amplitude of forces arising between the effector and its environment, the direction of application of such forces, or both amplitude and direction. A feedback signal corresponding to such a component of the force, is generated and transmitted to the environment of the operator. The signal is transduced into an auditory sensory substitution signal to which the operator is sensitive. Sound production apparatus present the auditory signal to the operator. The full range of the force amplitude may be represented by a single, audio speaker. Auditory display elements may be stereo headphones or free standing audio speakers, numbering from one to many more than two. The location of the application of the force may also be specified by the location of audio speakers that generate signals corresponding to specific forces. Alternatively, the location may be specified by the frequency of an audio signal, or by the apparent location of an audio signal, as simulated by a combination of signals originating at different locations.
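The patent's last alternative, encoding force by the frequency of an audio signal and location by apparent position, amounts to two small mapping functions. The sketch below is one plausible realization; the logarithmic pitch mapping, the force range, and the sine-law pan are our assumptions, not the patent's specification.

```python
import math

def force_to_tone(force_newtons, f_min=200.0, f_max=2000.0, force_range=50.0):
    """Map force amplitude in [0, force_range] N to a tone frequency on a
    logarithmic scale (equal force steps give equal pitch steps)."""
    x = min(max(force_newtons / force_range, 0.0), 1.0)
    return f_min * (f_max / f_min) ** x

def direction_to_pan(angle_deg):
    """Map the force direction to a stereo pan value in [-1 (left), +1 (right)],
    a stand-in for the patent's speaker-location cue."""
    return math.sin(math.radians(angle_deg))
```

Driving a sine oscillator with these two values would give the operator a continuous auditory substitute for force feedback.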

  12. 47 CFR 90.242 - Travelers' information stations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the modulation limiter and the modulated stage. At audio frequencies between 3 kHz and 20 kHz this...' information stations. (a) The frequencies 530 through 1700 kHz in 10 kHz increments may be assigned to the... consideration of possible cross-modulation and inter-modulation interference effects which may result from the...

  13. Measurement techniques for the characterization in the frequency domain of regulated energy-storage DC-to-DC converters. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bahler, D. D.

    1978-01-01

    Procedures are presented for obtaining valid frequency-domain transfer functions of regulated reactor energy-storage dc-to-dc converters. These procedures are for measuring loop gain, closed loop gain, output impedance, and audio susceptibility. The applications of these measurements are discussed.

  14. 47 CFR 15.121 - Scanning receivers and frequency converters used with scanning receivers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... transmissions to analog voice audio. (2) Be designed so that the tuning, control and filtering circuitry is inaccessible. The design must be such that any attempts to modify the equipment to receive transmissions from... Radiotelephone Service transmissions. (e) Scanning receivers and frequency converters designed for use with...

  15. 47 CFR 15.121 - Scanning receivers and frequency converters used with scanning receivers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... transmissions to analog voice audio. (2) Be designed so that the tuning, control and filtering circuitry is inaccessible. The design must be such that any attempts to modify the equipment to receive transmissions from... Radiotelephone Service transmissions. (e) Scanning receivers and frequency converters designed for use with...

  16. 47 CFR 15.121 - Scanning receivers and frequency converters used with scanning receivers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... transmissions to analog voice audio. (2) Be designed so that the tuning, control and filtering circuitry is inaccessible. The design must be such that any attempts to modify the equipment to receive transmissions from... Radiotelephone Service transmissions. (e) Scanning receivers and frequency converters designed for use with...

  17. 47 CFR 15.121 - Scanning receivers and frequency converters used with scanning receivers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... transmissions to analog voice audio. (2) Be designed so that the tuning, control and filtering circuitry is inaccessible. The design must be such that any attempts to modify the equipment to receive transmissions from... Radiotelephone Service transmissions. (e) Scanning receivers and frequency converters designed for use with...

  18. Musical examination to bridge audio data and sheet music

    NASA Astrophysics Data System (ADS)

    Pan, Xunyu; Cross, Timothy J.; Xiao, Liangliang; Hei, Xiali

    2015-03-01

    The digitalization of audio is commonly implemented for the purpose of convenient storage and transmission of music and songs in today's digital age. Analyzing digital audio for an insightful look at a specific musical characteristic, however, can be quite challenging for various types of applications. Many existing musical analysis techniques can examine a particular piece of audio data. For example, the frequency of digital sound can be easily read and identified at a specific section in an audio file. Based on this information, we could determine the musical note being played at that instant, but what if you want to see a list of all the notes played in a song? While most existing methods help to provide information about a single piece of the audio data at a time, few of them can analyze the available audio file on a larger scale. The research conducted in this work considers how to further utilize the examination of audio data by storing more information from the original audio file. In practice, we develop a novel musical analysis system Musicians Aid to process musical representation and examination of audio data. Musicians Aid solves the previous problem by storing and analyzing the audio information as it reads it rather than tossing it aside. The system can provide professional musicians with an insightful look at the music they created and advance their understanding of their work. Amateur musicians could also benefit from using it solely for the purpose of obtaining feedback about a song they were attempting to play. By comparing our system's interpretation of traditional sheet music with their own playing, a musician could ensure what they played was correct. More specifically, the system could show them exactly where they went wrong and how to adjust their mistakes. In addition, the application could be extended over the Internet to allow users to play music with one another and then review the audio data they produced. 
This would be particularly useful for teaching music lessons on the web. The developed system is evaluated with songs played with guitar, keyboard, violin, and other popular musical instruments (primarily electronic or stringed instruments). The Musicians Aid system is successful at both representing and analyzing audio data and it is also powerful in assisting individuals interested in learning and understanding music.
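The step the abstract describes, turning a detected frequency into a musical note for comparison with sheet music, is the standard equal-temperament conversion. This is a generic sketch, not code from the Musicians Aid system.

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_note(freq_hz, a4=440.0):
    """Nearest equal-tempered note name for a detected frequency.
    n is the (rounded) number of semitones from A4; 69 is A4's MIDI number."""
    n = round(12 * math.log2(freq_hz / a4))
    midi = 69 + n
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)
```

Running this over the per-frame peak frequencies of a recording yields the "list of all the notes played in a song" that frame-by-frame inspection alone does not provide.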

  19. Multimodal audio guide for museums and exhibitions

    NASA Astrophysics Data System (ADS)

    Gebbensleben, Sandra; Dittmann, Jana; Vielhauer, Claus

    2006-02-01

In our paper we introduce a new Audio Guide concept for exploring buildings, realms and exhibitions. Currently proposed solutions mostly work with pre-defined devices, which users have to buy or borrow. These systems often go along with complex technical installations and require a great degree of user training for device handling. Furthermore, the activation of audio commentary related to the exhibition objects is typically based on additional components like infrared, radio frequency or GPS technology. Beside the necessity of installing specific devices for user location, these approaches often support only automatic activation with no or limited user interaction. Therefore, elaboration of alternative concepts appears worthwhile. Motivated by these aspects, we introduce a new concept based on the visitor's own mobile smart phone. The advantages of our approach are twofold: firstly, the Audio Guide can be used in various places without any purchase or extensive installation of additional components in or around the exhibition object. Secondly, visitors can experience the exhibition on individual tours by simply uploading the Audio Guide at a single point of entry, the Audio Guide Service Counter, and keeping it on their personal device. Furthermore, the user is usually quite familiar with the interface of her or his phone and can thus interact with the application easily. Our technical concept makes use of two general ideas for location detection and activation. Firstly, we suggest an enhanced interactive number-based activation that exploits the visual capabilities of modern smart phones, and secondly we outline an active digital audio watermarking approach, where information about objects is transmitted via an analog audio channel.

  20. Digital Audio Radio Broadcast Systems Laboratory Testing Nearly Complete

    NASA Technical Reports Server (NTRS)

    2005-01-01

Radio history continues to be made at the NASA Lewis Research Center with the completion of phase one of the digital audio radio (DAR) testing conducted by the Consumer Electronics Group of the Electronic Industries Association. This satellite, satellite/terrestrial, and terrestrial digital technology will open up new audio broadcasting opportunities both domestically and worldwide. It will significantly improve the current quality of amplitude-modulated/frequency-modulated (AM/FM) radio with a new digitally modulated radio signal and will introduce true compact-disc-quality (CD-quality) sound for the first time. Lewis is hosting the laboratory testing of seven proposed digital audio radio systems and modes. Two of the proposed systems operate in two modes each, making a total of nine systems being tested. The nine systems are divided into the following types of transmission: in-band on-channel (IBOC), in-band adjacent-channel (IBAC), and new bands. Subjective assessments of the audio recordings for each of the nine systems were conducted by the Communications Research Center in Ottawa, Canada, under contract to the Electronic Industries Association. The Communications Research Center has the only CCIR-qualified (Consultative Committee for International Radio) audio testing facility in North America. The main goals of the U.S. testing process are to (1) provide technical data to the Federal Communications Commission (FCC) so that it can establish a standard for digital audio receivers and transmitters and (2) provide the receiver and transmitter industries with the proper standards upon which to build their equipment. In addition, the data will be forwarded to the International Telecommunications Union to help in the establishment of international standards for digital audio receivers and transmitters, thus allowing U.S. manufacturers to compete in the world market.

  1. Performance enhancement for audio-visual speaker identification using dynamic facial muscle model.

    PubMed

    Asadpour, Vahid; Towhidkhah, Farzad; Homayounpour, Mohammad Mehdi

    2006-10-01

The science of human identification using physiological characteristics, or biometry, has been of great concern in security systems. However, robust multimodal identification systems based on audio-visual information have not been thoroughly investigated yet. Therefore, the aim of this work is to propose a model-based feature extraction method which employs the physiological characteristics of facial muscles producing lip movements. This approach adopts the intrinsic properties of muscles such as viscosity, elasticity, and mass, which are extracted from the dynamic lip model. These parameters are exclusively dependent on the neuro-muscular properties of the speaker; consequently, imitation of valid speakers could be reduced to a large extent. These parameters are applied to a hidden Markov model (HMM) audio-visual identification system. In this work, a combination of audio and video features has been employed by adopting a multistream pseudo-synchronized HMM training method. Noise-robust audio features such as Mel-frequency cepstral coefficients (MFCC), spectral subtraction (SS), and relative spectra perceptual linear prediction (J-RASTA-PLP) have been used to evaluate the performance of the multimodal system once efficient audio feature extraction methods have been utilized. The superior performance of the proposed system is demonstrated on a large multispeaker database of continuously spoken digits, along with a sentence that is phonetically rich. To evaluate the robustness of the algorithms, some experiments were performed on genetically identical twins. Furthermore, changes in speaker voice were simulated with drug inhalation tests. At 3 dB signal-to-noise ratio (SNR), the dynamic muscle model improved the identification rate of the audio-visual system from 91 to 98%. Results on identical twins revealed an apparent improvement in performance for the dynamic muscle model-based system, whose audio-visual identification rate was enhanced from 87 to 96%.

  2. Satellite sound broadcasting system, portable reception

    NASA Technical Reports Server (NTRS)

    Golshan, Nasser; Vaisnys, Arvydas

    1990-01-01

Studies are underway at JPL in the emerging area of Satellite Sound Broadcast Service (SSBS) for direct reception by low-cost portable, semi-portable, mobile and fixed radio receivers. This paper addresses the portable reception of digital broadcasting of monophonic audio with source material band-limited to 5 kHz (source audio comparable to commercial AM broadcasting). The proposed system provides transmission robustness, uniformity of performance over the coverage area and excellent frequency reuse. Propagation problems associated with indoor portable reception are considered in detail and innovative antenna concepts are suggested to mitigate these problems. It is shown that, with the marriage of proper technologies, a single medium-power satellite can provide substantial direct satellite audio broadcast capability to CONUS in UHF or L bands, for high-quality portable indoor reception by low-cost radio receivers.

  3. Measurement and Modeling of Narrowband Channels for Ultrasonic Underwater Communications

    PubMed Central

    Cañete, Francisco J.; López-Fernández, Jesús; García-Corrales, Celia; Sánchez, Antonio; Robles, Encarnación; Rodrigo, Francisco J.; Paris, José F.

    2016-01-01

Underwater acoustic sensor networks are a promising technology that allows real-time data collection in seas and oceans for a wide variety of applications. Smaller and lighter sensors can be achieved with working frequencies shifted from the audio to the ultrasonic band. At these frequencies, the fading phenomenon has a significant presence in the channel behavior, and the design of a reliable communication link between the network sensors requires a precise characterization of it. Fading in underwater channels has been previously measured and modeled in the audio band, but there have been few attempts to study it at ultrasonic frequencies. In this paper, a campaign of measurements of ultrasonic underwater acoustic channels in Mediterranean shallow waters conducted by the authors is presented. These measurements are used to determine the parameters of the so-called κ-μ shadowed distribution, a fading model with a direct connection to the underlying physical mechanisms. The model is then used to evaluate the capacity of the measured channels with a closed-form expression. PMID:26907281
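The κ-μ shadowed model's "direct connection to the underlying physical mechanisms" makes it easy to simulate: μ scattered clusters plus a dominant component whose amplitude is shadowed by a Nakagami-m factor. The Monte Carlo sketch below follows that physical construction (here with integer μ) and estimates ergodic capacity numerically rather than via the paper's closed-form expression; all parameter values are illustrative, not fitted to the measurements.

```python
import numpy as np

def kappa_mu_shadowed_power(n, kappa=2.0, mu=1, m=3.0, omega=1.0, seed=1):
    """Draw n channel power gains from the kappa-mu shadowed model's physical
    construction: mu Gaussian scatter clusters plus a Nakagami-m shadowed
    dominant component. E[power] = omega. Integer mu assumed for simplicity."""
    rng = np.random.default_rng(seed)
    sigma2 = omega / (2 * mu * (1 + kappa))        # per-dimension scatter power
    d2 = omega * kappa / (1 + kappa)               # mean dominant power
    xi = np.sqrt(rng.gamma(m, 1.0 / m, size=n))    # Nakagami-m shadowing, E[xi^2] = 1
    power = np.zeros(n)
    for _ in range(mu):
        x = rng.normal(xi * np.sqrt(d2 / mu), np.sqrt(sigma2), n)
        y = rng.normal(0.0, np.sqrt(sigma2), n)
        power += x**2 + y**2
    return power

def ergodic_capacity(mean_snr_db, **model_params):
    """Monte Carlo estimate of E[log2(1 + SNR)] in bits/s/Hz."""
    gain = kappa_mu_shadowed_power(200_000, **model_params)
    snr = 10 ** (mean_snr_db / 10) * gain
    return np.log2(1 + snr).mean()
```

By Jensen's inequality the fading capacity always falls below the AWGN value log2(1 + mean SNR); how far below is controlled by κ, μ and the shadowing severity m.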

  4. Mapping the magnetic field vector in a fountain clock

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gertsvolf, Marina; Marmet, Louis

    2011-12-15

    We show how the mapping of the magnetic field vector components can be achieved in a fountain clock by measuring the Larmor transition frequency in atoms that are used as a spatial probe. We control two vector components of the magnetic field and apply audio frequency magnetic pulses to localize and measure the field vector through Zeeman spectroscopy.
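The underlying relation is the linear low-field Zeeman effect: the Larmor frequency is proportional to the field magnitude. The sketch below assumes a caesium fountain (F = 4 ground state, gyromagnetic ratio about 3.5 Hz/nT); the constant and function names are ours, and the vector mapping is reduced to its simplest algebraic step.

```python
GAMMA_CS = 3.499e9  # Hz per tesla: g_F * mu_B / h for the Cs F=4 ground state (~3.5 Hz/nT)

def field_magnitude_from_larmor(f_larmor_hz, gamma=GAMMA_CS):
    """|B| from the measured low-field Larmor (linear Zeeman) frequency, f = gamma * |B|."""
    return f_larmor_hz / gamma

def axial_component(b_total, bx, by):
    """With the two transverse components set by the compensation coils, the
    axial component follows from |B|^2 = bx^2 + by^2 + bz^2."""
    return (b_total**2 - bx**2 - by**2) ** 0.5
```

Repeating the Larmor measurement with atoms launched to different heights, as in the paper, turns this single-point relation into a spatial map of the field along the fountain axis.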

  5. Metal Sounds Stiffer than Drums for Ears, but Not Always for Hands: Low-Level Auditory Features Affect Multisensory Stiffness Perception More than High-Level Categorical Information

    PubMed Central

    Liu, Juan; Ando, Hiroshi

    2016-01-01

    Most real-world events stimulate multiple sensory modalities simultaneously. Usually, the stiffness of an object is perceived haptically. However, auditory signals also contain stiffness-related information, and people can form impressions of stiffness from the different impact sounds of metal, wood, or glass. To understand whether there is any interaction between auditory and haptic stiffness perception, and if so, whether the inferred material category is the most relevant auditory information, we conducted experiments using a force-feedback device and the modal synthesis method to present haptic stimuli and impact sound in accordance with participants’ actions, and to modulate low-level acoustic parameters, i.e., frequency and damping, without changing the inferred material categories of sound sources. We found that metal sounds consistently induced an impression of stiffer surfaces than did drum sounds in the audio-only condition, but participants haptically perceived surfaces with modulated metal sounds as significantly softer than the same surfaces with modulated drum sounds, which directly opposes the impression induced by these sounds alone. This result indicates that, although the inferred material category is strongly associated with audio-only stiffness perception, low-level acoustic parameters, especially damping, are more tightly integrated with haptic signals than the material category is. Frequency played an important role in both audio-only and audio-haptic conditions. Our study provides evidence that auditory information influences stiffness perception differently in unisensory and multisensory tasks. Furthermore, the data demonstrated that sounds with higher frequency and/or shorter decay time tended to be judged as stiffer, and contact sounds of stiff objects had no effect on the haptic perception of soft surfaces. 
We argue that the intrinsic physical relationship between object stiffness and acoustic parameters may be applied as prior knowledge to achieve robust estimation of stiffness in multisensory perception. PMID:27902718
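The modal synthesis method used to generate the impact sounds reduces to a sum of exponentially damped sinusoids, which makes the study's two manipulated parameters (frequency and damping) explicit. The mode tables below are invented examples of a "metal-like" set (higher frequencies, longer decay) and a "drum-like" set, not the stimuli used in the experiments.

```python
import math

def impact_sound(modes, duration=0.5, sr=44100):
    """Modal synthesis: an impact sound as a sum of exponentially damped
    sinusoids. Each mode is (frequency_hz, damping_per_s, amplitude); higher
    frequency and larger damping (shorter decay) were the cues the study
    links to perceived stiffness."""
    n = int(duration * sr)
    out = [0.0] * n
    for f, d, a in modes:
        for i in range(n):
            t = i / sr
            out[i] += a * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
    return out

# Invented mode tables for illustration only
metal = [(800.0, 8.0, 1.0), (2130.0, 12.0, 0.6), (3560.0, 20.0, 0.3)]
drum = [(120.0, 40.0, 1.0), (260.0, 60.0, 0.5)]
```

Because each mode is parameterized independently, frequency and damping can be modulated without changing the inferred material category, which is exactly the manipulation the experiments required.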

  6. Music and speech listening enhance the recovery of early sensory processing after stroke.

    PubMed

    Särkämö, Teppo; Pihko, Elina; Laitinen, Sari; Forsblom, Anita; Soinila, Seppo; Mikkonen, Mikko; Autti, Taina; Silvennoinen, Heli M; Erkkilä, Jaakko; Laine, Matti; Peretz, Isabelle; Hietanen, Marja; Tervaniemi, Mari

    2010-12-01

    Our surrounding auditory environment has a dramatic influence on the development of basic auditory and cognitive skills, but little is known about how it influences the recovery of these skills after neural damage. Here, we studied the long-term effects of daily music and speech listening on auditory sensory memory after middle cerebral artery (MCA) stroke. In the acute recovery phase, 60 patients who had middle cerebral artery stroke were randomly assigned to a music listening group, an audio book listening group, or a control group. Auditory sensory memory, as indexed by the magnetic MMN (MMNm) response to changes in sound frequency and duration, was measured 1 week (baseline), 3 months, and 6 months after the stroke with whole-head magnetoencephalography recordings. Fifty-four patients completed the study. Results showed that the amplitude of the frequency MMNm increased significantly more in both music and audio book groups than in the control group during the 6-month poststroke period. In contrast, the duration MMNm amplitude increased more in the audio book group than in the other groups. Moreover, changes in the frequency MMNm amplitude correlated significantly with the behavioral improvement of verbal memory and focused attention induced by music listening. These findings demonstrate that merely listening to music and speech after neural damage can induce long-term plastic changes in early sensory processing, which, in turn, may facilitate the recovery of higher cognitive functions. The neural mechanisms potentially underlying this effect are discussed.

  7. Patient training in respiratory-gated radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kini, Vijay R.; Vedam, Subrahmanya S.; Keall, Paul J.

    2003-03-31

    Respiratory gating is used to counter the effects of organ motion during radiotherapy for chest tumors. The effects of variations in patient breathing patterns during a single treatment and from day to day are unknown. We evaluated the feasibility of using patient training tools and their effect on the breathing cycle regularity and reproducibility during respiratory-gated radiotherapy. To monitor respiratory patterns, we used a component of a commercially available respiratory-gated radiotherapy system (Real Time Position Management (RPM) System, Varian Oncology Systems, Palo Alto, CA 94304). This passive marker video tracking system consists of reflective markers placed on the patient's chestmore » or abdomen, which are detected by a wall-mounted video camera. Software installed on a PC interfaced to this camera detects the marker motion digitally and records it. The marker position as a function of time serves as the motion signal that may be used to trigger imaging or treatment. The training tools used were audio prompting and visual feedback, with free breathing as a control. The audio prompting method used instructions to 'breathe in' or 'breathe out' at periodic intervals deduced from patients' own breathing patterns. In the visual feedback method, patients were shown a real-time trace of their abdominal wall motion due to breathing. Using this, they were asked to maintain a constant amplitude of motion. Motion traces of the abdominal wall were recorded for each patient for various maneuvers. Free breathing showed a variable amplitude and frequency. Audio prompting resulted in a reproducible frequency; however, the variability and the magnitude of amplitude increased. Visual feedback gave a better control over the amplitude but showed minor variations in frequency. We concluded that training improves the reproducibility of amplitude and frequency of patient breathing cycles. This may increase the accuracy of respiratory-gated radiation therapy.« less

  8. Viscoelastic properties of three vocal-fold injectable biomaterials at low audio frequencies.

    PubMed

    Klemuk, Sarah A; Titze, Ingo R

    2004-09-01

    Previous measurements of viscoelastic properties of Zyderm were to be extended to low audio frequencies, and properties of two other biomaterials not previously measured, thiolated hyaluronic acid (HA-DTPH) and Cymetra, were obtained. Rheologic investigation. Oscillatory shear stress was applied to each sample using a controlled stress rheometer at frequencies between 0.01 and 100 Hz with a parallel plate apparatus. Viscoelastic moduli were recorded at each frequency. The calculated resonance frequency of the machine and sample were then used to determine the maximum frequency at which reliable data existed. Extrapolation functions were fit to viscoelastic parameters, which predicted the properties up to 1,000 Hz. Frequency trends of Zyderm were similar to those previously reported, whereas magnitudes were different. The elastic moduli logarithmically increased with frequency, whereas dynamic viscosity demonstrated shear thinning, a condition of primary importance for humans to vocalize over a broad frequency range. Previous measurements were extended from 15 Hz up to 74 Hz. Differences in magnitude between a previous study and the present study were attributed to particulate orientation during testing. Cymetra was found to have nearly identical viscoelastic properties to those of bovine collagen, both in magnitude and frequency trend, with reliable measures extending up to 81 Hz. Rheologic properties of the hyaluronic acid gel were the closest match to cadaveric vocal fold mucosa in magnitude and frequency trend. Viscoelastic properties of Cymetra and Zyderm are nearly the same and are significantly greater than those of vocal fold mucosa. HA-DTPH possesses a good viscoelastic match to vocal fold mucosa and may be useful in future lamina propria repair.
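
    The storage and loss moduli measured in such oscillatory shear tests follow from the stress/strain amplitude ratio and the phase lag; a sketch with purely illustrative numbers (not values from the study):

```python
import numpy as np

def viscoelastic_moduli(stress_amp, strain_amp, delta, freq_hz):
    """Storage modulus G', loss modulus G'' and dynamic viscosity eta'
    from an oscillatory shear test with phase lag delta (radians)."""
    g_star = stress_amp / strain_amp       # magnitude of complex modulus
    g_prime = g_star * np.cos(delta)       # elastic (storage) part
    g_loss = g_star * np.sin(delta)        # viscous (loss) part
    eta = g_loss / (2 * np.pi * freq_hz)   # dynamic viscosity (Pa*s)
    return g_prime, g_loss, eta

# Hypothetical test point: 10 Pa stress, 10% strain, 45 deg lag at 1 Hz
gp, gpp, eta = viscoelastic_moduli(10.0, 0.1, np.pi / 4, 1.0)
```

Shear thinning, as reported for Zyderm, appears in such data as eta decreasing with increasing frequency.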

  9. Coexistence issues for a 2.4 GHz wireless audio streaming in presence of bluetooth paging and WLAN

    NASA Astrophysics Data System (ADS)

    Pfeiffer, F.; Rashwan, M.; Biebl, E.; Napholz, B.

    2015-11-01

    Nowadays, customers expect to integrate their mobile electronic devices (smartphones and laptops) in a vehicle to form a wireless network. Typically, IEEE 802.11 is used to provide a high-speed wireless local area network (WLAN) and Bluetooth is used for cable replacement applications in a wireless personal area network (PAN). In addition, Daimler uses KLEER as a third wireless technology in the unlicensed 2.4 GHz ISM band to transmit full CD-quality digital audio. Since Bluetooth, IEEE 802.11, and KLEER operate in the same frequency band, it has to be ensured that all three technologies can be used simultaneously without interference. In this paper, we focus on the impact of Bluetooth and IEEE 802.11 as interferers in the presence of a KLEER audio transmission.

  10. Perceptually controlled doping for audio source separation

    NASA Astrophysics Data System (ADS)

    Mahé, Gaël; Nadalin, Everton Z.; Suyama, Ricardo; Romano, João MT

    2014-12-01

    The separation of an underdetermined audio mixture can be performed through sparse component analysis (SCA), which relies however on the strong hypothesis that source signals are sparse in some domain. To overcome this difficulty in the case where the original sources are available before the mixing process, informed source separation (ISS) embeds a watermark in the mixture, whose information can aid a subsequent separation. Though powerful, this technique is generally specific to a particular mixing setup and may be compromised by an additional bitrate compression stage. Thus, instead of watermarking, we propose a `doping' method that makes the time-frequency representation of each source more sparse, while preserving its audio quality. This method is based on an iterative decrease of the distance between the distribution of the signal and a target sparse distribution, under a perceptual constraint. We aim to show that the proposed approach is robust to audio coding and that the use of the sparsified signals improves the source separation, in comparison with the original sources. In this work, the analysis is restricted to instantaneous mixtures and focused on voice sources.

  11. Second Language Vocabulary Learning through Extensive Reading with Audio Support: How Do Frequency and Distribution of Occurrence Affect Learning?

    ERIC Educational Resources Information Center

    Webb, Stuart; Chang, Anna C-S.

    2015-01-01

    This study investigated (1) the extent of vocabulary learning through reading and listening to 10 graded readers, and (2) the relationship between vocabulary gain and the frequency and distribution of occurrence of 100 target words in the graded readers. The experimental design expanded on earlier studies that have typically examined incidental…

  12. Numerical Simulation of Response Characteristics of Audio-magnetotelluric for Gas Hydrate in the Qilian Mountain Permafrost, China

    NASA Astrophysics Data System (ADS)

    Xiao, Kun; Zou, Changchun; Yu, Changqing; Pi, Jinyun

    2015-10-01

    The audio-magnetotelluric (AMT) method is a frequency-domain sounding technique that can be applied to gas hydrate prospecting and assessment in permafrost regions owing to its high frequency band. Based on the geological conditions of the gas hydrate reservoir in the Qilian Mountain permafrost, this paper established a high-resistivity anomaly model for gas hydrate, carried out numerical simulation using the finite element method (FEM) and the nonlinear conjugate gradient (NLCG) method, and analyzed the applicable range of the AMT method and the optimal acquisition parameter settings. When the porosity of the gas hydrate reservoir is less than 5%, gas hydrate saturation is greater than 70%, the occurrence scale is less than 50 m, or the burial depth is greater than 500 m, the AMT technique cannot identify and delineate the favorable gas hydrate reservoir. The survey line should be more than twice the probable occurrence scale in length; three times the length gives the best result. The number of stations should be no less than 6, and 11 stations are optimal. At the high-frequency section (10-1000 Hz), there should be no fewer than 3 frequency points, 4 being the best number.
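
    The depth limits quoted above reflect the frequency dependence of electromagnetic penetration; the standard half-space skin-depth and apparent-resistivity formulas (a textbook sketch, not the paper's FEM/NLCG code) illustrate why the AMT band loses sensitivity below a few hundred meters at high frequencies:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def skin_depth(rho, freq_hz):
    """EM skin depth (m) in a half-space of resistivity rho (ohm*m):
    higher frequencies probe shallower depths."""
    return np.sqrt(2 * rho / (2 * np.pi * freq_hz * MU0))

def apparent_resistivity(z, freq_hz):
    """Apparent resistivity (ohm*m) from a complex impedance Z (ohm)."""
    return abs(z) ** 2 / (2 * np.pi * freq_hz * MU0)

# Assumed 100 ohm*m background: penetration shrinks ~10x from 10 Hz to 1 kHz
d_low = skin_depth(100.0, 10.0)     # roughly 1.6 km
d_high = skin_depth(100.0, 1000.0)  # roughly 160 m
```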

  13. Frequency shifting approach towards textual transcription of heartbeat sounds.

    PubMed

    Arvin, Farshad; Doraisamy, Shyamala; Safar Khorasani, Ehsan

    2011-10-04

    Auscultation is an approach for diagnosing many cardiovascular problems. Automatic analysis of heartbeat sounds and extraction of audio features can assist physicians in diagnosing diseases. Textual transcription allows a continuous heart sound stream to be recorded in a text format, which requires far less memory than other audio formats. In addition, text-based data allow indexing and searching techniques to be applied for accessing critical events. Hence, transcribed heartbeat sounds provide useful information for monitoring a patient's condition over long durations. This paper proposes a frequency shifting method to improve the performance of the transcription. The main objective of this study is to transfer the heartbeat sounds to the music domain. The proposed technique is tested with 100 samples recorded from different heart disease categories. The observed results show that the proposed shifting method significantly improves the performance of the transcription.
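
    One way to realize a frequency shift into the musical range (the paper's exact mapping is not reproduced here; this is a generic single-sideband sketch) is to modulate the analytic signal of the recording:

```python
import numpy as np

def analytic_signal(x):
    """Discrete analytic signal via the FFT (a Hilbert-transform
    construction), so negative frequencies are removed."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spec * h)

def frequency_shift(x, shift_hz, fs):
    """Shift every spectral component of x up by shift_hz."""
    t = np.arange(len(x)) / fs
    return np.real(analytic_signal(x) * np.exp(2j * np.pi * shift_hz * t))

fs = 8000
t = np.arange(0, 1, 1 / fs)
heart = np.sin(2 * np.pi * 40 * t)         # 40 Hz tone, heart-sound range
shifted = frequency_shift(heart, 400, fs)  # energy moves to 440 Hz
peak_hz = int(np.argmax(np.abs(np.fft.rfft(shifted))))  # 1 Hz bins for 1 s
```

Unlike resampling, this additive shift does not preserve harmonic ratios, which is one reason a published scheme would need a carefully chosen mapping.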

  14. Quantitative characterisation of audio data by ordinal symbolic dynamics

    NASA Astrophysics Data System (ADS)

    Aschenbrenner, T.; Monetti, R.; Amigó, J. M.; Bunk, W.

    2013-06-01

    Ordinal symbolic dynamics has developed into a valuable method for describing complex systems. Recently, using the concept of transcripts, the coupling behaviour of systems was assessed, combining the properties of the symmetric group with information-theoretic ideas. In this contribution, methods from the field of ordinal symbolic dynamics are applied to the characterisation of audio data. Coupling complexity between frequency bands of solo violin music, as a fingerprint of the instrument, is used for classification purposes within a support vector machine scheme. Our results suggest that coupling complexity is able to capture essential characteristics, sufficient to distinguish among different violins.
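
    Ordinal symbolic dynamics starts from the distribution of ordinal patterns (the Bandt-Pompe construction); a minimal permutation-entropy sketch illustrates the symbolization step, though not the transcript-based coupling measure itself:

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3):
    """Normalized permutation entropy: 0 for a monotone series,
    close to 1 for white noise."""
    x = np.asarray(x, dtype=float)
    counts = {}
    for i in range(len(x) - order + 1):
        pattern = tuple(np.argsort(x[i:i + order]))  # ordinal symbol
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum() / np.log(factorial(order)))

rng = np.random.default_rng(0)
h_noise = permutation_entropy(rng.standard_normal(5000))  # near 1
h_ramp = permutation_entropy(np.arange(5000.0))           # exactly 0
```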

  15. Time-frequency analysis of acoustic signals in the audio-frequency range generated during Hadfield's steel friction

    NASA Astrophysics Data System (ADS)

    Dobrynin, S. A.; Kolubaev, E. A.; Smolin, A. Yu.; Dmitriev, A. I.; Psakhie, S. G.

    2010-07-01

    Time-frequency analysis of sound waves detected by a microphone during the friction of Hadfield’s steel has been performed using wavelet transform and window Fourier transform methods. This approach reveals a relationship between the appearance of quasi-periodic intensity outbursts in the acoustic response signals and the processes responsible for the formation of wear products. It is shown that the time-frequency analysis of acoustic emission in a tribosystem can be applied, along with traditional approaches, to studying features in the wear and friction process.

  16. Egalisation adaptative et non invasive de la reponse temps-frequence d'une petite salle

    NASA Astrophysics Data System (ADS)

    Martin, Tristan

    In this research, we are interested in sound, the environment in which it propagates, the interaction between a sound wave and a transmission channel, and the changes induced by the components of an audio chain. The specific context studied is that of listening to music on loudspeakers. For the environment in which a sound wave propagates, as for any transmission channel, there are mathematical functions that characterize the changes a channel induces on the signal passing through it. An electrical signal serves as input to a system, in this case consisting of an amplifier, a loudspeaker, and the room where the listening takes place, which, according to its characteristics, returns as output an altered sound wave at the listening position. Frequency response, impulse response, transfer function: the mathematics used is no different from that commonly used to characterize a transmission channel or to express the outputs of a linear system in terms of its inputs. Naturally, there is a purpose to this modeling exercise: obtaining the frequency response of the amplifier/loudspeaker/room chain makes its equalization possible. In many listening contexts it is common to find a filter inserted into the audio chain between the source (e.g., a CD player) and the amplifier/loudspeaker that converts the electrical signal into an acoustic signal propagated in the room. This filter, called an "equalizer", is intended to compensate for the frequency effect of the components of the audio chain and of the room on the transmitted sound signal. The properties used to design this filter are derived from those of the audio chain. Although analytically rigorous, the physical approach, based on physical modeling of the loudspeaker and on the acoustic wave propagation equation, is ill-suited to rooms with complex geometry that change over time. The second approach, experimental modeling, which is the one addressed in this work, ignores physical properties. 
The audio chain is instead seen as a "black box" with inputs and outputs. The problem studied is the characterization of an electro-acoustic system having a single input signal, transmitted through a loudspeaker in a room, and a single output signal, picked up by a microphone at the listening position. The originality of this work lies not only in the technique developed to arrive at this characterization, but especially in the constraints imposed in order to get there. The majority of techniques documented to date involve excitation signals dedicated to the measurement: signals whose characteristics simplify the calculation of the impulse response of the audio chain. Known signals are played through a loudspeaker and the room's response to the excitation is captured with a microphone at the listening position. The measurement exercise itself poses a problem, especially when there is an audience in the room. The response of the room may also change between the time of the measurement and the time of listening, for example if the room is reconfigured, a curtain is drawn, or the stage is moved. In the case of a theater, the loudspeaker used may vary depending on the context. A survey of work in which solutions to this problem are suggested was made. The main objective is to develop an innovative method to capture the impulse response of an audio chain without the audience being aware of it. To do this, no signal dedicated to the measurement may be used. The developed method captures the electro-acoustic impulse response using only the music signal in a concert hall, or the film soundtrack in a movie theater. The result is an algorithm that models the response of a room dynamically and continuously. A finite impulse response filter acting as a digital equalizer must be designed, and must also adapt dynamically to the behavior of the room, even when it varies over time. 
A multi-resolution spectral method is used to build, for different frequency bands, the filter response arising from the inversion of the room/loudspeaker frequency response. The resulting dynamically adapting filter has properties similar to those of the human ear: high spectral resolution at low frequencies and high temporal resolution at high frequencies. The response corrected by the filter tends toward a pure impulse. The techniques explored in this research led to the publication of a scientific article in a peer-reviewed journal and a conference paper in which similar methods were used for mining engineering applications. (Abstract shortened by UMI.).
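
    Estimating an impulse response from program material alone, as pursued above, is classically done with adaptive FIR identification; this NLMS sketch (assumed parameters and a toy room response, not the thesis algorithm) recovers a short channel from a broadband excitation:

```python
import numpy as np

def nlms_identify(x, d, taps, mu=0.5):
    """Identify an FIR channel with normalized LMS: x is the signal
    sent to the loudspeaker, d the microphone pickup."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]   # newest sample first
        e = d[n] - w @ u                  # prediction error
        w += mu * e * u / (u @ u + 1e-8)  # normalized update
    return w

rng = np.random.default_rng(1)
room = np.array([1.0, 0.5, -0.25, 0.125])  # toy impulse response
music = rng.standard_normal(20000)         # stand-in for program audio
mic = np.convolve(music, room)[:len(music)]
w_hat = nlms_identify(music, mic, taps=8)  # first 4 taps approach `room`
```

In the thesis setting the excitation is the music or soundtrack itself, so convergence depends on its spectral richness; white noise is used here only to keep the sketch verifiable.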

  17. Recognition and characterization of unstructured environmental sounds

    NASA Astrophysics Data System (ADS)

    Chu, Selina

    2011-12-01

    Environmental sounds are what we hear every day: more generally, the ambient or background audio that surrounds us. Humans utilize both vision and hearing to respond to their surroundings, a capability still quite limited in machine processing. The first step toward achieving multimodal input applications is the ability to process unstructured audio and recognize audio scenes (or environments). Such an ability would have applications in content analysis and mining of multimedia data, and in improving robustness in context-aware applications through multi-modality, such as in assistive robotics, surveillance, or mobile device-based services. The goal of this thesis is the characterization of unstructured environmental sounds for understanding and predicting the context surrounding an agent or device. Most research on audio recognition has focused primarily on speech and music; less attention has been paid to the challenges and opportunities of characterizing unstructured environmental audio. My research focuses on investigating these challenges and on developing novel algorithms for modeling the variations of the environment. The first step in building a recognition system for unstructured auditory environments was to investigate techniques and audio features for working with such audio data. We begin with a study that explores suitable features and the feasibility of designing an automatic environment recognition system using audio information. 
In this initial investigation, I found that traditional recognition and feature-extraction techniques for audio were not suitable for environmental sounds, which lack the formant and harmonic structures of speech and music; this dispels the notion that traditional speech and music recognition techniques can simply be reused for realistic environmental sound. Natural unstructured environmental sounds are highly varied and in fact noise-like, and are not effectively modeled by Mel-frequency cepstral coefficients (MFCCs) or other commonly used audio features such as energy or zero-crossing rate. Given the lack of features suitable for environmental audio, and to achieve a more effective representation, I proposed a specialized feature-extraction algorithm for environmental sounds that utilizes the matching pursuit (MP) algorithm to learn the inherent structure of each type of sound, yielding what we call MP-features. MP-features have been shown to capture and represent sounds from different sources and different ranges where frequency-domain features (e.g., MFCCs) fail, and can be advantageous when combined with MFCCs to improve overall performance. The third component is the modeling and detection of background audio. One of the goals of this research is to characterize an environment. Since many events blend into the background, I looked for a way to build a general model for any particular environment. Once we have a model of the background, we can identify foreground events even if we haven't seen those events before. The next step, therefore, was to learn an audio background model for each environment type, despite the occurrence of different foreground events. 
In this work, I presented a framework for robust audio background modeling, which includes learning models for prediction, data knowledge, and persistent characteristics of the environment. This approach can model the background and detect foreground events, and can also verify whether the predicted background is indeed the background or a foreground event that persists for a longer period of time. I also investigated the use of a semi-supervised learning technique to exploit and label new unlabeled audio data. The final components of my thesis involve learning sound structures for generalization and applying the proposed ideas to context-aware applications. Environmental sound is inherently noisy, with a relatively large amount of overlap between events in different environments. Environmental sounds show large variance even within a single environment type, and frequently there are no clear boundaries between some types. Traditional classification methods are generally not robust enough to handle classes with such overlaps, so this audio requires representation by complex models. A deep learning architecture provides a generative, model-based method for classification. Specifically, I considered the use of Deep Belief Networks (DBNs) to model environmental audio and investigated their applicability to noisy data to improve robustness and generalization. A framework was proposed using composite DBNs to discover high-level representations and to learn a hierarchical structure for different acoustic environments in a data-driven fashion. Experimental results on real data sets demonstrate its effectiveness over traditional methods, with over 90% recognition accuracy for a large number of environmental sound types.
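
    The MP-features above rest on the matching pursuit decomposition; a minimal greedy MP over a small cosine-plus-spike dictionary (an assumed dictionary for illustration; the thesis learned atoms suited to environmental sounds) shows the core iteration:

```python
import numpy as np

def matching_pursuit(x, D, n_atoms):
    """Greedy matching pursuit: repeatedly project the residual onto
    the unit-norm dictionary columns and subtract the best atom."""
    residual = np.array(x, dtype=float)
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))   # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]
    return coeffs, residual

# Overcomplete dictionary: 64 DCT-style atoms plus 64 unit spikes
N = 64
n = np.arange(N)
dct = np.cos(np.pi * np.outer(n + 0.5, np.arange(N)) / N)
dct /= np.linalg.norm(dct, axis=0)
D = np.hstack([dct, np.eye(N)])           # 64 x 128, unit-norm columns
x = 3.0 * D[:, 10] - 2.0 * D[:, 40]       # sparse synthetic signal
coeffs, res = matching_pursuit(x, D, 5)   # recovers the two atoms
```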

  18. Enhancement of Signal-to-noise Ratio in Natural-source Transient Magnetotelluric Data with Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Paulson, K. V.

    For audio-frequency magnetotelluric surveys where the signals are lightning-stroke transients, the conventional Fourier transform method often fails to produce a high quality impedance tensor. An alternative approach is to use the wavelet transform method which is capable of localizing target information simultaneously in both the temporal and frequency domains. Unlike Fourier analysis that yields an average amplitude and phase, the wavelet transform produces an instantaneous estimate of the amplitude and phase of a signal. In this paper a complex well-localized wavelet, the Morlet wavelet, has been used to transform and analyze audio-frequency magnetotelluric data. With the Morlet wavelet, the magnetotelluric impedance tensor can be computed directly in the wavelet transform domain. The lightning-stroke transients are easily identified on the dilation-translation plane. Choosing those wavelet transform values where the signals are located, a higher signal-to-noise ratio estimation of the impedance tensor can be obtained. In a test using real data, the wavelet transform showed a significant improvement in the signal-to-noise ratio over the conventional Fourier transform.
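
    The instantaneous amplitude and phase estimates described above come from correlating the record with scaled complex Morlet wavelets; a bare-bones sketch (all parameters are illustrative, not survey settings) that localizes a transient in time:

```python
import numpy as np

def morlet_cwt(x, fs, freqs, w0=6.0):
    """Complex Morlet CWT: one row of coefficients per analysis
    frequency; |coef| is instantaneous amplitude, angle(coef) phase."""
    n = len(x)
    t = (np.arange(n) - n // 2) / fs          # wavelet time axis
    out = np.empty((len(freqs), n), dtype=complex)
    for i, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)              # scale giving frequency f
        psi = np.exp(1j * w0 * t / s - t ** 2 / (2 * s ** 2))
        psi /= np.sqrt(s) * np.pi ** 0.25
        out[i] = np.convolve(x, np.conj(psi)[::-1], mode='same') / fs
    return out

# A 20 ms, 100 Hz burst standing in for a lightning-stroke transient
fs = 1000
t = np.arange(0, 1, 1 / fs)
sig = np.zeros_like(t)
sig[500:520] = np.sin(2 * np.pi * 100 * t[:20])
peak = int(np.argmax(np.abs(morlet_cwt(sig, fs, [100.0])[0])))
```

The magnitude peak lands at the transient's position on the translation axis, which is exactly how the lightning strokes are identified on the dilation-translation plane.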

  19. The Quality and Frequency of Mother-Toddler Conflict: Links with Attachment and Temperament

    ERIC Educational Resources Information Center

    Laible, Deborah; Panfile, Tia; Makariev, Drika

    2008-01-01

    The goal of this study was to examine the links among attachment, child temperament, and the quality and frequency of mother-toddler conflict. Sixty-four mothers and children took part in a series of laboratory tasks when the child was 30 months of age and an audio-recorded home observation when the child was 36 months of age. All episodes of…

  20. Hoeren unter Wasser: Absolute Reizschwellen und Richtungswahrnehnumg (Underwater Hearing: Absolute Thresholds and Sound Localization),

    DTIC Science & Technology

    The article deals first with the theoretical foundations of underwater hearing, and the effects of the acoustical characteristics of water on hearing...lead to the conclusion that, in water, man can locate the direction of sound at low and at very high tonal frequencies of the audio range, but this ability probably vanishes in the middle range of frequencies. (Author)

  1. Rover's Eye View of 3-Year Trek on Mars

    NASA Image and Video Library

    2010-06-11

    309 images of the Martian horizon taken during a 13-mile journey from Victoria crater to Endeavour crater. Numbers at top left are Martian day numbers (sols). Audio comes from rover accelerometer data adjusted to an audible frequency.

  2. Inductive Interference in Rapid Transit Signaling Systems. Volume 1. Theory and Background.

    DOT National Transportation Integrated Search

    1986-05-01

    This report describes the mechanism of inductive interference to audio frequency (AF) signaling systems used in rail transit operations, caused by rail transit vehicles with chopper propulsion control. Choppers are switching circuits composed of high...

  3. Blind source separation and localization using microphone arrays

    NASA Astrophysics Data System (ADS)

    Sun, Longji

    The blind source separation and localization problem for audio signals is studied using microphone arrays. Pure-delay mixtures of source signals, typically encountered in outdoor environments, are considered. Our proposed approach utilizes subspace methods, including the multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms, to estimate the directions of arrival (DOAs) of the sources from the collected mixtures. Since audio signals are generally broadband, the DOA estimates at the frequencies with the largest sums of squared amplitudes are combined to obtain the final DOA estimates. Using the estimated DOAs, the corresponding mixing and demixing matrices are computed, and the source signals are recovered using the inverse short-time Fourier transform. Subspace methods take advantage of the spatial covariance matrix of the collected mixtures to achieve robustness to noise. While subspace methods have been studied for localizing radio-frequency signals, audio signals have special properties: they are nonstationary, naturally broadband, and analog, all of which make their separation and localization more challenging. Moreover, our algorithm is essentially equivalent to the beamforming technique, which suppresses signals from unwanted directions and recovers only the signals in the estimated DOAs. Several crucial issues related to our algorithm and their solutions are discussed, including source number estimation, spatial aliasing, artifact filtering, different ways of mixture generation, and source coordinate estimation using multiple arrays. Additionally, comprehensive simulations and experiments have been conducted to examine various aspects of the algorithm. 
Unlike the existing blind source separation and localization methods, which are generally time consuming, our algorithm needs signal mixtures of only a short duration and therefore supports real-time implementation.
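
    A narrowband version of the MUSIC step used above can be sketched for a uniform linear array; the array geometry, source angles, and SNR below are assumptions for illustration, not the dissertation's setup:

```python
import numpy as np

def music_doa(X, n_sources, spacing=0.5):
    """MUSIC pseudospectrum over -90..90 degrees for a uniform linear
    array; X is (sensors, snapshots) at one narrowband frequency bin,
    spacing in wavelengths."""
    n_sensors, n_snap = X.shape
    R = X @ X.conj().T / n_snap              # spatial covariance
    _, vecs = np.linalg.eigh(R)              # eigenvalues ascending
    En = vecs[:, :n_sensors - n_sources]     # noise subspace
    grid = np.linspace(-90, 90, 361)
    m = np.arange(n_sensors)
    spec = np.empty(len(grid))
    for i, th in enumerate(grid):
        a = np.exp(-2j * np.pi * spacing * m * np.sin(np.radians(th)))
        spec[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return grid, spec

# Two sources at -20 and 30 degrees, 8 sensors, half-wavelength spacing
rng = np.random.default_rng(2)
m = np.arange(8)
A = np.exp(-2j * np.pi * 0.5 * np.outer(m, np.sin(np.radians([-20, 30]))))
S = rng.standard_normal((2, 200)) + 1j * rng.standard_normal((2, 200))
noise = 0.01 * (rng.standard_normal((8, 200)) + 1j * rng.standard_normal((8, 200)))
grid, spec = music_doa(A @ S + noise, 2)
peaks = sorted((i for i in range(1, 360)
                if spec[i] > spec[i - 1] and spec[i] >= spec[i + 1]),
               key=lambda i: -spec[i])[:2]
doas = sorted(float(grid[i]) for i in peaks)
```

For broadband audio, this bin-wise estimate would be repeated per frequency and the estimates at the strongest bins combined, as the abstract describes.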

  4. Formal Verification of a Power Controller Using the Real-Time Model Checker UPPAAL

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Larsen, Kim Guldstrand; Skou, Arne

    1999-01-01

    A real-time system for power-down control in audio/video components is modeled and verified using the real-time model checker UPPAAL. The system is supposed to reside in an audio/video component and control (read from and write to) links to neighbor audio/video components such as TV, VCR and remote-control. In particular, the system is responsible for the powering up and down of the component in between the arrival of data, and in order to do so in a safe way without loss of data, it is essential that no link interrupts are lost. Hence, a component system is a multitasking system with hard real-time requirements, and we present techniques for modeling time consumption in such a multitasked, prioritized system. The work has been carried out in a collaboration between Aalborg University and the audio/video company B&O. By modeling the system, 3 design errors were identified and corrected, and the following verification confirmed the validity of the design but also revealed the necessity for an upper limit of the interrupt frequency. The resulting design has been implemented and it is going to be incorporated as part of a new product line.

  5. Securing Digital Audio using Complex Quadratic Map

    NASA Astrophysics Data System (ADS)

    Suryadi, MT; Satria Gunawan, Tjandra; Satria, Yudi

    2018-03-01

    In this digital era, exchanging data is common and easy, and data are therefore vulnerable to attack and manipulation by unauthorized parties. One data type that is vulnerable to attack is digital audio, so we need a data-securing method that is both robust and fast. One method that matches all of these criteria is securing the data using a chaos function. The chaos function used in this research is the complex quadratic map (CQM). For some parameter values, the key stream generated by the CQM function passes all 15 NIST tests, which means the key stream generated by this CQM is demonstrably random. In addition, samples of the encrypted digital sound, when examined with a goodness-of-fit test, are shown to be uniform, so audio secured with this method is not vulnerable to frequency-analysis attacks. The key space is very large, about 8.1×10^31 possible keys, and the key sensitivity is very small, about 10^-10, so this method is also not vulnerable to brute-force attack. Finally, the processing speed for both encryption and decryption is on average about 450 times faster than the duration of the digital audio.
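
    The keystream idea can be sketched as follows; the byte-extraction rule and parameters are assumptions (the paper's exact scheme is not reproduced), with a parameter for which the real orbit of the quadratic map stays bounded and chaotic:

```python
def cqm_keystream(n_bytes, z0=0.1 + 0j, c=-1.8 + 0j):
    """Toy keystream from the complex quadratic map z <- z^2 + c.
    c = -1.8 keeps the real orbit chaotic inside [-1.8, 1.44];
    the byte-extraction rule below is an arbitrary illustration."""
    z, out = z0, bytearray()
    for _ in range(n_bytes):
        z = z * z + c
        out.append(int(abs(z.real) * 1e6) % 256)
    return bytes(out)

def xor_bytes(data, key):
    return bytes(a ^ b for a, b in zip(data, key))

audio = bytes(range(16))       # stand-in for PCM audio bytes
ks = cqm_keystream(len(audio))
cipher = xor_bytes(audio, ks)  # encrypt
plain = xor_bytes(cipher, ks)  # XOR with the same stream decrypts
```

The key sensitivity claimed in the paper corresponds to the fact that a tiny change in z0 or c yields a completely different orbit, hence a different keystream.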

  6. Long-Term Animal Observation by Wireless Sensor Networks with Sound Recognition

    NASA Astrophysics Data System (ADS)

    Liu, Ning-Han; Wu, Chen-An; Hsieh, Shu-Ju

    Because wireless sensor networks can transmit data wirelessly and can be deployed easily, they are used in the wild to monitor environmental change. However, a sensor's lifetime is limited by its battery; especially when the monitored data type is audio, the lifetime is very short because of the huge amount of data to transmit. Intuitively, if the sensor mote analyzes the sensed data and decides not to deliver them to the server, the energy expense can be reduced. Nevertheless, a sensor mote is not powerful enough to run complicated methods, so designing a method that maintains analysis speed and accuracy under restricted memory and processing power is a pressing issue. This research proposes an embedded audio processing module in the sensor mote to extract and analyze audio features in advance. Then, by estimating the likelihood of an observed animal sound from its frequency distribution, only the interesting audio data are sent back to the server. A prototype WSN system was built and tested in the wild to observe frogs. According to the experimental results, the energy consumed by the sensors can be reduced effectively with our method, prolonging the observation time of animal-detecting sensors.

  7. 76 FR 17613 - Aviation Service Regulations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-30

    ...) regarding audio visual warning systems (AVWS). OCAS, Inc. installs such technology under the trademark OCAS... frequencies to activate obstruction lighting and transmit audible warnings to aircraft on a potential... transmit audible warnings to pilots. We seek comment on operational, licensing, eligibility and equipment...

  8. Seismic intrusion detector system

    DOEpatents

    Hawk, Hervey L.; Hawley, James G.; Portlock, John M.; Scheibner, James E.

    1976-01-01

    A system for monitoring man-associated seismic movements within a control area including a geophone for generating an electrical signal in response to seismic movement, a bandpass amplifier and threshold detector for eliminating unwanted signals, pulse counting system for counting and storing the number of seismic movements within the area, and a monitoring system operable on command having a variable frequency oscillator generating an audio frequency signal proportional to the number of said seismic movements.

  9. Multifunction waveform generator for EM receiver testing

    NASA Astrophysics Data System (ADS)

    Chen, Kai; Jin, Sheng; Deng, Ming

    2018-01-01

    In many electromagnetic (EM) methods - such as the magnetotelluric, spectral-induced polarization (SIP), time-domain-induced polarization (TDIP), and controlled-source audio magnetotelluric (CSAMT) methods - it is important to evaluate and test the EM receivers during their development stage. To assess the performance of the developed EM receivers, controlled synthetic data that simulate the observed signals in different modes are required. In CSAMT and SIP mode testing, the waveform generator should use GPS time as the reference for its repetition schedule. Based on our testing, the frequency range, frequency precision, and time synchronization of the function waveform generators currently available on the market are deficient. This paper presents a multifunction waveform generator with three waveforms: (1) a wideband, low-noise electromagnetic field signal for magnetotelluric, audio-magnetotelluric, and long-period magnetotelluric studies; (2) a repeating frequency-sweep square waveform for CSAMT and SIP studies; and (3) a positive-zero-negative-zero signal containing primary and secondary fields for TDIP studies. We provide the principles of these three waveforms along with a hardware design for the generator. Furthermore, testing of an EM receiver was conducted with the waveform generator, and the experimental results were compared with those calculated from simulation and theory in the frequency band of interest.

  10. Comparison of level discrimination, increment detection, and comodulation masking release in the audio- and envelope-frequency domains

    PubMed Central

    Nelson, Paul C.; Ewert, Stephan D.; Carney, Laurel H.; Dau, Torsten

    2008-01-01

    In general, the temporal structure of stimuli must be considered to account for certain observations made in detection and masking experiments in the audio-frequency domain. Two such phenomena are (1) a heightened sensitivity to amplitude increments with a temporal fringe compared to gated level discrimination performance and (2) lower tone-in-noise detection thresholds using a modulated masker compared to those using an unmodulated masker. In the current study, translations of these two experiments were carried out to test the hypothesis that analogous cues might be used in the envelope-frequency domain. Pure-tone carrier amplitude-modulation (AM) depth-discrimination thresholds were found to be similar using both traditional gated stimuli and using a temporally modulated fringe for a fixed standard depth (ms=0.25) and a range of AM frequencies (4-64 Hz). In a second experiment, masked sinusoidal AM detection thresholds were compared in conditions with and without slow and regular fluctuations imposed on the instantaneous masker AM depth. Release from masking was obtained only for very slow masker fluctuations (less than 2 Hz). A physiologically motivated model that effectively acts as a first-order envelope change detector accounted for several, but not all, of the key aspects of the data. PMID:17471731

  11. Doppler radar flowmeter

    DOEpatents

    Petlevich, Walter J.; Sverdrup, Edward F.

    1978-01-01

    A Doppler radar flowmeter comprises a transceiver which produces an audio frequency output related to the Doppler shift in frequency between radio waves backscattered from particulate matter carried in a fluid and the radiated radio waves. A variable gain amplifier and low pass filter are provided for amplifying and filtering the transceiver output. A frequency counter having a variable triggering level is also provided to determine the magnitude of the Doppler shift. A calibration method is disclosed wherein the amplifier gain and frequency counter trigger level are adjusted to achieve plateaus in the output of the frequency counter and thereby allow calibration without the necessity of being able to visually observe the flow.
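The audio-frequency output of the transceiver above encodes the Doppler shift, from which flow velocity follows as v = f_d * c / (2 * f0 * cos(theta)). A sketch; the 10.525 GHz carrier and 700 Hz shift are illustrative assumptions, not values from the patent:

```python
import math

C = 3.0e8  # propagation speed of the radio waves, m/s

def flow_velocity(f_doppler_hz, f_carrier_hz, angle_deg=0.0):
    """Velocity of the backscattering particles from the audio-band Doppler shift:
    v = f_d * c / (2 * f0 * cos(theta))."""
    return f_doppler_hz * C / (2.0 * f_carrier_hz * math.cos(math.radians(angle_deg)))

v = flow_velocity(700.0, 10.525e9)  # a 700 Hz audio tone at a 10.525 GHz carrier, ~10 m/s
```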

  12. Establishing a gold standard for manual cough counting: video versus digital audio recordings

    PubMed Central

    Smith, Jaclyn A; Earis, John E; Woodcock, Ashley A

    2006-01-01

    Background Manual cough counting is time-consuming and laborious; however, it is the standard to which automated cough monitoring devices must be compared. We have compared manual cough counting from video recordings with manual cough counting from digital audio recordings. Methods We studied 8 patients with chronic cough, overnight in laboratory conditions (diagnoses were 5 asthma, 1 rhinitis, 1 gastro-oesophageal reflux disease and 1 idiopathic cough). Coughs were recorded simultaneously using a video camera with infrared lighting and digital sound recording. The numbers of coughs in each 8 hour recording were counted manually, by a trained observer, in real time from the video recordings and using audio-editing software from the digital sound recordings. Results The median cough frequency was 17.8 (IQR 5.9–28.7) cough sounds per hour in the video recordings and 17.7 (6.0–29.4) coughs per hour in the digital sound recordings. There was excellent agreement between the video and digital audio cough rates; mean difference of -0.3 coughs per hour (SD ± 0.6), 95% limits of agreement -1.5 to +0.9 coughs per hour. Video recordings had poorer sound quality even in controlled conditions and can only be analysed in real time (8 hours per recording). Digital sound recordings required 2–4 hours of analysis per recording. Conclusion Manual counting of cough sounds from digital audio recordings has excellent agreement with simultaneous video recordings in laboratory conditions. We suggest that ambulatory digital audio recording is therefore ideal for validating future cough monitoring devices, as this can be performed in the patient's own environment. PMID:16887019
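The agreement statistics above are Bland-Altman limits of agreement (mean difference ± 1.96 SD); the reported mean difference of -0.3 with SD 0.6 gives roughly -1.5 to +0.9, as stated. A sketch with hypothetical per-hour differences, not the study's data:

```python
import statistics

def limits_of_agreement(diffs):
    """Bland-Altman 95% limits of agreement: mean difference +/- 1.96 * SD."""
    m = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return m - 1.96 * sd, m + 1.96 * sd

# Hypothetical per-hour differences (video minus audio counts).
diffs = [-0.3, 0.4, -1.1, 0.2, -0.5, 0.1, -0.9, 0.6]
lo, hi = limits_of_agreement(diffs)
```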

  13. Efficient audio signal processing for embedded systems

    NASA Astrophysics Data System (ADS)

    Chiu, Leung Kin

    As mobile platforms continue to pack on more computational power, electronics manufacturers start to differentiate their products by enhancing the audio features. However, consumers also demand smaller devices that can operate for a longer time, hence imposing design constraints. In this research, we investigate two design strategies that would allow us to efficiently process audio signals on embedded systems such as mobile phones and portable electronics. In the first strategy, we exploit properties of the human auditory system to process audio signals. We designed a sound enhancement algorithm to make piezoelectric loudspeakers sound "richer" and "fuller." Piezoelectric speakers have a small form factor but exhibit poor response in the low-frequency region. In the algorithm, we combine psychoacoustic bass extension and dynamic range compression to improve the perceived bass coming out from the tiny speakers. We also developed an audio energy reduction algorithm for loudspeaker power management. The perceptually transparent algorithm extends the battery life of mobile devices and prevents thermal damage in speakers. This method is similar to audio compression algorithms, which encode audio signals in such a way that the compression artifacts are not easily perceivable. Instead of reducing the storage space, however, we suppress the audio content that is below the hearing threshold, thereby reducing the signal energy. In the second strategy, we use low-power analog circuits to process the signal before digitizing it. We designed an analog front-end for sound detection and implemented it on a field-programmable analog array (FPAA). The system is an example of an analog-to-information converter. The sound classifier front-end can be used in a wide range of applications because programmable floating-gate transistors are employed to store classifier weights. Moreover, we incorporated a feature selection algorithm to simplify the analog front-end. A machine learning algorithm, AdaBoost, is used to select the most relevant features for a particular sound detection application. In this classifier architecture, we combine simple "base" analog classifiers to form a strong one. We also designed the circuits to implement the AdaBoost-based analog classifier.
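The dynamic range compression stage mentioned above maps input level to gain along a static curve. The sketch below is a generic downward compressor, not the thesis's actual algorithm; the threshold and ratio are assumptions:

```python
import numpy as np

def compressor_gain_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static downward-compression curve: above the threshold, output level
    rises at 1/ratio of the input rate; below it, gain is 0 dB."""
    over = np.maximum(np.asarray(level_db, dtype=float) - threshold_db, 0.0)
    return -over * (1.0 - 1.0 / ratio)  # attenuation in dB (<= 0)
```

For example, with a -20 dB threshold and a 4:1 ratio, a 0 dB input is attenuated by 15 dB, while a -30 dB input passes unchanged.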

  14. The information content of high-frequency seismograms and the near-surface geologic structure of "hard rock" recording sites

    USGS Publications Warehouse

    Cranswick, E.

    1988-01-01

    Due to hardware developments in the last decade, the high-frequency end of the frequency band of seismic waves analyzed for source mechanisms has been extended into the audio-frequency range (>20 Hz). In principle, the short wavelengths corresponding to these frequencies can provide information about the details of seismic sources, but in fact, much of the "signal" is the site response of the near-surface. Several examples of waveform data recorded at "hard rock" sites, which are generally assumed to have a "flat" transfer function, are presented to demonstrate the severe signal distortions, including fmax, produced by near-surface structures. Analysis of the geology of a number of sites indicates that the overall attenuation of high-frequency (>1 Hz) seismic waves is controlled by the whole-path Q between source and receiver, but the presence of distinct fmax site resonance peaks is controlled by the nature of the surface layer and the underlying near-surface structure. Models of vertical decoupling of the surface and near-surface and horizontal decoupling of adjacent sites on hard rock outcrops are proposed, and their behaviour is compared to the observations of hard rock site response. The upper bound to the frequency band of the seismic waves that contain significant source information which can be deconvolved from a site response or an array response is discussed in terms of fmax and the correlation of waveform distortion with the outcrop-scale geologic structure of hard rock sites. It is concluded that although the velocity structures of hard rock sites, unlike those of alluvium sites, allow some audio-frequency seismic energy to propagate to the surface, the resulting signals are a highly distorted, limited subset of the source spectra. © 1988 Birkhäuser Verlag.

  15. USRD type F63 transducer

    NASA Astrophysics Data System (ADS)

    Jevnager, M. D.; Tims, A. C.

    1981-11-01

    A small reversible audio-frequency-range transducer was developed. The type F63 transducer is designed to meet the specific needs of the user. It is sensitive, and stable over temperature and moderate hydrostatic pressures, as required by the Naval Mine Engineering Facility to improve its mission capability.

  16. 47 CFR 95.669 - External controls.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) Audio frequency power amplifier output connector and selector switch. (5) On-off switch for primary power to transmitter. This switch may be combined with receiver controls such as the receiver on-off switch and volume control. (6) Upper/lower sideband selector switch (for a transmitter that transmits...

  17. Computer Series, 86. Bits and Pieces, 35.

    ERIC Educational Resources Information Center

    Moore, John W., Ed.

    1987-01-01

    Describes eight applications of the use of computers in teaching chemistry. Includes discussions of audio frequency measurements of heat capacity ratios, quantum mechanics, ab initio calculations, problem solving using spreadsheets, simplex optimization, faradaic impedance diagrams, and the recording and tabulation of student laboratory data. (TW)

  18. Geophysical exploration with audio frequency magnetic fields

    NASA Astrophysics Data System (ADS)

    Labson, V. F.

    1985-12-01

    Experience with the Audio Frequency Magnetic (AFMAG) method has demonstrated that an electromagnetic exploration system using the Earth's natural audio-frequency magnetic fields as an energy source is capable of mapping subsurface electrical structure in the upper kilometer of the Earth's crust. The limitations are resolved by adapting the tensor analysis and remote-reference noise-bias-removal techniques from the geomagnetic induction and magnetotelluric methods to the computation of the tippers. After a thorough spectral study of the natural magnetic fields, lightweight magnetic field sensors, capable of measuring the magnetic field throughout the year, were designed. A digital acquisition and processing system, with the ability to provide audio-frequency tipper results in the field, was then built to complete the apparatus. The new instrumentation was used in a study of the Mariposa, California site previously mapped with AFMAG. The usefulness of natural magnetic field data in mapping an electrically conductive body was again demonstrated. Several field examples are used to demonstrate that the proposed procedure yields reasonable results.
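The tipper mentioned above relates the vertical magnetic field to the horizontal components, Hz = Tx*Hx + Ty*Hy, and can be estimated by least squares over many spectral estimates. A noise-free numpy sketch on synthetic data (no remote-reference step, so the bias-removal machinery of the abstract is omitted):

```python
import numpy as np

# Synthetic horizontal-field spectra (complex), standing in for measured data.
rng = np.random.default_rng(0)
n = 500
hx = rng.standard_normal(n) + 1j * rng.standard_normal(n)
hy = rng.standard_normal(n) + 1j * rng.standard_normal(n)
t_true = np.array([0.3 - 0.1j, -0.2 + 0.05j])   # assumed tipper (Tx, Ty)
hz = t_true[0] * hx + t_true[1] * hy            # vertical field, noise-free here

# Least-squares tipper estimate from Hz = Tx*Hx + Ty*Hy
t_est, *_ = np.linalg.lstsq(np.column_stack([hx, hy]), hz, rcond=None)
```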

  19. Safety of the HyperSound® Audio System in Subjects with Normal Hearing.

    PubMed

    Mehta, Ritvik P; Mattson, Sara L; Kappus, Brian A; Seitzman, Robin L

    2015-06-11

    The objective of the study was to assess the safety of the HyperSound® Audio System (HSS), a novel audio system using ultrasound technology, in normal-hearing subjects under normal use conditions, using a pre-exposure/post-exposure test design. We investigated primary and secondary outcome measures: i) temporary threshold shift (TTS), defined as a >10 dB shift in pure tone air conduction thresholds and/or a decrement in distortion product otoacoustic emissions (DPOAEs) of >10 dB at two or more frequencies; and ii) presence of new-onset otologic symptoms after exposure. Twenty adult subjects with normal hearing underwent a pre-exposure assessment (pure tone air conduction audiometry, tympanometry, DPOAEs, and an otologic symptoms questionnaire), followed by exposure to a 2-h movie with sound delivered through the HSS emitter, followed by a post-exposure assessment. No TTS or new-onset otologic symptoms were identified. HSS demonstrates excellent safety in normal-hearing subjects under normal use conditions.

  20. Safety of the HyperSound® Audio System in Subjects with Normal Hearing

    PubMed Central

    Mattson, Sara L.; Kappus, Brian A.; Seitzman, Robin L.

    2015-01-01

    The objective of the study was to assess the safety of the HyperSound® Audio System (HSS), a novel audio system using ultrasound technology, in normal-hearing subjects under normal use conditions, using a pre-exposure/post-exposure test design. We investigated primary and secondary outcome measures: i) temporary threshold shift (TTS), defined as a >10 dB shift in pure tone air conduction thresholds and/or a decrement in distortion product otoacoustic emissions (DPOAEs) of >10 dB at two or more frequencies; and ii) presence of new-onset otologic symptoms after exposure. Twenty adult subjects with normal hearing underwent a pre-exposure assessment (pure tone air conduction audiometry, tympanometry, DPOAEs, and an otologic symptoms questionnaire), followed by exposure to a 2-h movie with sound delivered through the HSS emitter, followed by a post-exposure assessment. No TTS or new-onset otologic symptoms were identified. HSS demonstrates excellent safety in normal-hearing subjects under normal use conditions. PMID:26779330

  1. Multiresolution analysis (discrete wavelet transform) through Daubechies family for emotion recognition in speech.

    NASA Astrophysics Data System (ADS)

    Campo, D.; Quintero, O. L.; Bastidas, M.

    2016-04-01

    We propose a study of the mathematical properties of voice as an audio signal. This work includes signals in which the channel conditions are not ideal for emotion recognition. Multiresolution analysis (discrete wavelet transform) was performed through the use of the Daubechies wavelet family (Db1-Haar, Db6, Db8, Db10), allowing the decomposition of the initial audio signal into sets of coefficients from which a set of features was extracted and analyzed statistically in order to differentiate emotional states. Artificial neural networks (ANNs) proved to allow an appropriate classification of such states. This study shows that the features extracted using wavelet decomposition are enough to analyze and extract emotional content in audio signals, presenting a high accuracy rate in the classification of emotional states without the need for other classical frequency-time features. Accordingly, this paper seeks to characterize mathematically the six basic emotions in humans: boredom, disgust, happiness, anxiety, anger, and sadness, plus neutrality, for a total of seven states to identify.
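One level of the Db1 (Haar) decomposition used in the study splits a signal into approximation (low-pass) and detail (high-pass) coefficients. A minimal numpy sketch; a real analysis would use a wavelet library and the longer Daubechies filters (Db6, Db8, Db10):

```python
import numpy as np

def haar_dwt(x):
    """One level of the Db1 (Haar) discrete wavelet transform:
    approximation (low-pass) and detail (high-pass) coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # pairwise averages (scaled)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # pairwise differences (scaled)
    return a, d

a, d = haar_dwt([4.0, 4.0, 2.0, 0.0])
```

Because the transform is orthogonal, signal energy is preserved across the coefficient sets, which is what makes statistics of the coefficients meaningful features.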

  2. On-line Tool Wear Detection on DCMT070204 Carbide Tool Tip Based on Noise Cutting Audio Signal using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Prasetyo, T.; Amar, S.; Arendra, A.; Zam Zami, M. K.

    2018-01-01

    This study develops an on-line detection system to predict the wear of the DCMT070204 tool tip during cutting of the workpiece. The machine used in this research is a CNC ProTurn 9000 cutting an ST42 steel cylinder. The audio signal was captured using a microphone placed on the tool post and recorded in Matlab at a sampling rate of 44.1 kHz with a frame size of 1024 samples. The recorded dataset comprises 110 samples of the audio signal captured while cutting with a normal chisel and with a worn chisel. Signal features were then extracted in the frequency domain using the Fast Fourier Transform, and features were selected based on correlation analysis. Tool wear classification was performed using an artificial neural network with the 33 selected input features, trained with the backpropagation method. Classification performance testing yields an accuracy of 74%.
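Frame-level FFT feature extraction at the stated 44.1 kHz sampling rate and 1024-sample frame size can be sketched as follows; the 3 kHz test tone is an illustration, not the paper's data:

```python
import numpy as np

FS, N = 44100, 1024  # sampling rate and frame size stated in the abstract

frame = np.sin(2 * np.pi * 3000.0 * np.arange(N) / FS)  # synthetic 3 kHz tone
spectrum = np.abs(np.fft.rfft(frame * np.hanning(N)))   # windowed magnitude spectrum
freqs = np.fft.rfftfreq(N, d=1.0 / FS)                  # bin centres, ~43 Hz apart
peak_hz = freqs[np.argmax(spectrum)]
```

Magnitudes of selected bins (here, the 33 most correlated with wear) would then feed the neural network.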

  3. An acoustic metamaterial composed of multi-layer membrane-coated perforated plates for low-frequency sound insulation

    NASA Astrophysics Data System (ADS)

    Fan, Li; Chen, Zhe; Zhang, Shu-yi; Ding, Jin; Li, Xiao-juan; Zhang, Hui

    2015-04-01

    Insulating against low-frequency sound (below 500 Hz) remains challenging despite the progress that has been achieved in sound insulation and absorption. In this work, an acoustic metamaterial based on membrane-coated perforated plates is presented for achieving sound insulation in a low-frequency range, even covering the lower audio frequency limit, 20 Hz. Theoretical analysis and finite element simulations demonstrate that this metamaterial can effectively block acoustic waves over a wide low-frequency band regardless of incident angles. Two mechanisms, non-resonance and monopolar resonance, operate in the metamaterial, resulting in a more powerful sound insulation ability than that achieved using periodically arranged multi-layer solid plates.

  4. Detection of goal events in soccer videos

    NASA Astrophysics Data System (ADS)

    Kim, Hyoung-Gook; Roeber, Steffen; Samour, Amjad; Sikora, Thomas

    2005-01-01

    In this paper, we present an automatic extraction of goal events in soccer videos using audio track features alone, without relying on expensive-to-compute video track features. The extracted goal events can be used for high-level indexing and selective browsing of soccer videos. The detection of soccer video highlights using audio contents comprises three steps: 1) extraction of audio features from a video sequence, 2) candidate detection of highlight events based on the information provided by the feature extraction methods and the Hidden Markov Model (HMM), and 3) goal event selection to finally determine the video intervals to be included in the summary. For this purpose we compared the performance of the well-known Mel-scale Frequency Cepstral Coefficients (MFCC) feature extraction method vs. the MPEG-7 Audio Spectrum Projection (ASP) feature extraction method based on three different decomposition methods, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Non-Negative Matrix Factorization (NMF). To evaluate our system we collected five soccer game videos from various sources, in total seven hours of soccer games comprising eight gigabytes of data. One of the five soccer games is used as the training data (e.g., announcers' excited speech, audience ambient speech noise, audience clapping, environmental sounds). Our goal event detection results are encouraging.

  5. Inexpensive Audio Activities: Earbud-Based Sound Experiments

    ERIC Educational Resources Information Center

    Allen, Joshua; Boucher, Alex; Meggison, Dean; Hruby, Kate; Vesenka, James

    2016-01-01

    Inexpensive alternatives to a number of classic introductory physics sound laboratories are presented including interference phenomena, resonance conditions, and frequency shifts. These can be created using earbuds, economical supplies such as Giant Pixie Stix® wrappers, and free software available for PCs and mobile devices. We describe two…

  6. A 16-channel cassette tape recorder system for clinical EEGs.

    PubMed

    Barlow, J S

    1975-02-01

    A 16-channel EEG tape recorder system having a frequency response of DC-100 Hz for each channel is described. The system utilizes standard, commercially available high-fidelity audio tape decks in conjunction with specially designed circuits for time-division multiplexing and balanced amplitude modulation.

  7. A Case Study in Acoustical Design.

    ERIC Educational Resources Information Center

    Ledford, Bruce R.; Brown, John A.

    1992-01-01

    Addresses concerns of both facilities planners and instructional designers in planning for the audio component of group presentations. Factors in the architectural design of enclosures for the reproduction of sound are described, including frequency, amplitude, and reverberation; and a case study for creating an acceptable enclosure is presented.…

  8. Power line detection system

    DOEpatents

    Latorre, Victor R.; Watwood, Donald B.

    1994-01-01

    A short-range, radio frequency (RF) transmitting-receiving system that provides both visual and audio warnings to the pilot of a helicopter or light aircraft of an upcoming power transmission line complex. Small, milliwatt-level narrowband transmitters, powered by the transmission line itself, are installed on top of selected transmission line support towers or within existing warning balls, and provide a continuous RF signal to approaching aircraft. The on-board receiver can be either a separate unit or a portion of the existing avionics, and can also share an existing antenna with another airborne system. Upon receipt of a warning signal, the receiver will trigger a visual and an audio alarm to alert the pilot to the potential power line hazard.

  9. Digital signal processing techniques for pitch shifting and time scaling of audio signals

    NASA Astrophysics Data System (ADS)

    Buś, Szymon; Jedrzejewski, Konrad

    2016-09-01

    In this paper, we present the techniques used for modifying the spectral content (pitch shifting) and for changing the time duration (time scaling) of an audio signal. A short introduction gives a necessary background for understanding the discussed issues and contains explanations of the terms used in the paper. In subsequent sections we present three different techniques appropriate both for pitch shifting and for time scaling. These techniques use three different time-frequency representations of a signal, namely short-time Fourier transform (STFT), continuous wavelet transform (CWT) and constant-Q transform (CQT). The results of simulation studies devoted to comparison of the properties of these methods are presented and discussed in the paper.
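Pitch shifting by plain resampling illustrates why the dedicated time-frequency methods above are needed: a shift of s semitones corresponds to a rate factor of 2**(s/12), but resampling changes the duration by the same factor. A numpy sketch:

```python
import numpy as np

def resample_pitch_shift(x, semitones):
    """Pitch shift by resampling at rate 2**(semitones/12). This also rescales
    the duration by the same factor, which is exactly the coupling that the
    STFT, CWT, and CQT techniques are designed to break."""
    rate = 2.0 ** (semitones / 12.0)
    n_out = int(round(len(x) / rate))
    return np.interp(np.arange(n_out) * rate, np.arange(len(x)), x)

up = resample_pitch_shift(np.zeros(1000), 12.0)  # one octave up halves the length
```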

  10. Parkinson's disease and the effect of lexical factors on vowel articulation.

    PubMed

    Watson, Peter J; Munson, Benjamin

    2008-11-01

    Lexical factors (i.e., word frequency and phonological neighborhood density) influence speech perception and production. It is unknown whether these factors are affected by Parkinson's disease (PD). Ten men with PD and ten healthy men read CVC words (varying orthogonally in word frequency and density) aloud while being audio-recorded. Acoustic analysis was performed on the duration and Bark-scaled F1-F2 values of the vowels contained in the words. Vowel space was larger for low-frequency words from dense neighborhoods than from sparse ones for both groups. However, the participants with PD did not show an effect of density on dispersion for high-frequency words.
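Bark-scaled formant values such as those above can be obtained with, for example, Traunmueller's Hz-to-Bark transform; the paper does not state which transform it used, so this particular formula (and the formant values) are assumptions for illustration:

```python
def hz_to_bark(f_hz):
    """Traunmueller's (1990) Hz-to-Bark transform: z = 26.81*f/(1960+f) - 0.53."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

f1_bark = hz_to_bark(500.0)   # a typical F1 value
f2_bark = hz_to_bark(1500.0)  # a typical F2 value
```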

  11. An improved nuclear magnetic resonance spectrometer

    NASA Technical Reports Server (NTRS)

    Elleman, D. D.; Manatt, S. L.

    1967-01-01

    A cylindrical sample container provides a high degree of nuclear stabilization to a nuclear magnetic resonance (nmr) spectrometer. It is placed coaxially about the nmr insert and contains a reference sample that gives a signal suitable for locking the field and frequency of an nmr spectrometer with a simple audio modulation system.

  12. 75 FR 27779 - Sunshine Act Meeting; Open Commission Meeting; Thursday, May 20, 2010

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-18

    ... of Rules and Policies for the Digital Audio Radio Satellite Service in the 2310-2360 MHz Frequency...

  13. Zipf's Law in Short-Time Timbral Codings of Speech, Music, and Environmental Sound Signals

    PubMed Central

    Haro, Martín; Serrà, Joan; Herrera, Perfecto; Corral, Álvaro

    2012-01-01

    Timbre is a key perceptual feature that allows discrimination between different sounds. Timbral sensations are highly dependent on the temporal evolution of the power spectrum of an audio signal. In order to quantitatively characterize such sensations, the shape of the power spectrum has to be encoded in a way that preserves certain physical and perceptual properties. Therefore, it is common practice to encode short-time power spectra using psychoacoustical frequency scales. In this paper, we study and characterize the statistical properties of such encodings, here called timbral code-words. In particular, we report on rank-frequency distributions of timbral code-words extracted from 740 hours of audio coming from disparate sources such as speech, music, and environmental sounds. Analogously to text corpora, we find a heavy-tailed Zipfian distribution with exponent close to one. Importantly, this distribution is found independently of different encoding decisions and regardless of the audio source. Further analysis of the intrinsic characteristics of the most and least frequent code-words reveals that the most frequent code-words tend to have a more homogeneous structure. We also find that speech and music databases have specific, distinctive code-words while, in the case of the environmental sounds, these database-specific code-words are not present. Finally, we find that a Yule-Simon process with memory provides a reasonable quantitative approximation for our data, suggesting the existence of a common simple generative mechanism for all considered sound sources. PMID:22479497
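The Zipfian exponent reported above can be estimated from a rank-frequency distribution by a log-log fit. A toy sketch on a tiny symbol stream; real data would be the timbral code-words:

```python
import numpy as np
from collections import Counter

# Toy symbol stream standing in for a sequence of timbral code-words.
tokens = list("aaaabbbccd")
counts = np.array(sorted(Counter(tokens).values(), reverse=True), dtype=float)
ranks = np.arange(1, counts.size + 1)

# Zipf's law predicts counts ~ rank**(-alpha); fit alpha on log-log axes.
slope, _ = np.polyfit(np.log(ranks), np.log(counts), 1)
alpha = -slope
```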

  14. Apparatus and method for non-contact, acoustic resonance determination of intraocular pressure

    DOEpatents

    Sinha, Dipen N.; Wray, William O.

    1994-01-01

    Apparatus and method for measuring intraocular pressure changes in an eye under investigation by detection of vibrational resonances therein. An ultrasonic transducer operating at its resonant frequency is amplitude modulated and swept over a range of audio frequencies in which human eyes will resonate. The output therefrom is focused onto the eye under investigation, and the resonant vibrations of the eye observed using a fiber-optic reflection vibration sensor. Since the resonant frequency of the eye is dependent on the pressure therein, changes in intraocular pressure may readily be determined after a baseline pressure is established.
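The excitation described above, an ultrasonic carrier amplitude-modulated by a swept audio frequency, can be sketched numerically; the carrier frequency, sweep range, and sample rate below are illustrative assumptions, not values from the patent:

```python
import numpy as np

FS = 400_000         # sample rate, chosen to cover the ultrasonic carrier
F_CARRIER = 100_000  # assumed transducer resonance, Hz
DUR = 0.5            # sweep duration, s

t = np.arange(int(FS * DUR)) / FS
f_mod = np.linspace(20.0, 200.0, t.size)       # audio sweep over plausible eye resonances
mod_phase = 2 * np.pi * np.cumsum(f_mod) / FS  # integrate instantaneous frequency
excitation = (1 + np.cos(mod_phase)) / 2 * np.sin(2 * np.pi * F_CARRIER * t)
```

The eye responds to the audio-rate envelope (acoustic radiation pressure), so its resonance shows up when the sweep crosses the pressure-dependent resonant frequency.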

  15. Apparatus and method for non-contact, acoustic resonance determination of intraocular pressure

    DOEpatents

    Sinha, D.N.; Wray, W.O.

    1994-12-27

    The apparatus and method for measuring intraocular pressure changes in an eye under investigation by detection of vibrational resonances therein. An ultrasonic transducer operating at its resonant frequency is amplitude modulated and swept over a range of audio frequencies in which human eyes will resonate. The output therefrom is focused onto the eye under investigation, and the resonant vibrations of the eye observed using a fiber-optic reflection vibration sensor. Since the resonant frequency of the eye is dependent on the pressure therein, changes in intraocular pressure may readily be determined after a baseline pressure is established. 3 figures.

  16. A Computationally Efficient Algorithm for Disturbance Cancellation to Meet the Requirements for Optical Payloads in Satellites

    DTIC Science & Technology

    2001-09-01


  17. Apparatus for monitoring X-ray beam alignment

    DOEpatents

    Steinmeyer, Peter A.

    1991-10-08

    A self-contained, hand-held apparatus is provided for monitoring alignment of an X-ray beam in an instrument employing an X-ray source. The apparatus includes a transducer assembly containing a photoresistor for providing a range of electrical signals responsive to a range of X-ray beam intensities from the X-ray beam being aligned. A circuit, powered by a 7.5 VDC power supply and containing an audio frequency pulse generator whose frequency varies with the resistance of the photoresistor, is provided for generating a range of audible sounds. A portion of the audible range corresponds to low X-ray beam intensity; another portion corresponds to high X-ray beam intensity. The transducer assembly may include a photoresistor, a thin layer of X-ray fluorescent material, and a filter layer transparent to X-rays but opaque to visible light. X-rays from the beam undergoing alignment penetrate the filter layer and excite the layer of fluorescent material. The light emitted from the fluorescent material alters the resistance of the photoresistor, which is in the electrical circuit including the audio pulse generator and a speaker. In use, the X-ray beam is brought into complete alignment by adjusting it to produce an audible sound of the maximum frequency.
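The patent does not specify the oscillator topology; one common realization is a 555-style astable whose pitch rises as the photoresistor's resistance falls under stronger fluorescence. The formula and component values below are purely illustrative assumptions:

```python
def astable_freq_hz(r_photo_ohm, r1_ohm=1_000.0, c_farad=0.1e-6):
    """Output pitch of a 555-style astable oscillator whose timing resistor is
    the photoresistor: f = 1.44 / ((R1 + 2*R_photo) * C). All values are
    illustrative assumptions, not taken from the patent."""
    return 1.44 / ((r1_ohm + 2.0 * r_photo_ohm) * c_farad)

f_aligned = astable_freq_hz(5_000.0)       # strong fluorescence -> low resistance
f_misaligned = astable_freq_hz(100_000.0)  # weak fluorescence -> high resistance
```

This reproduces the behavior described: the tone reaches its maximum frequency when the beam is fully aligned.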

  18. Apparatus for monitoring X-ray beam alignment

    DOEpatents

    Steinmeyer, P.A.

    1991-10-08

    A self-contained, hand-held apparatus is provided for monitoring alignment of an X-ray beam in an instrument employing an X-ray source. The apparatus includes a transducer assembly containing a photoresistor for providing a range of electrical signals responsive to a range of X-ray beam intensities from the X-ray beam being aligned. A circuit, powered by a 7.5 VDC power supply and containing an audio frequency pulse generator whose frequency varies with the resistance of the photoresistor, is provided for generating a range of audible sounds. A portion of the audible range corresponds to low X-ray beam intensity; another portion corresponds to high X-ray beam intensity. The transducer assembly may include a photoresistor, a thin layer of X-ray fluorescent material, and a filter layer transparent to X-rays but opaque to visible light. X-rays from the beam undergoing alignment penetrate the filter layer and excite the layer of fluorescent material. The light emitted from the fluorescent material alters the resistance of the photoresistor, which is in the electrical circuit including the audio pulse generator and a speaker. In use, the X-ray beam is brought into complete alignment by adjusting it to produce an audible sound of the maximum frequency. 2 figures.

  19. Power line detection system

    DOEpatents

    Latorre, V.R.; Watwood, D.B.

    1994-09-27

    A short-range, radio frequency (RF) transmitting-receiving system that provides both visual and audio warnings to the pilot of a helicopter or light aircraft of an upcoming power transmission line complex. Small, milliwatt-level narrowband transmitters, powered by the transmission line itself, are installed on top of selected transmission line support towers or within existing warning balls, and provide a continuous RF signal to approaching aircraft. The on-board receiver can be either a separate unit or a portion of the existing avionics, and can also share an existing antenna with another airborne system. Upon receipt of a warning signal, the receiver will trigger a visual and an audio alarm to alert the pilot to the potential power line hazard. 4 figs.

  20. Synthetic Modeling of A Geothermal System Using Audio-magnetotelluric (AMT) and Magnetotelluric (MT)

    NASA Astrophysics Data System (ADS)

    Mega Saputra, Rifki; Widodo

    2017-04-01

    Indonesia has 40% of the world's potential geothermal resources, with an estimated capacity of 28,910 MW. Geothermal systems in Indonesia are generally liquid-dominated and driven by volcanic activity. In geothermal exploration, electromagnetic methods are used to map structures that could host potential reservoirs and source rocks. We investigate the responses of a geothermal system using synthetic Audio-magnetotelluric (AMT) and Magnetotelluric (MT) data. Owing to their frequency ranges, AMT and MT data resolve the shallow and the deeper structure, respectively. 1-D modeling has been performed using AMT and MT data. The results indicate that AMT and MT data give a detailed conductivity distribution of the geothermal structure.
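
    As a minimal sketch of the quantity such 1-D AMT/MT studies model (the standard magnetotelluric relation, not the authors' own code): apparent resistivity is computed from the surface impedance Z = E/H, and for a uniform halfspace, whose plane-wave impedance is known in closed form, it recovers the true resistivity at every frequency.

```python
import cmath
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def apparent_resistivity(E, H, freq):
    """MT apparent resistivity (ohm-m) from orthogonal E (V/m) and
    H (A/m) spectral estimates at frequency `freq` (Hz)."""
    Z = E / H
    omega = 2 * math.pi * freq
    return abs(Z) ** 2 / (omega * MU0)

def halfspace_impedance(rho, freq):
    """Plane-wave surface impedance of a uniform halfspace of
    resistivity rho (ohm-m)."""
    omega = 2 * math.pi * freq
    return cmath.sqrt(1j * omega * MU0 * rho)

# A 100 ohm-m halfspace should be recovered exactly (H taken as 1 A/m).
rho_a = apparent_resistivity(halfspace_impedance(100.0, 10.0), 1.0, 10.0)
```

Layered (non-halfspace) models make the apparent resistivity frequency-dependent, which is what the 1-D inversion exploits.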

  1. Flexible and wearable 3D graphene sensor with 141 KHz frequency signal response capability

    NASA Astrophysics Data System (ADS)

    Xu, R.; Zhang, H.; Cai, Y.; Ruan, J.; Qu, K.; Liu, E.; Ni, X.; Lu, M.; Dong, X.

    2017-09-01

    We developed a flexible force sensor consisting of 3D graphene foam (GF) encapsulated in flexible polydimethylsiloxane (PDMS). Because the 3D GF/PDMS sensor is based on the transformation of an electronic band structure induced by static mechanical strain or kHz-range vibration, it can detect frequency signals in both tuning fork tests and piezoelectric ceramic transducer tests, which showed a clear linear response from audio frequencies up to 141 kHz in the ultrasound range. Because of their excellent response over a wide bandwidth, the 3D GF/PDMS sensors are attractive for interactive wearable devices or artificial prosthetics capable of perceiving seismic waves, ultrasonic waves, shock waves, and transient pressures.

  2. Implementation and performance evaluation of acoustic denoising algorithms for UAV

    NASA Astrophysics Data System (ADS)

    Chowdhury, Ahmed Sony Kamal

    Unmanned Aerial Vehicles (UAVs) have become a popular alternative for wildlife monitoring and border surveillance applications. Eliminating the UAV's background noise and effectively classifying the target audio signal are still major challenges. The main goal of this thesis is to remove the UAV's background noise by means of acoustic denoising techniques. Existing denoising algorithms, such as Adaptive Least Mean Square (LMS), Wavelet Denoising, Time-Frequency Block Thresholding, and Wiener Filter, were implemented and their performance evaluated. The denoising algorithms were evaluated using average Signal to Noise Ratio (SNR), Segmental SNR (SSNR), Log Likelihood Ratio (LLR), and Log Spectral Distance (LSD) metrics. To evaluate the effectiveness of the denoising algorithms on classification of target audio, we implemented Support Vector Machine (SVM) and Naive Bayes classification algorithms. Simulation results demonstrate that the LMS and Discrete Wavelet Transform (DWT) denoising algorithms offered superior performance to the other algorithms. Finally, we implemented the LMS and DWT algorithms on a DSP board for hardware evaluation. Experimental results showed that the LMS algorithm is more robust than DWT across various noise types in classifying target audio signals.

  3. Open-Loop Audio-Visual Stimulation (AVS): A Useful Tool for Management of Insomnia?

    PubMed

    Tang, Hsin-Yi Jean; Riegel, Barbara; McCurry, Susan M; Vitiello, Michael V

    2016-03-01

    Audio Visual Stimulation (AVS), a form of neurofeedback, is a non-pharmacological intervention that has been used for both performance enhancement and symptom management. We review the history of AVS, its two sub-types (closed- and open-loop), and discuss its clinical implications. We also describe a promising new application of AVS to improve sleep, and potentially decrease pain. AVS research can be traced back to the late 1800s. AVS's efficacy has been demonstrated for both performance enhancement and symptom management. Although AVS is commonly used in clinical settings, there is limited literature evaluating clinical outcomes and mechanisms of action. One of the challenges to AVS research is the lack of standardized terms, which makes systematic review and literature consolidation difficult. Future studies using AVS as an intervention should: (1) use operational definitions that are consistent with the existing literature, such as AVS, Audio-visual Entrainment, or Light and Sound Stimulation; (2) provide a clear rationale for the chosen training frequency modality; (3) use a randomized controlled design; and (4) follow the Consolidated Standards of Reporting Trials and/or related guidelines when disseminating results.

  4. Christmas Light Display

    NASA Astrophysics Data System (ADS)

    Ross, Arthur; Renfro, Timothy

    2012-03-01

    The Digital Electronics class at McMurry University created a Christmas light display that toggles the power of different strands of lights, according to what frequencies are played in a song, as an example of an analog to digital circuit. This was accomplished using a BA3830S IC six-band audio filter and six solid-state relays.

  5. The Hope of Audacity[R] (to Teach Acoustics)

    ERIC Educational Resources Information Center

    Groppe, Jennifer

    2011-01-01

    When working on an oral history project, my brother recommended that I download a free audio recording and editing program called Audacity[R]. I have since discovered that it is a fantastic tool for students to visualize sound waves and to understand the meaning of amplitude, frequency, and superposition. This paper describes a collection of…

  6. Coupled Electro-Magneto-Mechanical-Acoustic Analysis Method Developed by Using 2D Finite Element Method for Flat Panel Speaker Driven by Magnetostrictive-Material-Based Actuator

    NASA Astrophysics Data System (ADS)

    Yoo, Byungjin; Hirata, Katsuhiro; Oonishi, Atsurou

    In this study, a coupled analysis method was developed for flat panel speakers driven by an actuator based on giant magnetostrictive material (GMM). The sound field produced by a flat panel speaker driven by a GMM actuator depends on the vibration of the flat panel, which results from the magnetostrictive property of the GMM. In this case, to predict the sound pressure level (SPL) in the audio-frequency range, it is necessary to take into account not only the magnetostriction property of the GMM but also the effect of eddy currents and the vibration characteristics of the actuator and the flat panel. In this paper, a coupled electromagnetic-structural-acoustic analysis method is presented; this method was developed by using the finite element method (FEM). This analysis method is used to predict the performance of a flat panel speaker in the audio-frequency range. The validity of the analysis method is verified by comparison with measurement results from a prototype speaker.

  7. Regularized inversion of controlled source audio-frequency magnetotelluric data in horizontally layered transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Zhou, Jianmei; Wang, Jianxun; Shang, Qinglong; Wang, Hongnian; Yin, Changchun

    2014-04-01

    We present an algorithm for inverting controlled source audio-frequency magnetotelluric (CSAMT) data in horizontally layered transversely isotropic (TI) media. Popular inversion methods (e.g. Occam's inversion) parameterize the medium into a large number of fixed-thickness layers and reconstruct only the conductivities, which prevents recovery of the sharp interfaces between layers. In this paper, we simultaneously reconstruct all the model parameters, including both the horizontal and vertical conductivities and the layer depths. Applying the perturbation principle and the dyadic Green's function in TI media, we derive the analytic expression of the Fréchet derivatives of the CSAMT responses with respect to all the model parameters in the form of Sommerfeld integrals. A regularized iterative inversion method is established to simultaneously reconstruct all the model parameters. Numerical results show that including the depths of the layer interfaces in the inversion significantly improves the results: the algorithm can not only reconstruct the sharp interfaces between layers but also obtain conductivities close to the true values.
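
    In outline, a regularized iterative inversion of this kind repeatedly solves a damped normal-equation system built from the Fréchet derivatives. A single generic Tikhonov-regularized Gauss-Newton step is sketched below (illustrative only; the paper's Jacobian computation via Sommerfeld integrals and its regularization choices are far more involved).

```python
import numpy as np

def regularized_gn_step(J, r, m, lam):
    """One Tikhonov-regularized Gauss-Newton update: choose the step dm
    minimizing ||r - J dm||^2 + lam * ||m + dm||^2, where J is the
    Jacobian (Frechet derivative matrix), r the data residual, and m
    the current model vector."""
    A = J.T @ J + lam * np.eye(J.shape[1])
    dm = np.linalg.solve(A, J.T @ r - lam * m)
    return m + dm

# Toy linear problem: with weak damping the step recovers the model.
rng = np.random.default_rng(1)
G = rng.normal(size=(6, 3))
m_true = np.array([1.0, -2.0, 0.5])
m_est = regularized_gn_step(G, G @ m_true, np.zeros(3), 1e-8)
```

In the nonlinear CSAMT case the step is iterated, with J and r recomputed from the forward model at each new m.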

  8. Seismic characteristics of tensile fracture growth induced by hydraulic fracturing

    NASA Astrophysics Data System (ADS)

    Eaton, D. W. S.; Van der Baan, M.; Boroumand, N.

    2014-12-01

    Hydraulic fracturing is a process of injecting high-pressure slurry into a rockmass to enhance its permeability. Variants of this process are used for unconventional oil and gas development, engineered geothermal systems and block-cave mining; similar processes occur within volcanic systems. Opening of hydraulic fractures is well documented by mineback trials and tiltmeter monitoring and is a physical requirement to accommodate the volume of injected fluid. Numerous microseismic monitoring investigations acquired in the audio-frequency band are interpreted to show a prevalence of shear-dominated failure mechanisms surrounding the tensile fracture. Moreover, the radiated seismic energy in the audio-frequency band appears to be a minuscule fraction (<< 1%) of the net injected energy, i.e., the integral of the product of fluid pressure and injection rate. We use a simple penny-shaped crack model as a predictive framework to describe seismic characteristics of tensile opening during hydraulic fracturing. This model provides a useful scaling relation that links seismic moment to effective fluid pressure within the crack. Based on downhole recordings corrected for attenuation, a significant fraction of observed microseismic events are characterized by S/P amplitude ratio < 5. Despite the relatively small aperture of the monitoring arrays, which precludes both full moment-tensor analysis and definitive identification of nodal planes or axes, this ratio provides a strong indication that observed microseismic source mechanisms have a component of tensile failure. In addition, we find some instances of periodic spectral notches that can be explained by an opening/closing failure mechanism, in which fracture propagation outpaces fluid velocity within the crack. Finally, aseismic growth of tensile fractures may be indicative of a scenario in which injected energy is consumed to create new fracture surfaces.
Taken together, our observations and modeling provide evidence that failure mechanisms documented by passive monitoring of hydraulic fractures may contain a significant component of tensile failure, including fracture opening and closing, although creation of extensive new fracture surfaces may be a seismically inefficient process that radiates at sub-audio frequencies.

  9. Audio-Visual Integration in a Redundant Target Paradigm: A Comparison between Rhesus Macaque and Man

    PubMed Central

    Bremen, Peter; Massoudi, Rooholla; Van Wanrooij, Marc M.; Van Opstal, A. J.

    2017-01-01

    The mechanisms underlying multi-sensory interactions are still poorly understood despite considerable progress made since the first neurophysiological recordings of multi-sensory neurons. While the majority of single-cell neurophysiology has been performed in anesthetized or passive-awake laboratory animals, the vast majority of behavioral data stems from studies with human subjects. Interpretation of neurophysiological data implicitly assumes that laboratory animals exhibit perceptual phenomena comparable or identical to those observed in human subjects. To explicitly test this underlying assumption, we here characterized how two rhesus macaques and four humans detect changes in intensity of auditory, visual, and audio-visual stimuli. These intensity changes consisted of a gradual envelope modulation for the sound, and a luminance step for the LED. Subjects had to detect any perceived intensity change as fast as possible. By comparing the monkeys' results with those obtained from the human subjects we found that (1) unimodal reaction times differed across modality, acoustic modulation frequency, and species, (2) the largest facilitation of reaction times with the audio-visual stimuli was observed when stimulus onset asynchronies were such that the unimodal reactions would occur at the same time (response, rather than physical synchrony), and (3) the largest audio-visual reaction-time facilitation was observed when unimodal auditory stimuli were difficult to detect, i.e., at slow unimodal reaction times. We conclude that despite marked unimodal heterogeneity, similar multisensory rules applied to both species. Single-cell neurophysiology in the rhesus macaque may therefore yield valuable insights into the mechanisms governing audio-visual integration that may be informative of the processes taking place in the human brain. PMID:29238295

  10. Audio-Visual Integration in a Redundant Target Paradigm: A Comparison between Rhesus Macaque and Man.

    PubMed

    Bremen, Peter; Massoudi, Rooholla; Van Wanrooij, Marc M; Van Opstal, A J

    2017-01-01

    The mechanisms underlying multi-sensory interactions are still poorly understood despite considerable progress made since the first neurophysiological recordings of multi-sensory neurons. While the majority of single-cell neurophysiology has been performed in anesthetized or passive-awake laboratory animals, the vast majority of behavioral data stems from studies with human subjects. Interpretation of neurophysiological data implicitly assumes that laboratory animals exhibit perceptual phenomena comparable or identical to those observed in human subjects. To explicitly test this underlying assumption, we here characterized how two rhesus macaques and four humans detect changes in intensity of auditory, visual, and audio-visual stimuli. These intensity changes consisted of a gradual envelope modulation for the sound, and a luminance step for the LED. Subjects had to detect any perceived intensity change as fast as possible. By comparing the monkeys' results with those obtained from the human subjects we found that (1) unimodal reaction times differed across modality, acoustic modulation frequency, and species, (2) the largest facilitation of reaction times with the audio-visual stimuli was observed when stimulus onset asynchronies were such that the unimodal reactions would occur at the same time (response, rather than physical synchrony), and (3) the largest audio-visual reaction-time facilitation was observed when unimodal auditory stimuli were difficult to detect, i.e., at slow unimodal reaction times. We conclude that despite marked unimodal heterogeneity, similar multisensory rules applied to both species. Single-cell neurophysiology in the rhesus macaque may therefore yield valuable insights into the mechanisms governing audio-visual integration that may be informative of the processes taking place in the human brain.

  11. An extraordinary tabletop speed of light apparatus

    NASA Astrophysics Data System (ADS)

    Pegna, Guido

    2017-09-01

    A compact, low-cost, pre-aligned apparatus of the modulation type is described. The apparatus allows accurate determination of the speed of light in free propagation with an accuracy on the order of one part in 10^4. Due to the 433.92 MHz radio frequency (rf) modulation of its laser diode, determination of the speed of light is possible within a sub-meter measuring base and in small volumes (a few cm^3) of transparent solids or liquids. No oscilloscope is necessary, while the required function generators, power supplies, and optical components are incorporated into the design of the apparatus, and its receiver can slide along the optical bench while maintaining alignment with the laser beam. Measurement of the velocity factor of coaxial cables is also easily performed. The apparatus detects the phase difference between the transmitted and received rf modulation by further modulating the rf signal with an audio frequency signal; the phase difference between these signals is then observed as the loudness of the audio signal. In this way, the positions at which the minima of the audio signal are found determine where the rf signals are completely out of phase. This phase detection method yields much increased sensitivity with respect to displaying the coincidence of two signals of questionable arrival time and somewhat distorted shape on an oscilloscope. The displaying technique is also particularly suitable for large audiences as well as for unattended exhibits in museums and science centers. In addition, the apparatus can be set up in less than one minute.
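
    A quick arithmetic check of why a sub-meter measuring base suffices, assuming free-space propagation: the 433.92 MHz modulation envelope has a wavelength of about 0.69 m, so the out-of-phase positions recur within the length of a small optical bench.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def modulation_wavelength(f_mod):
    """Free-space wavelength of the rf modulation envelope (m)."""
    return C / f_mod

# 433.92 MHz modulation, as used by the apparatus described above:
lam = modulation_wavelength(433.92e6)  # roughly 0.69 m
```

In a transparent solid or liquid the wavelength shrinks by the material's refractive index, which is how the method measures propagation speed in small sample volumes.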

  12. Low cost audiovisual playback and recording triggered by radio frequency identification using Raspberry Pi.

    PubMed

    Lendvai, Ádám Z; Akçay, Çağlar; Weiss, Talia; Haussmann, Mark F; Moore, Ignacio T; Bonier, Frances

    2015-01-01

    Playbacks of visual or audio stimuli to wild animals are a widely used experimental tool in behavioral ecology. In many cases, however, playback experiments are constrained by observer limitations, such as the time observers can be present or the accuracy of observation. These problems are particularly apparent when playbacks are triggered by specific events, such as performing a specific behavior, or are targeted to specific individuals. We developed a low-cost automated playback/recording system using two field-deployable devices: radio-frequency identification (RFID) readers and Raspberry Pi micro-computers. This system detects a specific passive integrated transponder (PIT) tag attached to an individual, and subsequently plays back the stimuli, or records audio or visual information. To demonstrate the utility of this system and to test one of its possible applications, we tagged female and male tree swallows (Tachycineta bicolor) from two box-nesting populations with PIT tags and carried out playbacks of nestling begging calls every time focal females entered the nestbox over a six-hour period. We show that the RFID-Raspberry Pi system presents a versatile, low-cost, field-deployable system that can be adapted for many audio and visual playback purposes. In addition, the set-up does not require programming knowledge, and it is easily customized to many other applications, depending on the research questions. Here, we discuss the possible applications and limitations of the system. The low cost and the small learning curve of the RFID-Raspberry Pi system provide a powerful new tool for field biologists.
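
    The trigger logic of such a tag-activated playback system can be sketched in a few lines of Python (an illustrative assumption, not the published system's code; the `should_play` helper and its cooldown parameter are hypothetical): playback fires only for focal individuals, and a cooldown debounces the repeated reads produced by a bird sitting at the antenna.

```python
import time

def should_play(tag_id, focal_tags, last_played, now=None, cooldown=30.0):
    """Decide whether a detected PIT tag should trigger playback.
    `last_played` maps tag id -> time of last trigger; reads within
    `cooldown` seconds of a trigger for the same tag are ignored."""
    now = time.monotonic() if now is None else now
    if tag_id not in focal_tags:
        return False                       # not a focal individual
    if now - last_played.get(tag_id, float("-inf")) < cooldown:
        return False                       # still in cooldown window
    last_played[tag_id] = now
    return True
```

In a deployment, a main loop would poll the RFID reader and, when `should_play` returns True, start the audio player and/or recorder.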

  13. Low cost audiovisual playback and recording triggered by radio frequency identification using Raspberry Pi

    PubMed Central

    Akçay, Çağlar; Weiss, Talia; Haussmann, Mark F.; Moore, Ignacio T.; Bonier, Frances

    2015-01-01

    Playbacks of visual or audio stimuli to wild animals are a widely used experimental tool in behavioral ecology. In many cases, however, playback experiments are constrained by observer limitations, such as the time observers can be present or the accuracy of observation. These problems are particularly apparent when playbacks are triggered by specific events, such as performing a specific behavior, or are targeted to specific individuals. We developed a low-cost automated playback/recording system using two field-deployable devices: radio-frequency identification (RFID) readers and Raspberry Pi micro-computers. This system detects a specific passive integrated transponder (PIT) tag attached to an individual, and subsequently plays back the stimuli, or records audio or visual information. To demonstrate the utility of this system and to test one of its possible applications, we tagged female and male tree swallows (Tachycineta bicolor) from two box-nesting populations with PIT tags and carried out playbacks of nestling begging calls every time focal females entered the nestbox over a six-hour period. We show that the RFID-Raspberry Pi system presents a versatile, low-cost, field-deployable system that can be adapted for many audio and visual playback purposes. In addition, the set-up does not require programming knowledge, and it is easily customized to many other applications, depending on the research questions. Here, we discuss the possible applications and limitations of the system. The low cost and the small learning curve of the RFID-Raspberry Pi system provide a powerful new tool for field biologists. PMID:25870771

  14. Dynamic conductivity from audio to optical frequencies of semiconducting manganites approaching the metal-insulator transition

    NASA Astrophysics Data System (ADS)

    Lunkenheimer, P.; Mayr, F.; Loidl, A.

    2006-07-01

    We report the frequency-dependent conductivity of the manganite system La1-xSrxMnO3 (x ≤ 0.2) when approaching the metal-insulator transition from the insulating side. Results from low-frequency dielectric measurements are combined with spectra in the infrared region. For low doping levels the behavior is dominated by hopping transport of localized charge carriers at low frequencies and by phononic and electronic excitations in the infrared region. For the higher Sr contents the approach of the metallic state is accompanied by the successive suppression of the hopping contribution at low frequencies and by the development of polaronic excitations in the infrared region, which finally become superimposed by a strong Drude contribution in the fully metallic state.

  15. Method for determining depth and shape of a sub-surface conductive object

    NASA Astrophysics Data System (ADS)

    Lee, D. O.; Montoya, P. C.; Wayland, J. R., Jr.

    1984-06-01

    The depth to and size of an underground object may be determined by sweeping a controlled source audio magnetotelluric (CSAMT) signal in frequency and locating a peak response as the receiver spans the edge of the object. The depth of the object is one quarter of the wavelength, in the subsurface medium, of the frequency at which the peak occurs.
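
    Assuming the quasi-static regime, in which the EM wavelength in a conductive halfspace is 2π times the skin depth, the quarter-wavelength rule above becomes a one-line depth estimate (the resistivity and peak frequency below are illustrative values, not from the patent).

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def quarter_wavelength_depth(rho, f_peak):
    """Depth estimate (m): one quarter of the EM wavelength in a
    conductive halfspace of resistivity rho (ohm-m) at the peak
    frequency f_peak (Hz); wavelength = 2*pi*skin_depth."""
    skin_depth = math.sqrt(2 * rho / (2 * math.pi * f_peak * MU0))
    return (2 * math.pi * skin_depth) / 4

# Example: 100 ohm-m ground with a response peak at 1 kHz gives ~250 m.
depth = quarter_wavelength_depth(100.0, 1000.0)
```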

  16. 47 CFR 73.756 - System specifications for double-sideband (DBS) modulated emissions in the HF broadcasting service.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false System specifications for double-sideband (DBS... Stations § 73.756 System specifications for double-sideband (DBS) modulated emissions in the HF... processing. If audio-frequency signal processing is used, the dynamic range of the modulating signal shall be...

  17. Detection of Metallic and Electronic Radar Targets by Acoustic Modulation of Electromagnetic Waves

    DTIC Science & Technology

    2017-07-01

    reradiated wave is captured by the radar’s receive antenna. The presence of measurable EM energy at any discrete multiple of the audio frequency away...the radar receiver (Rx). The presence of measurable EM energy at any discrete multiple of faudio away from the original RF carrier fRF (i.e., at any n

  18. 37 CFR 270.3 - Reports of use of sound recordings under statutory license for nonsubscription transmission...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... States over the relevant channels or stations, and from any archived programs, that provide audio... particular channel or program only once during the two-week reporting period, then the play frequency is one... the ISRC, the (A) Album title; and (B) Marketing label; (vi) The actual total performances of the...

  19. An automatic audio-magnetotelluric equipment, controlled by microprocessor, for the telesurvellance of the volcano Momotombo (Nicaragua)

    NASA Astrophysics Data System (ADS)

    Clerc, G.; Décriaud, J.-P.; Doyen, G.; Halbwachs, M.; Henrotte, M.; Rémy, J.; Zhang, X.-C.

    1984-07-01

    A campaign of audio-magnetotelluric soundings in the crater of Momotombo has shown that the structure of this volcano is suitable for surveillance by this method: it has only a single center of activity, and the resistivity decreases abruptly with depth, by a factor of at least 100, at about 270 m. This technical paper therefore describes the equipment designed and built to measure, every 2 hr, the apparent resistivity in a given direction at seven frequencies regularly spaced from 5 to 312 Hz. The data are preprocessed to fit in only 32 bytes: they are, on the one hand, printed locally for close surveillance and, on the other hand, transmitted by an ARGOS platform and received in Garchy via satellite for research studies.

  20. New radio meteor detecting and logging software

    NASA Astrophysics Data System (ADS)

    Kaufmann, Wolfgang

    2017-08-01

    A new piece of software, "Meteor Logger", for the radio observation of meteors is described. It analyses an incoming audio stream in the frequency domain to detect a radio meteor signal on the basis of its signature, instead of applying an amplitude threshold. For that purpose, the distribution of the three frequencies with the highest spectral power is considered over time (the 3f method). An auto-notch algorithm was developed to prevent radio meteor signal detection from being jammed by any interference line present. The results of an exemplary logging session are discussed.
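
    The idea behind the 3f method can be sketched as follows (an illustrative reconstruction only; the frame size, windowing, and persistence criterion here are assumptions, not Meteor Logger's actual implementation): track which three FFT bins dominate each frame, and flag a detection when the same bin persists among the top three across consecutive frames, as a steady meteor reflection does but broadband noise does not.

```python
import numpy as np

def top3_bins(frame, sample_rate):
    """Frequencies of the three strongest spectral lines in one frame."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sample_rate)
    return set(freqs[np.argsort(spec)[-3:]])

def detect_signature(frames, sample_rate, min_frames=3):
    """Flag a meteor-like signal when at least one dominant frequency
    appears in the top three of every frame in the sequence."""
    common = None
    for frame in frames:
        top = top3_bins(frame, sample_rate)
        common = top if common is None else (common & top)
    return common is not None and len(common) > 0 and len(frames) >= min_frames
```

A persistent tone keeps its bin in every frame's top three, so the running intersection stays non-empty; signals whose dominant bins wander empty it out.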

  1. Multi-Scale Scattering Transform in Music Similarity Measuring

    NASA Astrophysics Data System (ADS)

    Wang, Ruobai

    The scattering transform is a Mel-frequency-spectrum-based method, stable under time deformation, which can be used to evaluate music similarity. Compared with dynamic time warping, it performs better at detecting similar audio signals under local time-frequency deformation. Multi-scale scattering means combining scattering transforms of different window lengths. This paper argues that the multi-scale scattering transform is a good alternative to dynamic time warping in music similarity measuring. We tested the performance of the multi-scale scattering transform against other popular methods, with data designed to represent different conditions.

  2. Distortion products in auditory fMRI research: Measurements and solutions.

    PubMed

    Norman-Haignere, Sam; McDermott, Josh H

    2016-04-01

    Nonlinearities in the cochlea can introduce audio frequencies that are not present in the sound signal entering the ear. Known as distortion products (DPs), these added frequencies complicate the interpretation of auditory experiments. Sound production systems also introduce distortion via nonlinearities, a particular concern for fMRI research because the Sensimetrics earphones widely used for sound presentation are less linear than most high-end audio devices (due to design constraints). Here we describe the acoustic and neural effects of cochlear and earphone distortion in the context of fMRI studies of pitch perception, and discuss how their effects can be minimized with appropriate stimuli and masking noise. The amplitudes of cochlear and Sensimetrics earphone DPs were measured for a large collection of harmonic stimuli to assess effects of level, frequency, and waveform amplitude. Cochlear DP amplitudes were highly sensitive to the absolute frequency of the DP, and were most prominent at frequencies below 300 Hz. Cochlear DPs could thus be effectively masked by low-frequency noise, as expected. Earphone DP amplitudes, in contrast, were highly sensitive to both stimulus and DP frequency (due to prominent resonances in the earphone's transfer function), and their levels grew more rapidly with increasing stimulus level than did cochlear DP amplitudes. As a result, earphone DP amplitudes often exceeded those of cochlear DPs. Using fMRI, we found that earphone DPs had a substantial effect on the response of pitch-sensitive cortical regions. In contrast, cochlear DPs had a small effect on cortical fMRI responses that did not reach statistical significance, consistent with their lower amplitudes. Based on these findings, we designed a set of pitch stimuli optimized for identifying pitch-responsive brain regions using fMRI. 
These stimuli robustly drive pitch-responsive brain regions while producing minimal cochlear and earphone distortion, and will hopefully aid fMRI researchers in avoiding distortion confounds. Copyright © 2016 Elsevier Inc. All rights reserved.
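
    A simple polynomial-nonlinearity model illustrates why DPs from harmonic stimuli land at low absolute frequencies, where they can be masked by low-frequency noise (the stimulus below is hypothetical, and this enumeration is a textbook combination-tone model, not the paper's measurement procedure): a harmonic complex missing its fundamental generates combination tones at the fundamental itself.

```python
from itertools import combinations

def distortion_products(freqs):
    """Combination tones generated by quadratic and cubic nonlinearities
    for every pair of stimulus components (frequencies in Hz)."""
    dps = set()
    for f1, f2 in combinations(sorted(freqs), 2):
        dps.update({f2 - f1, f1 + f2,        # quadratic terms
                    2 * f1 - f2, 2 * f2 - f1})  # cubic terms
    return {f for f in dps if f > 0}

# Harmonics 8-10 of a 200 Hz fundamental, fundamental absent:
dps = distortion_products([1600, 1800, 2000])
```

The difference tones reintroduce 200 Hz, well below the 300 Hz region where cochlear DPs were found to be most prominent.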

  3. Distortion Products in Auditory fMRI Research: Measurements and Solutions

    PubMed Central

    Norman-Haignere, Sam; McDermott, Josh H.

    2016-01-01

    Nonlinearities in the cochlea can introduce audio frequencies that are not present in the sound signal entering the ear. Known as distortion products (DPs), these added frequencies complicate the interpretation of auditory experiments. Sound production systems also introduce distortion via nonlinearities, a particular concern for fMRI research because the Sensimetrics earphones widely used for sound presentation are less linear than most high-end audio devices (due to design constraints). Here we describe the acoustic and neural effects of cochlear and earphone distortion in the context of fMRI studies of pitch perception, and discuss how their effects can be minimized with appropriate stimuli and masking noise. The amplitudes of cochlear and Sensimetrics earphone DPs were measured for a large collection of harmonic stimuli to assess effects of level, frequency, and waveform amplitude. Cochlear DP amplitudes were highly sensitive to the absolute frequency of the DP, and were most prominent at frequencies below 300 Hz. Cochlear DPs could thus be effectively masked by low-frequency noise, as expected. Earphone DP amplitudes, in contrast, were highly sensitive to both stimulus and DP frequency (due to prominent resonances in the earphone’s transfer function), and their levels grew more rapidly with increasing stimulus level than did cochlear DP amplitudes. As a result, earphone DP amplitudes often exceeded those of cochlear DPs. Using fMRI, we found that earphone DPs had a substantial effect on the response of pitch-sensitive cortical regions. In contrast, cochlear DPs had a small effect on cortical fMRI responses that did not reach statistical significance, consistent with their lower amplitudes. Based on these findings, we designed a set of pitch stimuli optimized for identifying pitch-responsive brain regions using fMRI. 
These stimuli robustly drive pitch-responsive brain regions while producing minimal cochlear and earphone distortion, and will hopefully aid fMRI researchers in avoiding distortion confounds. PMID:26827809

  4. A third-order class-D amplifier with and without ripple compensation

    NASA Astrophysics Data System (ADS)

    Cox, Stephen M.; du Toit Mouton, H.

    2018-06-01

    We analyse the nonlinear behaviour of a third-order class-D amplifier, and demonstrate the remarkable effectiveness of the recently introduced ripple compensation (RC) technique in reducing the audio distortion of the device. The amplifier converts an input audio signal to a high-frequency train of rectangular pulses, whose widths are modulated according to the input signal (pulse-width modulation) and employs negative feedback. After determining the steady-state operating point for constant input and calculating its stability, we derive a small-signal model (SSM), which yields in closed form the transfer function relating (infinitesimal) input and output disturbances. This SSM shows how the RC technique is able to linearise the small-signal response of the device. We extend this SSM through a fully nonlinear perturbation calculation of the dynamics of the amplifier, based on the disparity in time scales between the pulse train and the audio signal. We obtain the nonlinear response of the amplifier to a general audio signal, avoiding the linearisation inherent in the SSM; we thereby more precisely quantify the reduction in distortion achieved through RC. Finally, simulations corroborate our theoretical predictions and illustrate the dramatic deterioration in performance that occurs when the amplifier is operated in an unstable regime. The perturbation calculation is rather general, and may be adapted to quantify the way in which other nonlinear negative-feedback pulse-modulated devices track a time-varying input signal that slowly modulates the system parameters.

  5. Achieving perceptually-accurate aural telepresence

    NASA Astrophysics Data System (ADS)

    Henderson, Paul D.

    Immersive multimedia requires not only realistic visual imagery but also a perceptually-accurate aural experience. A sound field may be presented simultaneously to a listener via a loudspeaker rendering system using the direct sound from acoustic sources as well as a simulation or "auralization" of room acoustics. Beginning with classical Wave-Field Synthesis (WFS), improvements are made to correct for asymmetries in loudspeaker array geometry. Presented is a new Spatially-Equalized WFS (SE-WFS) technique to maintain the energy-time balance of a simulated room by equalizing the reproduced spectrum at the listener for a distribution of possible source angles. Each reproduced source or reflection is filtered according to its incidence angle to the listener. An SE-WFS loudspeaker array of arbitrary geometry reproduces the sound field of a room with correct spectral and temporal balance, compared with classically-processed WFS systems. Localization accuracy of human listeners in SE-WFS sound fields is quantified by psychoacoustical testing. At a loudspeaker spacing of 0.17 m (equivalent to an aliasing cutoff frequency of 1 kHz), SE-WFS exhibits a localization blur of 3 degrees, nearly equal to real point sources. Increasing the loudspeaker spacing to 0.68 m (for a cutoff frequency of 170 Hz) results in a blur of less than 5 degrees. In contrast, stereophonic reproduction is less accurate with a blur of 7 degrees. The ventriloquist effect is psychometrically investigated to determine the effect of an intentional directional incongruence between audio and video stimuli. Subjects were presented with prerecorded full-spectrum speech and motion video of a talker's head as well as broadband noise bursts with a static image. The video image was displaced from the audio stimulus in azimuth by varying amounts, and the perceived auditory location measured. 
A strong bias was detectable for angular discrepancies between audio and video stimuli of less than 8 degrees for speech and less than 4 degrees for a pink noise burst. The results allow the loudspeaker density of WFS systems to be selected according to the required localization accuracy. Also, by exploiting the ventriloquist effect, the angular resolution of an audio rendering may be reduced when combined with spatially-accurate video.

  6. Remote listening and passive acoustic detection in a 3-D environment

    NASA Astrophysics Data System (ADS)

    Barnhill, Colin

    Teleconferencing environments are a necessity in business, education and personal communication. They allow for the communication of information to remote locations without the need for travel and the necessary time and expense required for that travel. Visual information can be communicated using cameras and monitors. The advantage of visual communication is that an image can capture multiple objects and convey them, using a monitor, to a large group of people regardless of the receiver's location. This is not the case for audio. Currently, most experimental teleconferencing systems' audio is based on stereo recording and reproduction techniques. The problem with this solution is that it is only effective for one or two receivers. To accurately capture a sound environment consisting of multiple sources and to recreate that for a group of people is an unsolved problem. This work will focus on new methods of multiple source 3-D environment sound capture and applications using these captured environments. Using spherical microphone arrays, it is now possible to capture a true 3-D environment. A spherical harmonic transform on the array's surface allows us to determine the basis functions (spherical harmonics) for all spherical wave solutions (up to a fixed order). This spherical harmonic decomposition (SHD) allows us to not only look at the time and frequency characteristics of an audio signal but also the spatial characteristics of an audio signal. In this way, a spherical harmonic transform is analogous to a Fourier transform in that a Fourier transform transforms a signal into the frequency domain and a spherical harmonic transform transforms a signal into the spatial domain. The SHD also decouples the input signals from the microphone locations.
Using the SHD of a soundfield, new algorithms are available for remote listening, acoustic detection, and signal enhancement. The new algorithms presented in this paper show distinct advantages over previous detection and listening algorithms, especially for multiple speech sources and room environments. The algorithms use high order (spherical harmonic) beamforming and power signal characteristics for source localization and signal enhancement. These methods are applied to remote listening, surveillance, and teleconferencing.
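
A spherical harmonic decomposition of the kind described above can be sketched as a least-squares projection of sound pressure sampled over a sphere onto low-order real spherical harmonics. The closed-form basis below uses the standard real-harmonic normalization; the random sampling directions stand in for an actual microphone layout and are purely illustrative.

```python
import numpy as np

# Minimal order-1 spherical harmonic decomposition: project a field
# sampled on the sphere onto Y00 and the three real Y1m harmonics,
# written out explicitly (no special-function library needed).

def real_sh_basis(theta, phi):
    """Rows: sample directions; columns: Y00, Y1-1, Y10, Y11 (real form)."""
    c0 = 0.5 * np.sqrt(1 / np.pi)
    c1 = 0.5 * np.sqrt(3 / np.pi)
    return np.column_stack([
        np.full_like(theta, c0),
        c1 * np.sin(theta) * np.sin(phi),
        c1 * np.cos(theta),
        c1 * np.sin(theta) * np.cos(phi),
    ])

rng = np.random.default_rng(0)
theta = np.arccos(rng.uniform(-1, 1, 200))   # polar angle, uniform on sphere
phi = rng.uniform(0, 2 * np.pi, 200)

# A test "sound field" that is purely a cosine dipole (the Y10 pattern):
field = np.cos(theta)
B = real_sh_basis(theta, phi)
coeffs, *_ = np.linalg.lstsq(B, field, rcond=None)
# All of the dipole's energy lands in the Y10 coefficient (coeffs[2]).
```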

  7. Recognition of Activities of Daily Living Based on Environmental Analyses Using Audio Fingerprinting Techniques: A Systematic Review

    PubMed Central

    Santos, Rui; Pombo, Nuno; Flórez-Revuelta, Francisco

    2018-01-01

    An increase in the accuracy of identification of Activities of Daily Living (ADL) is very important for different goals of Enhanced Living Environments and for Ambient Assisted Living (AAL) tasks. This increase may be achieved through identification of the surrounding environment. Although this is usually used to identify the location, ADL recognition can be improved with the identification of the sound in that particular environment. This paper reviews audio fingerprinting techniques that can be used with the acoustic data acquired from mobile devices. A comprehensive literature search was conducted in order to identify relevant English language works aimed at the identification of the environment of ADLs using data acquired with mobile devices, published between 2002 and 2017. In total, 40 studies were analyzed and selected from 115 citations. The results highlight several audio fingerprinting techniques, including Modified discrete cosine transform (MDCT), Mel-frequency cepstrum coefficients (MFCC), Principal Component Analysis (PCA), Fast Fourier Transform (FFT), Gaussian mixture models (GMM), likelihood estimation, logarithmic modulated complex lapped transform (LMCLT), support vector machine (SVM), constant Q transform (CQT), symmetric pairwise boosting (SPB), Philips robust hash (PRH), linear discriminant analysis (LDA) and discrete cosine transform (DCT). PMID:29315232
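
Of the techniques listed, MFCC is the most widely used for environmental sound; its pipeline is power spectrum → mel filterbank → log → DCT-II. A self-contained sketch follows; the filter count, coefficient count, and mel formula are common textbook defaults, not values taken from any of the reviewed studies.

```python
import numpy as np

# MFCC pipeline sketch: FFT power spectrum, triangular mel filterbank,
# log compression, then an unnormalised DCT-II over the filter outputs.

def hz_to_mel(f):
    return 2595 * np.log10(1 + f / 700)

def mel_to_hz(m):
    return 700 * (10 ** (m / 2595) - 1)

def mfcc(frame, fs, n_filters=26, n_coeffs=13):
    power = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1 / fs)
    edges = mel_to_hz(np.linspace(hz_to_mel(0), hz_to_mel(fs / 2),
                                  n_filters + 2))
    fbank = np.zeros((n_filters, len(freqs)))
    for i in range(n_filters):                     # triangular filters
        lo, mid, hi = edges[i], edges[i + 1], edges[i + 2]
        up = (freqs - lo) / (mid - lo)
        down = (hi - freqs) / (hi - mid)
        fbank[i] = np.clip(np.minimum(up, down), 0, None)
    logmel = np.log(fbank @ power + 1e-10)         # avoid log(0)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * n + 1)
                 / (2 * n_filters))
    return dct @ logmel

fs = 16000
t = np.arange(512) / fs
coeffs = mfcc(np.sin(2 * np.pi * 440 * t), fs)     # 13 coefficients
```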

  8. A modified mole cricket lure and description of Scapteriscus borellii (Orthoptera: Gryllotalpidae) range expansion and calling song in California.

    PubMed

    Dillman, Adler R; Cronin, Christopher J; Tang, Joseph; Gray, David A; Sternberg, Paul W

    2014-02-01

    Invasive mole cricket species in the genus Scapteriscus have become significant agricultural pests and are continuing to expand their range in North America. Though largely subterranean, adults of some species, such as Scapteriscus borellii Giglio-Tos 1894, are capable of long dispersive flights and phonotaxis to male calling songs to find suitable habitats and mates. Mole crickets in the genus Scapteriscus are known to be attracted to and can be caught by audio lure traps that broadcast synthesized or recorded calling songs. We report improvements in the design and production of electronic controllers for the automation of semipermanent mole cricket trap lures as well as highly portable audio trap collection designs. Using these improved audio lure traps, we collected the first reported individuals of the pest mole cricket S. borellii in California. We describe several characteristic features of the calling song of the California population including that the pulse rate is a function of soil temperature, similar to Florida populations of S. borellii. Further, we show that other calling song characteristics (carrier frequency, intensity, and pulse rate) are significantly different between the populations.

  9. Decoding power-spectral profiles from FMRI brain activities during naturalistic auditory experience.

    PubMed

    Hu, Xintao; Guo, Lei; Han, Junwei; Liu, Tianming

    2017-02-01

    Recent studies have demonstrated a close relationship between computational acoustic features and neural brain activities, and have largely advanced our understanding of auditory information processing in the human brain. Along this line, we proposed a multidisciplinary study to examine whether power spectral density (PSD) profiles can be decoded from brain activities during naturalistic auditory experience. The study was performed on a high resolution functional magnetic resonance imaging (fMRI) dataset acquired when participants freely listened to the audio-description of the movie "Forrest Gump". Representative PSD profiles existing in the audio-movie were identified by clustering the audio samples according to their PSD descriptors. Support vector machine (SVM) classifiers were trained to differentiate the representative PSD profiles using corresponding fMRI brain activities. Based on PSD profile decoding, we explored how the neural decodability correlated to power intensity and frequency deviants. Our experimental results demonstrated that PSD profiles can be reliably decoded from brain activities. We also suggested a sigmoidal relationship between the neural decodability and power intensity deviants of PSD profiles. Our study in addition substantiates the feasibility and advantage of naturalistic paradigm for studying neural encoding of complex auditory information.
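
The PSD descriptors clustered in the study above are conventionally estimated by averaging windowed periodograms (Welch's method). A plain-numpy sketch, with illustrative segment length and overlap rather than the study's actual parameters:

```python
import numpy as np

# Welch-style PSD estimate: split the signal into overlapping windowed
# segments, compute each segment's periodogram, and average them.

def welch_psd(x, fs, nperseg=256):
    step = nperseg // 2                       # 50% overlap
    win = np.hanning(nperseg)
    scale = fs * (win ** 2).sum()             # density normalisation
    segs = [x[i:i + nperseg]
            for i in range(0, len(x) - nperseg + 1, step)]
    pgrams = [np.abs(np.fft.rfft(win * s)) ** 2 / scale for s in segs]
    return np.fft.rfftfreq(nperseg, 1 / fs), np.mean(pgrams, axis=0)

fs = 8000
t = np.arange(fs) / fs
x = (np.sin(2 * np.pi * 1000 * t)
     + 0.1 * np.random.default_rng(1).standard_normal(fs))
freqs, psd = welch_psd(x, fs)
# The estimated PSD peaks at the 1 kHz tone.
```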

  10. Modeling the directivity of parametric loudspeaker

    NASA Astrophysics Data System (ADS)

    Shi, Chuang; Gan, Woon-Seng

    2012-09-01

    The emerging applications of the parametric loudspeaker, such as 3D audio, demand accurate directivity control at the audible frequency (i.e. the difference frequency). Though delay-and-sum beamforming has been proven adequate to adjust the steering angles of the parametric loudspeaker, accurate prediction of the mainlobe and sidelobes remains a challenging problem. It is mainly because of the approximations that are used to derive the directivity of the difference frequency from the directivity of the primary frequency, and the mismatches between the theoretical directivity and the measured directivity caused by system errors incurred at different stages of the implementation. In this paper, we propose a directivity model of the parametric loudspeaker. The directivity model consists of two tuning vectors corresponding to the spacing error and the weight error for the primary frequency. The directivity model adopts a modified form of the product directivity principle for the difference frequency to further improve the modeling accuracy.
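
The product directivity principle referenced above approximates the difference-frequency beam pattern as the product of the two primary-frequency patterns. A sketch for an unsteered uniform line array (the geometry and frequencies are illustrative assumptions, not the authors' hardware, and this is the unmodified principle rather than their refined model):

```python
import numpy as np

# Product directivity sketch: D_diff(theta) ~ D_f1(theta) * D_f2(theta),
# with each primary pattern given by the standard uniform-line-array
# response sin(N*psi/2) / (N*sin(psi/2)).

def ula_directivity(f, n_elems, spacing, theta, c=343.0):
    """Magnitude response of an unsteered uniform line array."""
    k = 2 * np.pi * f / c
    psi = k * spacing * np.sin(theta)
    num = np.sin(n_elems * psi / 2)
    den = n_elems * np.sin(psi / 2)
    # The psi -> 0 limit of num/den is 1 (broadside).
    return np.abs(np.divide(num, den, out=np.ones_like(psi),
                            where=np.abs(den) > 1e-12))

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
d1 = ula_directivity(40_000, 8, 0.004, theta)   # primary at 40 kHz
d2 = ula_directivity(41_000, 8, 0.004, theta)   # primary at 41 kHz
d_diff = d1 * d2    # product approximation for the 1 kHz difference tone
```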

  11. Method for determining depth and shape of a sub-surface conductive object

    DOEpatents

    Lee, D.O.; Montoya, P.C.; Wayland, Jr.

    1984-06-27

    The depth to and size of an underground object may be determined by sweeping a controlled source audio magnetotelluric (CSAMT) signal and locating a peak response when the receiver spans the edge of the object. The depth of the object is one quarter of the wavelength, in the subsurface medium, of the peak-response frequency. 3 figures.
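
The quarter-wavelength rule can be made concrete with the standard EM relations for a conductive half-space: skin depth δ = √(2/(μ₀σω)), one wavelength λ = 2πδ, depth ≈ λ/4. The ground resistivity below is an assumed illustrative value, not a figure from the patent.

```python
import math

# Quarter-wavelength depth estimate for a CSAMT sounding, using the
# textbook skin-depth formula for a uniform conductive half-space.

MU0 = 4e-7 * math.pi   # vacuum permeability, H/m

def quarter_wavelength_depth(f_peak, rho):
    """Depth estimate (m) from peak frequency (Hz) and resistivity (ohm-m)."""
    omega = 2 * math.pi * f_peak
    delta = math.sqrt(2 * rho / (MU0 * omega))   # skin depth, ~503*sqrt(rho/f)
    return 2 * math.pi * delta / 4               # lambda / 4

# 100 ohm-m ground, peak response at 1 kHz -> roughly 250 m depth.
depth = quarter_wavelength_depth(f_peak=1000.0, rho=100.0)
```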

  12. Electrostatic Graphene Loudspeaker

    DTIC Science & Technology

    2013-06-01

    millennia, with classic examples being drumheads and whistles for long-range communications and entertainment. In modern society, efficient small...harmonic oscillator. Unlike most insect or musical instrument resonators which exhibit lightly damped sharp frequency response, a wide-band audio...sound signal is introduced from a signal generator or from a commercial laptop or digital music player. The maximum amplitude of the input signal Vin

  13. Can You Hear Me Now? Come in Loud and Clear with a Wireless Classroom Audio System

    ERIC Educational Resources Information Center

    Smith, Mark

    2006-01-01

    As school performance under NCLB becomes increasingly important, districts cannot afford to have barriers to learning. That is where wireless sound-field amplification systems come into play. Wireless sound-field amplification systems come in two types: radio frequency (RF) and infrared (IR). RF systems are based on FCC-approved FM and UHF bands…

  14. Laboratory Investigation of Noise-Canceling Headphones Utilizing ``Mr. Blockhead''

    NASA Astrophysics Data System (ADS)

    Koser, John

    2013-09-01

    While I was co-teaching an introductory course in musical acoustics a few years ago, our class investigated several pieces of equipment designed for audio purposes. One such piece of equipment was a pair of noise-canceling headphones. Our students were curious as to how effective these devices were in eliminating background noise and whether they indeed block low-frequency sounds as advertised.

  15. Fiber optic multiplex optical transmission system

    NASA Technical Reports Server (NTRS)

    Bell, C. H. (Inventor)

    1977-01-01

    A multiplex optical transmission system which minimizes external interference while simultaneously receiving and transmitting video, digital data, and audio signals is described. Signals are received into subgroup mixers for blocking into respective frequency ranges. The outputs of these mixers are in turn fed to a master mixer which produces a composite electrical signal. An optical transmitter connected to the master mixer converts the composite signal into an optical signal and transmits it over a fiber optic cable to an optical receiver which receives the signal and converts it back to a composite electrical signal. A de-multiplexer is coupled to the output of the receiver for separating the composite signal back into composite video, digital data, and audio signals. A programmable optic patch board is interposed in the fiber optic cables for selectively connecting the optical signals to various receivers and transmitters.

  16. Fault Detection and Diagnosis of Railway Point Machines by Sound Analysis

    PubMed Central

    Lee, Jonguk; Choi, Heesu; Park, Daihee; Chung, Yongwha; Kim, Hee-Young; Yoon, Sukhan

    2016-01-01

    Railway point devices act as actuators that provide different routes to trains by driving switchblades from the current position to the opposite one. Point failure can significantly affect railway operations, with potentially disastrous consequences. Therefore, early detection of anomalies is critical for monitoring and managing the condition of rail infrastructure. We present a data mining solution that utilizes audio data to efficiently detect and diagnose faults in railway condition monitoring systems. The system enables extracting mel-frequency cepstrum coefficients (MFCCs) from audio data with reduced feature dimensions using attribute subset selection, and employs support vector machines (SVMs) for early detection and classification of anomalies. Experimental results show that the system enables cost-effective detection and diagnosis of faults using a cheap microphone, with accuracy exceeding 94.1% whether used alone or in combination with other known methods. PMID:27092509

  17. Parametric Amplification Protocol for Frequency-Modulated Magnetic Resonance Force Microscopy Signals

    NASA Astrophysics Data System (ADS)

    Harrell, Lee; Moore, Eric; Lee, Sanggap; Hickman, Steven; Marohn, John

    2011-03-01

    We present data and theoretical signal and noise calculations for a protocol using parametric amplification to evade the inherent tradeoff between signal and detector frequency noise in force-gradient magnetic resonance force microscopy signals, which are manifested as a modulated frequency shift of a high-Q microcantilever. Substrate-induced frequency noise has a 1/f frequency dependence, while detector noise exhibits an f^2 dependence on modulation frequency f. Modulation of sample spins at a frequency that minimizes these two contributions typically results in a surface frequency noise power an order of magnitude or more above the thermal limit and may prove incompatible with sample spin relaxation times as well. We show that the frequency modulated force-gradient signal can be used to excite the fundamental resonant mode of the cantilever, resulting in an audio frequency amplitude signal that is readily detected with a low-noise fiber optic interferometer. This technique allows us to modulate the force-gradient signal at a sufficiently high frequency so that substrate-induced frequency noise is evaded without subjecting the signal to the normal f^2 detector noise of conventional demodulation.
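
The tradeoff described above can be made explicit with an assumed noise model S(f) = a/f + b·f² (substrate term falling as 1/f, detector term rising as f²). Setting dS/df = 0 gives the conventional optimum f_opt = (a/(2b))^(1/3). The coefficients below are illustrative placeholders, not values from the paper.

```python
# Conventional optimum modulation frequency for the assumed noise model
#   S(f) = a/f + b*f**2
# dS/df = -a/f**2 + 2*b*f = 0  =>  f_opt = (a / (2*b)) ** (1/3)

def optimal_modulation_freq(a, b):
    return (a / (2 * b)) ** (1 / 3)

a, b = 8.0, 1.0                       # illustrative noise coefficients
f_opt = optimal_modulation_freq(a, b)  # = 4 ** (1/3), about 1.587
```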

  18. Original sound compositions reduce anxiety in emergency department patients: a randomised controlled trial.

    PubMed

    Weiland, Tracey J; Jelinek, George A; Macarow, Keely E; Samartzis, Philip; Brown, David M; Grierson, Elizabeth M; Winter, Craig

    2011-12-19

    To determine whether emergency department (ED) patients' self-rated levels of anxiety are affected by exposure to purpose-designed music or sound compositions with and without the audio frequencies of embedded binaural beat. Randomised controlled trial in an ED between 1 February 2010 and 14 April 2010 among a convenience sample of adult patients who were rated as category 3 on the Australasian Triage Scale. All interventions involved listening to soundtracks of 20 minutes' duration that were purpose-designed by composers and sound-recording artists. Participants were allocated at random to one of five groups: headphones and iPod only, no soundtrack (control group); reconstructed ambient noise simulating an ED but free of clear verbalisations; electroacoustic musical composition; composed non-musical soundtracks derived from audio field recordings obtained from natural and constructed settings; sound composition of audio field recordings with embedded binaural beat. All soundtracks were presented on an iPod through headphones. Patients and researchers were blinded to allocation until interventions were administered. State-trait anxiety was self-assessed before the intervention and state anxiety was self-assessed again 20 minutes after the provision of the soundtrack. Spielberger State-Trait Anxiety Inventory. Of 291 patients assessed for eligibility, 170 patients completed the pre-intervention anxiety self-assessment and 169 completed the post-intervention assessment. Significant decreases (all P < 0.001) in anxiety level were observed among patients exposed to the electroacoustic musical composition (pre-intervention mean, 39; post-intervention mean, 34), audio field recordings (42; 35) or audio field recordings with embedded binaural beats (43; 37) when compared with those allocated to receive simulated ED ambient noise (40; 41) or headphones only (44; 44).
In moderately anxious ED patients, state anxiety was reduced by 10%-15% following exposure to purpose-designed sound interventions. Australian New Zealand Clinical Trials Registry ACTRN 12608000444381.
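
A binaural-beat track of the general kind embedded in the trial's soundtracks is built from slightly detuned sine tones in the left and right channels; the difference frequency is perceived as a beat. The carrier and beat rates below are illustrative choices, not the trial's actual compositions.

```python
import numpy as np

# Binaural-beat sketch: a 220 Hz tone in the left ear and a 224 Hz tone
# in the right ear yield a perceived 4 Hz beat. The stereo array is
# shaped (n_samples, 2), ready for WAV export.

fs = 44100
duration = 5.0
t = np.arange(int(fs * duration)) / fs

f_carrier = 220.0      # left-ear tone, Hz (illustrative)
f_beat = 4.0           # perceived beat rate, Hz (illustrative)
left = 0.3 * np.sin(2 * np.pi * f_carrier * t)
right = 0.3 * np.sin(2 * np.pi * (f_carrier + f_beat) * t)
stereo = np.stack([left, right], axis=1)
```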

  19. pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis.

    PubMed

    Giannakopoulos, Theodoros

    2015-01-01

    Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automations and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures including: feature extraction, classification of audio signals, supervised and unsupervised segmentation and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of the implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has been already used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation and health applications (e.g. monitoring eating habits). The feedback provided from all these particular audio applications has led to practical enhancement of the library.

  20. pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis

    PubMed Central

    Giannakopoulos, Theodoros

    2015-01-01

    Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automations and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures including: feature extraction, classification of audio signals, supervised and unsupervised segmentation and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of the implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has been already used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation and health applications (e.g. monitoring eating habits). The feedback provided from all these particular audio applications has led to practical enhancement of the library. PMID:26656189

  1. Juno Listens to Jupiter Auroras Sing

    NASA Image and Video Library

    2016-09-02

    During its close flyby of Jupiter on August 27, 2016, the Waves instrument on NASA's Juno spacecraft received radio signals associated with the giant planet's very intense auroras. This video displays these radio emissions in a format similar to a voiceprint, showing the intensity of radio waves as a function of frequency and time. The largest intensities are indicated in warmer colors. The frequency range of these signals is from 7 to 140 kilohertz. Radio astronomers call these "kilometric emissions" because their wavelengths are about a kilometer long. The time span of this data is 13 hours, beginning shortly after Juno's closest approach to Jupiter. Accompanying this data display is an audio rendition of the radio emissions, shifted into a lower register since the radio waves are well above the audio frequency range. In the video, a cursor moves from left to right to mark the time as the sounds are heard. These radio emissions were among the first observed by early radio astronomers in the 1950s. However, until now, they had not been observed from closely above the auroras themselves. From its polar orbit vantage point, Juno has -- for the first time -- enabled observations of these emissions from very close range. The Juno team believes that Juno flew directly through the source regions for some of these emissions during this flyby, which was Juno's first with its sensors actively collecting data. A movie is available at http://photojournal.jpl.nasa.gov/catalog/PIA21037
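
The "shifted into a lower register" rendering described above is, in its simplest form, audification by playback-rate change: reinterpreting the recorded samples at a slower sample rate scales every frequency by the rate ratio. The acquisition rate and slowdown factor below are illustrative assumptions, not the Waves instrument's actual settings.

```python
import numpy as np

# Audification sketch: the same samples, tagged with a sample rate
# 16x lower, place a 100 kHz radio emission at an audible 6.25 kHz.

fs_record = 1_000_000          # assumed acquisition rate, Hz
slowdown = 16
fs_play = fs_record // slowdown

t = np.arange(fs_record) / fs_record           # 1 s of data
radio = np.sin(2 * np.pi * 100_000 * t)        # 100 kHz test emission

# Spectral peak under the playback-rate interpretation:
spectrum = np.abs(np.fft.rfft(radio))
audible_peak = np.fft.rfftfreq(len(radio), 1 / fs_play)[np.argmax(spectrum)]
# audible_peak is 100_000 / slowdown = 6250 Hz.
```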

  2. Electronic stethoscope with frequency shaping and infrasonic recording capabilities.

    PubMed

    Gordon, E S; Lagerwerff, J M

    1976-03-01

    A small electronic stethoscope with variable frequency response characteristics has been developed for aerospace and research applications. The system includes a specially designed piezoelectric pickup and amplifier with an overall frequency response from 0.7 to 5,000 Hz (-3 dB points) and selective bass and treble boost or cut of up to 15 dB. A steep slope, high pass filter can be switched in for ordinary clinical auscultation without overload distortion from strong infrasonic signal inputs. A commercial stethoscope-type headset, selected for best overall response, is used which can adequately handle up to 100 mW of audio power delivered from the amplifier. The active components of the amplifier consist of only four opamp-type integrated circuits.
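
The switchable high-pass filter's role can be sketched with a first-order digital high-pass: it attenuates strong infrasonic components while passing the audible band. The 100 Hz cutoff is illustrative, and the instrument's actual filter is a steeper multi-pole design.

```python
import numpy as np

# First-order high-pass sketch: y[n] = a*(y[n-1] + x[n] - x[n-1]),
# with a = 1 / (1 + 2*pi*fc/fs). Frequencies well below fc are
# attenuated roughly in proportion to f/fc.

def one_pole_highpass(x, fs, fc):
    a = 1.0 / (1.0 + 2 * np.pi * fc / fs)
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = a * (y[n - 1] + x[n] - x[n - 1])
    return y

fs = 8000
t = np.arange(fs) / fs
infra = np.sin(2 * np.pi * 2 * t)       # 2 Hz infrasonic component
heart = np.sin(2 * np.pi * 200 * t)     # audible component
out = one_pole_highpass(infra + heart, fs, fc=100.0)
# The 2 Hz component is strongly attenuated; the 200 Hz one passes.
```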

  3. Sonification of optical coherence tomography data and images

    PubMed Central

    Ahmad, Adeel; Adie, Steven G.; Wang, Morgan; Boppart, Stephen A.

    2010-01-01

    Sonification is the process of representing data as non-speech audio signals. In this manuscript, we describe the auditory presentation of OCT data and images. OCT acquisition rates frequently exceed our ability to visually analyze image-based data, and multi-sensory input may therefore facilitate rapid interpretation. This conversion will be especially valuable in time-sensitive surgical or diagnostic procedures. In these scenarios, auditory feedback can complement visual data without requiring the surgeon to constantly monitor the screen, or provide additional feedback in non-imaging procedures such as guided needle biopsies which use only axial-scan data. In this paper we present techniques to translate OCT data and images into sound based on the spatial and spatial frequency properties of the OCT data. Results obtained from parameter-mapped sonification of human adipose and tumor tissues are presented, indicating that audio feedback of OCT data may be useful for the interpretation of OCT images. PMID:20588846

  4. A Precision, Low-Cost GPS-Based Transmitter Synchronization Scheme for Improved AM Reception

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Stephen Fulton; Moore, Anthony

    2009-01-01

    This paper describes a highly accurate carrier-frequency synchronization scheme for actively, automatically locking multiple, remotely located AM broadcast transmitters to a common frequency/timing reference source such as GPS. The extremely tight frequency lock (to ~1 part in 10^9 or better) permits the effective elimination of audible and even sub-audible beats between the local (desired) station's carrier signal and the distant stations' carriers, usually received via skywave propagation during the evening and nighttime hours. These carrier-beat components cause annoying modulations of the desired station's audio at the receiver and concurrent distortion of the audio modulation from the distant station(s) and often cause listeners to "tune out" due to the low reception quality. Significant reduction or elimination of the beats and related effects will greatly enlarge the effective (interference-limited) listening area of the desired station (from 4 to 10 times as indicated in our tests) and simultaneously reduce the corresponding interference of the local transmitter to the distant stations as well. In addition, AM stereo (CQUAM) reception will be particularly improved by minimizing the phase shifts induced by co-channel interfering signals; hybrid digital (HD) signals will also benefit via reduction in beats from analog signals. The automatic frequency-control hardware described is inexpensive ($1000-$2000), requires no periodic recalibration, has essentially zero long-term drift, and could employ alternate wide-area frequency references of suitable accuracy, including broadcasts from WWVB, LORAN-C, and equivalent sources. The basic configuration of the GPS-disciplined oscillator which solves this problem is extremely simple. The main oscillator is a conventional high-stability quartz-crystal type.
To counter long-term drifts, the oscillator is slightly adjusted to track a high-precision source of standard frequency obtained from a specialized GPS receiver (or other source), usually at 10.000 MHz. This very stable local reference frequency is then used as a clock for a standard digitally implemented frequency synthesizer, which is programmed to generate the specific carrier frequency desired. The stability of the disciplining source, typically ~1 part in 10^9 to 10^11, is thus transferred to the final AM transmitter carrier output frequency.
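
The "digitally implemented frequency synthesizer" clocked by the disciplined 10 MHz reference is typically a direct digital synthesizer: a phase accumulator advanced by a tuning word each clock tick, whose top bits index a sine table, so the output frequency f_out = word·f_clock/2^N inherits the reference's stability. The register widths and the 1000 kHz carrier below are illustrative, not the paper's hardware.

```python
import numpy as np

# Direct digital synthesis sketch: an N-bit phase accumulator advanced
# by a tuning word per clock tick; the accumulator's top bits address a
# sine lookup table. Output frequency: f_out = word * F_CLOCK / 2**N.

F_CLOCK = 10_000_000       # disciplined 10.000 MHz reference, Hz
N_BITS = 32                # accumulator width

def tuning_word(f_out):
    return round(f_out * 2 ** N_BITS / F_CLOCK)

def dds_samples(word, n_samples, table_bits=12):
    table = np.sin(2 * np.pi * np.arange(2 ** table_bits) / 2 ** table_bits)
    phase = (word * np.arange(n_samples, dtype=np.uint64)) % (2 ** N_BITS)
    return table[(phase >> (N_BITS - table_bits)).astype(np.int64)]

word = tuning_word(1_000_000)        # a mid-AM-band-style 1000 kHz carrier
samples = dds_samples(word, 10000)   # 1 ms of carrier at the clock rate
```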

  5. General trust impedes perception of self-reported primary psychopathy in thin slices of social interaction.

    PubMed

    Manson, Joseph H; Gervais, Matthew M; Bryant, Gregory A

    2018-01-01

    Little is known about people's ability to detect subclinical psychopathy from others' quotidian social behavior, or about the correlates of variation in this ability. This study sought to address these questions using a thin slice personality judgment paradigm. We presented 108 undergraduate judges (70.4% female) with 1.5 minute video thin slices of zero-acquaintance triadic conversations among other undergraduates (targets: n = 105, 57.1% female). Judges completed self-report measures of general trust, caution, and empathy. Target individuals had completed the Levenson Self-Report Psychopathy (LSRP) scale. Judges viewed the videos in one of three conditions: complete audio, silent, or audio from which semantic content had been removed using low-pass filtering. Using a novel other-rating version of the LSRP, judges' ratings of targets' primary psychopathy levels were significantly positively associated with targets' self-reports, but only in the complete audio condition. Judge general trust and target LSRP interacted, such that judges higher in general trust made less accurate judgments with respect to targets higher in primary and total psychopathy. Results are consistent with a scenario in which psychopathic traits are maintained in human populations by negative frequency dependent selection operating through the costs of detecting psychopathy in others.

  6. Audio-Visual and Autogenic Relaxation Alter Amplitude of Alpha EEG Band, Causing Improvements in Mental Work Performance in Athletes.

    PubMed

    Mikicin, Mirosław; Kowalczyk, Marek

    2015-09-01

    The aim of the present study was to investigate the effect of regular audio-visual relaxation combined with Schultz's autogenic training on: (1) the results of behavioral tests that evaluate work performance during burdensome cognitive tasks (Kraepelin test), (2) changes in the classical EEG alpha frequency band by neocortex region (frontal, temporal, occipital, parietal), hemisphere (left, right) and condition (relaxation only, 7-12 Hz). Both the experimental group (EG) and the age- and skill-matched control group (CG) consisted of eighteen athletes (ten males and eight females). After 7-month training, EG demonstrated significant changes in the amplitude of mean electrical activity of the EEG alpha band at rest and an improvement in almost all components of the Kraepelin test. The same examined variables in CG were unchanged following the period without the intervention. Summing up, combining audio-visual relaxation with autogenic training significantly improves athletes' ability to perform a prolonged mental effort. These changes are accompanied by greater amplitude of waves in the alpha band in the relaxed state. The results suggest the usefulness of relaxation techniques during performance of mentally difficult sports tasks (sports based on speed and stamina, sports games, combat sports) and during athletes' relaxation.

  7. What is infrasound?

    PubMed

    Leventhall, Geoff

    2007-01-01

    Definitions of infrasound and low-frequency noise are discussed and the fuzzy boundary between them described. Infrasound, in its popular definition as sound below a frequency of 20 Hz, is clearly audible, the hearing threshold having been measured down to 1.5 Hz. The popular concept that sound below 20 Hz is inaudible is not correct. Sources of infrasound are in the range from very low-frequency atmospheric fluctuations up into the lower audio frequencies. These sources include natural occurrences, industrial installations, low-speed machinery, etc. Investigations of complaints of low-frequency noise often fail to measure any significant noise. This has led some complainants to conjecture that their perception arises from non-acoustic sources, such as electromagnetic radiation. Over the past 40 years, infrasound and low-frequency noise have attracted a great deal of adverse publicity on their effects on health, based mainly on media exaggerations and misunderstandings. A result of this has been that the public takes a one-dimensional view of infrasound, concerned only by its presence, whilst ignoring its low levels.

  8. USSR and Eastern Europe Scientific Abstracts Biomedical and Behavioral Sciences No. 76

    DTIC Science & Technology

    1977-08-19

    radius around that. Histologically, indications of coagulation necrosis are found, with honeycomb-like expansion of the epidermis and other changes...eaten by the local residents, diphyllobothriasis is passed on to the human population. A number of therapeutic and prophylactic measures are proposed...means of a direct method, measurement of the pressure within the bladder upon excitation by audio frequency signals. The fish were anesthetized

  9. Artificial voice modulation in dogs by recurrent laryngeal nerve stimulation: electrophysiological confirmation of anatomic data.

    PubMed

    Broniatowski, Michael; Grundfest-Broniatowski, Sharon; Tucker, Harvey M; Tyler, Dustin J

    2007-02-01

    We hypothesized that voice may be artificially manipulated to ameliorate dystonias considered to be a failure in dynamic integration between competing neuromuscular systems. Orderly intrinsic laryngeal muscle recruitment by anodal block via the recurrent laryngeal and vagus nerves has allowed us to define specific values based on differential excitabilities, but has precluded voice fluency because of focused breaks during stimulation and the need to treat several neural conduits. Such problems may be obviated by a circuit capable of stimulating some axons while simultaneously blocking others in the recurrent laryngeal nerve, which carries innervation to all intrinsic laryngeal muscles, including the arguably intrinsic cricothyroideus. In 5 dogs, both recurrent laryngeal nerves received 40-Hz quasi-trapezoidal pulses (0 to 2000 microA, 0 to 2000 micros, 0 to 500 micros decay) via tripolar electrodes. Electromyograms were matched with audio intensities and fundamental frequencies recorded under a constant flow of humidified air. Data were digitized and evaluated for potential correlations. Orderly recruitment of the thyroarytenoideus, posterior cricoarytenoideus, and cricothyroideus was correlated with stimulating intensities (p < .001), and posterior cricoarytenoideus opposition to the thyroarytenoideus and cricothyroideus was instrumental in manipulating audio intensities and fundamental frequencies. Manipulation of canine voice parameters appears feasible via the sole recurrent laryngeal nerve within appropriate stimulation envelopes, and offers promise in human laryngeal dystonias.

  10. Audio-magnetotelluric investigation of allochthonous iron formations in the Archaean Reguibat shield (Mauritania): structural and mining implications

    NASA Astrophysics Data System (ADS)

    Bronner, G.; Fourno, J. P.

    1992-11-01

    The M'Haoudat range, considered an allochthonous unit amid the strongly metamorphosed Archaean basement (Tiris Group), belongs to the weakly metamorphosed Lower Proterozoic Ijil Group, constituted mainly of iron quartzites including red jaspers and high-grade iron ore. Audio-magnetotelluric (AMT) soundings (frequency range 1-7500 Hz) were performed together with the systematic survey of the range (SNIM mining company). A non-linear least-squares method was used to derive a smoothness-constrained model of the data. The marked AMT resistivity contrast between the M'Haoudat Unit (150-3500 Ω m) and the Archaean basement (20 000 Ω m) makes it possible to establish that the two thrust surfaces, on both sides of the range, join together at a depth that increases from north-west to south-east, as do the ore bodies. Within the steeply dipping M'Haoudat Unit, the main beds of iron quartzites (1500-3500 Ω m), schists (1000-1500 Ω m) and hematite ores (150-300 Ω m) could be distinguished when their thickness exceeded 30 to 50 m. The existence of a hydrostatic level (1-50 Ω m) and the steeply dipping architecture, very likely responsible for the lack of resistivity contrast in the upper part of some profiles, complicate the interpretation at high frequencies, the thin layers being poorly resolved.

  11. Audio Watermark Embedding Technique Applying Auditory Stream Segregation: "G-encoder Mark" Able to Be Extracted by Mobile Phone

    NASA Astrophysics Data System (ADS)

    Modegi, Toshio

    We are developing audio watermarking techniques that enable extraction of embedded data by mobile phones. The data must be embedded in frequency ranges where auditory response is prominent, so embedding tends to introduce audible noise. We previously proposed exploiting two-channel stereo playback, in which the noise generated by a data-embedded left-channel signal is reduced by the right-channel signal. However, that proposal has the practical problem of restricting where the extracting terminal can be located. In this paper, we propose synthesizing the noise-reducing right-channel signal together with the left-channel signal, cancelling the noise perceptually by inducing an auditory stream segregation phenomenon in listeners. This new method makes a separate noise-reducing right channel unnecessary and supports monaural playback. Moreover, we propose a wide-band embedding method that causes dual auditory stream segregation phenomena, enabling data embedding across the whole public-telephone frequency range and stable extraction with 3G mobile phones. With these proposals, extraction precision is higher than with the previously proposed method, while the quality degradation of the embedded signal is smaller. We present an overview of the newly proposed method and experimental results compared with those of the previous method.

  12. Audio-frequency magnetotelluric imaging of the Hijima fault, Yamasaki fault system, southwest Japan

    NASA Astrophysics Data System (ADS)

    Yamaguchi, S.; Ogawa, Y.; Fuji-Ta, K.; Ujihara, N.; Inokuchi, H.; Oshiman, N.

    2010-04-01

    An audio-frequency magnetotelluric (AMT) survey was undertaken at ten sites along a transect across the Hijima fault, a major segment of the Yamasaki fault system, Japan. The data were subjected to dimensionality analysis, following which two-dimensional inversions for the TE and TM modes were carried out. The resulting model is characterized by (1) a clear resistivity boundary that coincides with the downward projection of the surface trace of the Hijima fault, (2) a resistive zone (>500 Ω m) that corresponds to Mesozoic sediment, and (3) two highly conductive zones (30-40 Ω m), one shallow and one deep, along the fault. The shallow conductive zone is a common feature of the Yamasaki fault system, whereas the deep conductor is a newly discovered feature at depths of 800-1,800 m to the southwest of the fault. The deep conductor is truncated by the Hijima fault to the northeast and bounded above by the resistive zone. Both conductors are interpreted to represent a combination of clay minerals and a fluid network within a fault-related fracture zone. In terms of the development of the fluid networks, the fault core of the Hijima fault and the highly resistive zone may play important roles as barriers to fluid flow on the northeast and upper sides of the conductive zones, respectively.

  13. Wavelet-based audio embedding and audio/video compression

    NASA Astrophysics Data System (ADS)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.
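
    As an aside, the PSNR figure quoted above is a standard fidelity measure that is easy to reproduce. A minimal numpy sketch (the frame values below are hypothetical, not the paper's data):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two frames."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0.0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical 8-bit frame reconstructed with a uniform error of 5 gray levels
orig = np.full((64, 64), 128.0)
recon = orig + 5.0
value = psnr(orig, recon)  # 20*log10(255/5), about 34.15 dB
```

    A PSNR near 33 dB on an 8-bit scale, as reported, corresponds to a root-mean-square error of roughly 5-6 gray levels per pixel.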

  14. Three-Dimensional Audio Client Library

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.

    2005-01-01

    The Three-Dimensional Audio Client Library (3DAudio library) is a group of software routines written to facilitate development of both stand-alone (audio only) and immersive virtual-reality application programs that utilize three-dimensional audio displays. The library is intended to enable the development of three-dimensional audio client application programs by use of a code base common to multiple audio server computers. The 3DAudio library calls vendor-specific audio client libraries and currently supports the AuSIM Gold-Server and Lake Huron audio servers. 3DAudio library routines contain common functions for (1) initiation and termination of a client/audio server session, (2) configuration-file input, (3) positioning functions, (4) coordinate transformations, (5) audio transport functions, (6) rendering functions, (7) debugging functions, and (8) event-list-sequencing functions. The 3DAudio software is written in the C++ programming language and currently operates under the Linux, IRIX, and Windows operating systems.
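
    Of the listed functions, the coordinate transformations are the easiest to illustrate in isolation. The sketch below is hypothetical (it is not the 3DAudio API, and uses Python rather than the library's C++): it converts a world-space source position into listener-relative coordinates, the form a spatial-audio renderer consumes.

```python
import numpy as np

def listener_frame(source_pos, listener_pos, yaw_deg):
    """World-space source position -> listener-relative (right, forward)
    coordinates, for a listener whose heading is rotated yaw_deg
    counterclockwise from the world +y axis. Hypothetical convention."""
    yaw = np.radians(yaw_deg)
    # Project the world-space offset onto the listener's right/forward axes
    rot = np.array([[np.cos(yaw), np.sin(yaw)],
                    [-np.sin(yaw), np.cos(yaw)]])
    return rot @ (np.asarray(source_pos) - np.asarray(listener_pos))

# A source 1 m directly ahead of a listener turned 90 degrees:
rel = listener_frame((-1.0, 0.0), (0.0, 0.0), 90.0)  # -> (0, 1): dead ahead
```

    A source directly ahead of the listener always maps to (0, 1) regardless of heading, which is what a head-tracked audio server needs before rendering.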

  15. Theory, implementation and applications of nonstationary Gabor frames

    PubMed Central

    Balazs, P.; Dörfler, M.; Jaillet, F.; Holighaus, N.; Velasco, G.

    2011-01-01

    Signal analysis with classical Gabor frames leads to a fixed time–frequency resolution over the whole time–frequency plane. To overcome the limitations imposed by this rigidity, we propose an extension of Gabor theory that leads to the construction of frames with time–frequency resolution changing over time or frequency. We describe the construction of the resulting nonstationary Gabor frames and give the explicit formula for the canonical dual frame for a particular case, the painless case. We show that wavelet transforms, constant-Q transforms and more general filter banks may be modeled in the framework of nonstationary Gabor frames. Further, we present the results in the finite-dimensional case, which provides a method for implementing the above-mentioned transforms with perfect reconstruction. Finally, we elaborate on two applications of nonstationary Gabor frames in audio signal processing, namely a method for automatic adaptation to transients and an algorithm for an invertible constant-Q transform. PMID:22267893
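
    For orientation, the classical fixed-resolution Gabor (STFT) analysis-synthesis pair that this work generalizes can be sketched in a few lines of numpy. This toy covers the uniform-window case only, with least-squares normalization standing in for the canonical dual window (illustrative, not the authors' implementation):

```python
import numpy as np

def stft(x, win, hop):
    """Uniform Gabor analysis: windowed FFT frames at a fixed hop."""
    n = len(win)
    return np.array([np.fft.rfft(x[s:s + n] * win)
                     for s in range(0, len(x) - n + 1, hop)])

def istft(frames, win, hop, length):
    """Synthesis with window-squared (least-squares) normalization."""
    n = len(win)
    out = np.zeros(length)
    norm = np.zeros(length)
    for i, spec in enumerate(frames):
        s = i * hop
        out[s:s + n] += np.fft.irfft(spec, n=n) * win
        norm[s:s + n] += win ** 2
    norm[norm == 0.0] = 1.0  # avoid division by zero at uncovered edges
    return out / norm

x = np.random.default_rng(0).standard_normal(1024)
win, hop = np.hanning(128), 64
y = istft(stft(x, win, hop), win, hop, len(x))  # reconstructs x away from edges
```

    Nonstationary Gabor frames let the window and hop change over time or frequency; in the painless case the dual frame is obtained by the same kind of pointwise normalization.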

  16. Calibration of Speed Enforcement Down-The-Road Radars

    PubMed Central

    Jendzurski, John; Paulter, Nicholas G.

    2009-01-01

    We examine the measurement uncertainty associated with different methods of calibrating the ubiquitous down-the-road (DTR) radar used in speed enforcement. These calibration methods include the use of audio frequency sources, tuning forks, a fifth wheel attached to the rear of the vehicle with the radar unit, and the speedometer of the vehicle. We also provide an analysis showing the effect of calibration uncertainty on DTR-radar speed measurement uncertainty. PMID:27504217
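
    The connection between audio-frequency sources and radar calibration is the Doppler relation: a DTR radar mixes the return with its carrier, leaving an audio-band beat whose frequency is proportional to target speed. A sketch with a typical K-band carrier (an illustrative value, not taken from the paper):

```python
C = 299_792_458.0   # speed of light, m/s
MPH_TO_MS = 0.44704

def doppler_shift_hz(speed_mph, carrier_hz):
    """Audio-band beat frequency a stationary DTR radar sees
    for a target approaching at speed_mph."""
    return 2.0 * speed_mph * MPH_TO_MS * carrier_hz / C

def speed_from_shift_mph(shift_hz, carrier_hz):
    """Inverse relation: the speed a radar infers from a beat tone."""
    return shift_hz * C / (2.0 * carrier_hz * MPH_TO_MS)

K_BAND = 24.15e9                    # Hz; a common traffic-radar carrier
f = doppler_shift_hz(50.0, K_BAND)  # roughly 3.6 kHz: an audio frequency
```

    This is why an audio-frequency source or tuning fork can stand in for a moving car: at 24.15 GHz, each mph contributes about 72 Hz of beat frequency, so 50 mph corresponds to a tone near 3.6 kHz.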

  17. Ad Hoc Selection of Voice over Internet Streams

    NASA Technical Reports Server (NTRS)

    Macha, Mitchell G. (Inventor); Bullock, John T. (Inventor)

    2014-01-01

    A method and apparatus for a communication system technique involving ad hoc selection of at least two audio streams is provided. Each of the at least two audio streams is a packetized version of an audio source. A data connection exists between a server and a client where a transport protocol actively propagates the at least two audio streams from the server to the client. Furthermore, software instructions executable on the client indicate a presence of the at least two audio streams, allow selection of at least one of the at least two audio streams, and direct the selected at least one of the at least two audio streams for audio playback.

  18. Ad Hoc Selection of Voice over Internet Streams

    NASA Technical Reports Server (NTRS)

    Macha, Mitchell G. (Inventor); Bullock, John T. (Inventor)

    2008-01-01

    A method and apparatus for a communication system technique involving ad hoc selection of at least two audio streams is provided. Each of the at least two audio streams is a packetized version of an audio source. A data connection exists between a server and a client where a transport protocol actively propagates the at least two audio streams from the server to the client. Furthermore, software instructions executable on the client indicate a presence of the at least two audio streams, allow selection of at least one of the at least two audio streams, and direct the selected at least one of the at least two audio streams for audio playback.

  19. Audio in Courseware: Design Knowledge Issues.

    ERIC Educational Resources Information Center

    Aarntzen, Diana

    1993-01-01

    Considers issues that need to be addressed when incorporating audio in courseware design. Topics discussed include functions of audio in courseware; the relationship between auditive and visual information; learner characteristics in relation to audio; events of instruction; and audio characteristics, including interactivity and speech technology.…

  20. A Virtual Audio Guidance and Alert System for Commercial Aircraft Operations

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.; Shrum, Richard; Miller, Joel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    Our work in virtual reality systems at NASA Ames Research Center includes the area of aurally-guided visual search, using specially-designed audio cues and spatial audio processing (also known as virtual or "3-D audio") techniques (Begault, 1994). Previous studies at Ames had revealed that use of 3-D audio for Traffic Collision Avoidance System (TCAS) advisories significantly reduced head-down time, compared to a head-down map display (0.5 sec advantage) or no display at all (2.2 sec advantage) (Begault, 1993, 1995; Begault & Pittman, 1994; see Wenzel, 1994, for an audio demo). Since the crew must keep their head up and looking out the window as much as possible when taxiing under low-visibility conditions, and the potential for "blunder" is increased under such conditions, it was sensible to evaluate the audio spatial cueing for a prototype audio ground collision avoidance warning (GCAW) system, and a 3-D audio guidance system. Results were favorable for GCAW, but not for the audio guidance system.

  1. The priming function of in-car audio instruction.

    PubMed

    Keyes, Helen; Whitmore, Antony; Naneva, Stanislava; McDermott, Daragh

    2018-05-01

    Studies to date have focused on the priming power of visual road signs, but not the priming potential of audio road-scene instruction. Here, the relative priming power of visual, audio, and multisensory road-scene instructions was assessed. In a lab-based study, participants responded to target road-scene turns following visual, audio, or multisensory road-turn primes that were congruent or incongruent with the target direction, or control primes. All types of instruction (visual, audio, and multisensory) successfully primed responses to a road scene. Responses to multisensory-primed targets (both audio and visual) were faster than responses to either audio or visual primes alone. Incongruent audio primes did not affect performance negatively in the manner of incongruent visual or multisensory primes. The results suggest that audio instructions have the potential to prime drivers to respond quickly and safely to their road environment. Peak performance will be observed if audio and visual road-instruction primes can be timed to co-occur.

  2. Audio-visual interactions in environment assessment.

    PubMed

    Preis, Anna; Kociński, Jędrzej; Hafke-Dys, Honorata; Wrzosek, Małgorzata

    2015-08-01

    The aim of the study was to examine how visual and audio information influences audio-visual environment assessment. Original audio-visual recordings were made at seven different places in the city of Poznań. Participants in the psychophysical experiments were asked to rate, on a standardized numerical scale, the degree of comfort they would feel if they were in such an environment. The assessments of audio-visual comfort were carried out in a laboratory under four different conditions: (a) audio samples only, (b) original audio-visual samples, (c) video samples only, and (d) mixed audio-visual samples. The general results of this experiment showed a significant difference between the investigated conditions, but not for all the investigated samples. When conditions (a) and (b) were compared, adding visual information significantly improved the comfort assessment in only three out of seven cases. On the other hand, the results show that the comfort assessment of audio-visual samples could be changed by manipulating the audio rather than the video part of the sample. Finally, it seems that people differentiate audio-visual representations of a given place in the environment based on the composition of sound sources rather than on the sound level. Object identification is responsible for both landscape and soundscape grouping. Copyright © 2015. Published by Elsevier B.V.

  3. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Digital audio broadcasting service requirements... SERVICES RADIO BROADCAST SERVICES Digital Audio Broadcasting § 73.403 Digital audio broadcasting service requirements. (a) Broadcast radio stations using IBOC must transmit at least one over-the-air digital audio...

  4. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Digital audio broadcasting service requirements... SERVICES RADIO BROADCAST SERVICES Digital Audio Broadcasting § 73.403 Digital audio broadcasting service requirements. (a) Broadcast radio stations using IBOC must transmit at least one over-the-air digital audio...

  5. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Digital audio broadcasting service requirements... SERVICES RADIO BROADCAST SERVICES Digital Audio Broadcasting § 73.403 Digital audio broadcasting service requirements. (a) Broadcast radio stations using IBOC must transmit at least one over-the-air digital audio...

  6. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Digital audio broadcasting service requirements... SERVICES RADIO BROADCAST SERVICES Digital Audio Broadcasting § 73.403 Digital audio broadcasting service requirements. (a) Broadcast radio stations using IBOC must transmit at least one over-the-air digital audio...

  7. [Intermodal timing cues for audio-visual speech recognition].

    PubMed

    Hashimoto, Masahiro; Kumashiro, Masaharu

    2004-06-01

    The purpose of this study was to investigate the limitations of the lip-reading advantage for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio-alone, and audio-visual with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker saying long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delay in sixteen untrained young subjects. Speech intelligibility under audio-delay conditions of less than 120 ms was significantly better than under the audio-alone condition. Notably, the 120 ms delay corresponded to the mean mora duration measured for the audio stimuli. The results imply that audio delays of up to 120 ms do not disrupt the lip-reading advantage, because visual and auditory information in speech appear to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which a worker must extract relevant speech from competing noise.

  8. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding

    PubMed Central

    Desantis, Andrea; Haggard, Patrick

    2016-01-01

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063

  9. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-12-16

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events.

  10. The power of digital audio in interactive instruction: An unexploited medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pratt, J.; Trainor, M.

    1989-01-01

    Widespread use of audio in computer-based training (CBT) occurred with the advent of the interactive videodisc technology. This paper discusses the alternative of digital audio, which, unlike videodisc audio, enables one to rapidly revise the audio used in the CBT and which may be used in nonvideo CBT applications as well. We also discuss techniques used in audio script writing, editing, and production. Results from evaluations indicate a high degree of user satisfaction. 4 refs.

  11. Sonic Simulation of Near Projectile Hits

    NASA Technical Reports Server (NTRS)

    Statman, J. I.; Rodemich, E. R.

    1988-01-01

    Measured frequencies identify projectiles and indicate miss distances. Developmental battlefield-simulation system for training soldiers uses sounds emitted by incoming projectiles to identify projectiles and indicate miss distances. Depending on projectile type and closeness of each hit, system generates "kill" or "near-kill" indication. Artillery shell simulated by lightweight plastic projectile launched by compressed air. Flow of air through groove in nose of projectile generates acoustic tone. Each participant carries audio receiver that measures and processes tone signal. System performs fast Fourier transforms of received tone to obtain dominant frequency during each succeeding interval of approximately 40 ms (an interval determined by practical signal-processing requirements). With modifications, system concept applicable to collision-warning or collision-avoidance systems.
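
    The per-block FFT step described above, extracting the dominant tone from each ~40 ms interval, can be sketched directly (the 8 kHz sample rate and 1.2 kHz tone are made-up values for illustration, not the system's parameters):

```python
import numpy as np

def dominant_frequency(block, sample_rate):
    """Strongest non-DC frequency in a block, via an FFT magnitude peak."""
    spectrum = np.abs(np.fft.rfft(block * np.hanning(len(block))))
    spectrum[0] = 0.0  # ignore the DC bin
    freqs = np.fft.rfftfreq(len(block), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

fs = 8000                              # made-up sample rate, Hz
t = np.arange(int(0.040 * fs)) / fs    # one 40 ms block (320 samples)
tone = np.sin(2 * np.pi * 1200.0 * t)  # made-up 1.2 kHz projectile tone
f_est = dominant_frequency(tone, fs)
```

    A 320-sample block gives 25 Hz frequency bins, which bounds how finely the tone, and hence the projectile type and miss distance, can be resolved per interval.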

  12. Imaging cochlear soft tissue displacement with coherent x-rays

    NASA Astrophysics Data System (ADS)

    Rau, Christoph; Richter, Claus-Peter

    2015-10-01

    At present, imaging of cochlear mechanics at mid-cochlear turns has not been accomplished. Although challenging, this appears possible with partially coherent hard x-rays. The present study shows results from stroboscopic x-ray imaging of a test object at audio frequencies. The vibration amplitudes were quantified. In a different set of experiments, an intact and calcified gerbil temporal bone was used to determine displacements of the reticular lamina, tectorial membrane, and Reissner’s membrane with the Lucas and Kanade video flow algorithm. The experiments validated high frequency x-ray imaging and imaging in a calcified cochlea. The present work is key for future imaging of cochlear micromechanics at a high spatial resolution.
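
    The Lucas and Kanade flow step reduces, in its simplest single-window form, to one 2x2 least-squares solve per region. A toy numpy sketch on synthetic data (far simpler than real membrane imagery):

```python
import numpy as np

def lk_displacement(frame1, frame2):
    """Single-window Lucas-Kanade: least-squares (dx, dy) between frames."""
    f1 = frame1.astype(np.float64)
    Iy, Ix = np.gradient(f1)             # spatial gradients (rows=y, cols=x)
    It = frame2.astype(np.float64) - f1  # temporal difference
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)         # displacement in pixels

# Synthetic test: a smooth blob shifted one pixel in x between frames
y, x = np.mgrid[0:64, 0:64]
blob = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 50.0)
shifted = np.exp(-((x - 33.0) ** 2 + (y - 32.0) ** 2) / 50.0)
dx, dy = lk_displacement(blob, shifted)  # dx near 1, dy near 0
```

    Practical use tiles the image into windows and iterates with warping; this sketch recovers the one-pixel shift of a smooth blob approximately, since the method linearizes the brightness-constancy equation.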

  13. 47 CFR 11.51 - EAS code and Attention Signal Transmission requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Message (EOM) codes using the EAS Protocol. The Attention Signal must precede any emergency audio message... audio messages. No Attention Signal is required for EAS messages that do not contain audio programming... EAS messages in the main audio channel. All DAB stations shall also transmit EAS messages on all audio...

  14. 47 CFR 11.51 - EAS code and Attention Signal Transmission requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Message (EOM) codes using the EAS Protocol. The Attention Signal must precede any emergency audio message... audio messages. No Attention Signal is required for EAS messages that do not contain audio programming... EAS messages in the main audio channel. All DAB stations shall also transmit EAS messages on all audio...

  15. 47 CFR 11.51 - EAS code and Attention Signal Transmission requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Message (EOM) codes using the EAS Protocol. The Attention Signal must precede any emergency audio message... audio messages. No Attention Signal is required for EAS messages that do not contain audio programming... EAS messages in the main audio channel. All DAB stations shall also transmit EAS messages on all audio...

  16. Communicative Competence in Audio Classrooms: A Position Paper for the CADE 1991 Conference.

    ERIC Educational Resources Information Center

    Burge, Liz

    Classroom practitioners need to move their attention away from the technological and logistical competencies required for audio conferencing (AC) to the required communicative competencies in order to advance their skills in handling the psychodynamics of audio virtual classrooms which include audio alone and audio with graphics. While the…

  17. The Audio Description as a Physics Teaching Tool

    ERIC Educational Resources Information Center

    Cozendey, Sabrina; Costa, Maria da Piedade

    2016-01-01

    This study analyses the use of audio description in teaching physics concepts, aiming to determine the variables that influence understanding of the concepts. One educational resource was audio described; to make room for the audio description, the screen was frozen. The video, with and without audio description, was to be presented to students, so that…

  18. Drooling in Parkinson's disease: a novel tool for assessment of swallow frequency.

    PubMed

    Marks, L; Weinreich, J

    2001-01-01

    A non-invasive way to obtain objective measurements of swallowing frequency, and thus indirectly of drooling, was required as part of the study 'Drooling in Parkinson's disease: objective measurement and response to therapy'. A hard-disk digital recorder was developed for use on a laptop computer, capable of collecting large quantities of swallowing data from an anticipated 40 patients and 10 controls. An electric microphone was taped to each subject's larynx to record swallow sounds while drinking 150 ml of water and at rest for 30 minutes. The software provides an accurate visual display of the audio signal, allowing the researcher easy access to any segment of the recording and to mark and extract swallow events, so that swallow frequency may be efficiently and accurately ascertained. Preliminary results are presented.

  19. Compact sub-kilohertz low-frequency quantum light source based on four-wave mixing in cesium vapor

    NASA Astrophysics Data System (ADS)

    Ma, Rong; Liu, Wei; Qin, Zhongzhong; Su, Xiaolong; Jia, Xiaojun; Zhang, Junxiang; Gao, Jiangrui

    2018-03-01

    Using a nondegenerate four-wave mixing (FWM) process based on a double-Λ scheme in hot cesium vapor, we demonstrate a compact diode-laser-pumped quantum light source for the generation of quantum correlated twin beams with a maximum squeezing of 6.5 dB. The squeezing is observed at a Fourier frequency in the audio band down to 0.7 kHz, which, to the best of our knowledge, is the first observation of sub-kilohertz intensity-difference squeezing in an atomic system so far. A phase-matching condition is also investigated in our system, which confirms the spatial-multi-mode characteristics of the FWM process. Our compact low-frequency squeezed light source may find applications in quantum imaging, quantum metrology, and the transfer of optical squeezing onto a matter wave.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grisi, Marco, E-mail: marco.grisi@epfl.ch; Gualco, Gabriele; Boero, Giovanni

    In this article, we present an integrated broadband complementary metal-oxide-semiconductor single-chip transceiver suitable for the realization of multi-nuclear pulsed nuclear magnetic resonance (NMR) probes. The realized single-chip transceiver can be interfaced with on-chip integrated microcoils or external LC resonators operating in the range from 1 MHz to 1 GHz. The dimension of the chip is about 1 mm². It consists of a radio-frequency (RF) power amplifier, a low-noise RF preamplifier, a frequency mixer, an audio-frequency amplifier, and fully integrated transmit-receive switches. As a specific example, we show its use for multi-nuclear NMR spectroscopy. With an integrated coil of about 150 μm external diameter, a ¹H spin sensitivity of about 1.5 × 10¹³ spins/Hz^(1/2) is achieved at 7 T.

  1. Evaluation of Specialized Photoacoustic Absorption Chambers for Near-Millimeter Wave (NMMW) Propagation Measurements.

    DTIC Science & Technology

    1980-08-01

    an audio oscillator, speaker, frequency counter, and oscilloscope the spheres could be driven into resonance. This procedure was first done for the...cavity, some of the electromagnetic energy is absorbed by an absorbing medium. Heating of the gas occurs with the resultant pressure change creating an...acoustic wave. Due to the double open-ended organ pipe design, a pressure maximum occurs midway down the cavity. Because of the symmetric placement of the

  2. Automated Assessment of Child Vocalization Development Using LENA.

    PubMed

    Richards, Jeffrey A; Xu, Dongxin; Gilkerson, Jill; Yapanel, Umit; Gray, Sharmistha; Paul, Terrance

    2017-07-12

    To produce a novel, efficient measure of children's expressive vocal development on the basis of automatic vocalization assessment (AVA), child vocalizations were automatically identified and extracted from audio recordings using Language Environment Analysis (LENA) System technology. Assessment was based on full-day audio recordings collected in a child's unrestricted, natural language environment. AVA estimates were derived using automatic speech recognition modeling techniques to categorize and quantify the sounds in child vocalizations (e.g., protophones and phonemes). These were expressed as phone and biphone frequencies, reduced to principal components, and inputted to age-based multiple linear regression models to predict independently collected criterion-expressive language scores. From these models, we generated vocal development AVA estimates as age-standardized scores and development age estimates. AVA estimates demonstrated strong statistical reliability and validity when compared with standard criterion expressive language assessments. Automated analysis of child vocalizations extracted from full-day recordings in natural settings offers a novel and efficient means to assess children's expressive vocal development. More research remains to identify specific mechanisms of operation.
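
    The modeling pipeline described above — vocalization feature frequencies reduced to principal components, then fed to linear regression against criterion scores — can be sketched on synthetic data. Everything below (sample sizes, noise levels, feature counts) is illustrative, not the LENA system's actual code.

```python
import numpy as np

# Illustrative AVA-style pipeline: biphone frequency vectors -> PCA ->
# ordinary least squares against synthetic criterion language scores.
rng = np.random.default_rng(0)
n_children, n_biphones = 40, 60

counts = rng.poisson(5, size=(n_children, n_biphones)).astype(float)
X = counts / counts.sum(axis=1, keepdims=True)   # relative biphone frequencies

# PCA via SVD of the centered frequency matrix
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:5].T                           # first 5 principal components
scores /= scores.std(axis=0)

# Synthetic criterion scores, then regression on the component scores
y = scores @ rng.normal(size=5) + rng.normal(scale=0.1, size=n_children)
A = np.column_stack([np.ones(n_children), scores])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1 - np.sum((y - A @ coef) ** 2) / np.sum((y - y.mean()) ** 2)
```

    In the actual system, the regression models are fit per age band and the predictions are converted to age-standardized scores and developmental age estimates.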

  3. A new higher performance NGO satellite for direct audio/video broadcast

    NASA Astrophysics Data System (ADS)

    Briskman, Robert D.; Foust, Joseph V.

    2010-03-01

    A three-satellite constellation using non-geostationary orbits (NGO) was launched in the latter half of 2000. It is providing direct satellite broadcasting audio and video services to over 9 million mobile and fixed subscribers throughout North America. The constellation will be augmented with a geostationary satellite called FM-5 in 2009, providing increased availability to the user with this "Hybrid" constellation. Effort has recently started on replacement satellites for the original NGO satellites, the first of which is called FM-6. This new satellite will be placed in a different orbital plane from the original ones, providing a constellation that brings further operational improvements. The paper describes the new satellite, which has twice the prime and radio-frequency (RF) power of the original and a 9 m diameter aperture transmit antenna whose shaped beam delivers much higher effective isotropic radiated power (EIRP). Other technology advances used in the satellite, such as electric propulsion, precision star sensors, and enhanced-performance lithium-ion batteries, are also described in the paper.

  4. Frequency allocations for a new satellite service - Digital audio broadcasting

    NASA Technical Reports Server (NTRS)

    Reinhart, Edward E.

    1992-01-01

    The allocation in the range 500-3000 MHz for digital audio broadcasting (DAB) is described in terms of key issues such as the transmission-system architectures. Attention is given to the optimal amount of spectrum for allocation and the technological considerations relevant to downlink bands for satellite and terrestrial transmissions. Proposals for DAB allocations are compared, and reference is made to factors impinging on the provision of ground/satellite feeder links. The allocation proposals describe the implementation of 50-60-MHz bandwidths for broadcasting in the ranges near 800 MHz, below 1525 MHz, near 2350 MHz, and near 2600 MHz. Three specific proposals are examined in terms of characteristics such as service areas, coverage per beam, channels per satellite beam, and FCC license status. Several existing problems are identified, including bands crowded with existing services, the need for new bands in the 1000-3000-MHz range, and implementations of existing allocations whose nature and intensity vary from country to country.

  5. Research on the forward modeling of controlled-source audio-frequency magnetotellurics in three-dimensional axial anisotropic media

    NASA Astrophysics Data System (ADS)

    Wang, Kunpeng; Tan, Handong

    2017-11-01

    Controlled-source audio-frequency magnetotellurics (CSAMT) has developed rapidly in recent years and is widely used in mineral and oil resource exploration as well as other fields. Current theory, numerical simulation, and inversion research are based on the assumption that underground media have isotropic resistivity. However, a large number of rock and mineral physical property tests show that the resistivity of underground media is generally anisotropic. With the increasing application of CSAMT, the accuracy demanded of practical exploration of complex targets continues to increase, so evaluating the influence of anisotropic resistivity on the CSAMT response is becoming important. To meet the demand for CSAMT response research in resistivity-anisotropic media, this paper examines the CSAMT electric field equations and derives and implements a three-dimensional (3D) staggered-grid finite-difference numerical simulation method for CSAMT with axially anisotropic resistivity. By building a two-dimensional (2D) resistivity-anisotropic geoelectric model, we validate the 3D computation result through comparison with the result of a controlled-source electromagnetic method (CSEM) resistivity-anisotropic 2D finite-element program. By simulating a 3D axially anisotropic geoelectric model, we compare and analyze the responses of the equatorial configuration, axial configuration, two oblique sources, and a tensor source. The research shows that the tensor source is suitable for CSAMT to recognize the anisotropic effect of underground structure.

  6. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap.

    PubMed

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.

  7. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap

    PubMed Central

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin’ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap. PMID:23658549

  8. 47 CFR 73.322 - FM stereophonic sound transmission standards.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... transmission, modulation of the carrier by audio components within the baseband range of 50 Hz to 15 kHz shall... the carrier by audio components within the audio baseband range of 23 kHz to 99 kHz shall not exceed... method described in (a), must limit the modulation of the carrier by audio components within the audio...

  9. 47 CFR 73.322 - FM stereophonic sound transmission standards.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... transmission, modulation of the carrier by audio components within the baseband range of 50 Hz to 15 kHz shall... the carrier by audio components within the audio baseband range of 23 kHz to 99 kHz shall not exceed... method described in (a), must limit the modulation of the carrier by audio components within the audio...

  10. 47 CFR 73.322 - FM stereophonic sound transmission standards.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... transmission, modulation of the carrier by audio components within the baseband range of 50 Hz to 15 kHz shall... the carrier by audio components within the audio baseband range of 23 kHz to 99 kHz shall not exceed... method described in (a), must limit the modulation of the carrier by audio components within the audio...

  11. 47 CFR 73.322 - FM stereophonic sound transmission standards.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... transmission, modulation of the carrier by audio components within the baseband range of 50 Hz to 15 kHz shall... the carrier by audio components within the audio baseband range of 23 kHz to 99 kHz shall not exceed... method described in (a), must limit the modulation of the carrier by audio components within the audio...

  12. Video content parsing based on combined audio and visual information

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1999-08-01

    While previous research on audiovisual data segmentation and indexing primarily focuses on the pictorial part, significant clues contained in the accompanying audio flow are often ignored. A fully functional system for video content parsing can be achieved more successfully through a proper combination of audio and visual information. By investigating the data structure of different video types, we present tools for both audio and visual content analysis and a scheme for video segmentation and annotation in this research. In the proposed system, video data are segmented into audio scenes and visual shots by detecting abrupt changes in audio and visual features, respectively. Then, each audio scene is categorized and indexed as one of the basic audio types, while each visual shot is represented by keyframes and associated image features. An index table is then generated automatically for each video clip based on the integration of outputs from audio and visual analysis. It is shown that the proposed system provides satisfactory video indexing results.
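
    The segmentation idea described in this abstract can be illustrated with a minimal sketch (not the authors' system): declare a scene or shot boundary wherever consecutive feature vectors change abruptly. The feature vectors and threshold below are synthetic placeholders.

```python
import numpy as np

def detect_boundaries(features: np.ndarray, thresh: float) -> list:
    """Indices where consecutive feature vectors jump past `thresh`."""
    dists = np.linalg.norm(np.diff(features, axis=0), axis=1)
    return [i + 1 for i, d in enumerate(dists) if d > thresh]

# Two synthetic "shots": constant features with an abrupt change at index 5
feats = np.vstack([np.zeros((5, 3)), np.ones((5, 3))])
boundaries = detect_boundaries(feats, thresh=0.5)   # -> [5]
```

    In a real system the same detector would run separately on audio features (e.g. energy, spectral shape) and visual features (e.g. color histograms), with the two boundary streams merged afterwards.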

  13. Comparing Audio and Video Data for Rating Communication

    PubMed Central

    Williams, Kristine; Herman, Ruth; Bontempo, Daniel

    2013-01-01

    Video recording has become increasingly popular in nursing research, adding rich nonverbal, contextual, and behavioral information. However, benefits of video over audio data have not been well established. We compared communication ratings of audio versus video data using the Emotional Tone Rating Scale. Twenty raters watched video clips of nursing care and rated staff communication on 12 descriptors that reflect dimensions of person-centered and controlling communication. Another group rated audio-only versions of the same clips. Interrater consistency was high within each group with ICC (2,1) for audio = .91, and video = .94. Interrater consistency for both groups combined was also high with ICC (2,1) for audio and video = .95. Communication ratings using audio and video data were highly correlated. The added value of video over audio-recorded data should be evaluated when designing studies of nursing care. PMID:23579475

  14. Frequency-specific attentional modulation in human primary auditory cortex and midbrain.

    PubMed

    Riecke, Lars; Peters, Judith C; Valente, Giancarlo; Poser, Benedikt A; Kemper, Valentin G; Formisano, Elia; Sorger, Bettina

    2018-07-01

    Paying selective attention to an audio frequency selectively enhances activity within primary auditory cortex (PAC) at the tonotopic site (frequency channel) representing that frequency. Animal PAC neurons achieve this 'frequency-specific attentional spotlight' by adapting their frequency tuning, yet comparable evidence in humans is scarce. Moreover, whether the spotlight operates in human midbrain is unknown. To address these issues, we studied the spectral tuning of frequency channels in human PAC and inferior colliculus (IC), using 7-T functional magnetic resonance imaging (FMRI) and frequency mapping, while participants focused on different frequency-specific sounds. We found that shifts in frequency-specific attention alter the response gain, but not tuning profile, of PAC frequency channels. The gain modulation was strongest in low-frequency channels and varied near-monotonically across the tonotopic axis, giving rise to the attentional spotlight. We observed less prominent, non-tonotopic spatial patterns of attentional modulation in IC. These results indicate that the frequency-specific attentional spotlight in human PAC as measured with FMRI arises primarily from tonotopic gain modulation, rather than adapted frequency tuning. Moreover, frequency-specific attentional modulation of afferent sound processing in human IC seems to be considerably weaker, suggesting that the spotlight diminishes toward this lower-order processing stage. Our study sheds light on how the human auditory pathway adapts to the different demands of selective hearing. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  15. Predicting the Overall Spatial Quality of Automotive Audio Systems

    NASA Astrophysics Data System (ADS)

    Koya, Daisuke

    The spatial quality of automotive audio systems is often compromised due to their unideal listening environments. Automotive audio systems need to be developed quickly due to industry demands. A suitable perceptual model could evaluate the spatial quality of automotive audio systems with similar reliability to formal listening tests but take less time. Such a model is developed in this research project by adapting an existing model of spatial quality for automotive audio use. The requirements for the adaptation were investigated in a literature review. A perceptual model called QESTRAL was reviewed, which predicts the overall spatial quality of domestic multichannel audio systems. It was determined that automotive audio systems are likely to be impaired in terms of the spatial attributes that were not considered in developing the QESTRAL model, but metrics are available that might predict these attributes. To establish whether the QESTRAL model in its current form can accurately predict the overall spatial quality of automotive audio systems, MUSHRA listening tests using headphone auralisation with head tracking were conducted to collect results to be compared against predictions by the model. Based on guideline criteria, the model in its current form could not accurately predict the overall spatial quality of automotive audio systems. To improve prediction performance, the QESTRAL model was recalibrated and modified using existing metrics of the model, those that were proposed from the literature review, and newly developed metrics. The most important metrics for predicting the overall spatial quality of automotive audio systems included those that were interaural cross-correlation (IACC) based, relate to localisation of the frontal audio scene, and account for the perceived scene width in front of the listener. Modifying the model for automotive audio systems did not invalidate its use for domestic audio systems. The resulting model predicts the overall spatial quality of 2- and 5-channel automotive audio systems with a cross-validation performance of R² = 0.85 and root-mean-square error (RMSE) = 11.03%.

  16. Exploring the Implementation of Steganography Protocols on Quantum Audio Signals

    NASA Astrophysics Data System (ADS)

    Chen, Kehan; Yan, Fei; Iliyasu, Abdullah M.; Zhao, Jianping

    2018-02-01

    Two quantum audio steganography (QAS) protocols are proposed, each of which manipulates or modifies the least significant qubit (LSQb) of the host quantum audio signal that is encoded as an FRQA (flexible representation of quantum audio) audio content. The first protocol (i.e. the conventional LSQb QAS protocol, or simply the cLSQ stego protocol) is built on exchanges between qubits encoding the quantum audio message and the LSQb of the amplitude information in the host quantum audio samples. The second protocol implants information from a quantum audio message deep into the constraint-imposed most significant qubit (MSQb) of the host quantum audio samples; we refer to it as the pseudo-MSQb QAS protocol, or simply the pMSQ stego protocol. The cLSQ stego protocol is designed to guarantee high imperceptibility between the host quantum audio and its stego version, whereas the pMSQ stego protocol ensures that the resulting stego quantum audio signal is better immune to illicit tampering and copyright violations (a.k.a. robustness). Built on the circuit model of quantum computation, the circuit networks to execute the embedding and extraction algorithms of both QAS protocols are determined, and simulation-based experiments are conducted to demonstrate their implementation. Outcomes attest that both protocols offer promising trade-offs in terms of imperceptibility and robustness.
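
    The LSQb manipulation at the heart of the cLSQ protocol has a familiar classical analogue: least-significant-bit embedding. The sketch below shows that classical analogue only, as an aid to intuition; the actual protocols operate on FRQA quantum states via circuit networks, not on integer samples.

```python
# Classical least-significant-bit (LSB) embedding, shown purely as an analogue
# of the LSQb idea described above. Sample values and message are illustrative.

def embed_lsb(samples, bits):
    """Replace the least significant bit of each sample with a message bit."""
    return [(s & ~1) | b for s, b in zip(samples, bits)]

def extract_lsb(samples):
    return [s & 1 for s in samples]

host = [200, 137, 54, 91]
message = [1, 0, 1, 1]
stego = embed_lsb(host, message)
recovered = extract_lsb(stego)          # -> [1, 0, 1, 1]
```

    Each stego sample differs from its host sample by at most 1, which is the classical counterpart of the protocol's imperceptibility guarantee.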

  17. Comparing audio and video data for rating communication.

    PubMed

    Williams, Kristine; Herman, Ruth; Bontempo, Daniel

    2013-09-01

    Video recording has become increasingly popular in nursing research, adding rich nonverbal, contextual, and behavioral information. However, benefits of video over audio data have not been well established. We compared communication ratings of audio versus video data using the Emotional Tone Rating Scale. Twenty raters watched video clips of nursing care and rated staff communication on 12 descriptors that reflect dimensions of person-centered and controlling communication. Another group rated audio-only versions of the same clips. Interrater consistency was high within each group with intraclass correlation coefficient (ICC) (2,1) for audio = .91, and video = .94. Interrater consistency for both groups combined was also high with ICC (2,1) for audio and video = .95. Communication ratings using audio and video data were highly correlated. The added value of video over audio-recorded data should be evaluated when designing studies of nursing care.
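
    For reference, ICC(2,1) — the two-way random-effects, single-rater intraclass correlation reported above — can be computed from the mean squares of a subjects-by-raters layout. This is a generic textbook sketch, not the study's actual analysis code.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """Two-way random-effects, single-rater ICC(2,1).

    `ratings` is an (n subjects x k raters) matrix.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between raters
    sse = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Perfect agreement between two raters yields an ICC of exactly 1.0
perfect = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
icc = icc_2_1(perfect)   # -> 1.0
```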

  18. Reducing audio stimulus presentation latencies across studies, laboratories, and hardware and operating system configurations.

    PubMed

    Babjack, Destiny L; Cernicky, Brandon; Sobotka, Andrew J; Basler, Lee; Struthers, Devon; Kisic, Richard; Barone, Kimberly; Zuccolotto, Anthony P

    2015-09-01

    Using differing computer platforms and audio output devices to deliver audio stimuli often introduces (1) substantial variability across labs and (2) variable time between the intended and actual sound delivery (the sound onset latency). Fast, accurate audio onset latencies are particularly important when audio stimuli need to be delivered precisely as part of studies that depend on accurate timing (e.g., electroencephalographic, event-related potential, or multimodal studies), or in multisite studies in which standardization and strict control over the computer platforms used is not feasible. This research describes the variability introduced by using differing configurations and introduces a novel approach to minimizing audio sound latency and variability. A stimulus presentation and latency assessment approach is presented using E-Prime and Chronos (a new multifunction, USB-based data presentation and collection device). The present approach reliably delivers audio stimuli with low latencies that vary by ≤1 ms, independent of hardware and Windows operating system (OS)/driver combinations. The Chronos audio subsystem adopts a buffering, aborting, querying, and remixing approach to the delivery of audio, to achieve a consistent 1-ms sound onset latency for single-sound delivery, and precise delivery of multiple sounds that achieves standard deviations of 1/10th of a millisecond without the use of advanced scripting. Chronos's sound onset latencies are small, reliable, and consistent across systems. Testing of standard audio delivery devices and configurations highlights the need for careful attention to consistency between labs, experiments, and multiple study sites in their hardware choices, OS selections, and adoption of audio delivery systems designed to sidestep the audio latency variability issue.
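
    A generic way to quantify the sound onset latency discussed above (independent of the E-Prime/Chronos tooling) is to loop the audio output back into a recording input and measure where the recorded waveform first exceeds a noise threshold relative to the intended trigger sample. The sketch below assumes such a loopback recording; the threshold and sample values are illustrative.

```python
import numpy as np

def onset_latency_ms(recorded: np.ndarray, trigger_sample: int,
                     sample_rate: int, thresh: float) -> float:
    """Latency between intended trigger and first supra-threshold sample."""
    onset = int(np.argmax(np.abs(recorded) > thresh))
    return (onset - trigger_sample) * 1000.0 / sample_rate

# Simulated loopback: sound actually starts 480 samples after the trigger
sr = 48000
rec = np.zeros(sr)
rec[480:] = 0.9
latency = onset_latency_ms(rec, trigger_sample=0, sample_rate=sr, thresh=0.1)
# -> 10.0 (ms)
```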

  19. Revealing the ecological content of long-duration audio-recordings of the environment through clustering and visualisation.

    PubMed

    Phillips, Yvonne F; Towsey, Michael; Roe, Paul

    2018-01-01

    Audio recordings of the environment are an increasingly important technique to monitor biodiversity and ecosystem function. While the acquisition of long-duration recordings is becoming easier and cheaper, the analysis and interpretation of that audio remains a significant research area. The issue addressed in this paper is the automated reduction of environmental audio data to facilitate ecological investigations. We describe a method that first reduces environmental audio to vectors of acoustic indices, which are then clustered. This can reduce the audio data by six to eight orders of magnitude yet retain useful ecological information. We describe techniques to visualise sequences of cluster occurrence (using for example, diel plots, rose plots) that assist interpretation of environmental audio. Colour coding acoustic clusters allows months and years of audio data to be visualised in a single image. These techniques are useful in identifying and indexing the contents of long-duration audio recordings. They could also play an important role in monitoring long-term changes in species abundance brought about by habitat degradation and/or restoration.
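
    A toy version of the pipeline described above: audio segments are reduced to short vectors of acoustic indices and then clustered. Here RMS energy and zero-crossing rate stand in for the paper's richer index set, and a minimal k-means stands in for its clustering method; all data are synthetic.

```python
import numpy as np

def acoustic_indices(segment):
    """Toy index vector: RMS energy and zero-crossing rate of one segment."""
    rms = np.sqrt(np.mean(segment ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(segment))) > 0)
    return np.array([rms, zcr])

def kmeans(X, centers, iters=20):
    """Minimal k-means; keeps a centre unchanged if its cluster empties."""
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(len(centers))])
    return labels

rng = np.random.default_rng(1)
quiet = [0.01 * rng.normal(size=256) for _ in range(10)]
tonal = [np.sin(0.5 * np.arange(256)) + 0.1 * rng.normal(size=256)
         for _ in range(10)]
X = np.array([acoustic_indices(s) for s in quiet + tonal])
labels = kmeans(X, centers=X[[0, 10]])   # seed one centre per toy group
```

    The data reduction is the point: twenty 256-sample segments collapse to twenty 2-element index vectors, and the cluster labels, not the raw audio, are what get visualised over months of recording.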

  20. Revealing the ecological content of long-duration audio-recordings of the environment through clustering and visualisation

    PubMed Central

    Towsey, Michael; Roe, Paul

    2018-01-01

    Audio recordings of the environment are an increasingly important technique to monitor biodiversity and ecosystem function. While the acquisition of long-duration recordings is becoming easier and cheaper, the analysis and interpretation of that audio remains a significant research area. The issue addressed in this paper is the automated reduction of environmental audio data to facilitate ecological investigations. We describe a method that first reduces environmental audio to vectors of acoustic indices, which are then clustered. This can reduce the audio data by six to eight orders of magnitude yet retain useful ecological information. We describe techniques to visualise sequences of cluster occurrence (using for example, diel plots, rose plots) that assist interpretation of environmental audio. Colour coding acoustic clusters allows months and years of audio data to be visualised in a single image. These techniques are useful in identifying and indexing the contents of long-duration audio recordings. They could also play an important role in monitoring long-term changes in species abundance brought about by habitat degradation and/or restoration. PMID:29494629

  1. Vibration sensing method and apparatus

    DOEpatents

    Barna, B.A.

    1989-04-25

    A method and apparatus for nondestructive evaluation of a structure are disclosed. Resonant audio frequency vibrations are excited in the structure to be evaluated and the vibrations are measured and characterized to obtain information about the structure. The vibrations are measured and characterized by reflecting a laser beam from the vibrating structure and directing a substantial portion of the reflected beam back into the laser device used to produce the beam which device is capable of producing an electric signal containing information about the vibration. 4 figs.

  2. Vibration sensing method and apparatus

    DOEpatents

    Barna, B.A.

    1987-07-07

    A method and apparatus for nondestructive evaluation of a structure is disclosed. Resonant audio frequency vibrations are excited in the structure to be evaluated and the vibrations are measured and characterized to obtain information about the structure. The vibrations are measured and characterized by reflecting a laser beam from the vibrating structure and directing a substantial portion of the reflected beam back into the laser device used to produce the beam which device is capable of producing an electric signal containing information about the vibration. 4 figs.

  3. Free oscilloscope web app using a computer mic, built-in sound library, or your own files

    NASA Astrophysics Data System (ADS)

    Ball, Edward; Ruiz, Frances; Ruiz, Michael J.

    2017-07-01

    We have developed an online oscilloscope program which allows users to see waveforms by utilizing their computer microphones, selecting from our library of over 30 audio files, and opening any *.mp3 or *.wav file on their computers. The oscilloscope displays real-time signals against time. The oscilloscope has been calibrated so one can make accurate frequency measurements of periodic waves to within 1%. The web app is ideal for computer projection in class.
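
    The kind of frequency measurement the calibrated oscilloscope supports can be sketched as follows: estimate a periodic wave's frequency from the peak of its FFT magnitude spectrum, where the frequency resolution is the reciprocal of the window length (1 Hz for a 1 s window, comfortably inside the 1% tolerance quoted above). The signal below is synthetic.

```python
import numpy as np

def estimate_frequency(signal: np.ndarray, sample_rate: float) -> float:
    """Frequency of the dominant periodic component, from the FFT peak."""
    spectrum = np.abs(np.fft.rfft(signal))
    spectrum[0] = 0.0                    # ignore the DC component
    peak_bin = int(np.argmax(spectrum))
    return peak_bin * sample_rate / len(signal)

sr, f0 = 44100, 441.0
t = np.arange(sr) / sr                   # one second of audio
wave = np.sin(2 * np.pi * f0 * t)
est = estimate_frequency(wave, sr)       # -> 441.0
```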

  4. Development of Simulated Directional Audio for Cockpit Applications

    DTIC Science & Technology

    1986-01-01

    ...of the audio signal, in the time and frequency domains, which enhance localization performance with simulated cues. Previous research is reviewed...dichotically. Localization accuracy and response time were compared for: (1) nine different filtered noise stimuli, designed to make available some

  5. Principles of signal conditioning.

    PubMed

    Finkel, A; Bookman, R

    2001-05-01

    It is rare for biological, physiological, chemical, electrical, or physical signals to be measured in the appropriate format for recording and interpretation. Usually, a signal must be conditioned to optimize it for both of these functions. This overview describes the fundamentals of signal filtering, how to prepare signals for A/D conversion, signal averaging to increase the signal-to-noise ratio, line frequency pickup (hum), peak-to-peak and rms noise measurements, blanking, audio monitoring, testing of electrodes and the common-mode rejection ratio.
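
    The signal-averaging step mentioned above can be demonstrated in a few lines: averaging N repeated sweeps of a fixed signal in independent noise reduces the noise standard deviation by roughly the square root of N, raising the signal-to-noise ratio. The sweep counts and noise level below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
signal = np.sin(np.linspace(0, 2 * np.pi, 500))

# 100 noisy sweeps of the same underlying signal, unit-variance noise
trials = signal + rng.normal(scale=1.0, size=(100, 500))

average = trials.mean(axis=0)
residual_std = np.std(average - signal)   # ~ 1.0 / sqrt(100) = 0.1
```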

  6. Vibration sensing method and apparatus

    DOEpatents

    Barna, Basil A.

    1989-04-25

    A method and apparatus for nondestructive evaluation of a structure is disclosed. Resonant audio frequency vibrations are excited in the structure to be evaluated and the vibrations are measured and characterized to obtain information about the structure. The vibrations are measured and characterized by reflecting a laser beam from the vibrating structure and directing a substantial portion of the reflected beam back into the laser device used to produce the beam which device is capable of producing an electric signal containing information about the vibration.

  7. Holographic disk with high data transfer rate: its application to an audio response memory.

    PubMed

    Kubota, K; Ono, Y; Kondo, M; Sugama, S; Nishida, N; Sakaguchi, M

    1980-03-15

    This paper describes a memory that achieves a high data transfer rate using the holographic parallel-processing function, and its application to an audio response system that supplies many audio messages to many terminals simultaneously. Digitized audio messages are recorded as tiny 1-D Fourier-transform holograms on a holographic disk. A hologram recorder and a hologram reader were constructed to test and demonstrate the feasibility of the holographic audio response memory. Experimental results indicate the potential of an audio response system with a 2000-word vocabulary and a 250-Mbit/sec transfer rate.


  8. The impact of variation in low-frequency interaural cross correlation on auditory spatial imagery in stereophonic loudspeaker reproduction

    NASA Astrophysics Data System (ADS)

    Martens, William

    2005-04-01

    Several attributes of auditory spatial imagery associated with stereophonic sound reproduction are strongly modulated by variation in interaural cross correlation (IACC) within low-frequency bands. Nonetheless, a standard practice in bass management for two-channel and multichannel loudspeaker reproduction is to mix low-frequency musical content to a single channel for reproduction via a single driver (e.g., a subwoofer). This paper reviews the results of psychoacoustic studies that support the conclusion that reproduction via multiple drivers of decorrelated low-frequency signals significantly affects such important spatial attributes as auditory source width (ASW), auditory source distance (ASD), and listener envelopment (LEV). A variety of methods have been employed in these tests, including forced-choice discrimination and identification, and direct ratings of both global dissimilarity and distinct attributes. Contrary to assumptions that underlie industrial standards established in 1994 by ITU-R Recommendation BS.775-1, these findings imply that substantial stereophonic spatial information exists within audio signals at frequencies below the 80 to 120 Hz range of prescribed subwoofer cutoff frequencies, and that loudspeaker reproduction of decorrelated signals at frequencies as low as 50 Hz can have an impact upon auditory spatial imagery. [Work supported by VRQ.]
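
    A common definition of IACC (a sketch, not necessarily the exact analysis used in the studies above) is the maximum absolute normalized cross-correlation between left- and right-ear signals over interaural lags of about ±1 ms. The test signal below is synthetic.

```python
import numpy as np

def iacc(left, right, sample_rate, max_lag_ms=1.0):
    """Max |normalized cross-correlation| over lags of +/- max_lag_ms."""
    max_lag = int(sample_rate * max_lag_ms / 1000)
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.sum(left[lag:] * right[:left.size - lag])
        else:
            c = np.sum(left[:lag] * right[-lag:])
        best = max(best, abs(c) / norm)
    return best

sr = 48000
t = np.arange(sr // 10) / sr             # 100 ms of audio
left = np.sin(2 * np.pi * 500 * t)
coherence = iacc(left, left, sr)         # identical signals -> 1.0
```

    Fully decorrelated low-frequency signals drive this value toward zero, which is the condition the cited studies associate with greater source width and envelopment.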

  9. Electrophysiological evidence for Audio-visuo-lingual speech integration.

    PubMed

    Treille, Avril; Vilain, Coriandre; Schwartz, Jean-Luc; Hueber, Thomas; Sato, Marc

    2018-01-31

    Recent neurophysiological studies demonstrate that audio-visual speech integration partly operates through temporal expectations and speech-specific predictions. From these results, one common view is that the binding of auditory and visual (lipread) speech cues relies on their joint probability and prior associative audio-visual experience. The present EEG study examined whether visual tongue movements integrate with relevant speech sounds, despite little associative audio-visual experience between the two modalities. A second objective was to determine possible similarities and differences of audio-visual speech integration between unusual audio-visuo-lingual and classical audio-visuo-labial modalities. To this aim, participants were presented with auditory, visual, and audio-visual isolated syllables, with the visual presentation related to either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, with lingual and facial movements previously recorded by an ultrasound imaging system and a video camera. In line with previous EEG studies, our results revealed an amplitude decrease and a latency facilitation of P2 auditory evoked potentials in both audio-visuo-lingual and audio-visuo-labial conditions compared to the sum of unimodal conditions. These results argue against the view that auditory and visual speech cues solely integrate based on prior associative audio-visual perceptual experience. Rather, they suggest that dynamic and phonetic informational cues are sharable across sensory modalities, possibly through a cross-modal transfer of implicit articulatory motor knowledge. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Portable audio magnetotellurics - experimental measurements and joint inversion with radiomagnetotelluric data from Gotland, Sweden

    NASA Astrophysics Data System (ADS)

    Shan, Chunling; Kalscheuer, Thomas; Pedersen, Laust B.; Erlström, Mikael; Persson, Lena

    2017-08-01

    Field setup of an audio magnetotelluric (AMT) station is very time consuming and labour intensive. In contrast, radio magnetotelluric (RMT) equipment is more portable and faster to deploy but has shallower investigation depth owing to its higher signal frequencies. To increase the efficiency of acquiring AMT data from 10 to 300 Hz, we introduce a modification of the AMT method, called portable audio magnetotellurics (PAMT), that uses a lighter AMT field system and (owing to the disregard of signals at frequencies below 10 Hz) a shortened data acquisition time. PAMT uses three magnetometers pre-mounted on a rigid frame to measure magnetic fields and steel electrodes to measure electric fields. Field tests proved that the system is stable enough to measure AMT fields in the given frequency range. A PAMT test measurement was carried out on Gotland, Sweden, along a 3.5 km profile to study the ground conductivity and to map shallow Silurian marlstone and limestone formations, deeper Silurian, Ordovician and Cambrian sedimentary structures, and crystalline basement. RMT data collected along a coincident profile and regional airborne very low frequency (VLF) data support the interpretation of our PAMT data. While only the RMT and VLF data constrain a shallow (approximately 20-50 m deep) transition between Silurian conductive (< 30 Ωm resistivity) marlstone and resistive (> 1000 Ωm resistivity) limestone, the single-method inversion models of both the PAMT and the RMT data show a transition into a conductive layer of 3 to 30 Ωm resistivity at 80 m depth, suggesting the compatibility of the two data sets. This conductive layer is interpreted as a saltwater-saturated succession of Silurian, Ordovician and Cambrian sedimentary units. Towards the lower boundary of this succession (at 600 m depth according to boreholes), only the PAMT data constrain the structure. As supported by modelling tests and sensitivity analysis, the PAMT data contain only a vague indication of the underlying crystalline basement. A PAMT and RMT joint inversion model reveals all the aforementioned units, including the less than 80 m deep limestone and marlstone formations and the conductive sedimentary succession of Silurian, Ordovician and Cambrian units. Our test measurements have proven the PAMT modification to be time saving and easy to set up. However, PAMT data suffer from the same noise disturbances as regular AMT data. Since man-made EM noise can propagate over great distances through resistive underground, it is recommended that PAMT measurements be carried out in areas of low resistivity. The PAMT method has proven applicable in shallow depth studies, especially in areas where normal AMT measurements are inconvenient and/or too expensive to carry out.

  11. 78 FR 38093 - Seventh Meeting: RTCA Special Committee 226, Audio Systems and Equipment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-25

    ... Committee 226, Audio Systems and Equipment AGENCY: Federal Aviation Administration (FAA), U.S. Department of Transportation (DOT). ACTION: Meeting Notice of RTCA Special Committee 226, Audio Systems and Equipment. SUMMARY... 226, Audio Systems and Equipment

  12. Diagnostic accuracy of sleep bruxism scoring in absence of audio-video recording: a pilot study.

    PubMed

    Carra, Maria Clotilde; Huynh, Nelly; Lavigne, Gilles J

    2015-03-01

    Based on the most recent polysomnographic (PSG) research diagnostic criteria, sleep bruxism is diagnosed when >2 rhythmic masticatory muscle activity (RMMA) episodes/h of sleep are scored on the masseter and/or temporalis muscles. These criteria have not yet been validated for portable PSG systems. This pilot study aimed to assess the diagnostic accuracy of scoring sleep bruxism in the absence of audio-video recordings. Ten subjects (mean age 24.7 ± 2.2) with a clinical diagnosis of sleep bruxism spent one night in the sleep laboratory. PSG recordings were performed with a portable system (type 2) while audio-video was recorded. Sleep studies were scored by the same examiner three times: (1) without, (2) with, and (3) without audio-video in order to test the intra-scoring and intra-examiner reliability for RMMA scoring. The RMMA event-by-event concordance rate between scoring without audio-video and with audio-video was 68.3 %. Overall, the RMMA index was overestimated by 23.8 % without audio-video. However, the intra-class correlation coefficient (ICC) between scorings with and without audio-video was good (ICC = 0.91; p < 0.001); the intra-examiner reliability was high (ICC = 0.97; p < 0.001). The clinical diagnosis of sleep bruxism was confirmed in 8/10 subjects based on scoring without audio-video and in 6/10 subjects with audio-video. Despite the absence of audio-video recording, the diagnostic accuracy of assessing RMMA with portable PSG systems appeared to remain good, supporting their use for both research and clinical purposes. However, the risk of moderate overestimation in the absence of audio-video must be taken into account.

  13. Magnetic field dependent atomic tunneling in non-magnetic glasses

    NASA Astrophysics Data System (ADS)

    Ludwig, S.; Enss, C.; Hunklinger, S.

    2003-05-01

    The low-temperature properties of insulating glasses are governed by atomic tunneling systems (TSs). Recently, strong magnetic field effects in the dielectric susceptibility have been discovered in glasses at audio frequencies at very low temperatures. Moreover, it has been found that the amplitude of two-pulse polarization echoes generated in non-magnetic multi-component glasses at radio frequencies and at very low temperatures shows a surprising non-monotonic magnetic field dependence. The magnitude of the latter effect indicates that virtually all TSs are affected by the magnetic field, not only a small subset of systems. We have studied the variation of the magnetic field dependence of the echo amplitude as a function of the delay time between the two excitation pulses and at different frequencies. Our results indicate that the evolution of the phase of resonant TSs is changed by the magnetic field.

  14. Helium gas purity monitor based on low frequency acoustic resonance

    NASA Astrophysics Data System (ADS)

    Kasthurirengan, S.; Jacob, S.; Karunanithi, R.; Karthikeyan, A.

    1996-05-01

    Monitoring gas purity is an important aspect of gas recovery stations, where air is usually one of the major impurities. Purity monitors of the katharometric type are commercially available for this purpose. Alternatively, we discuss here a helium gas purity monitor based on the acoustic resonance of a cavity at audio frequencies. It measures the purity by monitoring the resonant frequency of a cylindrical cavity filled with the gas under test and excited by conventional telephone transducers fixed at the ends. The use of the latter simplifies the design considerably. The paper discusses the details of the resonant cavity and the electronic circuit along with temperature compensation. The unit has been calibrated with helium gas of known purities. It has a response time of the order of 10 minutes and measures the gas purity to an accuracy of 0.02%. The unit has been installed in our helium recovery system and is found to perform satisfactorily.
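
    The measurement principle can be sketched numerically: the cavity's resonant frequency is proportional to the speed of sound in the gas, which depends on the gas composition. The code below is a minimal illustration, not the instrument's actual calibration; the crude mole-fraction mixture model, cavity length, and temperature are assumptions.

```python
import math

R = 8.314                             # gas constant, J/(mol K)
M_HE, M_AIR = 4.0026e-3, 28.97e-3     # molar masses, kg/mol
GAMMA_HE, GAMMA_AIR = 5.0 / 3.0, 1.4  # heat-capacity ratios

def speed_of_sound(x_he, temp_k):
    # Crude binary-mixture model: mole-fraction-weighted molar mass and
    # heat-capacity ratio (adequate only near pure helium).
    m = x_he * M_HE + (1 - x_he) * M_AIR
    g = x_he * GAMMA_HE + (1 - x_he) * GAMMA_AIR
    return math.sqrt(g * R * temp_k / m)

def purity_from_resonance(f_meas, length_m, temp_k, mode=1):
    """Invert f = mode * c / (2 L) for the helium mole fraction by
    bisection (c is monotonically increasing in helium content)."""
    c_meas = 2.0 * length_m * f_meas / mode
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if speed_of_sound(mid, temp_k) < c_meas:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip: a 0.30 m cavity at 295 K filled with 99% helium resonates
# near 1.6 kHz (an audio frequency), and the inversion recovers 0.99.
f = speed_of_sound(0.99, 295.0) / (2 * 0.30)
print(round(purity_from_resonance(f, 0.30, 295.0), 3))   # -> 0.99
```

    The fundamental longitudinal mode lands in the audio band, which is consistent with the abstract's use of telephone transducers as exciters.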

  15. Gas mixing enhanced by power modulations in atmospheric pressure microwave plasma jet

    NASA Astrophysics Data System (ADS)

    Voráč, J.; Potočňáková, L.; Synek, P.; Hnilica, J.; Kudrle, V.

    2016-04-01

    A microwave plasma jet operating in atmospheric pressure argon was power modulated by an audio-frequency sine envelope in the 102 W power range. Its effluent was imaged using interference filters and an ICCD camera for several different phases of the modulating signal. The combination of this fast imaging with spatially resolved optical emission spectroscopy provides useful insights into the plasmachemical processes involved. Phase-resolved schlieren photography was performed to visualize the gas dynamics. The results show that for higher modulation frequencies the plasma chemistry is strongly influenced by the formation of a transient flow perturbation resembling a vortex during each period. The perturbation formation and speed are strongly influenced by the frequency and power variations, while they depend only weakly on the working gas flow rate. From an application point of view, the perturbation's presence significantly broadened the lateral distribution of active species, effectively increasing the cross-sectional area suitable for applications.

  16. Radio frequency analog electronics based on carbon nanotube transistors

    PubMed Central

    Kocabas, Coskun; Kim, Hoon-sik; Banks, Tony; Rogers, John A.; Pesetski, Aaron A.; Baumgardner, James E.; Krishnaswamy, S. V.; Zhang, Hong

    2008-01-01

    The potential to exploit single-walled carbon nanotubes (SWNTs) in advanced electronics represents a continuing, major source of interest in these materials. However, scalable integration of SWNTs into circuits is challenging because of difficulties in controlling the geometries, spatial positions, and electronic properties of individual tubes. We have implemented solutions to some of these challenges to yield radio frequency (RF) SWNT analog electronic devices, such as narrow band amplifiers operating in the VHF band with power gains as high as 14 dB. As a demonstration, we fabricated nanotube transistor radios, in which SWNT devices provide all of the key functions, including resonant antennas, fixed RF amplifiers, RF mixers, and audio amplifiers. These results represent important first steps to practical implementation of SWNTs in high-speed analog circuits. Comparison studies indicate certain performance advantages over silicon and capabilities that complement those in existing compound semiconductor technologies. PMID:18227509

  17. Field test of electromagnetic geophysical techniques for locating simulated in situ mining leach solution. Report of investigations/1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tweeton, D.R.; Hanson, J.C.; Friedel, M.J.

    1994-01-01

    The U.S. Bureau of Mines, the University of Arizona, Sandia National Laboratory, and Zonge Engineering and Research, Inc., conducted cooperative field tests of six electromagnetic geophysical methods to compare their effectiveness in locating a brine solution simulating in situ leach solution or a high-conductivity plume of contamination. The brine was approximately 160 meters below the surface. The test site was the University's San Xavier experimental mine near Tucson, Arizona. Geophysical surveys using surface and surface-borehole time-domain electromagnetics (TEM), surface controlled source audio-frequency magnetotellurics (CSAMT), surface-borehole frequency-domain electromagnetics (FEM), crosshole FEM and surface magnetic field ellipticity were conducted before and during brine injection.

  18. Active cooling of an audio-frequency electrical resonator to microkelvin temperatures

    NASA Astrophysics Data System (ADS)

    Vinante, A.; Bonaldi, M.; Mezzena, R.; Falferi, P.

    2010-11-01

    We have cooled a macroscopic LC electrical resonator using feedback cooling combined with an ultrasensitive dc Superconducting Quantum Interference Device (SQUID) current amplifier. The resonator, with a resonance frequency of 11.5 kHz and a bath temperature of 135 mK, is operated in the high coupling limit so that the SQUID back-action noise overcomes the intrinsic resonator thermal noise. The effect of correlations between the amplifier noise sources clearly shows up in the experimental data, as does the interplay of the amplifier noise with the resonator thermal noise. The lowest temperature achieved by feedback is 14 μK, corresponding to 26 resonator photons, and approaches the limit imposed by the noise energy of the SQUID amplifier.

  19. The study of electrical conduction mechanisms. [dielectric response of lunar fines

    NASA Technical Reports Server (NTRS)

    Morrison, H. F.

    1974-01-01

    The dielectric response of lunar fines 74241,2 is presented in the audio-frequency range and under lunarlike conditions. Results suggest that volatiles are released during storage and transport of the lunar sample. Apparently, subsequent absorption of volatiles on the sample surface alters its dielectric response. The assumed volatile influence disappears after evacuation. A comparison of the dielectric properties of lunar and terrestrial materials as a function of density, temperature, and frequency indicates that if the lunar simulator analyzed were completely devoid of atmospheric moisture, it would present dielectric losses smaller than those of the lunar sample. It is concluded that density prevails over temperature as the controlling factor of dielectric permittivity in the lunar regolith and that dielectric losses vary slowly with depth.

  20. Audio-vocal system regulation in children with autism spectrum disorders.

    PubMed

    Russo, Nicole; Larson, Charles; Kraus, Nina

    2008-06-01

    Do children with autism spectrum disorders (ASD) respond similarly to perturbations in auditory feedback as typically developing (TD) children? Presentation of pitch-shifted voice auditory feedback to vocalizing participants reveals a close coupling between the processing of auditory feedback and vocal motor control. This paradigm was used to test the hypothesis that abnormalities in the audio-vocal system would negatively impact ASD compensatory responses to perturbed auditory feedback. Voice fundamental frequency (F(0)) was measured while children produced an /a/ sound into a microphone. The voice signal was fed back to the subjects in real time through headphones. During production, the feedback was pitch shifted (-100 cents, 200 ms) at random intervals for 80 trials. Averaged voice F(0) responses to pitch-shifted stimuli were calculated and correlated with both mental and language abilities as tested via standardized tests. A subset of children with ASD produced larger responses to perturbed auditory feedback than TD children, while the other children with ASD produced significantly lower response magnitudes. Furthermore, robust relationships between language ability, response magnitude and time of peak magnitude were identified. Because auditory feedback helps to stabilize voice F(0) (a major acoustic cue of prosody) and individuals with ASD have problems with prosody, this study identified potential mechanisms of dysfunction in the audio-vocal system for voice pitch regulation in some children with ASD. Objectively quantifying this deficit may inform both the assessment of a subgroup of ASD children with prosody deficits, as well as remediation strategies that incorporate pitch training.
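
    For reference, the -100 cent perturbation used in this paradigm corresponds to a downward shift of exactly one semitone, since a cent is 1/1200 of an octave. A one-line conversion (the function name and example frequencies are illustrative):

```python
def shift_hz(f0, cents):
    """Apply a pitch shift given in cents (100 cents = one semitone,
    1200 cents = one octave) to a frequency in Hz."""
    return f0 * 2.0 ** (cents / 1200.0)

# A -100 cent perturbation of a 220 Hz voice lowers it by one semitone:
print(round(shift_hz(220.0, -100), 2))   # -> 207.65
```

    The compensatory response magnitudes reported in the study are measured against shifts of exactly this size.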

  1. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... programming stream at no direct charge to listeners. In addition, a broadcast radio station must simulcast its analog audio programming on one of its digital audio programming streams. The DAB audio programming... analog programming service currently provided to listeners. (b) Emergency information. The emergency...

  2. High-Fidelity Piezoelectric Audio Device

    NASA Technical Reports Server (NTRS)

    Woodward, Stanley E.; Fox, Robert L.; Bryant, Robert G.

    2003-01-01

    ModalMax is a very innovative means of harnessing the vibration of a piezoelectric actuator to produce an energy-efficient, low-profile device with high-bandwidth, high-fidelity audio response. The piezoelectric audio device outperforms many commercially available speakers made using speaker cones. The piezoelectric device weighs substantially less (4 g) than the speaker cones which use magnets (10 g). ModalMax devices have extreme fabrication simplicity. The entire audio device is fabricated by lamination. The simplicity of the design lends itself to lower cost. The piezoelectric audio device can be used without its acoustic chambers, resulting in a very low thickness of 0.023 in. (0.58 mm). The piezoelectric audio device can be completely encapsulated, which makes it very attractive for use in wet environments. Encapsulation does not significantly alter the audio response. Its small size (see Figure 1) is applicable to many consumer electronic products, such as pagers, portable radios, headphones, laptop computers, computer monitors, toys, and electronic games. The audio device can also be used in automobile or aircraft sound systems.

  3. Glottal open quotient in singing: Measurements and correlation with laryngeal mechanisms, vocal intensity, and fundamental frequency

    NASA Astrophysics Data System (ADS)

    Henrich, Nathalie; D'Alessandro, Christophe; Doval, Boris; Castellengo, Michèle

    2005-03-01

    This article presents the results of glottal open-quotient measurements in the case of singing voice production. It explores the relationship between open quotient and laryngeal mechanisms, vocal intensity, and fundamental frequency. The audio and electroglottographic signals of 18 classically trained male and female singers were recorded and analyzed with regard to vocal intensity, fundamental frequency, and open quotient. Fundamental frequency and open quotient are derived from the differentiated electroglottographic signal, using the DECOM (DEgg Correlation-based Open quotient Measurement) method. As male and female phonation may differ with respect to vocal-fold vibratory properties, a distinction is made between two different glottal configurations, which are called laryngeal mechanisms: mechanism 1 (related to chest, modal, and male head register) and mechanism 2 (related to falsetto for male and head register for female). The results show that open quotient depends on the laryngeal mechanisms. It ranges from 0.3 to 0.8 in mechanism 1 and from 0.5 to 0.95 in mechanism 2. The open quotient is strongly related to vocal intensity in mechanism 1 and to fundamental frequency in mechanism 2.
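
    The open quotient itself is simply the fraction of each glottal cycle during which the glottis is open. The sketch below shows the definition only, not the DECOM algorithm (which derives the glottal instants from peaks of the differentiated electroglottographic signal); the event times are invented for illustration.

```python
def open_quotient(opening_times, closing_times):
    """Per-cycle open quotient: open-phase duration divided by the cycle
    period, from per-cycle glottal opening and closing instants (same
    time unit for both lists; closing_times[k] falls between
    opening_times[k] and opening_times[k + 1])."""
    oq = []
    for k in range(len(opening_times) - 1):
        period = opening_times[k + 1] - opening_times[k]
        oq.append((closing_times[k] - opening_times[k]) / period)
    return oq

# 200 Hz phonation (5 ms period, times in ms), closure 3 ms after each
# opening, gives an open quotient of 0.6 in every cycle:
print(open_quotient([0, 5, 10], [3, 8]))   # -> [0.6, 0.6]
```

    On this definition, the 0.3-0.8 and 0.5-0.95 ranges reported above mean the glottis stays open for roughly a third to nearly all of each cycle, depending on the laryngeal mechanism.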

  4. Fade Measurements into Buildings from 500 to 3000 MHz

    NASA Technical Reports Server (NTRS)

    Vogel, Wolfhard J.; Torrence, Geoffrey W.

    1996-01-01

    Slant-path fade measurements from 500 to 3000 MHz were made into six different buildings employing a vector network analyzer, a tower-mounted transmitting antenna and an automatically positioned receiving antenna. The objective of the measurements was to provide information for satellite audio broadcasting and personal communications satellite design on the correlation of fading inside buildings. Fades were measured with 5 cm spatial separation and every 0.2 percent of the frequency. Median fades ranged from 10 to 20 dB in woodframe houses with metal roofs and walls without and with an aluminum heat shield, respectively. The median decorrelation distance was from 0.5 to 1.1 m and was independent of frequency. The attenuation into the buildings increased only moderately with frequency in most of the buildings with a median slope of about 1 to 3 dB/GHz, but increased fastest in the least attenuating building with a slope of 5 dB/GHz. The median decorrelation bandwidth ranged from 1.2 to 3.8 percent of frequency in five of the buildings, and was largest in the least attenuating building, with 20.2 percent of frequency.

  6. Subjective audio quality evaluation of embedded-optimization-based distortion precompensation algorithms.

    PubMed

    Defraene, Bruno; van Waterschoot, Toon; Diehl, Moritz; Moonen, Marc

    2016-07-01

    Subjective audio quality evaluation experiments have been conducted to assess the performance of embedded-optimization-based precompensation algorithms for mitigating perceptible linear and nonlinear distortion in audio signals. It is concluded with statistical significance that the perceived audio quality is improved by applying an embedded-optimization-based precompensation algorithm, both when (i) nonlinear distortion alone and (ii) a combination of linear and nonlinear distortion is present. Moreover, a significant positive correlation is reported between the collected subjective and objective PEAQ audio quality scores, supporting the validity of using PEAQ to predict the impact of linear and nonlinear distortion on the perceived audio quality.

  7. Validation of a digital audio recording method for the objective assessment of cough in the horse.

    PubMed

    Duz, M; Whittaker, A G; Love, S; Parkin, T D H; Hughes, K J

    2010-10-01

    To validate the use of digital audio recording and analysis for quantification of coughing in horses. Part A: Nine simultaneous digital audio and video recordings were collected individually from seven stabled horses over a 1 h period using a digital audio recorder attached to the halter. Audio files were analysed using audio analysis software. Video and audio recordings were analysed for cough count and timing by two blinded operators on two occasions using a randomised study design for determination of intra-operator and inter-operator agreement. Part B: Seventy-eight hours of audio recordings obtained from nine horses were analysed once by two blinded operators to assess inter-operator repeatability on a larger sample. Part A: There was complete agreement between audio and video analyses and inter- and intra-operator analyses. Part B: There was >97% agreement between operators on number and timing of 727 coughs recorded over 78 h. The results of this study suggest that the cough monitor methodology used has excellent sensitivity and specificity for the objective assessment of cough in horses and intra- and inter-operator variability of recorded coughs is minimal. Crown Copyright 2010. Published by Elsevier India Pvt Ltd. All rights reserved.

  8. 47 CFR 73.9005 - Compliance requirements for covered demodulator products: Audio.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... products: Audio. 73.9005 Section 73.9005 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED....9005 Compliance requirements for covered demodulator products: Audio. Except as otherwise provided in §§ 73.9003(a) or 73.9004(a), covered demodulator products shall not output the audio portions of...

  9. 36 CFR 1002.12 - Audio disturbances.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 36 Parks, Forests, and Public Property 3 2014-07-01 2014-07-01 false Audio disturbances. 1002.12... RECREATION § 1002.12 Audio disturbances. (a) The following are prohibited: (1) Operating motorized equipment or machinery such as an electric generating plant, motor vehicle, motorized toy, or an audio device...

  10. 36 CFR 1002.12 - Audio disturbances.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false Audio disturbances. 1002.12... RECREATION § 1002.12 Audio disturbances. (a) The following are prohibited: (1) Operating motorized equipment or machinery such as an electric generating plant, motor vehicle, motorized toy, or an audio device...

  11. 50 CFR 27.72 - Audio equipment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 6 2010-10-01 2010-10-01 false Audio equipment. 27.72 Section 27.72 Wildlife and Fisheries UNITED STATES FISH AND WILDLIFE SERVICE, DEPARTMENT OF THE INTERIOR (CONTINUED) THE... Audio equipment. The operation or use of audio devices including radios, recording and playback devices...

  12. 36 CFR 1002.12 - Audio disturbances.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false Audio disturbances. 1002.12... RECREATION § 1002.12 Audio disturbances. (a) The following are prohibited: (1) Operating motorized equipment or machinery such as an electric generating plant, motor vehicle, motorized toy, or an audio device...

  13. 36 CFR 1002.12 - Audio disturbances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Audio disturbances. 1002.12... RECREATION § 1002.12 Audio disturbances. (a) The following are prohibited: (1) Operating motorized equipment or machinery such as an electric generating plant, motor vehicle, motorized toy, or an audio device...

  14. 50 CFR 27.72 - Audio equipment.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 8 2011-10-01 2011-10-01 false Audio equipment. 27.72 Section 27.72 Wildlife and Fisheries UNITED STATES FISH AND WILDLIFE SERVICE, DEPARTMENT OF THE INTERIOR (CONTINUED) THE... Audio equipment. The operation or use of audio devices including radios, recording and playback devices...

  15. 50 CFR 27.72 - Audio equipment.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 9 2012-10-01 2012-10-01 false Audio equipment. 27.72 Section 27.72 Wildlife and Fisheries UNITED STATES FISH AND WILDLIFE SERVICE, DEPARTMENT OF THE INTERIOR (CONTINUED) THE... Audio equipment. The operation or use of audio devices including radios, recording and playback devices...

  16. 47 CFR 87.483 - Audio visual warning systems.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 5 2014-10-01 2014-10-01 false Audio visual warning systems. 87.483 Section 87... AVIATION SERVICES Stations in the Radiodetermination Service § 87.483 Audio visual warning systems. An audio visual warning system (AVWS) is a radar-based obstacle avoidance system. AVWS activates...

  17. Semantic Context Detection Using Audio Event Fusion

    NASA Astrophysics Data System (ADS)

    Chu, Wei-Ta; Cheng, Wen-Huang; Wu, Ja-Ling

    2006-12-01

    Semantic-level content analysis is a crucial issue in achieving efficient content retrieval and management. We propose a hierarchical approach that models audio events over a time series in order to accomplish semantic context detection. Two levels of modeling, audio event and semantic context modeling, are devised to bridge the gap between physical audio features and semantic concepts. In this work, hidden Markov models (HMMs) are used to model four representative audio events, that is, gunshot, explosion, engine, and car braking, in action movies. At the semantic context level, generative (ergodic hidden Markov model) and discriminative (support vector machine (SVM)) approaches are investigated to fuse the characteristics and correlations among audio events, which provide cues for detecting gunplay and car-chasing scenes. The experimental results demonstrate the effectiveness of the proposed approaches and provide a preliminary framework for information mining by using audio characteristics.

  18. Effect of Audio Coaching on Correlation of Abdominal Displacement With Lung Tumor Motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakamura, Mitsuhiro; Narita, Yuichiro; Matsuo, Yukinori

    2009-10-01

    Purpose: To assess the effect of audio coaching on the time-dependent behavior of the correlation between abdominal motion and lung tumor motion and the corresponding lung tumor position mismatches. Methods and Materials: Six patients who had a lung tumor with a motion range >8 mm were enrolled in the present study. Breathing-synchronized fluoroscopy was performed initially without audio coaching, followed by fluoroscopy with recorded audio coaching for multiple days. Two different measurements, anteroposterior abdominal displacement using the real-time positioning management system and superoinferior (SI) lung tumor motion by X-ray fluoroscopy, were performed simultaneously. Their sequential images were recorded using one display system. The lung tumor position was automatically detected with a template matching technique. The relationship between the abdominal and lung tumor motion was analyzed with and without audio coaching. Results: The mean SI tumor displacement was 10.4 mm without audio coaching and increased to 23.0 mm with audio coaching (p < .01). The correlation coefficients ranged from 0.89 to 0.97 with free breathing. Applying audio coaching, the correlation coefficients improved significantly (range, 0.93-0.99; p < .01), and the SI lung tumor position mismatches became larger in 75% of all sessions. Conclusion: Audio coaching served to increase the degree of correlation and make it more reproducible. In addition, the phase shifts between tumor motion and abdominal displacement were improved; however, all patients breathed more deeply, and the SI lung tumor position mismatches became slightly larger with audio coaching than without audio coaching.
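
    The coefficients reported above are ordinary Pearson correlations between the two simultaneously recorded traces. A self-contained sketch with surrogate signals (the sinusoidal traces and the fixed phase offset are invented for illustration, not patient data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Surrogate traces sampled at 10 Hz: abdominal displacement, a tumor
# trace tracking it linearly, and one lagging by a fixed phase offset.
t = [k / 10.0 for k in range(100)]
abdomen = [math.sin(2 * math.pi * 0.25 * s) for s in t]
tumor_linear = [2.0 * a + 1.0 for a in abdomen]
tumor_lagged = [math.sin(2 * math.pi * 0.25 * s - 0.5) for s in t]
print(round(pearson_r(abdomen, tumor_linear), 2))   # -> 1.0
print(round(pearson_r(abdomen, tumor_lagged), 2))   # < 1: lag lowers r
```

    A phase shift between the abdominal and tumor traces lowers r, which is consistent with the reported improvement in correlation when audio coaching reduced the phase shifts.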

  19. 47 CFR 10.520 - Common audio attention signal.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Common audio attention signal. 10.520 Section... Equipment Requirements § 10.520 Common audio attention signal. A Participating CMS Provider and equipment manufacturers may only market devices for public use under part 10 that include an audio attention signal that...

  20. 36 CFR 2.12 - Audio disturbances.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 1 2012-07-01 2012-07-01 false Audio disturbances. 2.12... RESOURCE PROTECTION, PUBLIC USE AND RECREATION § 2.12 Audio disturbances. (a) The following are prohibited..., motorized toy, or an audio device, such as a radio, television set, tape deck or musical instrument, in a...

  1. 36 CFR 2.12 - Audio disturbances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Audio disturbances. 2.12... RESOURCE PROTECTION, PUBLIC USE AND RECREATION § 2.12 Audio disturbances. (a) The following are prohibited..., motorized toy, or an audio device, such as a radio, television set, tape deck or musical instrument, in a...

  2. 37 CFR 202.22 - Acquisition and deposit of unpublished audio and audiovisual transmission programs.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... unpublished audio and audiovisual transmission programs. 202.22 Section 202.22 Patents, Trademarks, and... REGISTRATION OF CLAIMS TO COPYRIGHT § 202.22 Acquisition and deposit of unpublished audio and audiovisual... and copies of unpublished audio and audiovisual transmission programs by the Library of Congress under...

  3. 36 CFR § 1002.12 - Audio disturbances.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 3 2013-07-01 2012-07-01 true Audio disturbances. § 1002.12... RECREATION § 1002.12 Audio disturbances. (a) The following are prohibited: (1) Operating motorized equipment or machinery such as an electric generating plant, motor vehicle, motorized toy, or an audio device...

  4. 47 CFR 10.520 - Common audio attention signal.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false Common audio attention signal. 10.520 Section... Equipment Requirements § 10.520 Common audio attention signal. A Participating CMS Provider and equipment manufacturers may only market devices for public use under part 10 that include an audio attention signal that...

  5. 37 CFR 202.22 - Acquisition and deposit of unpublished audio and audiovisual transmission programs.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... unpublished audio and audiovisual transmission programs. 202.22 Section 202.22 Patents, Trademarks, and... REGISTRATION OF CLAIMS TO COPYRIGHT § 202.22 Acquisition and deposit of unpublished audio and audiovisual... and copies of unpublished audio and audiovisual transmission programs by the Library of Congress under...

  6. 36 CFR 2.12 - Audio disturbances.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 1 2013-07-01 2013-07-01 false Audio disturbances. 2.12... RESOURCE PROTECTION, PUBLIC USE AND RECREATION § 2.12 Audio disturbances. (a) The following are prohibited..., motorized toy, or an audio device, such as a radio, television set, tape deck or musical instrument, in a...

  7. 37 CFR 202.22 - Acquisition and deposit of unpublished audio and audiovisual transmission programs.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... unpublished audio and audiovisual transmission programs. 202.22 Section 202.22 Patents, Trademarks, and... REGISTRATION OF CLAIMS TO COPYRIGHT § 202.22 Acquisition and deposit of unpublished audio and audiovisual... and copies of unpublished audio and audiovisual transmission programs by the Library of Congress under...

  8. ENERGY STAR Certified Audio Video

    EPA Pesticide Factsheets

    Certified models meet all ENERGY STAR requirements as listed in the Version 3.0 ENERGY STAR Program Requirements for Audio Video Equipment that are effective as of May 1, 2013. A detailed listing of key efficiency criteria is available at http://www.energystar.gov/index.cfm?c=audio_dvd.pr_crit_audio_dvd

  9. 36 CFR 2.12 - Audio disturbances.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 36 Parks, Forests, and Public Property 1 2014-07-01 2014-07-01 false Audio disturbances. 2.12... RESOURCE PROTECTION, PUBLIC USE AND RECREATION § 2.12 Audio disturbances. (a) The following are prohibited..., motorized toy, or an audio device, such as a radio, television set, tape deck or musical instrument, in a...

  10. 47 CFR 11.33 - EAS Decoder.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...: (1) Inputs. Decoders must have the capability to receive at least two audio inputs from EAS... externally, at least two minutes of audio or text messages. A decoder manufactured without an internal means to record and store audio or text must be equipped with a means (such as an audio or digital jack...

  11. 47 CFR 11.33 - EAS Decoder.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...: (1) Inputs. Decoders must have the capability to receive at least two audio inputs from EAS... externally, at least two minutes of audio or text messages. A decoder manufactured without an internal means to record and store audio or text must be equipped with a means (such as an audio or digital jack...

  12. 47 CFR 10.520 - Common audio attention signal.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false Common audio attention signal. 10.520 Section... Equipment Requirements § 10.520 Common audio attention signal. A Participating CMS Provider and equipment manufacturers may only market devices for public use under part 10 that include an audio attention signal that...

  13. 37 CFR 202.22 - Acquisition and deposit of unpublished audio and audiovisual transmission programs.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... unpublished audio and audiovisual transmission programs. 202.22 Section 202.22 Patents, Trademarks, and... REGISTRATION OF CLAIMS TO COPYRIGHT § 202.22 Acquisition and deposit of unpublished audio and audiovisual... and copies of unpublished audio and audiovisual transmission programs by the Library of Congress under...

  14. 47 CFR 10.520 - Common audio attention signal.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false Common audio attention signal. 10.520 Section... Equipment Requirements § 10.520 Common audio attention signal. A Participating CMS Provider and equipment manufacturers may only market devices for public use under part 10 that include an audio attention signal that...

  15. 47 CFR 11.33 - EAS Decoder.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...: (1) Inputs. Decoders must have the capability to receive at least two audio inputs from EAS... externally, at least two minutes of audio or text messages. A decoder manufactured without an internal means to record and store audio or text must be equipped with a means (such as an audio or digital jack...

  16. 36 CFR 2.12 - Audio disturbances.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Audio disturbances. 2.12... RESOURCE PROTECTION, PUBLIC USE AND RECREATION § 2.12 Audio disturbances. (a) The following are prohibited..., motorized toy, or an audio device, such as a radio, television set, tape deck or musical instrument, in a...

  17. Can one "Hear" the aggregation state of a granular system?

    NASA Astrophysics Data System (ADS)

    Kruelle, Christof A.; Sánchez, Almudena García

    2013-06-01

    If an ensemble of macroscopic particles is mechanically agitated, the constant energy input is dissipated into the system by multiple inelastic collisions. As a result, the granular material can exhibit, depending on the magnitude of agitation, several physical states - like a gaseous phase for high energy input or a condensed state for low agitation. Here we introduce a new method for quantifying the acoustical response of the granular system. Our experimental system consists of a monodisperse packing of glass beads with a free upper surface, which is confined inside a cylindrical container. An electro-mechanical shaker exerts a sinusoidal vertical vibration at normalized accelerations well above the fluidization threshold for a monolayer of particles. As the number of beads is increased, the granular gas suddenly collapses once a critical threshold is exceeded. The transition can be detected easily with a microphone connected to the soundcard of a PC. From the recorded audio track an FFT is calculated in real time. Depending on either the number of particles at a fixed acceleration or the amount of energy input for a given number of particles, the resulting rattling noise exhibits a power spectrum with either the dominating (shaker) frequency plus higher harmonics for a granular crystal or a high-frequency broad-band noise for a granular gas, respectively. Our new method demonstrates that it is possible to quantify analytically the subjective audio impressions of a careful listener and thus to distinguish easily between different aggregation states of an excited granular system.
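
The crystal-versus-gas discrimination above comes down to how concentrated the power spectrum is. A toy sketch with synthetic signals standing in for the microphone recording (the shaker frequency, harmonic amplitudes, and sampling rate are assumptions, not measured values):

```python
import numpy as np

fs = 44100                      # soundcard sampling rate, Hz
t = np.arange(fs) / fs          # 1 s of audio
rng = np.random.default_rng(0)

# Illustrative stand-ins for the two aggregation states: a "crystal" rattling
# at the shaker frequency plus harmonics, and a "gas" producing broadband noise.
f_shaker = 50.0
crystal = sum(np.sin(2 * np.pi * k * f_shaker * t) / k for k in (1, 2, 3))
gas = rng.normal(size=fs)

def dominant_fraction(x):
    """Fraction of total spectral power in the single strongest FFT bin."""
    p = np.abs(np.fft.rfft(x)) ** 2
    return p.max() / p.sum()

# A line spectrum (crystal) concentrates power in one bin;
# broadband noise (gas) spreads it across the whole band.
print(dominant_fraction(crystal), dominant_fraction(gas))
```

In the real experiment the same distinction appears as a shaker line plus harmonics versus high-frequency broadband noise in the live FFT display.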

  18. The broadband social acoustic signaling behavior of spinner and spotted dolphins

    NASA Astrophysics Data System (ADS)

    Lammers, Marc O.; Au, Whitlow W. L.; Herzing, Denise L.

    2003-09-01

    Efforts to study the social acoustic signaling behavior of delphinids have traditionally been restricted to audio-range (<20 kHz) analyses. To explore the occurrence of communication signals at ultrasonic frequencies, broadband recordings of whistles and burst pulses were obtained from two commonly studied species of delphinids, the Hawaiian spinner dolphin (Stenella longirostris) and the Atlantic spotted dolphin (Stenella frontalis). Signals were quantitatively analyzed to establish their full bandwidth, to identify distinguishing characteristics between each species, and to determine how often they occur beyond the range of human hearing. Fundamental whistle contours were found to extend beyond 20 kHz only rarely among spotted dolphins, but with some regularity in spinner dolphins. Harmonics were present in the majority of whistles and varied considerably in their number, occurrence, and amplitude. Many whistles had harmonics that extended past 50 kHz and some reached as high as 100 kHz. The relative amplitude of harmonics and the high hearing sensitivity of dolphins to equivalent frequencies suggest that harmonics are biologically relevant spectral features. The burst pulses of both species were found to be predominantly ultrasonic, often with little or no energy below 20 kHz. The findings presented reveal that the social signals produced by spinner and spotted dolphins span the full range of their hearing sensitivity, are spectrally quite varied, and in the case of burst pulses are probably produced more frequently than reported by audio-range analyses.
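
The "beyond the audio range" question above reduces to how much of a signal's spectral energy lies past 20 kHz. A minimal sketch using a synthetic ultrasonic click in place of a recorded burst pulse (the sampling rate and click parameters are assumptions):

```python
import numpy as np

fs = 192_000                         # broadband sampling rate (assumed), Hz
t = np.arange(int(0.05 * fs)) / fs   # 50 ms segment

# Synthetic stand-in for a burst pulse: a decaying tone centered well above
# the audio range (illustrative values, not measured dolphin data).
click = np.sin(2 * np.pi * 60_000 * t) * np.exp(-t / 0.005)

def ultrasonic_energy_fraction(x, fs, cutoff=20_000.0):
    """Fraction of spectral energy above `cutoff` Hz."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    return power[freqs > cutoff].sum() / power.sum()

print(f"{ultrasonic_energy_fraction(click, fs):.2f}")
```

An audio-range (<20 kHz) recording chain would miss nearly all of this signal's energy, which is the paper's point about burst pulses.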

  19. The broadband social acoustic signaling behavior of spinner and spotted dolphins.

    PubMed

    Lammers, Marc O; Au, Whitlow W L; Herzing, Denise L

    2003-09-01

    Efforts to study the social acoustic signaling behavior of delphinids have traditionally been restricted to audio-range (<20 kHz) analyses. To explore the occurrence of communication signals at ultrasonic frequencies, broadband recordings of whistles and burst pulses were obtained from two commonly studied species of delphinids, the Hawaiian spinner dolphin (Stenella longirostris) and the Atlantic spotted dolphin (Stenella frontalis). Signals were quantitatively analyzed to establish their full bandwidth, to identify distinguishing characteristics between each species, and to determine how often they occur beyond the range of human hearing. Fundamental whistle contours were found to extend beyond 20 kHz only rarely among spotted dolphins, but with some regularity in spinner dolphins. Harmonics were present in the majority of whistles and varied considerably in their number, occurrence, and amplitude. Many whistles had harmonics that extended past 50 kHz and some reached as high as 100 kHz. The relative amplitude of harmonics and the high hearing sensitivity of dolphins to equivalent frequencies suggest that harmonics are biologically relevant spectral features. The burst pulses of both species were found to be predominantly ultrasonic, often with little or no energy below 20 kHz. The findings presented reveal that the social signals produced by spinner and spotted dolphins span the full range of their hearing sensitivity, are spectrally quite varied, and in the case of burst pulses are probably produced more frequently than reported by audio-range analyses.

  20. Sounding ruins: reflections on the production of an 'audio drift'.

    PubMed

    Gallagher, Michael

    2015-07-01

    This article is about the use of audio media in researching places, which I term 'audio geography'. The article narrates some episodes from the production of an 'audio drift', an experimental environmental sound work designed to be listened to on a portable MP3 player whilst walking in a ruinous landscape. Reflecting on how this work functions, I argue that, as well as representing places, audio geography can shape listeners' attention and bodily movements, thereby reworking places, albeit temporarily. I suggest that audio geography is particularly apt for amplifying the haunted and uncanny qualities of places. I discuss some of the issues raised for research ethics, epistemology and spectral geographies.

  1. Sounding ruins: reflections on the production of an ‘audio drift’

    PubMed Central

    Gallagher, Michael

    2014-01-01

    This article is about the use of audio media in researching places, which I term ‘audio geography’. The article narrates some episodes from the production of an ‘audio drift’, an experimental environmental sound work designed to be listened to on a portable MP3 player whilst walking in a ruinous landscape. Reflecting on how this work functions, I argue that, as well as representing places, audio geography can shape listeners’ attention and bodily movements, thereby reworking places, albeit temporarily. I suggest that audio geography is particularly apt for amplifying the haunted and uncanny qualities of places. I discuss some of the issues raised for research ethics, epistemology and spectral geographies. PMID:29708107

  2. DETECTOR FOR MODULATED AND UNMODULATED SIGNALS

    DOEpatents

    Patterson, H.H.; Webber, G.H.

    1959-08-25

    An r-f signal-detecting device is described, which is embodied in a compact coaxial circuit principally comprising a detecting crystal diode and a modulating crystal diode connected in parallel. Incoming modulated r-f signals are demodulated by the detecting crystal diode to furnish an audio input to an audio amplifier. The detecting diode will not, however, produce an audio signal from an unmodulated r-f signal. In order that unmodulated signals may be detected, such incoming signals have a locally produced audio signal superimposed on them at the modulating crystal diode and then the"induced or artificially modulated" signal is reflected toward the detecting diode which in the process of demodulation produces an audio signal for the audio amplifier.

  3. Speech information retrieval: a review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hafen, Ryan P.; Henry, Michael J.

    Audio is an information-rich component of multimedia. Information can be extracted from audio in a number of different ways, and thus there are several established audio signal analysis research fields. These fields include speech recognition, speaker recognition, audio segmentation and classification, and audio fingerprinting. The information that can be extracted from tools and methods developed in these fields can greatly enhance multimedia systems. In this paper, we present the current state of research in each of the major audio analysis fields. The goal is to introduce enough background for someone new in the field to quickly gain high-level understanding and to provide direction for further study.

  4. Frequency and time domain three-dimensional inversion of electromagnetic data for a grounded-wire source

    NASA Astrophysics Data System (ADS)

    Sasaki, Yutaka; Yi, Myeong-Jong; Choi, Jihyang; Son, Jeong-Sul

    2015-01-01

    We present frequency- and time-domain three-dimensional (3-D) inversion approaches that can be applied to transient electromagnetic (TEM) data from a grounded-wire source using a PC. In the direct time-domain approach, the forward solution and sensitivity were obtained in the frequency domain using a finite-difference technique, and the frequency response was then Fourier-transformed using a digital filter technique. In the frequency-domain approach, TEM data were Fourier-transformed using a smooth-spectrum inversion method, and the recovered frequency response was then inverted. The synthetic examples show that for the time derivative of magnetic field, frequency-domain inversion of TEM data performs almost as well as time-domain inversion, with a significant reduction in computational time. In our synthetic studies, we also compared the resolution capabilities of the ground and airborne TEM and controlled-source audio-frequency magnetotelluric (CSAMT) data resulting from a common grounded wire. An airborne TEM survey at 200-m elevation achieved a resolution for buried conductors almost comparable to that of the ground TEM method. It is also shown that the inversion of CSAMT data was able to detect a 3-D resistivity structure better than the TEM inversion, suggesting an advantage of electric-field measurements over magnetic-field-only measurements.
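
The frequency-to-time step described above can be sketched with a discrete inverse Fourier transform; a first-order low-pass response stands in here for the Fourier-domain forward solution (an illustrative stand-in under assumed parameters, not the paper's finite-difference solver or digital-filter transform):

```python
import numpy as np

n = 4096
fs = 1000.0                              # sampling rate of the transient, Hz
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
tau = 0.05                               # time constant, s (assumed)
H = 1.0 / (1.0 + 2j * np.pi * freqs * tau)   # low-pass frequency response

# Inverse transform: scaling by fs turns the inverse DFT into an approximation
# of the continuous inverse Fourier integral, recovering h(t) ~ (1/tau) e^(-t/tau).
h = np.fft.irfft(H, n) * fs
print(f"h(10 ms) / h(60 ms) = {h[10] / h[60]:.2f}")   # roughly e: samples one tau apart
```

In the paper this direction (and its inverse, recovering a frequency response from TEM transients) is what lets a frequency-domain forward solver serve a time-domain inversion.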

  5. A digital audio/video interleaving system. [for Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Richards, R. W.

    1978-01-01

    A method of interleaving an audio signal with its associated video signal for simultaneous transmission or recording, and the subsequent separation of the two signals, is described. Comparisons are made between the new audio signal interleaving system and the Skylab PAM audio/video interleaving system, pointing out improvements gained by using the digital audio/video interleaving system. It was found that the digital technique is the simplest, most effective and most reliable method for interleaving audio and/or other types of data into the video signal for the Shuttle Orbiter application. Details are given of the design of a multiplexer capable of accommodating two basic data channels, each consisting of a single 31.5-kb/s digital bit stream. An adaptive slope delta modulation system is introduced to digitize audio signals, producing a high immunity of word intelligibility to channel errors, primarily due to the robust nature of the delta-modulation algorithm.
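
The digitization scheme above can be illustrated with plain fixed-step delta modulation; the Orbiter design used an adaptive-slope variant, so this is a simplified sketch with assumed parameters:

```python
import numpy as np

def delta_mod_encode(x, step):
    """1-bit delta modulation: each bit says whether the tracked estimate
    should step up or down to follow the input."""
    bits, est = [], 0.0
    for sample in x:
        bit = 1 if sample >= est else 0
        est += step if bit else -step
        bits.append(bit)
    return bits

def delta_mod_decode(bits, step):
    """Rebuild the staircase estimate from the bit stream."""
    out, est = [], 0.0
    for bit in bits:
        est += step if bit else -step
        out.append(est)
    return np.array(out)

# Illustrative round trip on a slow sine wave (parameters assumed).
t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 2 * t)
bits = delta_mod_encode(x, step=0.05)
y = delta_mod_decode(bits, step=0.05)
print(f"mean tracking error: {np.mean(np.abs(x - y)):.3f}")
```

The robustness the abstract mentions follows from each bit carrying only a one-step correction: a flipped bit offsets the estimate by two steps rather than corrupting a whole sample, and the adaptive-slope variant additionally grows the step to track steep signals.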

  6. 43 CFR 8365.2-2 - Audio devices.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 43 Public Lands: Interior 2 2013-10-01 2013-10-01 false Audio devices. 8365.2-2 Section 8365.2-2..., DEPARTMENT OF THE INTERIOR RECREATION PROGRAMS VISITOR SERVICES Rules of Conduct § 8365.2-2 Audio devices. On... audio device such as a radio, television, musical instrument, or other noise producing device or...

  7. 43 CFR 8365.2-2 - Audio devices.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 43 Public Lands: Interior 2 2012-10-01 2012-10-01 false Audio devices. 8365.2-2 Section 8365.2-2..., DEPARTMENT OF THE INTERIOR RECREATION PROGRAMS VISITOR SERVICES Rules of Conduct § 8365.2-2 Audio devices. On... audio device such as a radio, television, musical instrument, or other noise producing device or...

  8. 43 CFR 8365.2-2 - Audio devices.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 43 Public Lands: Interior 2 2011-10-01 2011-10-01 false Audio devices. 8365.2-2 Section 8365.2-2..., DEPARTMENT OF THE INTERIOR RECREATION PROGRAMS VISITOR SERVICES Rules of Conduct § 8365.2-2 Audio devices. On... audio device such as a radio, television, musical instrument, or other noise producing device or...

  9. 43 CFR 8365.2-2 - Audio devices.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 43 Public Lands: Interior 2 2014-10-01 2014-10-01 false Audio devices. 8365.2-2 Section 8365.2-2..., DEPARTMENT OF THE INTERIOR RECREATION PROGRAMS VISITOR SERVICES Rules of Conduct § 8365.2-2 Audio devices. On... audio device such as a radio, television, musical instrument, or other noise producing device or...

  10. 78 FR 18416 - Sixth Meeting: RTCA Special Committee 226, Audio Systems and Equipment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ... 226, Audio Systems and Equipment AGENCY: Federal Aviation Administration (FAA), U.S. Department of Transportation (DOT). ACTION: Meeting Notice of RTCA Special Committee 226, Audio Systems and Equipment. SUMMARY... 226, Audio Systems and Equipment. DATES: The meeting will be held April 15-17, 2013 from 9:00 a.m.-5...

  11. Could Audio-Described Films Benefit from Audio Introductions? An Audience Response Study

    ERIC Educational Resources Information Center

    Romero-Fresco, Pablo; Fryer, Louise

    2013-01-01

    Introduction: Time constraints limit the quantity and type of information conveyed in audio description (AD) for films, in particular the cinematic aspects. Inspired by introductory notes for theatre AD, this study developed audio introductions (AIs) for "Slumdog Millionaire" and "Man on Wire." Each AI comprised 10 minutes of…

  12. Audio-Vision: Audio-Visual Interaction in Desktop Multimedia.

    ERIC Educational Resources Information Center

    Daniels, Lee

    Although sophisticated multimedia authoring applications are now available to amateur programmers, the use of audio in these programs has been inadequate. Due to the lack of research on the use of audio in instruction, there are few resources to assist the multimedia producer in using sound effectively and efficiently. This paper addresses the…

  13. A Longitudinal, Quantitative Study of Student Attitudes towards Audio Feedback for Assessment

    ERIC Educational Resources Information Center

    Parkes, Mitchell; Fletcher, Peter

    2017-01-01

    This paper reports on the findings of a three-year longitudinal study investigating the experiences of postgraduate level students who were provided with audio feedback for their assessment. Results indicated that students positively received audio feedback. Overall, students indicated a preference for audio feedback over written feedback. No…

  14. Audio-Tutorial Instruction: A Strategy For Teaching Introductory College Geology.

    ERIC Educational Resources Information Center

    Fenner, Peter; Andrews, Ted F.

    The rationale of audio-tutorial instruction is discussed, and the history and development of the audio-tutorial botany program at Purdue University is described. Audio-tutorial programs in geology at eleven colleges and one school are described, illustrating several ways in which programs have been developed and integrated into courses. Programs…

  15. Audio-video decision support for patients: the documentary genre as a basis for decision aids.

    PubMed

    Volandes, Angelo E; Barry, Michael J; Wood, Fiona; Elwyn, Glyn

    2013-09-01

    Decision support tools are increasingly using audio-visual materials. However, disagreement exists about the use of audio-visual materials as they may be subjective and biased. This is a literature review of the major texts for documentary film studies to extrapolate issues of objectivity and bias from film to decision support tools. The key features of documentary films are that they attempt to portray real events and that the attempted reality is always filtered through the lens of the filmmaker. The same key features can be said of decision support tools that use audio-visual materials. Three concerns arising from documentary film studies as they apply to the use of audio-visual materials in decision support tools include whose perspective matters (stakeholder bias), how to choose among audio-visual materials (selection bias) and how to ensure objectivity (editorial bias). Decision science needs to start a debate about how audio-visual materials are to be used in decision support tools. Simply because audio-visual materials may be subjective and open to bias does not mean that we should not use them. Methods need to be found to ensure consensus around balance and editorial control, such that audio-visual materials can be used. © 2011 John Wiley & Sons Ltd.

  16. Audio Motor Training at the Foot Level Improves Space Representation.

    PubMed

    Aggius-Vella, Elena; Campus, Claudio; Finocchietti, Sara; Gori, Monica

    2017-01-01

    Spatial representation is developed thanks to the integration of visual signals with the other senses. It has been shown that the lack of vision compromises the development of some spatial representations. In this study we tested the effect of a new rehabilitation device called ABBI (Audio Bracelet for Blind Interaction) to improve space representation. ABBI produces audio feedback linked to body movement. Previous studies from our group showed that this device improves the representation of space in early blind adults around the upper part of the body. Here we evaluate whether the audio motor feedback produced by ABBI can also improve audio spatial representation of sighted individuals in the space around the legs. Forty-five blindfolded sighted subjects participated in the study, subdivided into three experimental groups. An audio space localization (front-back discrimination) task was performed twice by all groups of subjects, before and after different kinds of training. One group (experimental) performed audio-motor training with the ABBI device placed on the foot. Another group (control) performed free motor activity without audio feedback associated with body movement. The third group (control) passively listened to the ABBI sound moved at foot level by the experimenter without producing any body movement. Results showed that only the experimental group, which performed the training with audio-motor feedback, showed an improvement in accuracy for sound discrimination. No improvement was observed for the two control groups. These findings suggest that audio-motor training with ABBI improves audio space perception also in the space around the legs in sighted individuals. This result provides important inputs for the rehabilitation of space representation in the lower part of the body.

  17. Audio Motor Training at the Foot Level Improves Space Representation

    PubMed Central

    Aggius-Vella, Elena; Campus, Claudio; Finocchietti, Sara; Gori, Monica

    2017-01-01

    Spatial representation is developed thanks to the integration of visual signals with the other senses. It has been shown that the lack of vision compromises the development of some spatial representations. In this study we tested the effect of a new rehabilitation device called ABBI (Audio Bracelet for Blind Interaction) to improve space representation. ABBI produces audio feedback linked to body movement. Previous studies from our group showed that this device improves the representation of space in early blind adults around the upper part of the body. Here we evaluate whether the audio motor feedback produced by ABBI can also improve audio spatial representation of sighted individuals in the space around the legs. Forty-five blindfolded sighted subjects participated in the study, subdivided into three experimental groups. An audio space localization (front-back discrimination) task was performed twice by all groups of subjects, before and after different kinds of training. One group (experimental) performed audio-motor training with the ABBI device placed on the foot. Another group (control) performed free motor activity without audio feedback associated with body movement. The third group (control) passively listened to the ABBI sound moved at foot level by the experimenter without producing any body movement. Results showed that only the experimental group, which performed the training with audio-motor feedback, showed an improvement in accuracy for sound discrimination. No improvement was observed for the two control groups. These findings suggest that audio-motor training with ABBI improves audio space perception also in the space around the legs in sighted individuals. This result provides important inputs for the rehabilitation of space representation in the lower part of the body. PMID:29326564

  18. Effectiveness and Comparison of Various Audio Distraction Aids in Management of Anxious Dental Paediatric Patients.

    PubMed

    Navit, Saumya; Johri, Nikita; Khan, Suleman Abbas; Singh, Rahul Kumar; Chadha, Dheera; Navit, Pragati; Sharma, Anshul; Bahuguna, Rachana

    2015-12-01

    Dental anxiety is a widespread phenomenon and a concern for paediatric dentistry. The inability of children to deal with threatening dental stimuli often manifests as behaviour management problems. Nowadays, the use of non-aversive behaviour management techniques is more advocated, as they are more acceptable to parents, patients and practitioners. The present study was therefore conducted to find out which audio aid was the most effective in managing anxious children. The aim was to compare the efficacy of audio-distraction aids in reducing the anxiety of paediatric patients undergoing various stressful and invasive dental procedures, to ascertain whether audio distraction is an effective means of anxiety management, and to determine which type of audio aid is the most effective. A total of 150 children, aged between 6 and 12 years, randomly selected amongst the patients who came for their first dental check-up, were placed in five groups of 30 each: a control group, an instrumental music group, a musical nursery rhymes group, a movie songs group and an audio stories group. The control group was treated under a normal set-up, and the audio groups listened to the various audio presentations during treatment. Each child had four visits. In each visit, after the procedure was completed, the anxiety levels of the children were measured by Venham's Picture Test (VPT), Venham's Clinical Rating Scale (VCRS) and pulse rate measurement with a pulse oximeter. A significant difference was seen between the groups for the mean pulse rate, with an increase in subsequent visits. However, no significant difference was seen in the VPT and VCRS scores between the groups. Audio aids in general reduced anxiety in comparison to the control group, and the most significant reduction in anxiety levels was observed in the audio stories group. The conclusion derived from the present study was that audio distraction was effective in reducing anxiety and that audio stories were the most effective.

  19. Methods of recording and analysing cough sounds.

    PubMed

    Subburaj, S; Parvez, L; Rajagopalan, T G

    1996-01-01

    Efforts have been directed to evolve a computerized system for acquisition and multi-dimensional analysis of the cough sound. The system consists of a PC-AT486 computer with an ADC board having 12 bit resolution. The audio cough sound is acquired using a sensitive miniature microphone at a sampling rate of 8 kHz in the computer and simultaneously recorded in real time using a digital audio tape recorder which also serves as a back up. Analysis of the cough sound is done in time and frequency domains using the digitized data which provide numerical values for key parameters like cough counts, bouts, their intensity and latency. In addition, the duration of each event and cough patterns provide a unique tool which allows objective evaluation of antitussive and expectorant drugs. Both on-line and off-line checks ensure error-free performance over long periods of time. The entire system has been evaluated for sensitivity, accuracy, precision and reliability. Successful use of this system in clinical studies has established what perhaps is the first integrated approach for the objective evaluation of cough.
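
The time-domain side of the analysis above (cough counts from an 8 kHz recording) can be sketched with a simple frame-energy threshold; the recording here is synthetic, and the frame length and threshold are assumed values, not the system's actual parameters:

```python
import numpy as np

fs = 8000                       # sampling rate used in the study, Hz
rng = np.random.default_rng(1)

# Synthetic 5 s recording: a quiet noise floor with three short high-energy
# bursts standing in for cough events (illustrative only).
x = 0.01 * rng.normal(size=5 * fs)
for start in (1.0, 2.5, 4.0):
    i = int(start * fs)
    x[i:i + fs // 10] += rng.normal(scale=0.5, size=fs // 10)

def count_events(x, fs, frame=0.05, threshold=0.1):
    """Count contiguous runs of high-RMS frames (one run = one event)."""
    n = int(frame * fs)
    frames = x[: len(x) // n * n].reshape(-1, n)
    active = np.sqrt((frames ** 2).mean(axis=1)) > threshold
    # rising edges: frames where activity switches from off to on
    return int(np.sum(np.diff(active.astype(int)) == 1) + active[0])

print(count_events(x, fs))
```

The real system adds frequency-domain features and per-event intensity and latency on top of this kind of event segmentation.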

  20. An Audio Jack-Based Electrochemical Impedance Spectroscopy Sensor for Point-of-Care Diagnostics.

    PubMed

    Jiang, Haowei; Sun, Alex; Venkatesh, A G; Hall, Drew A

    2017-02-01

    Portable and easy-to-use point-of-care (POC) diagnostic devices hold high promise for dramatically improving public health and wellness. In this paper, we present a mobile health (mHealth) immunoassay platform based on audio jack embedded devices, such as smartphones and laptops, that uses electrochemical impedance spectroscopy (EIS) to detect binding of target biomolecules. Compared to other biomolecular detection tools, this platform is intended to be used as a plug-and-play peripheral that reuses existing hardware in the mobile device and does not require an external battery, thereby improving upon its convenience and portability. Experimental data using a passive circuit network to mimic an electrochemical cell demonstrate that the device performs comparably to laboratory grade instrumentation with 0.3% and 0.5° magnitude and phase error, respectively, over a 17 Hz to 17 kHz frequency range. The measured power consumption is 2.5 mW with a dynamic range of 60 dB. This platform was verified by monitoring the real-time formation of a NeutrAvidin self-assembled monolayer (SAM) on a gold electrode demonstrating the potential for POC diagnostics.
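
The EIS measurement above characterizes an electrochemical cell by its impedance magnitude and phase across the audio band. As a sketch of the quantity being measured, here is the impedance of a simplified Randles-type circuit (the paper validated against a passive network; the component values below are illustrative assumptions):

```python
import cmath
import math

def randles_impedance(f, r_s, r_ct, c_dl):
    """Impedance of a simplified Randles cell: R_s in series with
    (R_ct parallel C_dl), a common stand-in for an electrochemical cell."""
    omega = 2 * math.pi * f
    z_c = 1 / (1j * omega * c_dl)
    return r_s + (r_ct * z_c) / (r_ct + z_c)

# Sweep the audio-jack band reported in the paper (17 Hz - 17 kHz).
for f in (17, 170, 1700, 17000):
    z = randles_impedance(f, r_s=100.0, r_ct=10_000.0, c_dl=1e-6)
    print(f"{f:>6} Hz  |Z| = {abs(z):8.1f} ohm  phase = {math.degrees(cmath.phase(z)):6.1f} deg")
```

The expected behavior is high impedance at low frequency (charge-transfer resistance dominates) rolling off toward the series resistance at high frequency, which is what a binding event shifts and what the platform tracks in real time.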

  1. Audio magnetotelluric study applied to hydrogeology at Santo Tomás Valley, Baja California, México

    NASA Astrophysics Data System (ADS)

    Islas, A. C.; Romo, J. M.

    2009-12-01

    The Santo Tomás valley, located 50 km southeast of Ensenada, Baja California, is one of the most important viniculture zones in all of Mexico. Therefore, aquifer characterization is very important for the area. A geophysical study was conducted using the audio-magnetotelluric method (AMT) to determinate the electric conductivity of the basin. 82 AMT stations were measured in three profiles with a North-South orientation. Data was collected using a Stratagem EH4 (by Geometrics) in frequencies between 10 Hz to 100 kHz. To determinate basement and water table depths we made 2D ground resistivity models, using an inversion regularized algorithm. The results show a conductive zone from a few meters up to depths of 200 meters; this unit can be interpreted as the aquifer zone. The models show a less conductive zone (~1000 Ohm-m) in the first 20 meters, which is interpreted as the vadose zone. Finally, we have a very resistive unit corresponding to the basement, estimated around 200 meters depth.

  2. The Pulsar Quartet: Listening to a Galactic Symphony

    NASA Astrophysics Data System (ADS)

    Kiziltan, Bülent

    2014-06-01

    Pulsars are exotic dead stars that emit very regular radio pulses, attributed to their regular rotation. Some pulsars spin fast enough that the audio-equivalent waveform of their pulses falls within our hearing range. If human ears were tuned to radio waves, it would be possible to 'hear' these very compact stars. We produced the audio waveform of these pulsar signals and mapped them onto a frequency chart to find the corresponding musical notes. We use these 'audible' pulsars like musical instruments in a symphony orchestra to play a full quartet. At the same time, an accompanying visual interface shows the realistic distribution of all pulsars in our own Galaxy. Pulsars shine as they play each note in the quartet, with realistic brightening and subsequent dimming proportional to their rotational energies. This can serve as an educational tool at all levels to demonstrate many interesting aspects of stellar evolution and articulate an aesthetic connection between us and the cosmos. Interested in watching the light show while the Milky Way Pulsar Orchestra plays a quartet?
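    Mapping a pulse frequency onto the nearest note, as the sonification describes, is a one-line logarithm under the standard equal-temperament convention (A4 = 440 Hz, 12 semitones per octave). The helper name is ours; the project's actual mapping may differ:

    ```python
    import math

    A4 = 440.0  # reference pitch (Hz)

    def nearest_note(freq_hz):
        """Map a spin/pulse frequency to the nearest equal-tempered note name."""
        names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
        # MIDI convention: note 69 is A4; 12 semitones per doubling of frequency.
        midi = round(69 + 12 * math.log2(freq_hz / A4))
        return names[midi % 12] + str(midi // 12 - 1)
    ```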

  3. Responding Effectively to Composition Students: Comparing Student Perceptions of Written and Audio Feedback

    ERIC Educational Resources Information Center

    Bilbro, J.; Iluzada, C.; Clark, D. E.

    2013-01-01

    The authors compared student perceptions of audio and written feedback in order to assess what types of students may benefit from receiving audio feedback on their essays rather than written feedback. Many instructors previously have reported the advantages they see in audio feedback, but little quantitative research has been done on how the…

  4. Design and Usability Testing of an Audio Platform Game for Players with Visual Impairments

    ERIC Educational Resources Information Center

    Oren, Michael; Harding, Chris; Bonebright, Terri L.

    2008-01-01

    This article reports on the evaluation of a novel audio platform game that creates a spatial, interactive experience via audio cues. A pilot study with players with visual impairments, and usability testing comparing the visual and audio game versions using both sighted players and players with visual impairments, revealed that all the…

  5. 78 FR 57673 - Eighth Meeting: RTCA Special Committee 226, Audio Systems and Equipment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-19

    ... Committee 226, Audio Systems and Equipment AGENCY: Federal Aviation Administration (FAA), U.S. Department of Transportation (DOT). ACTION: Meeting Notice of RTCA Special Committee 226, Audio Systems and Equipment. SUMMARY... Committee 226, Audio Systems and Equipment. DATES: The meeting will be held October 8-10, 2012 from 9:00 a.m...

  6. 77 FR 37732 - Fourteenth Meeting: RTCA Special Committee 224, Audio Systems and Equipment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-22

    ... Committee 224, Audio Systems and Equipment AGENCY: Federal Aviation Administration (FAA), U.S. Department of Transportation (DOT). ACTION: Meeting Notice of RTCA Special Committee 224, Audio Systems and Equipment. SUMMARY... Committee 224, Audio Systems and Equipment. DATES: The meeting will be held July 11, 2012, from 10 a.m.-4 p...

  7. The Use of Asynchronous Audio Feedback with Online RN-BSN Students

    ERIC Educational Resources Information Center

    London, Julie E.

    2013-01-01

    The use of audio technology by online nursing educators is a recent phenomenon. Research has been conducted in the area of audio technology in different domains and populations, but very few researchers have focused on nursing. Preliminary results have indicated that using audio in place of text can increase student cognition and socialization.…

  8. Computerized Audio-Visual Instructional Sequences (CAVIS): A Versatile System for Listening Comprehension in Foreign Language Teaching.

    ERIC Educational Resources Information Center

    Aleman-Centeno, Josefina R.

    1983-01-01

    Discusses the development and evaluation of CAVIS, which consists of an Apple microcomputer used with audiovisual dialogs. Includes research on the effects of three conditions: (1) computer with audio and visual, (2) computer with audio alone and (3) audio alone in short-term and long-term recall. (EKN)

  9. Low-delay predictive audio coding for the HIVITS HDTV codec

    NASA Astrophysics Data System (ADS)

    McParland, A. K.; Gilchrist, N. H. C.

    1995-01-01

    The status of work relating to predictive audio coding, as part of the European project on High Quality Video Telephone and HD(TV) Systems (HIVITS), is reported. The predictive coding algorithm is developed, along with six-channel audio coding and decoding hardware. Demonstrations of the audio codec operating in conjunction with the video codec are given.

  10. 47 CFR 73.402 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Digital Audio Broadcasting § 73.402 Definitions. (a) DAB. Digital audio broadcast stations are those radio... into multiple channels for additional audio programming uses. (g) Datacasting. Subdividing the digital...

  11. 47 CFR 73.402 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Digital Audio Broadcasting § 73.402 Definitions. (a) DAB. Digital audio broadcast stations are those radio... into multiple channels for additional audio programming uses. (g) Datacasting. Subdividing the digital...

  12. 47 CFR 73.402 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Digital Audio Broadcasting § 73.402 Definitions. (a) DAB. Digital audio broadcast stations are those radio... into multiple channels for additional audio programming uses. (g) Datacasting. Subdividing the digital...

  13. 47 CFR 73.402 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Digital Audio Broadcasting § 73.402 Definitions. (a) DAB. Digital audio broadcast stations are those radio... into multiple channels for additional audio programming uses. (g) Datacasting. Subdividing the digital...

  14. Audio-visual biofeedback for respiratory-gated radiotherapy: Impact of audio instruction and audio-visual biofeedback on respiratory-gated radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George, Rohini; Department of Biomedical Engineering, Virginia Commonwealth University, Richmond, VA; Chung, Theodore D.

    2006-07-01

    Purpose: Respiratory gating is a commercially available technology for reducing the deleterious effects of motion during imaging and treatment. The efficacy of gating is dependent on the reproducibility within and between respiratory cycles during imaging and treatment. The aim of this study was to determine whether audio-visual biofeedback can improve respiratory reproducibility by decreasing residual motion and therefore increasing the accuracy of gated radiotherapy. Methods and Materials: A total of 331 respiratory traces were collected from 24 lung cancer patients. The protocol consisted of five breathing training sessions spaced about a week apart. Within each session the patients initially breathed without any instruction (free breathing), with audio instructions and with audio-visual biofeedback. Residual motion was quantified by the standard deviation of the respiratory signal within the gating window. Results: Audio-visual biofeedback significantly reduced residual motion compared with free breathing and audio instruction. Displacement-based gating has lower residual motion than phase-based gating. Little reduction in residual motion was found for duty cycles less than 30%; for duty cycles above 50% there was a sharp increase in residual motion. Conclusions: The efficiency and reproducibility of gating can be improved by: incorporating audio-visual biofeedback, using a 30-50% duty cycle, gating during exhalation, and using displacement-based gating.
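    The residual-motion metric above (standard deviation of the respiratory signal inside the gating window) can be sketched for displacement-based gating around exhalation. Selecting the window by a displacement quantile so that a given fraction of samples falls inside it is our simplification, not the study's software:

    ```python
    import numpy as np

    def residual_motion(trace, duty_cycle):
        """Residual motion for displacement-based exhalation gating.

        The gating window covers the lowest displacements, sized so that
        `duty_cycle` (0-1) of the samples fall inside it; residual motion
        is the standard deviation of the signal within that window.
        """
        trace = np.asarray(trace, dtype=float)
        threshold = np.quantile(trace, duty_cycle)  # beam on below this level
        gated = trace[trace <= threshold]
        return gated.std()
    ```

    On a periodic trace this reproduces the paper's qualitative finding that residual motion grows with duty cycle.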

  15. Propagation of sound in highly porous open-cell elastic foams

    NASA Technical Reports Server (NTRS)

    Lambert, R. F.

    1983-01-01

    This work presents both theoretical predictions and experimental measurements of attenuation and progressive phase constants of sound in open-cell, highly porous, elastic polyurethane foams. The foams are available commercially in graded pore sizes for which information about the static flow resistance, thermal time constant, volume porosity, dynamic structure factor, and speed of sound is known. The analysis is specialized to highly porous foams which can be efficient sound absorbers at audio frequencies. Negligible effect of internal wave coupling on attenuation and phase shift for the frequency range 16-6000 Hz was predicted and no experimentally significant effects were observed in the bulk samples studied. The agreement between predictions and measurements in bulk materials is excellent. The analysis is applicable to both the regular and compressed elastic open-cell foams.

  16. Comparing the Effects of Classroom Audio-Recording and Video-Recording on Preservice Teachers' Reflection of Practice

    ERIC Educational Resources Information Center

    Bergman, Daniel

    2015-01-01

    This study examined the effects of audio and video self-recording on preservice teachers' written reflections. Participants (n = 201) came from a secondary teaching methods course and its school-based (clinical) fieldwork. The audio group (n[subscript A] = 106) used audio recorders to monitor their teaching in fieldwork placements; the video group…

  17. Transana Qualitative Video and Audio Analysis Software as a Tool for Teaching Intellectual Assessment Skills to Graduate Psychology Students

    ERIC Educational Resources Information Center

    Rush, S. Craig

    2014-01-01

    This article draws on the author's experience using qualitative video and audio analysis, most notably through use of the Transana qualitative video and audio analysis software program, as an alternative method for teaching IQ administration skills to students in a graduate psychology program. Qualitative video and audio analysis may be useful for…

  18. Development and Assessment of Web Courses That Use Streaming Audio and Video Technologies.

    ERIC Educational Resources Information Center

    Ingebritsen, Thomas S.; Flickinger, Kathleen

    Iowa State University, through a program called Project BIO (Biology Instructional Outreach), has been using RealAudio technology for about 2 years in college biology courses that are offered entirely via the World Wide Web. RealAudio is a type of streaming media technology that can be used to deliver audio content and a variety of other media…

  19. Audio distribution and Monitoring Circuit

    NASA Technical Reports Server (NTRS)

    Kirkland, J. M.

    1983-01-01

    Versatile circuit accepts and distributes TV audio signals. The three-meter audio distribution and monitoring circuit provides flexibility in monitoring, mixing, and distributing audio inputs and outputs at various signal and impedance levels. Program material is simultaneously monitored on three channels, or a single-channel version can be built to monitor transmitted or received signal levels, drive speakers, interface to building communications, and drive long-line circuits.

  20. Hearing You Loud and Clear: Student Perspectives of Audio Feedback in Higher Education

    ERIC Educational Resources Information Center

    Gould, Jill; Day, Pat

    2013-01-01

    The use of audio feedback for students in a full-time community nursing degree course is appraised. The aim of this mixed methods study was to examine student views on audio feedback for written assignments. Questionnaires and a focus group were used to capture student opinion of this pilot project. The majority of students valued audio feedback…

  1. How we give personalised audio feedback after summative OSCEs.

    PubMed

    Harrison, Christopher J; Molyneux, Adrian J; Blackwell, Sara; Wass, Valerie J

    2015-04-01

    Students often receive little feedback after summative objective structured clinical examinations (OSCEs) to enable them to improve their performance. Electronic audio feedback has shown promise in other educational areas. We investigated the feasibility of electronic audio feedback in OSCEs. An electronic OSCE system was designed, comprising (1) an application for iPads allowing examiners to mark in the key consultation skill domains, provide "tick-box" feedback identifying strengths and difficulties, and record voice feedback; (2) a feedback website giving students the opportunity to view/listen in multiple ways to the feedback. Acceptability of the audio feedback was investigated, using focus groups with students and questionnaires with both examiners and students. 87 (95%) students accessed the examiners' audio comments; 83 (90%) found the comments useful and 63 (68%) reported changing the way they perform a skill as a result of the audio feedback. They valued its highly personalised, relevant nature and found it much more useful than written feedback. Eighty-nine per cent of examiners gave audio feedback to all students on their stations. Although many found the method easy, lack of time was a factor. Electronic audio feedback provides timely, personalised feedback to students after a summative OSCE provided enough time is allocated to the process.

  2. Space Shuttle Orbiter audio subsystem. [to communication and tracking system

    NASA Technical Reports Server (NTRS)

    Stewart, C. H.

    1978-01-01

    The selection of the audio multiplex control configuration for the Space Shuttle Orbiter audio subsystem is discussed and special attention is given to the evaluation criteria of cost, weight and complexity. The specifications and design of the subsystem are described and detail is given to configurations of the audio terminal and audio central control unit (ATU, ACCU). The audio input from the ACCU, at a signal level of -12.2 to 14.8 dBV, nominal range, at 1 kHz, was found to have balanced source impedance and a balanced local impedance of 6000 ± 600 ohms at 1 kHz, dc isolated. The Lyndon B. Johnson Space Center (JSC) electroacoustic test laboratory, an audio engineering facility consisting of a collection of acoustic test chambers, analyzed problems of speaker and headset performance, multiplexed control data coupled with audio channels, and the Orbiter cabin acoustic effects on the operational performance of voice communications. This system allows technical management and project engineering to address key constraining issues, such as identifying design deficiencies of the headset interface unit and the assessment of the Orbiter cabin performance of voice communications, which affect the subsystem development.

  3. Spatialized audio improves call sign recognition during multi-aircraft control.

    PubMed

    Kim, Sungbin; Miller, Michael E; Rusnock, Christina F; Elshaw, John J

    2018-07-01

    We investigated the impact of a spatialized audio display on response time, workload, and accuracy while monitoring auditory information for relevance. The human ability to differentiate sound direction implies that spatial audio may be used to encode information. Therefore, it is hypothesized that spatial audio cues can be applied to aid differentiation of critical versus noncritical verbal auditory information. We used a human performance model and a laboratory study involving 24 participants to examine the effect of applying a notional, automated parser to present audio in a particular ear depending on information relevance. Operator workload and performance were assessed while subjects listened for and responded to relevant audio cues associated with critical information among additional noncritical information. Encoding relevance through spatial location in a spatial audio display system--as opposed to monophonic, binaural presentation--significantly reduced response time and workload, particularly for noncritical information. Future auditory displays employing spatial cues to indicate relevance have the potential to reduce workload and improve operator performance in similar task domains. Furthermore, these displays have the potential to reduce the dependence of workload and performance on the number of audio cues. Published by Elsevier Ltd.

  4. Implementing Audio-CASI on Windows’ Platforms

    PubMed Central

    Cooley, Philip C.; Turner, Charles F.

    2011-01-01

    Audio computer-assisted self interviewing (Audio-CASI) technologies have recently been shown to provide important and sometimes dramatic improvements in the quality of survey measurements. This is particularly true for measurements requiring respondents to divulge highly sensitive information such as their sexual, drug use, or other sensitive behaviors. However, DOS-based Audio-CASI systems that were designed and adopted in the early 1990s have important limitations. Most salient is the poor control they provide for manipulating the video presentation of survey questions. This article reports our experiences adapting Audio-CASI to Microsoft Windows 3.1 and Windows 95 platforms. Overall, our Windows-based system provided the desired control over video presentation and afforded other advantages, including compatibility with a much wider array of audio devices than our DOS-based Audio-CASI technologies. These advantages came at the cost of increased system requirements, including the need for both more RAM and larger hard disks. While these costs will be an issue for organizations converting large inventories of PCs to Windows Audio-CASI today, this will not be a serious constraint for organizations and individuals with small inventories of machines to upgrade or those purchasing new machines today. PMID:22081743

  5. Audio Steganography with Embedded Text

    NASA Astrophysics Data System (ADS)

    Teck Jian, Chua; Chai Wen, Chuah; Rahman, Nurul Hidayah Binti Ab.; Hamid, Isredza Rahmi Binti A.

    2017-08-01

    Audio steganography is about hiding a secret message inside audio. It is a technique used to secure the transmission of secret information or to hide its existence, and it can also provide confidentiality if the message is encrypted. To date, most steganography software, such as Mp3Stego and DeepSound, uses block ciphers such as the Advanced Encryption Standard or the Data Encryption Standard to encrypt the secret message. This is good security practice; however, the encrypted message may become too long to embed in the audio and cause distortion of the cover audio if the secret message is long. Hence, there is a need to encrypt the message with a stream cipher before embedding it into the audio, because a stream cipher encrypts bit by bit, whereas a block cipher encrypts fixed-length blocks and can produce longer output than a stream cipher. Hence, an audio steganography scheme that embeds text encrypted with the Rivest Cipher 4 (RC4) stream cipher is designed, developed, and tested in this project.
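    The abstract's core argument, that a stream cipher keeps the ciphertext exactly as long as the plaintext before LSB embedding, can be sketched as follows. RC4 appears only because the project names it (it is cryptographically broken and unsuitable for real use), and the helper names and one-bit-per-sample layout are our assumptions:

    ```python
    def rc4_keystream(key: bytes, n: int) -> bytes:
        """Generate n bytes of RC4 keystream (standard KSA + PRGA)."""
        S = list(range(256))
        j = 0
        for i in range(256):                       # key-scheduling algorithm
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        out, i, j = [], 0, 0
        for _ in range(n):                         # pseudo-random generation
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(S[(S[i] + S[j]) % 256])
        return bytes(out)

    def embed_lsb(samples, message: bytes, key: bytes):
        """XOR the message with RC4 keystream, then hide one bit per sample LSB."""
        cipher = bytes(m ^ k for m, k in zip(message, rc4_keystream(key, len(message))))
        bits = [(byte >> b) & 1 for byte in cipher for b in range(8)]  # LSB-first
        stego = list(samples)
        for idx, bit in enumerate(bits):
            stego[idx] = (stego[idx] & ~1) | bit   # each sample changes by at most 1
        return stego

    def extract_lsb(stego, n_bytes: int, key: bytes) -> bytes:
        """Recover and decrypt n_bytes hidden by embed_lsb."""
        data = bytearray()
        for byte_i in range(n_bytes):
            value = 0
            for b in range(8):
                value |= (stego[byte_i * 8 + b] & 1) << b
            data.append(value)
        ks = rc4_keystream(key, n_bytes)
        return bytes(c ^ k for c, k in zip(data, ks))
    ```

    Because the cipher text has the same length as the message, the number of modified samples is fixed by the payload alone, which is the distortion advantage the abstract attributes to stream ciphers.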

  6. High capacity reversible watermarking for audio by histogram shifting and predicted error expansion.

    PubMed

    Wang, Fei; Xie, Zhaoxin; Chen, Zuo

    2014-01-01

    Being reversible, the watermarking information embedded in audio signals can be extracted while the original audio data achieve lossless recovery. Currently, the few reversible audio watermarking algorithms are confronted with the following problems: relatively low SNR (signal-to-noise ratio) of the embedded audio; a large amount of auxiliary embedded location information; and the absence of accurate capacity control. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to output the stego data. Second, in order to reduce the location map bit length, we introduce a histogram shifting scheme. Meanwhile, the prediction error modification threshold for a given embedding capacity can be computed by the proposed scheme. Experiments show that this algorithm improves the SNR of embedded audio signals and the embedding capacity, drastically reduces the location map bit length, and enhances capacity control.
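    A stripped-down version of prediction error expansion, with the outward shift that keeps expanded and unexpanded errors separable (the histogram-shifting idea), can be sketched as below. The previous-sample predictor and fixed threshold are simplifications of the paper's differential-evolution-optimized prediction coefficients:

    ```python
    def pee_embed(samples, bits, T=4):
        """Embed `bits` into integer samples; reversible given T and len(bits)."""
        out = [samples[0]]                   # first sample carries no payload
        k = 0
        for x in samples[1:]:
            pred = out[-1]                   # predictor: previous stego sample
            e = x - pred
            if k == len(bits):
                out.append(x)                # payload done: leave rest untouched
            elif -T <= e < T:
                out.append(pred + 2 * e + bits[k])   # expandable: embed one bit
                k += 1
            elif e >= T:
                out.append(pred + e + T)     # shift up so ranges never overlap
            else:
                out.append(pred + e - T)     # shift down likewise
        if k < len(bits):
            raise ValueError("payload exceeds capacity")
        return out

    def pee_extract(stego, n_bits, T=4):
        """Recover the original samples and the n_bits payload."""
        bits, out = [], [stego[0]]
        for i in range(1, len(stego)):
            pred = stego[i - 1]
            e2 = stego[i] - pred
            if len(bits) == n_bits:
                out.append(stego[i])
            elif -2 * T <= e2 < 2 * T:
                bits.append(e2 & 1)          # expanded error carries the bit
                out.append(pred + (e2 >> 1))
            elif e2 >= 2 * T:
                out.append(pred + e2 - T)
            else:
                out.append(pred + e2 + T)
        return out, bits
    ```

    The threshold T directly trades capacity (how many errors are expandable) against distortion, which is the capacity-control knob the abstract refers to.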

  7. Audio-visual speech experience with age influences perceived audio-visual asynchrony in speech.

    PubMed

    Alm, Magnus; Behne, Dawn

    2013-10-01

    Previous research indicates that perception of audio-visual (AV) synchrony changes in adulthood. Possible explanations for these age differences include a decline in hearing acuity, a decline in cognitive processing speed, and increased experience with AV binding. The current study aims to isolate the effect of AV experience by comparing synchrony judgments from 20 young adults (20 to 30 yrs) and 20 normal-hearing middle-aged adults (50 to 60 yrs), an age range for which a decline of cognitive processing speed is expected to be minimal. When presented with AV stop consonant syllables with asynchronies ranging from 440 ms audio-lead to 440 ms visual-lead, middle-aged adults showed significantly less tolerance for audio-lead than young adults. Middle-aged adults also showed a greater shift in their point of subjective simultaneity than young adults. Natural audio-lead asynchronies are arguably more predictable than natural visual-lead asynchronies, and this predictability may render audio-lead thresholds more prone to experience-related fine-tuning.

  8. WebGL and web audio software lightweight components for multimedia education

    NASA Astrophysics Data System (ADS)

    Chang, Xin; Yuksel, Kivanc; Skarbek, Władysław

    2017-08-01

    The paper presents the results of our recent work on the development of the contemporary computing platform DC2 for multimedia education using WebGL and Web Audio, the W3C standards. Using the literate programming paradigm, the WEBSA educational tools were developed. They offer the user (student) access to an expandable collection of WebGL shaders and Web Audio scripts. The unique feature of DC2 is the option of literate programming, offered to both the author and the reader in order to improve the interactivity of lightweight WebGL and Web Audio components. For instance, users can define source audio nodes (including synthetic sources), destination audio nodes, and nodes for audio processing such as sound wave shaping, spectral band filtering, and convolution-based modification. In the case of WebGL, besides classic graphics effects based on mesh and fractal definitions, novel image processing and analysis by shaders is offered, such as nonlinear filtering, histograms of gradients, and Bayesian classifiers.

  9. Radio broadcasting via satellite

    NASA Astrophysics Data System (ADS)

    Helm, Neil R.; Pritchard, Wilbur L.

    1990-10-01

    Market areas offering potential for future narrowband broadcast satellites are examined, including international public diplomacy, government- and advertising-supported, and business-application usages. Technical issues such as frequency allocation, spacecraft types, transmission parameters, and radio receiver characteristics are outlined. Service and system requirements, advertising revenue, and business communications services are among the economic issues discussed. The institutional framework required to provide an operational radio broadcast service is studied, and new initiatives in direct broadcast audio radio systems, encompassing studies, tests, in-orbit demonstrations of, and proposals for national and international commercial broadcast services are considered.

  10. MSFC Skylab instrumentation and communication system mission evaluation

    NASA Technical Reports Server (NTRS)

    Adair, B. M.

    1974-01-01

    An evaluation of the in-orbit performance of the instrumentation and communications systems installed on Skylab is presented. Performance is compared with functional requirements and the fidelity of communications. In-orbit performance includes processing engineering, scientific, experiment, and biomedical data, implementing ground-generated commands, audio and video communication, generating rendezvous ranging information, and radio frequency transmission and reception. A history of the system evolution based on the functional requirements and a physical description of the launch configuration is included. The report affirms that the instrumentation and communication system satisfied all imposed requirements.

  11. Audio-based deep music emotion recognition

    NASA Astrophysics Data System (ADS)

    Liu, Tong; Han, Li; Ma, Liangkai; Guo, Dongwei

    2018-05-01

    With the rapid development of multimedia networking, more and more songs are issued through the Internet and stored in large digital music libraries. However, music information retrieval on these libraries can be difficult, and the recognition of musical emotion is especially challenging. In this paper, we report a strategy to recognize the emotion contained in songs by classifying their spectrograms, which contain both time and frequency information, with a convolutional neural network (CNN). The experiments conducted on the 1000-song dataset indicate that the proposed model outperforms traditional machine learning methods.
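    The spectrogram input such a model classifies can be sketched with a plain short-time Fourier transform; the window and hop sizes below are arbitrary illustrative choices, not the paper's:

    ```python
    import numpy as np

    def spectrogram(signal, win=1024, hop=512):
        """Magnitude spectrogram (frames x frequency bins): the time-frequency
        image a CNN can classify.  Hann-windowed frames, one real FFT each."""
        window = np.hanning(win)
        n_frames = 1 + (len(signal) - win) // hop
        frames = np.stack([signal[i * hop : i * hop + win] * window
                           for i in range(n_frames)])
        return np.abs(np.fft.rfft(frames, axis=1))  # shape (n_frames, win//2 + 1)
    ```

    Each row is one time slice and each column one frequency bin, so the array can be fed to a 2D convolutional layer like an image.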

  12. ASTP video tape recorder ground support equipment (audio/CTE splitter/interleaver). Operations manual

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A descriptive handbook for the audio/CTE splitter/interleaver (RCA part No. 8673734-502) was presented. This unit is designed to perform two major functions: extract audio and time data from an interleaved video/audio signal (splitter section), and provide a test interleaved video/audio/CTE signal for the system (interleaver section). It is a rack mounting unit 7 inches high, 19 inches wide, 20 inches deep, mounted on slides for retracting from the rack, and weighs approximately 40 pounds. The following information is provided: installation, operation, principles of operation, maintenance, schematics and parts lists.

  13. Fuzzy Logic-Based Audio Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Malcangi, M.

    2008-11-01

    Audio and audio-pattern recognition is becoming one of the most important technologies to automatically control embedded systems. Fuzzy logic may be the most important enabling methodology due to its ability to rapidly and economically model such applications. An audio and audio-pattern recognition engine based on fuzzy logic has been developed for use in very low-cost and deeply embedded systems to automate human-to-machine and machine-to-machine interaction. This engine consists of simple digital signal-processing algorithms for feature extraction and normalization, and a set of pattern-recognition rules manually tuned or automatically tuned by a self-learning process.

  14. Paper-Based Textbooks with Audio Support for Print-Disabled Students.

    PubMed

    Fujiyoshi, Akio; Ohsawa, Akiko; Takaira, Takuya; Tani, Yoshiaki; Fujiyoshi, Mamoru; Ota, Yuko

    2015-01-01

    Utilizing invisible 2-dimensional codes and digital audio players with a 2-dimensional code scanner, we developed paper-based textbooks with audio support for students with print disabilities, called "multimodal textbooks." Multimodal textbooks can be read with the combination of the two modes: "reading printed text" and "listening to the speech of the text from a digital audio player with a 2-dimensional code scanner." Since multimodal textbooks look the same as regular textbooks and the price of a digital audio player is reasonable (about 30 euro), we think multimodal textbooks are suitable for students with print disabilities in ordinary classrooms.

  15. "Singing in the Tube"--audiovisual assay of plant oil repellent activity against mosquitoes (Culex pipiens).

    PubMed

    Adams, Temitope F; Wongchai, Chatchawal; Chaidee, Anchalee; Pfeiffer, Wolfgang

    2016-01-01

    Plant essential oils have been suggested as a promising alternative to the established mosquito repellent DEET (N,N-diethyl-meta-toluamide). Searching for an assay with generally available equipment, we designed a new audiovisual assay of repellent activity against mosquitoes "Singing in the Tube," testing single mosquitoes in Drosophila cultivation tubes. Statistics with regression analysis should compensate for limitations of simple hardware. The assay was established with female Culex pipiens mosquitoes in 60 experiments, 120-h audio recording, and 2580 estimations of the distance between mosquito sitting position and the chemical. Correlations between parameters of sitting position, flight activity pattern, and flight tone spectrum were analyzed. Regression analysis of psycho-acoustic data of audio files (dB[A]) used a squared and modified sinus function determining wing beat frequency WBF ± SD (357 ± 47 Hz). Application of logistic regression defined the repelling velocity constant. The repelling velocity constant showed a decreasing order of efficiency of plant essential oils: rosemary (Rosmarinus officinalis), eucalyptus (Eucalyptus globulus), lavender (Lavandula angustifolia), citronella (Cymbopogon nardus), tea tree (Melaleuca alternifolia), clove (Syzygium aromaticum), lemon (Citrus limon), patchouli (Pogostemon cablin), DEET, cedar wood (Cedrus atlantica). In conclusion, we suggest (1) disease vector control (e.g., impregnation of bed nets) by eight plant essential oils with repelling velocity superior to DEET, (2) simple mosquito repellency testing in Drosophila cultivation tubes, (3) automated approaches and room surveillance by generally available audio equipment (dB[A]: ISO standard 226), and (4) quantification of repellent activity by parameters of the audiovisual assay defined by correlation and regression analyses.

  16. Horatio Audio-Describes Shakespeare's "Hamlet": Blind and Low-Vision Theatre-Goers Evaluate an Unconventional Audio Description Strategy

    ERIC Educational Resources Information Center

    Udo, J. P.; Acevedo, B.; Fels, D. I.

    2010-01-01

    Audio description (AD) has been introduced as one solution for providing people who are blind or have low vision with access to live theatre, film and television content. However, there is little research to inform the process, user preferences and presentation style. We present a study of a single live audio-described performance of Hart House…

  17. Audio-visual speech processing in age-related hearing loss: Stronger integration and increased frontal lobe recruitment.

    PubMed

    Rosemann, Stephanie; Thiel, Christiane M

    2018-07-15

    Hearing loss is associated with difficulties in understanding speech, especially under adverse listening conditions. In these situations, seeing the speaker improves speech intelligibility in hearing-impaired participants. On the neuronal level, previous research has shown cross-modal plastic reorganization in the auditory cortex following hearing loss, leading to altered processing of auditory, visual and audio-visual information. However, how reduced auditory input affects audio-visual speech perception in hearing-impaired subjects is largely unknown. Here we investigated the impact of mild to moderate age-related hearing loss on processing audio-visual speech using functional magnetic resonance imaging. Normal-hearing and hearing-impaired participants performed two audio-visual speech integration tasks: a sentence detection task inside the scanner and the McGurk illusion outside the scanner. Both tasks consisted of congruent and incongruent audio-visual conditions, as well as auditory-only and visual-only conditions. We found a significantly stronger McGurk illusion in the hearing-impaired participants, which indicates stronger audio-visual integration. Neurally, hearing loss was associated with an increased recruitment of frontal brain areas when processing incongruent audio-visual, auditory and also visual speech stimuli, which may reflect the increased effort to perform the task. Hearing loss modulated both the audio-visual integration strength measured with the McGurk illusion and brain activation in frontal areas in the sentence task, showing stronger integration and higher brain activation with increasing hearing loss. Incongruent compared to congruent audio-visual speech revealed an opposite brain activation pattern in left ventral postcentral gyrus in both groups, with higher activation in hearing-impaired participants in the incongruent condition. 
Our results indicate that already mild to moderate hearing loss impacts audio-visual speech processing accompanied by changes in brain activation particularly involving frontal areas. These changes are modulated by the extent of hearing loss. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Digital Audio Application to Short Wave Broadcasting

    NASA Technical Reports Server (NTRS)

    Chen, Edward Y.

    1997-01-01

    Digital audio is becoming prevalent not only in consumer electronics but also in different broadcasting media. Terrestrial analog audio broadcasting in the AM and FM bands will eventually be replaced by digital systems.

  19. Steganalysis of recorded speech

    NASA Astrophysics Data System (ADS)

    Johnson, Micah K.; Lyu, Siwei; Farid, Hany

    2005-03-01

    Digital audio provides a suitable cover for high-throughput steganography. At 16 bits per sample and sampled at a rate of 44,100 Hz, digital audio has the bit-rate to support large messages. In addition, audio is often transient and unpredictable, facilitating the hiding of messages. Using an approach similar to our universal image steganalysis, we show that hidden messages alter the underlying statistics of audio signals. Our statistical model begins by building a linear basis that captures certain statistical properties of audio signals. A low-dimensional statistical feature vector is extracted from this basis representation and used by a non-linear support vector machine for classification. We show the efficacy of this approach on LSB embedding and Hide4PGP. While no explicit assumptions about the content of the audio are made, our technique has been developed and tested on high-quality recorded speech.
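The LSB embedding that the classifier is trained against can be illustrated with a minimal sketch (hypothetical sample values; real stego systems typically spread the bits pseudo-randomly and encrypt the payload):

```python
import numpy as np

def lsb_embed(samples, bits):
    """Overwrite the least-significant bit of the first len(bits) samples."""
    out = samples.copy()
    out[:len(bits)] = (out[:len(bits)] & ~1) | np.array(bits, dtype=out.dtype)
    return out

def lsb_extract(samples, n_bits):
    """Read the hidden bits back out of the LSB plane."""
    return [int(b) for b in samples[:n_bits] & 1]

# 16-bit PCM samples; the +-1 change is inaudible but perturbs the statistics
cover = np.array([1200, -87, 3045, 0], dtype=np.int16)
stego = lsb_embed(cover, [1, 0, 1, 1])
print(lsb_extract(stego, 4))  # [1, 0, 1, 1]
```

The steganalysis task is then to detect, from the signal statistics alone, that such a perturbation has taken place.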

  20. Interaction of vortices with flexible piezoelectric beams

    NASA Astrophysics Data System (ADS)

    Goushcha, Oleg; Akaydin, Huseyin Dogus; Elvin, Niell; Andreopoulos, Yiannis

    2012-11-01

    A cantilever piezoelectric beam immersed in a flow is used to harvest fluidic energy. Pressure distribution induced by naturally present vortices in a turbulent fluid flow can force the beam to oscillate, producing electrical output. Maximizing the power output of such an electromechanical fluidic system is a challenge. In order to understand the behavior of the beam in a fluid flow where vortices of different scales are present, an experimental facility was set up to study the interaction of individual vortices with the beam. In our setup, vortex rings produced by an audio speaker travel at specific distances from the beam or impinge on it, with a frequency varied up to the natural frequency of the beam. Depending on this frequency, both constructive and destructive interactions between the vortices and the beam are observed. Vortices traveling over the beam with a frequency that is a multiple of the natural frequency of the beam cause the beam to resonate, and larger deflection amplitudes are observed compared to excitation from a single vortex. PIV is used to compute the flow field and circulation of each vortex and estimate the effect of pressure distribution on the beam deflection. Sponsored by NSF Grant: CBET #1033117.

  1. Effects of aging on audio-visual speech integration.

    PubMed

    Huyse, Aurélie; Leybaert, Jacqueline; Berthommier, Frédéric

    2014-10-01

    This study investigated the impact of aging on audio-visual speech integration. A syllable identification task was presented in auditory-only, visual-only, and audio-visual congruent and incongruent conditions. Visual cues were either degraded or unmodified. Stimuli were embedded in stationary noise alternating with modulated noise. Fifteen young adults and 15 older adults participated in this study. Results showed that older adults had preserved lipreading abilities when the visual input was clear but not when it was degraded. The impact of aging on audio-visual integration also depended on the quality of the visual cues. In the visual clear condition, the audio-visual gain was similar in both groups and analyses in the framework of the fuzzy-logical model of perception confirmed that older adults did not differ from younger adults in their audio-visual integration abilities. In the visual reduction condition, the audio-visual gain was reduced in the older group, but only when the noise was stationary, suggesting that older participants could compensate for the loss of lipreading abilities by using the auditory information available in the valleys of the noise. The fuzzy-logical model of perception confirmed the significant impact of aging on audio-visual integration by showing an increased weight of audition in the older group.
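The fuzzy-logical model of perception used in the analysis combines unimodal supports multiplicatively and normalizes over the response alternatives. A minimal sketch of the integration rule for two alternatives (the numeric support values here are made up for illustration):

```python
def flmp(auditory, visual):
    """Fuzzy-logical model of perception: unimodal supports in [0, 1]
    for one alternative are multiplied and normalized against the
    complementary alternative."""
    num = auditory * visual
    return num / (num + (1.0 - auditory) * (1.0 - visual))

# A neutral auditory cue (0.5) leaves the visual evidence unchanged
print(flmp(0.5, 0.9))   # 0.9
# Two moderately supportive cues reinforce each other
print(flmp(0.7, 0.7))   # ~0.845
```

Fitting the auditory and visual support parameters per group is what lets the model separate integration ability from unimodal (e.g., lipreading) decline.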

  2. Design and Implementation of a Video-Zoom Driven Digital Audio-Zoom System for Portable Digital Imaging Devices

    NASA Astrophysics Data System (ADS)

    Park, Nam In; Kim, Seon Man; Kim, Hong Kook; Kim, Ji Woon; Kim, Myeong Bo; Yun, Su Won

    In this paper, we propose a video-zoom driven audio-zoom algorithm that provides audio zooming effects in accordance with the degree of video zoom. The proposed algorithm is based on a super-directive beamformer operating with a 4-channel microphone system, in conjunction with a soft masking process that considers the phase differences between microphones. The audio-zoom processed signal is obtained by multiplying the masked signal by an audio gain derived from the video-zoom level. Finally, a real-time audio-zoom system is implemented on an ARM Cortex-A8 with a clock speed of 600 MHz after several levels of optimization, including algorithm-level, C-code, and memory optimizations. To evaluate the complexity of the proposed real-time audio-zoom system, test data 21.3 seconds long is sampled at 48 kHz. The experiments show that the processing time for the proposed audio-zoom system occupies 14.6% or less of the ARM clock cycles. Experimental results obtained in a semi-anechoic chamber also show that the signal from the front direction can be amplified by approximately 10 dB relative to the other directions.
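The gain step described (an audio gain derived from the video-zoom level, multiplying the masked beamformer output) can be sketched as follows; the linear-in-dB mapping and the 10 dB ceiling are assumptions, chosen only to match the order of the reported front-direction amplification:

```python
import numpy as np

def zoom_gain(zoom_level, max_zoom, max_gain_db=10.0):
    """Map a video-zoom level in [1, max_zoom] to a linear audio gain.
    The linear-in-dB mapping and 10 dB ceiling are assumptions."""
    frac = (zoom_level - 1.0) / (max_zoom - 1.0)
    return 10.0 ** (max_gain_db * frac / 20.0)

def audio_zoom(masked_signal, zoom_level, max_zoom):
    """Scale the beamformed-and-masked signal by the zoom-derived gain."""
    return zoom_gain(zoom_level, max_zoom) * np.asarray(masked_signal, dtype=float)

print(zoom_gain(1, 4))   # 1.0 (no zoom, unity gain)
print(zoom_gain(4, 4))   # ~3.16 (full zoom, +10 dB)
```

Coupling the gain to the zoom level is what makes the effect track the camera rather than requiring separate user control.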

  3. Laboratory and in-flight experiments to evaluate 3-D audio display technology

    NASA Technical Reports Server (NTRS)

    Ericson, Mark; Mckinley, Richard; Kibbe, Marion; Francis, Daniel

    1994-01-01

    Laboratory and in-flight experiments were conducted to evaluate 3-D audio display technology for cockpit applications. A 3-D audio display generator was developed which digitally encodes naturally occurring direction information onto any audio signal and presents the binaural sound over headphones. The acoustic image is stabilized for head movement by use of an electromagnetic head-tracking device. In the laboratory, a 3-D audio display generator was used to spatially separate competing speech messages to improve the intelligibility of each message. Up to a 25 percent improvement in intelligibility was measured for spatially separated speech at high ambient noise levels (115 dB SPL). During the in-flight experiments, pilots reported that spatial separation of speech communications provided a noticeable improvement in intelligibility. The use of 3-D audio for target acquisition was also investigated. In the laboratory, 3-D audio enabled the acquisition of visual targets in about two seconds average response time at 17 degrees accuracy. During the in-flight experiments, pilots correctly identified ground targets 50, 75, and 100 percent of the time at separation angles of 12, 20, and 35 degrees, respectively. In general, pilot performance in the field with the 3-D audio display generator was as expected, based on data from laboratory experiments.
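Spatially separating speech messages as described relies on binaural cues such as the interaural time difference. A common textbook approximation (the Woodworth model, with an assumed head radius) gives a feel for the magnitudes involved at the separation angles tested in flight:

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Interaural time difference (seconds) from the Woodworth model.
    Head radius (8.75 cm) and speed of sound are typical assumed values."""
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (math.sin(theta) + theta)

print(f"{woodworth_itd(90.0) * 1e6:.0f} us")  # roughly 656 us for a source at 90 degrees
print(f"{woodworth_itd(12.0) * 1e6:.0f} us")  # the smallest separation tested in flight
```

A 3-D audio generator encodes such delays (together with level and spectral cues) into each headphone channel, head-tracked so the image stays world-stable.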

  4. Impact of audio narrated animation on students' understanding and learning environment based on gender

    NASA Astrophysics Data System (ADS)

    Nasrudin, Ajeng Ratih; Setiawan, Wawan; Sanjaya, Yayan

    2017-05-01

    This study, titled "the impact of audio narrated animation on students' understanding in learning the human respiratory system based on gender," was conducted in the eighth grade of a junior high school. It aims to investigate differences in students' understanding and learning environment between boys' and girls' classes learning the human respiratory system with audio narrated animation. The research method is a quasi-experiment with a matching pre-test/post-test comparison-group design. The procedure of the study was: (1) preliminary study and habituation to learning with audio narrated animation; (2) implementation of learning with audio narrated animation and data collection; (3) analysis and discussion. The analysis shows a significant difference in students' understanding and learning environment between the boys' and girls' classes, both overall and in the achievement of specific learning indicators. The discussion relates these results to the impact of audio narrated animation, gender characteristics, and a constructivist learning environment. It can be concluded that students' understanding differs significantly between boys' and girls' classes learning the human respiratory system with audio narrated animation. Additionally, based on the interpretation of students' responses, the increase in agreement level regarding the learning environment also differs between the groups.

  5. Real Time Implementation of an LPC Algorithm. Speech Signal Processing Research at CHI

    DTIC Science & Technology

    1975-05-01

    SIGNAL PROCESSING HARDWARE 2-1 … 2.1 INTRODUCTION 2-1 … 2.2 TWO-CHANNEL AUDIO SIGNAL SYSTEM 2-2 … 2.3 MULTI-CHANNEL AUDIO SIGNAL SYSTEM 2-5 … 2.3.1 … Channel Audio Signal System 2-30 … Messages 1-55 … 1-13. Lost or Out of Order Message 1-56 … 2-1. Block Diagram of Two-Channel Audio Signal System 2-3 … 2-2. Block Diagram of Audio…

  6. Review of Audio Interfacing Literature for Computer-Assisted Music Instruction.

    ERIC Educational Resources Information Center

    Watanabe, Nan

    1980-01-01

    Presents a review of the literature dealing with audio devices used in computer assisted music instruction and discusses the need for research and development of reliable, cost-effective, random access audio hardware. (Author)

  7. Comparison between audio-only and audiovisual biofeedback for regulating patients' respiration during four-dimensional radiotherapy.

    PubMed

    Yu, Jesang; Choi, Ji Hoon; Ma, Sun Young; Jeung, Tae Sig; Lim, Sangwook

    2015-09-01

    This study compared audio-only biofeedback to conventional audiovisual biofeedback for regulating patients' respiration during four-dimensional radiotherapy, which limits damage to healthy surrounding tissue caused by organ movement. Six healthy volunteers were assisted by audiovisual or audio-only biofeedback systems to regulate their respiration. Volunteers breathed through a mask developed for this study while following computer-generated guiding curves displayed on a screen, combined with instructional sounds. They then performed breathing following instructional sounds only. The guiding signals and the volunteers' respiratory signals were logged at 20 samples per second. The standard deviations between the guiding and respiratory curves for the audiovisual and audio-only biofeedback systems were 21.55% and 23.19%, respectively; the average correlation coefficients were 0.9778 and 0.9756, respectively. A paired t-test found no statistically significant difference in respiratory regularity between audiovisual and audio-only biofeedback across the six volunteers. Audio-only biofeedback has many advantages, as patients do not require a mask and can quickly adapt to the method in the clinic.
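The two reported agreement metrics (standard deviation between guiding and respiratory curves, expressed as a percentage, and the correlation coefficient) might be computed as in this sketch; normalizing the deviation by the guide's range is an assumption, since the abstract does not state the percentage basis:

```python
import numpy as np

def agreement(guide, resp):
    """Deviation between guiding and respiratory curves (% of guide range)
    plus their Pearson correlation coefficient."""
    guide, resp = np.asarray(guide, dtype=float), np.asarray(resp, dtype=float)
    sd_pct = 100.0 * np.std(guide - resp) / (guide.max() - guide.min())
    r = np.corrcoef(guide, resp)[0, 1]
    return sd_pct, r

# A follower lagging the guide by 0.1 s, sampled at 20 Hz as in the study
t = np.arange(0, 30, 0.05)                      # 20 samples per second
guide = np.sin(2 * np.pi * t / 4.0)             # 4-s breathing cycle
resp = np.sin(2 * np.pi * (t - 0.1) / 4.0)      # small tracking lag
sd_pct, r = agreement(guide, resp)
```

On such synthetic data a small lag yields a deviation of a few percent and a correlation near 1, the same regime as the values reported for both biofeedback modes.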

  8. Laser vibrometer measurements and middle ear prostheses

    NASA Astrophysics Data System (ADS)

    Flock, Stephen T.; Dornhoffer, John; Ferguson, Scott

    1997-05-01

    One of us has developed an improved partial ossicular replacement prosthesis that is easier to implant and, based on pilot clinical measurements, results in better high-frequency hearing as compared to patients receiving one of the alternative prostheses. It is hypothesized that the primary reason for this is because of the relatively light weight (about 25 mg) and low compliance of the prosthesis, which could conceivably result in better high frequency vibrational characteristics. The purpose of our initial work was to develop an instrument suitable for objectively testing the vibrational characteristics of prostheses. We have developed a laser based device suitable for measuring the vibrational characteristics of the oval window or other structures of the middle ear. We have tested this device using a piezoelectric transducer excited at audio frequencies, as well as on the oval window in human temporal bones harvested from cadavers. The results illustrate that it is possible to non-invasively monitor the vibrational characteristics of anatomic structures with a very inexpensive photonic device.

  9. S-Band propagation measurements

    NASA Technical Reports Server (NTRS)

    Briskman, Robert D.

    1994-01-01

    A geosynchronous satellite system capable of providing many channels of digital audio radio service (DARS) to mobile platforms within the contiguous United States using S-band radio frequencies is being implemented. The system is designed uniquely to mitigate both multipath fading and outages from physical blockage in the transmission path by use of satellite spatial diversity in combination with radio frequency and time diversity. The system also employs a satellite orbital geometry wherein all mobile platforms in the contiguous United States have elevation angles greater than 20 deg to both of the diversity satellites. Since implementation of the satellite system will require three years, an emulation has been performed using terrestrial facilities in order to allow evaluation of DARS capabilities in advance of satellite system operations. The major objective of the emulation was to prove the feasibility of broadcasting from satellites 30 channels of CD quality programming using S-band frequencies to an automobile equipped with a small disk antenna and to obtain quantitative performance data on S-band propagation in a satellite spatial diversity system.

  10. Dense home-based recordings reveal typical and atypical development of tense/aspect in a child with delayed language development.

    PubMed

    Chin, Iris; Goodwin, Matthew S; Vosoughi, Soroush; Roy, Deb; Naigles, Letitia R

    2018-01-01

    Studies investigating the development of tense/aspect in children with developmental disorders have focused on production frequency and/or relied on short spontaneous speech samples. How children with developmental disorders use future forms/constructions is also unknown. The current study expands this literature by examining frequency, consistency, and productivity of past, present, and future usage, using the Speechome Recorder, which enables collection of dense, longitudinal audio-video recordings of children's speech. Samples were collected longitudinally in a child who was previously diagnosed with autism spectrum disorder, but at the time of the study exhibited only language delay [Audrey], and a typically developing child [Cleo]. While Audrey was comparable to Cleo in frequency and productivity of tense/aspect use, she was atypical in her consistency and production of an unattested future form. Examining additional measures of densely collected speech samples may reveal subtle atypicalities that are missed when relying on only few typical measures of acquisition.

  11. Demodulation RFI statistics for a 3-stage op amp LED circuit

    NASA Astrophysics Data System (ADS)

    Whalen, James J.

    An experiment has been performed to demonstrate the feasibility of combining several methods of electromagnetic-compatibility analysis. The part of the experiment that demonstrates how RF signals cause interference in an audio-frequency (AF) circuit and how the interference can be suppressed is described. The circuit includes three operational amplifiers (op amps) and a light-emitting diode (LED). A 50 percent amplitude-modulated (AM) radio-frequency-interference (RFI) signal is used, varied over the range from 0.1 to 400 MHz. The AM frequency is 1 kHz. The RFI is injected into the inverting input of the first op amp, and the 1-kHz demodulation response of the amplifier is amplified by the second and third op amps and lights the LED to provide a visual display of the existence of RFI. An RFI suppression capacitor was added to reduce the RFI. The demodulation RFI results are presented as scatter plots for 35 type-741 op amps. Mean values and standard deviations are also shown.

  12. 2D and 3D separate and joint inversion of airborne ZTEM and ground AMT data: Synthetic model studies

    NASA Astrophysics Data System (ADS)

    Sasaki, Yutaka; Yi, Myeong-Jong; Choi, Jihyang

    2014-05-01

    The ZTEM (Z-axis Tipper Electromagnetic) method measures naturally occurring audio-frequency magnetic fields and obtains the tipper function that defines the relationship among the three components of the magnetic field. Since anomalous tipper responses are caused by lateral resistivity variations, the ZTEM survey is well suited to detecting and delineating conductive bodies extending to considerable depths, such as the graphitic dykes encountered in exploration for unconformity-type uranium deposits. Our simulations show that inversion of ZTEM data can detect multiple conductive dykes placed 1 km apart reasonably well. One important issue in ZTEM inversion is the effect of the initial model, because homogeneous half-space and layered (1D) structures produce no responses. For the 2D model with multiple conductive dykes, the inversion results were useful for locating the dykes even when the initial model was not close to the true background resistivity. For general 3D structures, however, the resolution of the conductive bodies can be reduced considerably depending on the initial model, because the tipper magnitudes from 3D conductors are smaller than the 2D responses due to boundary charges. To alleviate this disadvantage of ZTEM surveys, we combined ZTEM and audio-frequency magnetotelluric (AMT) data. Inversion of sparse AMT data was shown to be effective in providing a good initial model for ZTEM inversion. Moreover, simultaneously inverting both data sets led to better results than the sequential approach by making it possible to identify structural features that were difficult to resolve from the individual data sets.
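The tipper function relates the vertical magnetic field to the two horizontal components, Hz = Tzx·Hx + Tzy·Hy. A real-valued least-squares sketch (actual tippers are complex-valued and frequency-dependent, so this is a simplification) illustrates the estimation:

```python
import numpy as np

def estimate_tipper(hx, hy, hz):
    """Least-squares fit of Hz = Tzx*Hx + Tzy*Hy over many measurements."""
    A = np.column_stack([hx, hy])
    (tzx, tzy), *_ = np.linalg.lstsq(A, hz, rcond=None)
    return tzx, tzy

# Recover a known (real-valued) tipper from synthetic noiseless fields
rng = np.random.default_rng(0)
hx = rng.standard_normal(200)
hy = rng.standard_normal(200)
hz = 0.3 * hx - 0.1 * hy
tzx, tzy = estimate_tipper(hx, hy, hz)
print(round(tzx, 6), round(tzy, 6))  # 0.3 -0.1
```

A zero tipper over layered ground, as in this relation, is exactly why homogeneous and 1D starting models produce no ZTEM response.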

  13. Subsurface structure imaging of the Sembalun-Propok area, West Nusa Tenggara, Indonesia by using the audio-frequency magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Febriani, F.; Widarto, D. S.; Gaffar, E.; Nasution, A.; Grandis, H.

    2017-07-01

    We have investigated the subsurface structure of the Sembalun-Propok area, West Nusa Tenggara, using the audio-frequency magnetotelluric (AMT) method. The area is one of the geothermal prospects in eastern Indonesia. There are 38 AMT observation points, deployed along three profiles. We applied phase tensor analysis at all observation points to determine both the dimensionality and the regional strike of the study area. The results show that the study area can be assumed to be 2-D, with a regional strike of about N330°E. After rotating the impedance tensor data to the regional strike, we carried out 2-D inversion modeling to examine the subsurface structure in more detail. The results of the 2-D inversion are consistent with the geology of the study area. The near surface along all profiles is dominated by a high-resistivity layer (> 500 Ωm), closely associated with the surface geology, which is characterized by volcanic rock consisting mostly of andesitic to dacitic rocks of a calc-alkaline suite. Below this resistive layer, the models show a layer of low to moderate resistivity, possibly the cap rock of the geothermal system of the Sembalun-Propok area. Lastly, the third layer is a very conductive layer, possibly associated with the presence of thermal fluids in the study area.

  14. Mining knowledge in noisy audio data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czyzewski, A.

    1996-12-31

    This paper demonstrates a KDD method applied to audio data analysis; in particular, it presents the possibilities that result from replacing traditional methods of analysis and acoustic signal processing with KDD algorithms when restoring audio recordings affected by strong noise.

  15. Research into Teleconferencing

    DTIC Science & Technology

    1981-02-01

    Wichman (1970) found more cooperation under conditions of audio-visual communication than conditions of audio communication alone. Laplante (1971) found…was found for audio teleconferences. These results, taken with the results concerning group performance, seem to indicate that visual communication gives…

  16. An Audio-Visual Resource Notebook for Adult Consumer Education. An Annotated Bibliography of Selected Audio-Visual Aids for Adult Consumer Education, with Special Emphasis on Materials for Elderly, Low-Income and Handicapped Consumers.

    ERIC Educational Resources Information Center

    Virginia State Dept. of Agriculture and Consumer Services, Richmond, VA.

    This document is an annotated bibliography of audio-visual aids in the field of consumer education, intended especially for use among low-income, elderly, and handicapped consumers. It was developed to aid consumer education program planners in finding audio-visual resources to enhance their presentations. Materials listed include 293 resources…

  17. Entertainment and Pacification System For Car Seat

    NASA Technical Reports Server (NTRS)

    Elrod, Susan Vinz (Inventor); Dabney, Richard W. (Inventor)

    2006-01-01

    An entertainment and pacification system for use with a child car seat has speakers mounted in the child car seat with a plurality of audio sources and an anti-noise audio system coupled to the child car seat. A controllable switching system provides for, at any given time, the selective activation of i) one of the audio sources such that the audio signal generated thereby is coupled to one or more of the speakers, and ii) the anti-noise audio system such that an ambient-noise-canceling audio signal generated thereby is coupled to one or more of the speakers. The controllable switching system can receive commands generated at one of first controls located at the child car seat and second controls located remotely with respect to the child car seat with commands generated by the second controls overriding commands generated by the first controls.

  18. Detecting double compression of audio signal

    NASA Astrophysics Data System (ADS)

    Yang, Rui; Shi, Yun Q.; Huang, Jiwu

    2010-01-01

    MP3 is the most popular audio format nowadays in our daily life, for example music downloaded from the Internet and file saved in the digital recorder are often in MP3 format. However, low bitrate MP3s are often transcoded to high bitrate since high bitrate ones are of high commercial value. Also audio recording in digital recorder can be doctored easily by pervasive audio editing software. This paper presents two methods for the detection of double MP3 compression. The methods are essential for finding out fake-quality MP3 and audio forensics. The proposed methods use support vector machine classifiers with feature vectors formed by the distributions of the first digits of the quantized MDCT (modified discrete cosine transform) coefficients. Extensive experiments demonstrate the effectiveness of the proposed methods. To the best of our knowledge, this piece of work is the first one to detect double compression of audio signal.
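The first-digit feature the classifiers use can be sketched as follows: singly compressed audio tends to follow a Benford-like first-digit law for the quantized MDCT magnitudes, and recompression perturbs it. The digit extraction here is a generic illustration with made-up coefficient values, not the authors' exact pipeline:

```python
import numpy as np

def first_digit_histogram(coeffs):
    """Normalized histogram of the first significant digits (1-9) of the
    nonzero coefficient magnitudes."""
    mags = np.abs(np.asarray(coeffs, dtype=float))
    mags = mags[mags > 0]
    digits = (mags / 10.0 ** np.floor(np.log10(mags))).astype(int)
    counts = np.bincount(digits, minlength=10)[1:10]
    return counts / counts.sum()

# Benford's law, the reference distribution for singly compressed audio
benford = np.log10(1.0 + 1.0 / np.arange(1, 10))

hist = first_digit_histogram([123.0, 0.045, 9.1, 1.0])  # first digits: 1, 4, 9, 1
```

The 9-bin histogram (per coefficient subset) forms the feature vector handed to the support vector machine.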

  19. Auditory Time-Frequency Masking for Spectrally and Temporally Maximally-Compact Stimuli

    PubMed Central

    Necciari, Thibaud; Laback, Bernhard; Savel, Sophie; Ystad, Sølvi; Balazs, Peter; Meunier, Sabine; Kronland-Martinet, Richard

    2016-01-01

    Many audio applications perform perception-based time-frequency (TF) analysis by decomposing sounds into a set of functions with good TF localization (i.e. with a small essential support in the TF domain) using TF transforms and applying psychoacoustic models of auditory masking to the transform coefficients. To accurately predict masking interactions between coefficients, the TF properties of the model should match those of the transform. This involves having masking data for stimuli with good TF localization. However, little is known about TF masking for mathematically well-localized signals. Most existing masking studies used stimuli that are broad in time and/or frequency and few studies involved TF conditions. Consequently, the present study had two goals. The first was to collect TF masking data for well-localized stimuli in humans. Masker and target were 10-ms Gaussian-shaped sinusoids with a bandwidth of approximately one critical band. The overall pattern of results is qualitatively similar to existing data for long maskers. To facilitate implementation in audio processing algorithms, a dataset provides the measured TF masking function. The second goal was to assess the potential effect of auditory efferents on TF masking using a modeling approach. The temporal window model of masking was used to predict present and existing data in two configurations: (1) with standard model parameters (i.e. without efferents), (2) with cochlear gain reduction to simulate the activation of efferents. The ability of the model to predict the present data was quite good with the standard configuration but highly degraded with gain reduction. Conversely, the ability of the model to predict existing data for long maskers was better with than without gain reduction. Overall, the model predictions suggest that TF masking can be affected by efferent (or other) effects that reduce cochlear gain. 
Such effects were avoided in the experiment of this study by using maximally-compact stimuli. PMID:27875575
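The maximally compact stimuli described (10-ms Gaussian-shaped sinusoids) can be generated as in this sketch; the Gaussian width parameter is an assumed value, chosen only for illustration:

```python
import numpy as np

def gaussian_tone(f0, duration=0.010, sr=44100, sigma_frac=0.15):
    """Sinusoid at f0 under a Gaussian envelope centered in the burst.
    sigma_frac (envelope width as a fraction of duration) is an assumption."""
    n = int(duration * sr)
    t = np.arange(n) / sr
    center = duration / 2.0
    sigma = sigma_frac * duration
    envelope = np.exp(-0.5 * ((t - center) / sigma) ** 2)
    return envelope * np.sin(2.0 * np.pi * f0 * t)

burst = gaussian_tone(4000.0)   # 10-ms burst at 4 kHz
```

Because a Gaussian is its own Fourier transform, such bursts come close to the minimum joint time-frequency spread, which is what makes them suitable probes for TF masking models.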

  20. Auditory Time-Frequency Masking for Spectrally and Temporally Maximally-Compact Stimuli.

    PubMed

    Necciari, Thibaud; Laback, Bernhard; Savel, Sophie; Ystad, Sølvi; Balazs, Peter; Meunier, Sabine; Kronland-Martinet, Richard

    2016-01-01

    Many audio applications perform perception-based time-frequency (TF) analysis by decomposing sounds into a set of functions with good TF localization (i.e. with a small essential support in the TF domain) using TF transforms and applying psychoacoustic models of auditory masking to the transform coefficients. To accurately predict masking interactions between coefficients, the TF properties of the model should match those of the transform. This involves having masking data for stimuli with good TF localization. However, little is known about TF masking for mathematically well-localized signals. Most existing masking studies used stimuli that are broad in time and/or frequency and few studies involved TF conditions. Consequently, the present study had two goals. The first was to collect TF masking data for well-localized stimuli in humans. Masker and target were 10-ms Gaussian-shaped sinusoids with a bandwidth of approximately one critical band. The overall pattern of results is qualitatively similar to existing data for long maskers. To facilitate implementation in audio processing algorithms, a dataset provides the measured TF masking function. The second goal was to assess the potential effect of auditory efferents on TF masking using a modeling approach. The temporal window model of masking was used to predict present and existing data in two configurations: (1) with standard model parameters (i.e. without efferents), (2) with cochlear gain reduction to simulate the activation of efferents. The ability of the model to predict the present data was quite good with the standard configuration but highly degraded with gain reduction. Conversely, the ability of the model to predict existing data for long maskers was better with than without gain reduction. Overall, the model predictions suggest that TF masking can be affected by efferent (or other) effects that reduce cochlear gain. 
Such effects were avoided in the experiment of this study by using maximally-compact stimuli.

  1. Interdisciplinary Adventures in Perceptual Ecology

    NASA Astrophysics Data System (ADS)

    Bocast, Christopher S.

    A portfolio dissertation that began as acoustic ecology and matured into perceptual ecology, centered on ecomusicology, bioacoustics, and translational audio-based media works with environmental perspectives. The place of music in Western eco-cosmology through time provides a basis for structuring an environmental history of human sound perception. That history suggests that music may stabilize human mental activity, and that an increased musical practice may be essential for the human project. An overview of recent antecedents preceding the emergence of acoustic ecology reveals structural foundations from 20th century culture that underpin modern sound studies. The contextual role that Aldo Leopold, Jacob von Uexkull, John Cage, Marshall McLuhan, and others played in anticipating the development of acoustic ecology as an interdiscipline is detailed. This interdisciplinary aspect of acoustic ecology is defined and defended, while new developments like soundscape ecology are addressed, though ultimately sound studies will need to embrace a broader concept of full-spectrum "sensory" or "perceptual" ecology. The bioacoustic fieldwork done on spawning sturgeon emphasized this necessity. That study yielded scientific recordings and spectrographic analyses of spawning sounds produced by lake sturgeon, Acipenser fulvescens, during reproduction in natural habitats in the Lake Winnebago watershed in Wisconsin. Recordings were made on the Wolf and Embarrass River during the 2011-2013 spawning seasons. Several specimens were dissected to investigate possible sound production mechanisms; no sonic musculature was found. Drumming sounds, ranging from 5 to 7 Hz fundamental frequency, verified the infrasonic nature of previously undocumented "sturgeon thunder". Other characteristic noises of sturgeon spawning including low-frequency rumbles and hydrodynamic sounds were identified. Intriguingly, high-frequency signals resembling electric organ discharges were discovered. 
These sounds create a distinctive acoustic signature of sturgeon spawning. Media files include concert performance video, sturgeon audio samples, podcasts, radio pieces, music recordings, sound design, and a time-lapse soundscape reconstructed from Aldo Leopold's notes.

  2. Musical stairs: the impact of audio feedback during stair-climbing physical therapies for children.

    PubMed

    Khan, Ajmal; Biddiss, Elaine

    2015-05-01

    Enhanced biofeedback during rehabilitation therapies has the potential to provide a therapeutic environment optimally designed for neuroplasticity. This study investigates the impact of audio feedback on the achievement of a targeted therapeutic goal, namely, use of reciprocal steps. Stair-climbing therapy sessions conducted with and without audio feedback were compared in a randomized AB/BA cross-over study design. Seventeen children, aged 4-7 years, with various diagnoses participated. Reports from the participants, therapists, and a blinded observer were collected to evaluate achievement of the therapeutic goal, motivation and enjoyment during the therapy sessions. Audio feedback resulted in a 5.7% increase (p = 0.007) in reciprocal steps. Levels of participant enjoyment increased significantly (p = 0.031) and motivation was reported by child participants and therapists to be greater when audio feedback was provided. These positive results indicate that audio feedback may influence the achievement of therapeutic goals and promote enjoyment and motivation in young patients engaged in rehabilitation therapies. This study lays the groundwork for future research to determine the long term effects of audio feedback on functional outcomes of therapy. Stair-climbing is an important mobility skill for promoting independence and activities of daily life and is a key component of rehabilitation therapies for physically disabled children. Provision of audio feedback during stair-climbing therapies for young children may increase their achievement of a targeted therapeutic goal (i.e., use of reciprocal steps). Children's motivation and enjoyment of the stair-climbing therapy was enhanced when audio feedback was provided.

  3. Comparing Learning Gains: Audio Versus Text-based Instructor Communication in a Blended Online Learning Environment

    NASA Astrophysics Data System (ADS)

    Shimizu, Dominique

Though blended course audio feedback has been associated with several measures of course satisfaction at the postsecondary and graduate levels compared to text feedback, it may take longer to prepare, and its positive results are largely unverified in the K-12 literature. The purpose of this quantitative study was to investigate the time investment and learning impact of audio communications with 228 secondary students in a blended online learning biology unit at a central Florida public high school. A short, individualized audio message regarding the student's progress was given to each student in the audio group; similar text-based messages were given to each student in the text-based group on the same schedule; a control group received no feedback. A pretest and posttest were employed to measure learning gains in the three groups. To compare the learning gains of the two types of feedback with each other and with no feedback, a controlled, randomized, experimental design was implemented. In addition, the creation and posting of audio and text feedback communications were timed in order to assess whether audio feedback took longer to produce than text-only feedback. While audio feedback communications did take longer to create and post, there was no difference in learning gains as measured by posttest scores when students received audio, text-based, or no feedback. Future studies using a similar randomized, controlled experimental design are recommended to verify these results and test whether the trend holds across a broader range of subjects, over different time frames, and using a variety of assessment types to measure student learning.

  4. 7 CFR 1.167 - Conference.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... that conducting the conference by audio-visual telecommunication: (i) Is necessary to prevent prejudice.... If the Judge determines that a conference conducted by audio-visual telecommunication would... correspondence, the conference shall be conducted by audio-visual telecommunication unless the Judge determines...

  5. 7 CFR 1.167 - Conference.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... that conducting the conference by audio-visual telecommunication: (i) Is necessary to prevent prejudice.... If the Judge determines that a conference conducted by audio-visual telecommunication would... correspondence, the conference shall be conducted by audio-visual telecommunication unless the Judge determines...

  6. 47 CFR 11.54 - EAS operation during a National Level emergency.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... emergency, EAS Participants may transmit in lieu of the EAS audio feed an audio feed of the President's voice message from an alternative source, such as a broadcast network audio feed. [77 FR 16705, Mar. 22...

  7. 7 CFR 1.167 - Conference.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... that conducting the conference by audio-visual telecommunication: (i) Is necessary to prevent prejudice.... If the Judge determines that a conference conducted by audio-visual telecommunication would... correspondence, the conference shall be conducted by audio-visual telecommunication unless the Judge determines...

  8. 7 CFR 47.14 - Prehearing conferences.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... determines that conducting the conference by audio-visual telecommunication: (i) Is necessary to prevent.... If the examiner determines that a conference conducted by audio-visual telecommunication would... correspondence, the conference shall be conducted by audio-visual telecommunication unless the examiner...

  9. 47 CFR 11.54 - EAS operation during a National Level emergency.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... emergency, EAS Participants may transmit in lieu of the EAS audio feed an audio feed of the President's voice message from an alternative source, such as a broadcast network audio feed. [77 FR 16705, Mar. 22...

  10. 7 CFR 1.167 - Conference.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... that conducting the conference by audio-visual telecommunication: (i) Is necessary to prevent prejudice.... If the Judge determines that a conference conducted by audio-visual telecommunication would... correspondence, the conference shall be conducted by audio-visual telecommunication unless the Judge determines...

  11. 7 CFR 47.16 - Depositions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... which the deposition is to be conducted (telephone, audio-visual telecommunication, or by personal...) The place of the deposition; (iii) The manner of the deposition (telephone, audio-visual... shall be conducted in the manner (telephone, audio-visual telecommunication, or personal attendance of...

  12. 7 CFR 1.167 - Conference.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... that conducting the conference by audio-visual telecommunication: (i) Is necessary to prevent prejudice.... If the Judge determines that a conference conducted by audio-visual telecommunication would... correspondence, the conference shall be conducted by audio-visual telecommunication unless the Judge determines...

  13. 47 CFR 11.54 - EAS operation during a National Level emergency.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... emergency, EAS Participants may transmit in lieu of the EAS audio feed an audio feed of the President's voice message from an alternative source, such as a broadcast network audio feed. [77 FR 16705, Mar. 22...

  14. Realization of guitar audio effects using methods of digital signal processing

    NASA Astrophysics Data System (ADS)

    Buś, Szymon; Jedrzejewski, Konrad

    2015-09-01

The paper studies the realization of guitar audio effects by means of digital signal processing. As a result of this research, selected audio effects suited to the specifics of guitar sound were implemented in a real-time system called the Digital Guitar Multi-effect. Before implementation in the system, the selected effects were investigated using a dedicated application with a graphical user interface created in the Matlab environment. In the second stage, a real-time system based on a microcontroller and an audio codec was designed and built. The system performs the audio effects on the output signal of an electric guitar.
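The abstract does not detail the implemented effects. As an illustrative sketch only (not the authors' Digital Guitar Multi-effect), two staples of guitar processing, soft-clipping overdrive and a feedback delay, can be expressed in a few lines of Python:

```python
import math

def soft_clip(samples, drive=4.0):
    """Soft-clipping overdrive: tanh waveshaping, normalized so peaks stay in [-1, 1]."""
    return [math.tanh(drive * s) / math.tanh(drive) for s in samples]

def feedback_delay(samples, delay_samples=441, feedback=0.4, mix=0.5):
    """Feedback delay line (echo): each output mixes the dry sample with a decaying,
    delayed copy taken from a circular buffer."""
    buf = [0.0] * delay_samples
    out = []
    idx = 0
    for s in samples:
        delayed = buf[idx]
        buf[idx] = s + delayed * feedback   # write input plus feedback back into the line
        idx = (idx + 1) % delay_samples
        out.append((1.0 - mix) * s + mix * delayed)
    return out

# Example: 0.1 s of a 440 Hz tone at 44.1 kHz through both effects in series
sr = 44100
tone = [0.8 * math.sin(2 * math.pi * 440 * n / sr) for n in range(sr // 10)]
processed = feedback_delay(soft_clip(tone), delay_samples=sr // 100)
```

The tanh waveshaper adds the odd harmonics characteristic of overdrive while bounding the output; the delay parameters here are placeholders, not values from the paper.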

  15. Power saver circuit for audio/visual signal unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Right, R. W.

    1985-02-12

A combined audio and visual signal unit with the audio and visual components actuated alternately and powered over a single cable pair in such a manner that only one of the audio and visual components is drawing power from the power supply at any given instant. Thus, the power supply is never called upon to provide more energy than that drawn by the one of the components having the greater power requirement. This is particularly advantageous when several combined audio and visual signal units are coupled in parallel on one cable pair. Typically, the signal unit may comprise a horn and a strobe light for a fire alarm signalling system.

  16. A centralized audio presentation manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papp, A.L. III; Blattner, M.M.

    1994-05-16

The centralized audio presentation manager addresses the problems which occur when multiple programs running simultaneously attempt to use the audio output of a computer system. Time dependence of sound means that certain auditory messages must be scheduled simultaneously, which can lead to perceptual problems due to psychoacoustic phenomena. Furthermore, the combination of speech and nonspeech audio is examined; each presents its own problems of perceptibility in an acoustic environment composed of multiple auditory streams. The centralized audio presentation manager receives abstract parameterized message requests from the currently running programs, and attempts to create and present a sonic representation in the most perceptible manner through the use of a theoretically and empirically designed rule set.

  17. Design of batch audio/video conversion platform based on JavaEE

    NASA Astrophysics Data System (ADS)

    Cui, Yansong; Jiang, Lianpin

    2018-03-01

With the rapid development of the digital publishing industry, audio/video publishing is marked by a diversity of coding standards for audio and video files, massive data volumes, and other significant features. Faced with massive and diverse data, quickly and efficiently converting it to a unified coding format has brought great difficulties to digital publishing organizations. In view of this demand, this paper proposes a distributed online audio and video format conversion platform with a B/S structure, based on the Spring+SpringMVC+Mybatis development architecture combined with the open-source FFMPEG format conversion tool. Based on the Java language, the key technologies and strategies in the platform architecture are analyzed, and an efficient audio and video format conversion system is designed and developed, composed of a front display system, a core scheduling server, and a conversion server. The test results show that, compared with an ordinary audio and video conversion scheme, the batch audio and video format conversion platform can effectively improve the conversion efficiency of audio and video files and reduce the complexity of the work. Practice has proved that the key technology discussed in this paper can be applied in the field of large-batch file processing and has practical application value.
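As a sketch of what a single conversion-server job might do (assuming ffmpeg is available on the PATH; the codec choices, output layout, and function names below are illustrative assumptions, not taken from the paper):

```python
import subprocess
from pathlib import Path

def build_ffmpeg_cmd(src: Path, dst_dir: Path, vcodec="libx264", acodec="aac"):
    """Build an ffmpeg command line that transcodes one file to a unified MP4 format.
    Codec choices are placeholders for whatever the platform standardizes on."""
    dst = dst_dir / (src.stem + ".mp4")
    return ["ffmpeg", "-y", "-i", str(src), "-c:v", vcodec, "-c:a", acodec, str(dst)]

def convert_batch(files, dst_dir):
    """Convert a batch sequentially; a real platform would dispatch these jobs
    from a scheduling server to multiple conversion servers."""
    dst_dir = Path(dst_dir)
    dst_dir.mkdir(parents=True, exist_ok=True)
    for f in files:
        subprocess.run(build_ffmpeg_cmd(Path(f), dst_dir), check=True)
```

Separating command construction from execution keeps the scheduling logic testable without invoking ffmpeg.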

  18. A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing

    NASA Astrophysics Data System (ADS)

Cobos, Maximo; Lopez, Jose J.; Spors, Sascha

    2010-12-01

Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual productions. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads, which is especially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. The experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.

  19. 7 CFR 1.148 - Depositions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... (telephone, audio-visual telecommunication, or personal attendance of those who are to participate in the... that conducting the deposition by audio-visual telecommunication: (i) Is necessary to prevent prejudice... determines that a deposition conducted by audio-visual telecommunication would measurably increase the United...

  20. 9 CFR 202.112 - Rule 12: Oral hearing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... hearing shall be conducted by audio-visual telecommunication unless the presiding officer determines that... hearing by audio-visual telecommunication. If the presiding officer determines that a hearing conducted by audio-visual telecommunication would measurably increase the United States Department of Agriculture's...

  1. 9 CFR 202.112 - Rule 12: Oral hearing.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... hearing shall be conducted by audio-visual telecommunication unless the presiding officer determines that... hearing by audio-visual telecommunication. If the presiding officer determines that a hearing conducted by audio-visual telecommunication would measurably increase the United States Department of Agriculture's...

  2. MedlinePlus FAQ: Is audio description available for videos on MedlinePlus?

    MedlinePlus

    ... audiodescription.html Question: Is audio description available for videos on MedlinePlus? To use the sharing features on ... page, please enable JavaScript. Answer: Audio description of videos helps make the content of videos accessible to ...

  3. CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset

    PubMed Central

    Cao, Houwei; Cooper, David G.; Keutmann, Michael K.; Gur, Ruben C.; Nenkova, Ani; Verma, Ragini

    2014-01-01

People convey their emotional state in their face and voice. We present an audio-visual data set uniquely suited for the study of multi-modal emotion expression and perception. The data set consists of facial and vocal emotional expressions in sentences spoken in a range of basic emotional states (happy, sad, anger, fear, disgust, and neutral). 7,442 clips of 91 actors with diverse ethnic backgrounds were rated by multiple raters in three modalities: audio, visual, and audio-visual. Categorical emotion labels and real-valued intensity ratings for the perceived emotion were collected using crowd-sourcing from 2,443 raters. Human recognition of intended emotion for the audio-only, visual-only, and audio-visual data is 40.9%, 58.2%, and 63.6%, respectively. Recognition rates are highest for neutral, followed by happy, anger, disgust, fear, and sad. Average intensity levels of emotion are rated highest for visual-only perception. The accurate recognition of disgust and fear requires simultaneous audio-visual cues, while anger and happiness can be well recognized based on evidence from a single modality. The large dataset we introduce can be used to probe other questions concerning the audio-visual perception of emotion. PMID:25653738

  4. StirMark Benchmark: audio watermarking attacks based on lossy compression

    NASA Astrophysics Data System (ADS)

    Steinebach, Martin; Lang, Andreas; Dittmann, Jana

    2002-04-01

StirMark Benchmark is a well-known evaluation tool for watermarking robustness. Additional attacks are added to it continuously. To enable application-based evaluation, in our paper we address attacks against audio watermarks based on lossy audio compression algorithms to be included in the test environment. We discuss the effect of different lossy compression algorithms like MPEG-2 Audio Layer 3, Ogg or VQF on a selection of audio test data. Our focus is on changes regarding the basic characteristics of the audio data, like spectrum or average power, and on removal of embedded watermarks. Furthermore, we compare results of different watermarking algorithms and show that lossy compression is still a challenge for most of them. There are two strategies for adding evaluation of robustness against lossy compression to StirMark Benchmark: (a) use of existing free compression algorithms, or (b) implementation of a generic lossy compression simulation. We discuss how such a model can be implemented based on the results of our tests. This method is less complex, as no real psychoacoustic model has to be applied. Our model can be used for audio watermarking evaluation in numerous application fields. As an example, we describe its importance for e-commerce applications with watermarking security.
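The abstract does not specify the generic compression simulation. A minimal stand-in consistent with its description, which discards weak spectral components with no psychoacoustic model and then measures average power, might look like this; the keep ratio and DFT-based approach are hypothetical illustrations:

```python
import cmath
import math

def dft(x):
    """Naive DFT, adequate for short illustration signals."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(spectrum):
    """Inverse DFT, returning the real part of each reconstructed sample."""
    n = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n
            for t in range(n)]

def simulate_lossy(x, keep_ratio=0.25):
    """Crude lossy-compression stand-in: keep only the strongest spectral bins
    (no psychoacoustic masking model), then resynthesize."""
    spectrum = dft(x)
    threshold = sorted((abs(v) for v in spectrum), reverse=True)[int(len(spectrum) * keep_ratio) - 1]
    return idft([v if abs(v) >= threshold else 0 for v in spectrum])

def average_power(x):
    """One of the basic characteristics tracked when judging an attack's impact."""
    return sum(s * s for s in x) / len(x)
```

A watermark evaluation would compare spectrum and average power before and after such an attack and then test whether the embedded mark survives.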

  5. A prospective, randomised, controlled study examining binaural beat audio and pre-operative anxiety in patients undergoing general anaesthesia for day case surgery.

    PubMed

    Padmanabhan, R; Hildreth, A J; Laws, D

    2005-09-01

    Pre-operative anxiety is common and often significant. Ambulatory surgery challenges our pre-operative goal of an anxiety-free patient by requiring people to be 'street ready' within a brief period of time after surgery. Recently, it has been demonstrated that music can be used successfully to relieve patient anxiety before operations, and that audio embedded with tones that create binaural beats within the brain of the listener decreases subjective levels of anxiety in patients with chronic anxiety states. We measured anxiety with the State-Trait Anxiety Inventory questionnaire and compared binaural beat audio (Binaural Group) with an identical soundtrack but without these added tones (Audio Group) and with a third group who received no specific intervention (No Intervention Group). Mean [95% confidence intervals] decreases in anxiety scores were 26.3%[19-33%] in the Binaural Group (p = 0.001 vs. Audio Group, p < 0.0001 vs. No Intervention Group), 11.1%[6-16%] in the Audio Group (p = 0.15 vs. No Intervention Group) and 3.8%[0-7%] in the No Intervention Group. Binaural beat audio has the potential to decrease acute pre-operative anxiety significantly.
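For illustration (a generic construction, not the study's actual soundtrack), a binaural beat stimulus is simply two pure tones, one per ear, whose frequencies differ by the desired beat rate; the carrier and beat frequencies below are placeholders:

```python
import math

def binaural_beat(carrier_hz=240.0, beat_hz=6.0, seconds=1.0, sr=8000):
    """Generate stereo (left, right) sample pairs: the two ears receive pure
    tones differing by beat_hz, and the listener perceives a beat at beat_hz."""
    n = int(seconds * sr)
    left = [math.sin(2 * math.pi * carrier_hz * t / sr) for t in range(n)]
    right = [math.sin(2 * math.pi * (carrier_hz + beat_hz) * t / sr) for t in range(n)]
    return list(zip(left, right))
```

The beat exists only in the listener's perception; each channel alone is a plain sine tone, which is why the control soundtrack can be made acoustically similar.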

  6. Comparison between audio-only and audiovisual biofeedback for regulating patients' respiration during four-dimensional radiotherapy

    PubMed Central

    Yu, Jesang; Choi, Ji Hoon; Ma, Sun Young; Jeung, Tae Sig

    2015-01-01

Purpose To compare audio-only biofeedback to conventional audiovisual biofeedback for regulating patients' respiration during four-dimensional radiotherapy, limiting damage to healthy surrounding tissues caused by organ movement. Materials and Methods Six healthy volunteers were assisted by audiovisual or audio-only biofeedback systems to regulate their respirations. Volunteers breathed through a mask developed for this study by following computer-generated guiding curves displayed on a screen, combined with instructional sounds. They then performed breathing following instructional sounds only. The guiding signals and the volunteers' respiratory signals were logged at 20 samples per second. Results The standard deviations between the guiding and respiratory curves for the audiovisual and audio-only biofeedback systems were 21.55% and 23.19%, respectively; the average correlation coefficients were 0.9778 and 0.9756, respectively. A paired t-test found no statistically significant difference in respiratory regularity between the audiovisual and audio-only conditions for the six volunteers. Conclusion The difference between the audiovisual and audio-only biofeedback methods was not significant. Audio-only biofeedback has many advantages, as patients do not require a mask and can quickly adapt to this method in the clinic. PMID:26484309
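The correlation coefficients reported above are ordinary Pearson correlations between the guiding and respiratory curves; a minimal sketch of that computation (not the study's analysis code) on curves sampled at 20 Hz:

```python
import math

def pearson_r(guide, resp):
    """Pearson correlation between the guiding curve and the measured
    respiratory curve; values near 1 indicate close tracking."""
    n = len(guide)
    mg, mr = sum(guide) / n, sum(resp) / n
    cov = sum((g - mg) * (r - mr) for g, r in zip(guide, resp))
    sd_g = math.sqrt(sum((g - mg) ** 2 for g in guide))
    sd_r = math.sqrt(sum((r - mr) ** 2 for r in resp))
    return cov / (sd_g * sd_r)
```

With one breathing cycle logged as 20 samples per second, a ten-second trial gives 200 points per curve, as in the synthetic check below.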

  7. Digital Multicasting of Multiple Audio Streams

    NASA Technical Reports Server (NTRS)

    Macha, Mitchell; Bullock, John

    2007-01-01

The Mission Control Center Voice Over Internet Protocol (MCC VOIP) system (see figure) comprises hardware and software that effect simultaneous, nearly real-time transmission of as many as 14 different audio streams to authorized listeners via the MCC intranet and/or the Internet. The original version of the MCC VOIP system was conceived to enable flight-support personnel located in offices outside a spacecraft mission control center to monitor audio loops within the mission control center. Different versions of the MCC VOIP system could be used for a variety of public and commercial purposes - for example, to enable members of the general public to monitor one or more NASA audio streams through their home computers, to enable air-traffic supervisors to monitor communication between airline pilots and air-traffic controllers in training, and to monitor conferences among brokers in a stock exchange. At the transmitting end, the audio-distribution process begins with feeding the audio signals to analog-to-digital converters. The resulting digital streams are sent through the MCC intranet, using a user datagram protocol (UDP), to a server that converts them to encrypted data packets. The encrypted data packets are then routed to the personal computers of authorized users by use of multicasting techniques. The total data-processing load on the portion of the system upstream of and including the encryption server is the total load imposed by all of the audio streams being encoded, regardless of the number of the listeners or the number of streams being monitored concurrently by the listeners. The personal computer of a user authorized to listen is equipped with special-purpose MCC audio-player software. When the user launches the program, the user is prompted to provide identification and a password. In one of two access-control provisions, the program is hard-coded to validate the user's identity and password against a list maintained on a domain-controller computer at the MCC. In the other access-control provision, the program verifies that the user is authorized to have access to the audio streams. Once both access-control checks are completed, the audio software presents a graphical display that includes audiostream-selection buttons and volume-control sliders. The user can select all or any subset of the available audio streams and can adjust the volume of each stream independently of that of the other streams. The audio-player program spawns a "read" process for the selected stream(s). The spawned process sends, to the router(s), a "multicast-join" request for the selected streams. The router(s) responds to the request by sending the encrypted multicast packets to the spawned process. The spawned process receives the encrypted multicast packets and sends a decryption packet to audio-driver software. As the volume or muting features are changed by the user, interrupts are sent to the spawned process to change the corresponding attributes sent to the audio-driver software. The total latency of this system - that is, the total time from the origination of the audio signals to generation of sound at a listener's computer - lies between four and six seconds.
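A minimal sketch of the multicast transmit path described above, in Python rather than the system's actual implementation; the group address, port, chunk size, and the omission of encryption are all simplifying assumptions:

```python
import socket

MCAST_GRP, MCAST_PORT = "239.1.2.3", 5004   # hypothetical multicast group and port
CHUNK = 512                                  # hypothetical bytes of PCM per datagram

def packetize(pcm: bytes, chunk=CHUNK):
    """Split a digitized audio stream into datagram-sized chunks."""
    return [pcm[i:i + chunk] for i in range(0, len(pcm), chunk)]

def send_stream(pcm: bytes):
    """Send one audio stream to a multicast group over UDP; listeners that have
    joined the group via a multicast-join request receive every packet.
    The real system would encrypt each packet before sending."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    for packet in packetize(pcm):
        sock.sendto(packet, (MCAST_GRP, MCAST_PORT))
    sock.close()
```

Because multicast routers replicate packets toward subscribers, the sender's load depends only on the number of streams encoded, not on the number of listeners, which matches the load property described above.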

  8. 47 CFR 10.520 - Common audio attention signal.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Common audio attention signal. 10.520 Section 10.520 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL COMMERCIAL MOBILE ALERT SYSTEM Equipment Requirements § 10.520 Common audio attention signal. A Participating CMS Provider and equipment...

  9. 7 CFR 1.144 - Judges.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... hearing to be conducted by telephone or audio-visual telecommunication; (10) Require each party to provide... prior to any deposition to be conducted by telephone or audio-visual telecommunication; (11) Require that any hearing to be conducted by telephone or audio-visual telecommunication be conducted at...

  10. 22 CFR 61.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS... certification of United States produced audio-visual materials under the provisions of the Beirut Agreement... staff with authority to issue Certificates or Importation Documents. Audio-visual materials—means: (1...

  11. 22 CFR 61.3 - Certification and authentication criteria.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... AUDIO-VISUAL MATERIALS § 61.3 Certification and authentication criteria. (a) The Department shall certify or authenticate audio-visual materials submitted for review as educational, scientific and... of the material. (b) The Department will not certify or authenticate any audio-visual material...

  12. 22 CFR 61.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS... certification of United States produced audio-visual materials under the provisions of the Beirut Agreement... staff with authority to issue Certificates or Importation Documents. Audio-visual materials—means: (1...

  13. 22 CFR 61.3 - Certification and authentication criteria.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... AUDIO-VISUAL MATERIALS § 61.3 Certification and authentication criteria. (a) The Department shall certify or authenticate audio-visual materials submitted for review as educational, scientific and... of the material. (b) The Department will not certify or authenticate any audio-visual material...

  14. 9 CFR 202.110 - Rule 10: Prehearing conference.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... conference by audio-visual telecommunication: (i) Is necessary to prevent prejudice to a party; (ii) Is... presiding officer determines that a prehearing conference conducted by audio-visual telecommunication would... conducted by audio-visual telecommunication unless the presiding officer determines that conducting the...

  15. 9 CFR 202.110 - Rule 10: Prehearing conference.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... conference by audio-visual telecommunication: (i) Is necessary to prevent prejudice to a party; (ii) Is... presiding officer determines that a prehearing conference conducted by audio-visual telecommunication would... conducted by audio-visual telecommunication unless the presiding officer determines that conducting the...

  16. 22 CFR 61.2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS... certification of United States produced audio-visual materials under the provisions of the Beirut Agreement... staff with authority to issue Certificates or Importation Documents. Audio-visual materials—means: (1...

  17. 7 CFR 1.144 - Judges.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... hearing to be conducted by telephone or audio-visual telecommunication; (10) Require each party to provide... prior to any deposition to be conducted by telephone or audio-visual telecommunication; (11) Require that any hearing to be conducted by telephone or audio-visual telecommunication be conducted at...

  18. 22 CFR 61.3 - Certification and authentication criteria.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... AUDIO-VISUAL MATERIALS § 61.3 Certification and authentication criteria. (a) The Department shall certify or authenticate audio-visual materials submitted for review as educational, scientific and... of the material. (b) The Department will not certify or authenticate any audio-visual material...

  19. Audio-Tutorial Instruction in Medicine.

    ERIC Educational Resources Information Center

    Boyle, Gloria J.; Herrick, Merlyn C.

    This progress report concerns an audio-tutorial approach used at the University of Missouri-Columbia School of Medicine. Instructional techniques such as slide-tape presentations, compressed speech audio tapes, computer-assisted instruction (CAI), motion pictures, television, microfiche, and graphic and printed materials have been implemented,…

  20. Spatial Audio on the Web: Or Why Can't I hear Anything Over There?

    NASA Technical Reports Server (NTRS)

Wenzel, Elizabeth M.; Schlickenmaier, Herbert (Technical Monitor); Johnson, Gerald (Technical Monitor); Frey, Mary Anne (Technical Monitor); Schneider, Victor S. (Technical Monitor); Ahumada, Albert J. (Technical Monitor)

    1997-01-01

Auditory complexity, freedom of movement, and interactivity are not always possible in a "true" virtual environment, much less in web-based audio. However, many of the perceptual and engineering constraints (and frustrations) that researchers, engineers, and listeners have experienced in virtual audio are relevant to spatial audio on the web. My talk will discuss some of these engineering constraints and their perceptual consequences, and attempt to relate these issues to implementation on the web.

  1. Channel Compensation for Speaker Recognition using MAP Adapted PLDA and Denoising DNNs

    DTIC Science & Technology

    2016-06-21

improvement has been the availability of large quantities of speaker-labeled data from telephone recordings. For new data applications, such as audio from...microphone channels to the telephone channel. Audio files were rejected if the alignment process failed. At the end of the process a total of 873...Microphone 01 AT3035 (Audio Technica Studio Mic) 02 MX418S (Shure Gooseneck Mic) 03 Crown PZM Soundgrabber II 04 AT Pro45 (Audio Technica Hanging Mic

  2. A review of lossless audio compression standards and algorithms

    NASA Astrophysics Data System (ADS)

    Muin, Fathiah Abdul; Gunawan, Teddy Surya; Kartiwi, Mira; Elsheikh, Elsheikh M. A.

    2017-09-01

Over the years, lossless audio compression has gained popularity as researchers and businesses have become more aware of the need for better quality and higher storage demand. This paper analyses various lossless audio coding algorithms and standards used and available in the market, focusing on Linear Predictive Coding (LPC) due to its popularity and robustness in audio compression; other prediction methods are nevertheless compared to verify this. Advanced representations of LPC such as LSP decomposition techniques are also discussed within this paper.
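The principle behind LPC-based lossless coding can be shown with the simplest predictor, first-order differencing; the codecs the paper surveys use higher-order predictors plus entropy coding of the residual, so this is only a sketch:

```python
def lpc1_residual(samples):
    """First-order linear prediction: predict each sample from the previous one.
    For smooth audio the residual is small, so it entropy-codes compactly."""
    return [samples[0]] + [samples[i] - samples[i - 1] for i in range(1, len(samples))]

def lpc1_reconstruct(residual):
    """Invert the predictor exactly -- losslessness means bit-perfect reconstruction."""
    out = [residual[0]]
    for r in residual[1:]:
        out.append(out[-1] + r)
    return out
```

The decoder reverses the prediction step exactly, which is what distinguishes lossless coders from the perceptual (lossy) family.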

  3. Concurrent visual and tactile steady-state evoked potentials index allocation of inter-modal attention: a frequency-tagging study.

    PubMed

    Porcu, Emanuele; Keitel, Christian; Müller, Matthias M

    2013-11-27

    We investigated effects of inter-modal attention on concurrent visual and tactile stimulus processing by means of stimulus-driven oscillatory brain responses, so-called steady-state evoked potentials (SSEPs). To this end, we frequency-tagged a visual (7.5 Hz) and a tactile stimulus (20 Hz), and participants were cued, on a trial-by-trial basis, to attend to either vision or touch to perform a detection task in the cued modality. SSEPs driven by the stimulation comprised stimulus frequency-following (i.e. fundamental frequency) as well as frequency-doubling (i.e. second harmonic) responses. We observed that inter-modal attention to vision increased amplitude and phase synchrony of the fundamental frequency component of the visual SSEP, while the second harmonic component showed an increase in phase synchrony only. In contrast, inter-modal attention to touch increased SSEP amplitude of the second harmonic but not of the fundamental frequency, while leaving phase synchrony unaffected in both responses. Our results show that inter-modal attention generally influences concurrent stimulus processing in vision and touch, thus extending earlier audio-visual findings to a visuo-tactile stimulus situation. The pattern of results, however, suggests differences in the neural implementation of inter-modal attentional influences on visual vs. tactile stimulus processing. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
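    Frequency tagging works because the response to each tagged stimulus concentrates at its driving frequency and harmonics, so amplitudes can be read directly off the spectrum. A minimal sketch (illustrative only; the sampling rate, epoch length, and simulated amplitudes are assumptions, not the study's parameters):

```python
import numpy as np

fs = 500.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 4.0, 1.0 / fs)  # one 4-s epoch -> 0.25 Hz resolution
# Simulated SSEP: 7.5 Hz fundamental plus a weaker 15 Hz second harmonic
signal = 2.0 * np.sin(2 * np.pi * 7.5 * t) + 0.5 * np.sin(2 * np.pi * 15.0 * t)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
amps = 2.0 * np.abs(spectrum) / len(signal)  # single-sided amplitude

def amp_at(f):
    """Amplitude at the frequency bin closest to f."""
    return amps[np.argmin(np.abs(freqs - f))]

fundamental = amp_at(7.5)   # recovers the 7.5 Hz component
harmonic = amp_at(15.0)     # recovers its second harmonic
```

Choosing an epoch length that makes the tag frequencies fall on exact FFT bins (here 0.25 Hz resolution) avoids spectral leakage between the fundamental and harmonic responses.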

  4. Responses of Middle-Frequency Modulations in Vocal Fundamental Frequency to Different Vocal Intensities and Auditory Feedback.

    PubMed

    Lee, Shao-Hsuan; Fang, Tuan-Jen; Yu, Jen-Fang; Lee, Guo-She

    2017-09-01

    Auditory feedback can elicit reflexive responses in sustained vocalizations. Among them, the middle-frequency power of F0 (MFP) may provide a sensitive index to assess the subtle changes under different auditory feedback conditions. Phonatory airflow temperature was obtained from 20 healthy adults at two vocal intensity ranges under four auditory feedback conditions: (1) natural auditory feedback (NO); (2) binaural speech noise masking (SN); (3) bone-conducted feedback of self-generated voice (BAF); and (4) SN and BAF simultaneously. The modulations of F0 in low-frequency (0.2-3 Hz), middle-frequency (3-8 Hz), and high-frequency (8-25 Hz) bands were acquired using power spectral analysis of F0. Acoustic and aerodynamic analyses were used to acquire vocal intensity, maximum phonation time (MPT), phonatory airflow, and MFP-based vocal efficiency (MBVE). SN and high vocal intensity decreased MFP and raised MBVE and MPT significantly. BAF showed no effect on MFP but significantly lowered MBVE. Moreover, BAF significantly increased the perception of voice feedback and the sensation of vocal effort. Altered auditory feedback significantly changed the middle-frequency modulations of F0. MFP and MBVE detected these subtle responses of audio-vocal feedback well. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
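    The band-limited modulation power described above can be computed by taking the power spectrum of the F0 contour and summing it inside each band. A rough sketch (function name and parameters are hypothetical, not the study's implementation; assumes NumPy and an evenly sampled F0 track):

```python
import numpy as np

def band_powers(f0, fs, bands=((0.2, 3.0), (3.0, 8.0), (8.0, 25.0))):
    """Power of F0 modulations in low/middle/high frequency bands.
    f0: fundamental-frequency contour (Hz) sampled at fs (Hz)."""
    f0 = f0 - np.mean(f0)                 # remove DC before the FFT
    spec = np.abs(np.fft.rfft(f0)) ** 2 / len(f0)
    freqs = np.fft.rfftfreq(len(f0), d=1.0 / fs)
    return [spec[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]
```

A contour with a 5 Hz wobble, for instance, would concentrate its power in the middle (3-8 Hz) band, the MFP index of the study.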

  5. The brief fatigue inventory: comparison of data collection using a novel audio device with conventional paper questionnaire.

    PubMed

    Pallett, Edward; Rentowl, Patricia; Hanning, Christopher

    2009-09-01

    An Electronic Portable Information Collection audio device (EPIC-Vox) has been developed to deliver questionnaires in spoken-word format via headphones. Patients respond by pressing buttons on the device. The aims of this study were to determine limits of agreement between, and test-retest reliability of, audio (A) and paper (P) versions of the Brief Fatigue Inventory (BFI). Two hundred sixty outpatients (204 male, mean age 55.7 years) attending a sleep disorders clinic were allocated to four groups using block randomization. All completed the BFI twice, separated by a one-minute distracter task. Half the patients completed paper and audio versions, then an evaluation questionnaire. The remainder completed either paper or audio versions to compare test-retest reliability. BFI global scores were analyzed using Bland-Altman methodology. Agreement between categorical fatigue severity scores was determined using Cohen's kappa. The mean (SD) difference between paper and audio scores was -0.04 (0.48). The limits of agreement (mean difference ± 2SD) were -0.93 to +1.00. Test-retest reliability of the paper BFI showed a mean (SD) difference of 0.17 (0.32) between first and second presentations (limits -0.46 to +0.81). For audio, the mean (SD) difference was 0.17 (0.48) (limits -0.79 to +1.14). For agreement between categorical scores, Cohen's kappa = 0.73 for P and A, 0.67 (P at test and retest), and 0.87 (A at test and retest). Evaluation preferences (n=128) were 36.7% audio, 18.0% paper, and 45.3% no preference. A total of 99.2% found EPIC-Vox "easy to use." These data demonstrate that the English audio version of the BFI provides an acceptable alternative to the paper questionnaire.
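    The Bland-Altman limits used here are simply the mean of the paired differences plus and minus twice their standard deviation, as in the study. A minimal sketch (function name hypothetical; toy data, not the study's scores):

```python
import numpy as np

def limits_of_agreement(scores_a, scores_b):
    """Bland-Altman limits: mean difference ± 2 SD of the differences."""
    d = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    mean_diff = d.mean()
    sd = d.std(ddof=1)          # sample SD of the paired differences
    return mean_diff, mean_diff - 2 * sd, mean_diff + 2 * sd

# Toy paired scores (audio vs paper)
mean_diff, lower, upper = limits_of_agreement([1, 2, 3, 4],
                                              [1.1, 1.9, 3.2, 3.8])
```

If roughly 95% of paired differences fall between `lower` and `upper`, and that interval is clinically small, the two administration modes can be treated as interchangeable, which is the paper's conclusion.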

  6. Audio-Visual Stimulation in Conjunction with Functional Electrical Stimulation to Address Upper Limb and Lower Limb Movement Disorder.

    PubMed

    Kumar, Deepesh; Verma, Sunny; Bhattacharya, Sutapa; Lahiri, Uttama

    2016-06-13

    Neurological disorders often manifest themselves as movement deficits on the part of the patient. Conventional rehabilitation exercises used to address these deficits, though powerful, are often monotonous in nature. Adequate audio-visual stimulation can prove to be motivational. In the research presented here we indicate the applicability of audio-visual stimulation in rehabilitation exercises to address at least some of the movement deficits of the upper and lower limbs. In addition to the audio-visual stimulation, we also use Functional Electrical Stimulation (FES). We further show the applicability of FES in conjunction with audio-visual stimulation delivered through a VR-based platform for the grasping skills of patients with movement disorder.

  7. Improvements of ModalMax High-Fidelity Piezoelectric Audio Device

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.

    2005-01-01

    ModalMax audio speakers have been enhanced by innovative means of tailoring the vibration response of thin piezoelectric plates to produce a high-fidelity audio response. The ModalMax audio speakers are 1 mm in thickness. The device completely supplants the need to have a separate driver and speaker cone. ModalMax speakers can perform the same applications of cone speakers, but unlike cone speakers, ModalMax speakers can function in harsh environments such as high humidity or extreme wetness. New design features allow the speakers to be completely submersed in salt water, making them well suited for maritime applications. The sound produced from the ModalMax audio speakers has sound spatial resolution that is readily discernable for headset users.

  8. News video story segmentation method using fusion of audio-visual features

    NASA Astrophysics Data System (ADS)

    Wen, Jun; Wu, Ling-da; Zeng, Pu; Luan, Xi-dao; Xie, Yu-xiang

    2007-11-01

    News story segmentation is an important aspect of news video analysis. This paper presents a method for news video story segmentation. Unlike prior works, which are based on visual feature transforms, the proposed technique uses audio features as a baseline and fuses visual features with them to refine the results. First, it selects silence clips as audio feature candidate points, and selects shot boundaries and anchor shots as two kinds of visual feature candidate points. It then takes the audio feature candidates as cues and develops a fusion method that effectively uses the diverse visual candidates to refine the audio candidates into story boundaries. Experimental results show that this method has high efficiency and adaptability to different kinds of news video.

  9. Robust media processing on programmable power-constrained systems

    NASA Astrophysics Data System (ADS)

    McVeigh, Jeff

    2005-03-01

    To achieve consumer-level quality, media systems must process continuous streams of audio and video data while maintaining exacting tolerances on sampling rate, jitter, synchronization, and latency. While it is relatively straightforward to design fixed-function hardware implementations to satisfy worst-case conditions, there is a growing trend to utilize programmable multi-tasking solutions for media applications. The flexibility of these systems enables support for multiple current and future media formats, which can reduce design costs and time-to-market. This paper provides practical engineering solutions to achieve robust media processing on such systems, with specific attention given to power-constrained platforms. The techniques covered in this article utilize the fundamental concepts of algorithm and software optimization, software/hardware partitioning, stream buffering, hierarchical prioritization, and system resource and power management. A novel enhancement to dynamically adjust processor voltage and frequency based on buffer fullness to reduce system power consumption is examined in detail. The application of these techniques is provided in a case study of a portable video player implementation based on a general-purpose processor running a non real-time operating system that achieves robust playback of synchronized H.264 video and MP3 audio from local storage and streaming over 802.11.
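    The buffer-fullness-driven voltage/frequency adjustment described above can be sketched as a simple stepped policy: the fuller the output buffer, the further decoding is ahead of its playback deadline, and the lower the clock can go. This is an illustrative sketch only (the clock levels and thresholds are made up, not the paper's values):

```python
def next_clock(fullness, levels=(200, 400, 600, 800)):
    """Pick a processor clock (MHz, hypothetical DVFS levels) from
    output-buffer fullness in [0.0, 1.0]. A full buffer means decode
    is ahead of playback, so frequency (and with it voltage) can drop
    to save power; a draining buffer demands full speed to avoid
    underrun, which would cause an audible/visible glitch."""
    if fullness > 0.75:
        return levels[0]   # comfortably ahead: lowest frequency
    if fullness > 0.50:
        return levels[1]
    if fullness > 0.25:
        return levels[2]
    return levels[3]       # near underrun: highest frequency
```

Because dynamic power scales roughly with voltage squared times frequency, running slower-but-longer at reduced voltage typically beats racing to idle at full clock, which is the motivation for tying DVFS to buffer state rather than to instantaneous load.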

  10. Land Mobile Satellite Service (LMSS) channel simulator: An end-to-end hardware simulation and study of the LMSS communications links

    NASA Technical Reports Server (NTRS)

    Salmasi, A. B. (Editor); Springett, J. C.; Sumida, J. T.; Richter, P. H.

    1984-01-01

    The design and implementation of the Land Mobile Satellite Service (LMSS) channel simulator as a facility for end-to-end hardware simulation of the LMSS communications links, primarily with the mobile terminal, is described. A number of studies are reported which show the applications of the channel simulator as a facility for validation and assessment of the LMSS design requirements and capabilities, by performing quantitative measurements and qualitative audio evaluations for various link design parameters and channel impairments under simulated LMSS operating conditions. As a first application, the LMSS channel simulator was used in the evaluation of a system based on the voice processing and modulation (e.g., NBFM with 30 kHz of channel spacing and a 2 kHz rms frequency deviation for average talkers) selected for the Bell System's Advanced Mobile Phone Service (AMPS). The various details of the hardware design, the qualitative audio evaluation techniques, the signal-to-channel-impairment measurement techniques, the justification of the criteria for selecting parameters of the voice processing and modulation methods, and the results of a number of parametric studies are further described.

  11. Multimodal indices to Japanese and French prosodically expressed social affects.

    PubMed

    Rilliard, Albert; Shochi, Takaaki; Martin, Jean-Claude; Erickson, Donna; Aubergé, Véronique

    2009-01-01

    Whereas several studies have explored the expression of emotions, little is known about how the visual and audio channels are combined during production of what we call the more controlled social affects, for example, "attitudinal" expressions. This article presents a perception study of the audiovisual expression of 12 Japanese and 6 French attitudes in order to understand the contribution of the audio and visual modalities to affective communication. The relative importance of each modality in the perceptual decoding of the expressions of four speakers is analyzed as a first step towards a deeper comprehension of their influence on the expression of social affects. Then, the audiovisual productions of two speakers (one for each language) are analyzed acoustically (F0, duration, and intensity) and visually (in terms of Action Units), in order to relate the objective parameters to listeners' perception of these social affects. The most pertinent objective features, either acoustic or visual, are then discussed in a bilingual perspective: for example, the relative influence of fundamental frequency on attitudinal expression in both languages is discussed, and the importance of a certain aspect of the voice quality dimension in Japanese is underlined.

  12. Sculpting 3D worlds with music: advanced texturing techniques

    NASA Astrophysics Data System (ADS)

    Greuel, Christian; Bolas, Mark T.; Bolas, Niko; McDowall, Ian E.

    1996-04-01

    Sound within the virtual environment is often considered to be secondary to the graphics. In a typical scenario, either audio cues are locally associated with specific 3D objects or a general aural ambiance is supplied in order to alleviate the sterility of an artificial experience. This paper discusses a completely different approach, in which cues are extracted from live or recorded music in order to create geometry and control object behaviors within a computer-generated environment. Advanced texturing techniques used to generate complex stereoscopic images are also discussed. By analyzing music for standard audio characteristics such as rhythm and frequency, information is extracted and repackaged for processing. With the Soundsculpt Toolkit, this data is mapped onto individual objects within the virtual environment, along with one or more predetermined behaviors. Mapping decisions are implemented with a user-definable schedule and are based on the aesthetic requirements of directors and designers. This provides for visually active, immersive environments in which virtual objects behave in real-time correlation with the music. The resulting music-driven virtual reality opens up several possibilities for new types of artistic and entertainment experiences, such as fully immersive 3D "music videos" and interactive landscapes for live performance.
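    Extracting per-frame loudness and dominant frequency, the kind of rhythm/frequency cues the abstract mentions, could look like the following sketch (a hypothetical illustration, not the Soundsculpt Toolkit's actual analysis; assumes NumPy and a mono signal):

```python
import numpy as np

def frame_cues(audio, fs, frame_len=1024):
    """Per-frame (RMS loudness, dominant frequency) pairs -- cues that
    could then be mapped onto object scale, colour, or behaviour."""
    cues = []
    for start in range(0, len(audio) - frame_len + 1, frame_len):
        frame = audio[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))            # loudness proxy
        spec = np.abs(np.fft.rfft(frame))
        dom = np.fft.rfftfreq(frame_len, d=1.0 / fs)[np.argmax(spec)]
        cues.append((rms, dom))
    return cues
```

Each (loudness, frequency) pair can then be fed through a user-defined mapping schedule, e.g. loudness driving an object's scale each frame, so the scene animates in real-time correlation with the music.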

  13. Effects of background noise on acoustic characteristics of Bengalese finch songs.

    PubMed

    Shiba, Shintaro; Okanoya, Kazuo; Tachibana, Ryosuke O

    2016-12-01

    Online regulation of vocalization in response to auditory feedback is one of the essential issues for vocal communication. One such audio-vocal interaction is the Lombard effect, an involuntary increase in vocal amplitude in response to the presence of background noise. Along with vocal amplitude, other acoustic characteristics, including fundamental frequency (F0), also change in some species. Bengalese finches (Lonchura striata var. domestica) are a suitable model for comparative, ethological, and neuroscientific studies of audio-vocal interaction because they require real-time auditory feedback of their own songs to maintain normal singing. Here, the changes in amplitude and F0 under noise presentation are demonstrated, with a focus on the distinct song elements (i.e., notes) of Bengalese finches. To accurately analyze these acoustic characteristics, two different bandpass-filtered noises at two levels of sound intensity were used. The results confirmed that the Lombard effect occurs at the note level of Bengalese finch song. Further, individually specific modes of change in F0 are shown. These behavioral changes suggest that the vocal control mechanisms based on auditory feedback have a predictable effect on amplitude, but complex spectral effects on individual note production.

  14. Automatic lip reading by using multimodal visual features

    NASA Astrophysics Data System (ADS)

    Takahashi, Shohei; Ohya, Jun

    2013-12-01

    Speech recognition has been researched for a long time, though it does not work well in noisy places such as in a car or on a train. In addition, people with hearing impairments or difficulties in hearing cannot receive the benefits of speech recognition. To recognize speech automatically, visual information is also important: people understand speech from not only audio information but also visual information such as temporal changes in lip shape. A vision-based speech recognition method could work well in noisy places and could also be useful for people with hearing disabilities. In this paper, we propose an automatic lip-reading method for recognizing speech using multimodal visual information, without using any audio information. First, the ASM (Active Shape Model) is used to track and detect the face and lips in a video sequence. Second, shape, optical flow, and spatial-frequency features are extracted from the lips detected by the ASM. Next, the extracted multimodal features are ordered chronologically and a Support Vector Machine is trained to classify the spoken words. Experiments on classifying several words show promising results for the proposed method.
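    Before per-frame lip features can reach a word classifier such as an SVM, they must be ordered chronologically and brought to a fixed length, since spoken words span different numbers of video frames. A minimal sketch of that step (function name and frame count are assumptions, not the paper's implementation; the SVM training itself is omitted):

```python
import numpy as np

def word_feature_vector(frames, n_frames=20):
    """Concatenate per-frame lip features (shape, optical flow,
    spatial frequency -- here just generic per-frame vectors) in
    chronological order, resampling each utterance to n_frames so
    every word yields an equal-length vector a classifier can consume.
    frames: array of shape (T, n_features), one row per video frame."""
    frames = np.asarray(frames, dtype=float)
    # Pick n_frames evenly spaced frame indices across the utterance
    idx = np.linspace(0, len(frames) - 1, n_frames).round().astype(int)
    return frames[idx].ravel()   # shape: (n_frames * n_features,)
```

The resulting fixed-length vectors, one per utterance, can be passed directly to any standard classifier for supervised word recognition.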

  15. Identification of the geothermal system using 1-D audio-magnetotelluric inversion in Lamongan volcano field, East Java, Indonesia

    NASA Astrophysics Data System (ADS)

    Ilham, N.; Niasari, S. W.

    2018-04-01

    Tiris village, Probolinggo, East Java, is one of the geothermal potential areas in Indonesia. This area is located on a valley flank of the Mount Lamongan and Argopuro volcanic complex. This research aimed to identify the geothermal system in the Tiris area, particularly the fluid pathways. The geothermal potential can be seen from the presence of warm springs with temperatures ranging from 35-45°C. The warm spring locations are aligned in the same orientation as the major fault structure in the area, which shows a dominant northwest-southeast orientation. We used audio-magnetotelluric data in the frequency range of 10 Hz to 92 kHz from a total of 6 magnetotelluric sites. From the data analysis, most of the data were 2-D with a north-south geo-electrical strike. We performed 1-D inversion using a Newton algorithm. The inversion resulted in a low-resistivity anomaly that corresponds to the Lamongan lavas. Additionally, the depth of the resistor differs between the area to the west (i.e., 75 m) and to the east (i.e., 25 m). This indicates that there is a fault around the aligned maars (e.g., Ranu Air).

  16. Digital Audio: A Sound Design Element.

    ERIC Educational Resources Information Center

    Barron, Ann; Varnadoe, Susan

    1992-01-01

    Discussion of incorporating audio into videodiscs for multimedia educational applications highlights a project developed for the Navy that used digital audio in an interactive video delivery system (IVDS) for training sonar operators. Storage constraints with videodiscs are explained, design requirements for the IVDS are described, and production…

  17. 22 CFR 61.1 - Purpose.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.1... educational, scientific and cultural audio-visual materials between nations by providing favorable import... issuance or authentication of a certificate that the audio-visual material for which favorable treatment is...

  18. 47 CFR 73.561 - Operating schedule; time sharing.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... in triplicate by each licensee with the Commission, Attention: Audio Division, Media Bureau, prior to... the Commission in Washington, DC, Attention: Audio Division, Media Bureau. (d) In the event that... provided that notification is sent to the Commission in Washington, DC, Attention: Audio Division, Media...

  19. 78 FR 36683 - Radio Broadcasting Services; Summit, Mississippi

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-19

    .... SUMMARY: In this document, the Audio Division, at the request of Bowen Broadcasting, allots FM Channel... Audio Division reclassifies Station WQUE-FM, New Orleans, Louisiana, to specify operation on FM Channel... Communications Commission. Nazifa Sawez, Assistant Chief, Audio Division, Media Bureau. For the reasons discussed...

  20. 22 CFR 61.1 - Purpose.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.1... educational, scientific and cultural audio-visual materials between nations by providing favorable import... issuance or authentication of a certificate that the audio-visual material for which favorable treatment is...
