Sample records for controlled source audio

  1. Entertainment and Pacification System For Car Seat

    NASA Technical Reports Server (NTRS)

    Elrod, Susan Vinz (Inventor); Dabney, Richard W. (Inventor)

    2006-01-01

    An entertainment and pacification system for use with a child car seat has speakers mounted in the child car seat with a plurality of audio sources and an anti-noise audio system coupled to the child car seat. A controllable switching system provides for, at any given time, the selective activation of i) one of the audio sources such that the audio signal generated thereby is coupled to one or more of the speakers, and ii) the anti-noise audio system such that an ambient-noise-canceling audio signal generated thereby is coupled to one or more of the speakers. The controllable switching system can receive commands generated at one of first controls located at the child car seat and second controls located remotely with respect to the child car seat with commands generated by the second controls overriding commands generated by the first controls.

  2. Perceptually controlled doping for audio source separation

    NASA Astrophysics Data System (ADS)

    Mahé, Gaël; Nadalin, Everton Z.; Suyama, Ricardo; Romano, João MT

    2014-12-01

    The separation of an underdetermined audio mixture can be performed through sparse component analysis (SCA), which relies, however, on the strong hypothesis that the source signals are sparse in some domain. To overcome this difficulty when the original sources are available before the mixing process, informed source separation (ISS) embeds a watermark in the mixture whose information can aid a subsequent separation. Though powerful, this technique is generally specific to a particular mixing setup and may be compromised by an additional bitrate-compression stage. Thus, instead of watermarking, we propose a `doping' method that makes the time-frequency representation of each source sparser while preserving its audio quality. This method is based on an iterative decrease of the distance between the distribution of the signal and a target sparse distribution, under a perceptual constraint. We aim to show that the proposed approach is robust to audio coding and that the use of the sparsified signals improves the source separation, in comparison with the original sources. In this work, the analysis is restricted to instantaneous mixtures and focused on voice sources.
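    The `doping' idea turns on making time-frequency coefficients sparser; the abstract does not specify the sparsity measure, but one common choice is the l1/l2 ratio. A minimal numpy sketch of that measure (illustrative only, not the authors' algorithm):

```python
import numpy as np

def l1_l2_sparsity(x):
    """l1/l2 ratio of a coefficient vector: smaller means sparser.
    It ranges from 1 (all energy in one coefficient) to sqrt(len(x))."""
    x = np.asarray(x, dtype=float)
    return np.sum(np.abs(x)) / np.sqrt(np.sum(x ** 2))

# Few large coefficients (sparse) vs. energy spread evenly (dense).
sparse_coeffs = [10.0, 0.0, 0.0, 0.0]
dense_coeffs = [5.0, 5.0, 5.0, 5.0]

print(l1_l2_sparsity(sparse_coeffs))  # 1.0
print(l1_l2_sparsity(dense_coeffs))   # 2.0
```

    Decreasing this ratio over the signal's STFT coefficients, under a perceptual constraint, is the kind of objective the doping procedure pursues.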

  3. Space Shuttle Orbiter audio subsystem

    NASA Technical Reports Server (NTRS)

    Stewart, C. H.

    1978-01-01

    The selection of the audio multiplex control configuration for the Space Shuttle Orbiter audio subsystem is discussed, with special attention given to the evaluation criteria of cost, weight, and complexity. The specifications and design of the subsystem are described, with detail given to the configurations of the audio terminal unit (ATU) and audio central control unit (ACCU). The audio input from the ACCU, at a signal level of -12.2 to 14.8 dBV (nominal range) at 1 kHz, was found to have a balanced source impedance and a balanced load impedance of 6000 ± 600 ohms at 1 kHz, dc isolated. The Lyndon B. Johnson Space Center (JSC) electroacoustic test laboratory, an audio engineering facility consisting of a collection of acoustic test chambers, analyzed problems of speaker and headset performance, multiplexed control data coupled with audio channels, and the effects of Orbiter cabin acoustics on the operational performance of voice communications. This work allows technical management and project engineering to address key constraining issues that affect subsystem development, such as identifying design deficiencies of the headset interface unit and assessing voice-communications performance in the Orbiter cabin.

  4. FIRRE command and control station (C2)

    NASA Astrophysics Data System (ADS)

    Laird, R. T.; Kramer, T. A.; Cruickshanks, J. R.; Curd, K. M.; Thomas, K. M.; Moneyhun, J.

    2006-05-01

    The Family of Integrated Rapid Response Equipment (FIRRE) is an advanced technology demonstration program intended to develop a family of affordable, scalable, modular, and logistically supportable unmanned systems to meet urgent operational force protection needs and requirements worldwide. The near-term goal is to provide the best available unmanned ground systems to the warfighter in Iraq and Afghanistan. The overarching long-term goal is to develop a fully integrated, layered force protection system of systems for our forward deployed forces that is networked with the future force C4ISR systems architecture. The intent of the FIRRE program is to reduce manpower requirements, enhance force protection capabilities, and reduce casualties through the use of unmanned systems. FIRRE is sponsored by the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics (OUSD AT&L), and is managed by the Product Manager, Force Protection Systems (PM-FPS). The FIRRE Command and Control (C2) Station supports two operators, hosts the Joint Battlespace Command and Control Software for Manned and Unmanned Assets (JBC2S), and will be able to host Mission Planning and Rehearsal (MPR) software. The C2 Station consists of an M1152 HMMWV fitted with an S-788 TYPE I shelter. The C2 Station employs five 24" LCD monitors for display of JBC2S software [1], MPR software, and live video feeds from unmanned systems. An audio distribution system allows each operator to select between various audio sources, including: AN/PRC-117F tactical radio (SINCGARS compatible), audio prompts from JBC2S software, audio from unmanned systems, audio from other operators, and audio from external sources such as an intercom in an adjacent Tactical Operations Center (TOC). A power distribution system provides battery backup for momentary outages. The Ethernet network, audio distribution system, and audio/video feeds are available for use outside the C2 Station.

  5. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high-speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is the presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion, so the perceived location of each sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research, and virtual reality development.

  6. Emergent literacy in print and electronic contexts: The influence of book type, narration source, and attention.

    PubMed

    O'Toole, Kathryn J; Kannass, Kathleen N

    2018-09-01

    Young children learn from traditional print books, but there has been no direct comparison of their learning from print books and tablet e-books while controlling for narration source. The current project used a between-subjects design and examined how 4-year-olds (N = 100) learned words and story content from a print book read aloud by a live adult, a print book narrated by an audio device, an e-book read aloud by a live adult, and an e-book narrated by an audio device. Attention to the book and prior experience with tablet e-books were also measured and included in analyses. When controlling for vocabulary, the overall pattern of results revealed that children learned more words from the e-book and from the audio narrator, but story comprehension did not differ as a function of condition. Attention predicted learning, but only in some print book contexts, and significant effects of prior experience did not emerge. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. Mixing console design for telematic applications in live performance and remote recording

    NASA Astrophysics Data System (ADS)

    Samson, David J.

    The development of a telematic mixing console addresses audio engineers' need for a fully integrated system architecture that improves efficiency and control for applications such as distributed performance and remote recording. Current systems used in state-of-the-art telematic performance rely on software-based interconnections with complex routing schemes that offer minimal flexibility or control over the key parameters needed to achieve a professional workflow. The lack of hardware-based control in the current model limits the full potential of both the engineer and the system. The new architecture provides a full-featured platform that, alongside customary features, integrates (1) surround panning capability for motorized binaural manikin heads, as well as for all sources in the included auralization module, (2) self-labelling channel strips, responsive to change at all remote sites, (3) onboard round-trip latency monitoring, (4) synchronized remote audio recording and monitoring, and (5) flexible routing. These features, combined with robust parameter automation and precise analog control, will raise the standard for telematic systems as well as advance the development of networked audio systems for both the research and professional audio markets.

  8. Advances in audio source separation and multisource audio content retrieval

    NASA Astrophysics Data System (ADS)

    Vincent, Emmanuel

    2012-06-01

    Audio source separation aims to extract the signals of individual sound sources from a given recording. In this paper, we review three recent advances which improve the robustness of source separation in real-world challenging scenarios and enable its use for multisource content retrieval tasks, such as automatic speech recognition (ASR) or acoustic event detection (AED) in noisy environments. We present a Flexible Audio Source Separation Toolkit (FASST) and discuss its advantages compared to earlier approaches such as independent component analysis (ICA) and sparse component analysis (SCA). We explain how cues as diverse as harmonicity, spectral envelope, temporal fine structure or spatial location can be jointly exploited by this toolkit. We subsequently present the uncertainty decoding (UD) framework for the integration of audio source separation and audio content retrieval. We show how the uncertainty about the separated source signals can be accurately estimated and propagated to the features. Finally, we explain how this uncertainty can be efficiently exploited by a classifier, both at the training and the decoding stage. We illustrate the resulting performance improvements in terms of speech separation quality and speaker recognition accuracy.
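    FASST itself is not sketched in the abstract; as a hint of the model-based family it belongs to, a Wiener-style soft time-frequency mask is a standard building block of such separators. A toy numpy sketch with made-up per-bin spectra (not FASST's actual API):

```python
import numpy as np

# Toy magnitude spectra for two sources on the same time-frequency grid
# (in a model-based separator these come from fitted spectral models,
# not from ground truth as here).
S1 = np.array([4.0, 0.5, 3.0])   # e.g. speech-dominated bins
S2 = np.array([0.5, 2.0, 1.0])   # e.g. noise-dominated bins
mix = S1 + S2                    # instantaneous mixture, per bin

# Wiener-style soft mask: each source's share of the mixture power.
eps = 1e-12
mask1 = S1 ** 2 / (S1 ** 2 + S2 ** 2 + eps)
est1 = mask1 * mix               # estimate of source 1 from the mixture

print(np.round(mask1, 3))
```

    The spread of the mask values away from 0 and 1 is one natural handle for the uncertainty estimates that the uncertainty decoding framework propagates to the features.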

  9. Point focusing using loudspeaker arrays from the perspective of optimal beamforming.

    PubMed

    Bai, Mingsian R; Hsieh, Yu-Hao

    2015-06-01

    Sound focusing creates a concentrated acoustic field in the region surrounded by a loudspeaker array. This problem was tackled in previous research via the Helmholtz integral approach, brightness control, acoustic contrast control, etc. In this paper, the same problem is revisited from the perspective of beamforming. A source array model is reformulated in terms of the steering matrix between the source and the field points, which lends itself to the use of beamforming algorithms such as minimum variance distortionless response (MVDR) and linearly constrained minimum variance (LCMV), originally intended for sensor arrays. The beamforming methods are compared with the conventional methods in terms of beam pattern, directional index, and control effort. Objective tests are conducted to assess audio quality using perceptual evaluation of audio quality (PEAQ). Experiments on the produced sound field and listening tests are conducted in a listening room, with results processed using analysis of variance and regression analysis. In contrast to the conventional energy-based methods, the results show that the proposed methods are phase-sensitive, owing to the distortionless constraint in formulating the array filters, which helps enhance audio quality and focusing performance.
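    The distortionless constraint the abstract invokes is captured by the classical MVDR weight formula w = R⁻¹d / (dᴴR⁻¹d), which minimizes output power subject to unit response at the focus point. A numpy sketch with a toy steering vector and correlation matrix (not the paper's array model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical steering vector d (array response at the focus point) and a
# Hermitian positive-definite spatial correlation matrix R (toy values).
N = 4
d = np.exp(1j * 2 * np.pi * rng.random(N))
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
R = A @ A.conj().T + N * np.eye(N)

# MVDR weights: minimize w^H R w subject to w^H d = 1 (distortionless).
Rinv_d = np.linalg.solve(R, d)
w = Rinv_d / (d.conj() @ Rinv_d)

print(abs(w.conj() @ d))  # the constraint w^H d = 1 holds by construction
```

    LCMV generalizes this by stacking several such linear constraints (e.g., nulls at chosen field points) into a constraint matrix.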

  10. pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis.

    PubMed

    Giannakopoulos, Theodoros

    2015-01-01

    Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automation and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures including: feature extraction, classification of audio signals, supervised and unsupervised segmentation, and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has already been used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation, and health applications (e.g. monitoring eating habits). The feedback provided from all these particular audio applications has led to practical enhancements of the library.
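    Rather than reproducing pyAudioAnalysis's own API, a self-contained numpy sketch of two of the simplest short-term features such a library computes (frame energy and zero-crossing rate) illustrates the feature-extraction step; the window and step sizes below are illustrative defaults:

```python
import numpy as np

def short_term_features(signal, fs, win=0.050, step=0.025):
    """Frame-level energy and zero-crossing rate, two of the simplest
    short-term features used for audio classification and segmentation."""
    w, s = int(win * fs), int(step * fs)
    feats = []
    for start in range(0, len(signal) - w + 1, s):
        frame = signal[start:start + w]
        energy = np.mean(frame ** 2)                         # mean power
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0  # crossings/sample
        feats.append((energy, zcr))
    return np.array(feats)

fs = 8000
t = np.arange(fs) / fs                     # 1 s of audio
tone = 0.5 * np.sin(2 * np.pi * 440 * t)   # synthetic 440 Hz tone
feats = short_term_features(tone, fs)
print(feats.shape)                         # (n_frames, 2)
```

    Classifiers and segmenters then operate on sequences of such frame-level feature vectors.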

  11. pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis

    PubMed Central

    Giannakopoulos, Theodoros

    2015-01-01

    Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automation and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures including: feature extraction, classification of audio signals, supervised and unsupervised segmentation, and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has already been used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation, and health applications (e.g. monitoring eating habits). The feedback provided from all these particular audio applications has led to practical enhancements of the library. PMID:26656189

  12. Research on the forward modeling of controlled-source audio-frequency magnetotellurics in three-dimensional axial anisotropic media

    NASA Astrophysics Data System (ADS)

    Wang, Kunpeng; Tan, Handong

    2017-11-01

    Controlled-source audio-frequency magnetotellurics (CSAMT) has developed rapidly in recent years and is widely used in mineral and oil resource exploration as well as other fields. Current theory, numerical simulation, and inversion research are based on the assumption that the underground media have isotropic resistivity. However, a large number of rock and mineral physical-property tests show that the resistivity of underground media is generally anisotropic. With the increasing application of CSAMT, the accuracy demanded of practical exploration of complex targets continues to increase, so the question of how to evaluate the influence of anisotropic resistivity on the CSAMT response is becoming important. To meet the demand for CSAMT response research in resistivity-anisotropic media, this paper examines the CSAMT electric-field equations and derives and implements a three-dimensional (3D) staggered-grid finite-difference numerical simulation method for CSAMT with axially anisotropic resistivity. By building a two-dimensional (2D) resistivity-anisotropy geoelectric model, we validate the 3D computation result against the result of a 2D finite-element controlled-source electromagnetic method (CSEM) resistivity-anisotropy program. By simulating a 3D axially anisotropic geoelectric model, we compare and analyze the responses of the equatorial configuration, the axial configuration, two oblique sources, and a tensor source. The research shows that the tensor source is suitable for CSAMT to recognize the anisotropic effect of underground structure.

  13. WebGL and web audio software lightweight components for multimedia education

    NASA Astrophysics Data System (ADS)

    Chang, Xin; Yuksel, Kivanc; Skarbek, Władysław

    2017-08-01

    The paper presents the results of our recent work on the development of DC2, a contemporary computing platform for multimedia education using WebGL and Web Audio, the W3C standards. Using the literate programming paradigm, the WEBSA educational tools were developed. They offer the user (student) access to an expandable collection of WebGL shaders and Web Audio scripts. The unique feature of DC2 is the option of literate programming, offered to both the author and the reader in order to improve the interactivity of lightweight WebGL and Web Audio components. For instance, users can define source audio nodes (including synthetic sources), destination audio nodes, and nodes for audio processing such as sound-wave shaping, spectral band filtering, and convolution-based modification. In the case of WebGL, besides classic graphics effects based on mesh and fractal definitions, novel image processing and analysis by shaders is offered, such as nonlinear filtering, histograms of gradients, and Bayesian classifiers.

  14. 47 CFR 11.54 - EAS operation during a National Level emergency.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... emergency, EAS Participants may transmit in lieu of the EAS audio feed an audio feed of the President's voice message from an alternative source, such as a broadcast network audio feed. [77 FR 16705, Mar. 22...

  15. 47 CFR 11.54 - EAS operation during a National Level emergency.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... emergency, EAS Participants may transmit in lieu of the EAS audio feed an audio feed of the President's voice message from an alternative source, such as a broadcast network audio feed. [77 FR 16705, Mar. 22...

  16. 47 CFR 11.54 - EAS operation during a National Level emergency.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... emergency, EAS Participants may transmit in lieu of the EAS audio feed an audio feed of the President's voice message from an alternative source, such as a broadcast network audio feed. [77 FR 16705, Mar. 22...

  17. Audio visual speech source separation via improved context dependent association model

    NASA Astrophysics Data System (ADS)

    Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz

    2014-12-01

    In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on the mean square error (MSE) between the estimated and target visual parameters. This function is minimized to estimate the de-mixing vector/filters that separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We have also proposed a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of the source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to the existing GMM-based model, and the proposed AVSS algorithm improves speech separation quality compared to reference ICA- and AVSS-based methods.

  18. Ad Hoc Selection of Voice over Internet Streams

    NASA Technical Reports Server (NTRS)

    Macha, Mitchell G. (Inventor); Bullock, John T. (Inventor)

    2014-01-01

    A method and apparatus for a communication system technique involving ad hoc selection of at least two audio streams is provided. Each of the at least two audio streams is a packetized version of an audio source. A data connection exists between a server and a client where a transport protocol actively propagates the at least two audio streams from the server to the client. Furthermore, software instructions executable on the client indicate a presence of the at least two audio streams, allow selection of at least one of the at least two audio streams, and direct the selected at least one of the at least two audio streams for audio playback.

  19. Ad Hoc Selection of Voice over Internet Streams

    NASA Technical Reports Server (NTRS)

    Macha, Mitchell G. (Inventor); Bullock, John T. (Inventor)

    2008-01-01

    A method and apparatus for a communication system technique involving ad hoc selection of at least two audio streams is provided. Each of the at least two audio streams is a packetized version of an audio source. A data connection exists between a server and a client where a transport protocol actively propagates the at least two audio streams from the server to the client. Furthermore, software instructions executable on the client indicate a presence of the at least two audio streams, allow selection of at least one of the at least two audio streams, and direct the selected at least one of the at least two audio streams for audio playback.

  20. Virtual environment display for a 3D audio room simulation

    NASA Astrophysics Data System (ADS)

    Chapin, William L.; Foster, Scott

    1992-06-01

    Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with four audio Convolvotrons™ by Crystal River Engineering and coupled to the listener with a Polhemus Isotrak™ tracking the listener's head position and orientation, and stereo headphones returning binaural sound, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment, complementing the acoustic model and specified to: 1) allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology transfer demonstration. The hardware/software designs of two demonstration systems, one installed and one portable, are discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted, wide-angle, stereo-optic display, separate head and pointer electro-magnetic position trackers, a heterogeneous parallel graphics processing system, and object-oriented C++ program code.

  1. Audio CAPTCHA for SIP-Based VoIP

    NASA Astrophysics Data System (ADS)

    Soupionis, Yannis; Tountas, George; Gritzalis, Dimitris

    Voice over IP (VoIP) introduces new ways of communication, while utilizing existing data networks to provide inexpensive voice communications worldwide as a promising alternative to traditional PSTN telephony. SPam over Internet Telephony (SPIT) is one potential source of future annoyance in VoIP. A common way to launch a SPIT attack is the use of an automated procedure (bot), which generates calls and produces audio advertisements. In this paper, our goal is to design an appropriate CAPTCHA to fight such bots. We focus on and develop an audio CAPTCHA, as the audio format is more suitable for VoIP environments, and we implement it in a SIP-based VoIP environment. Furthermore, we suggest and evaluate the specific attributes that an audio CAPTCHA should incorporate in order to be effective, and test it against an open-source bot implementation.

  2. Multi-channel spatialization systems for audio signals

    NASA Technical Reports Server (NTRS)

    Begault, Durand R. (Inventor)

    1993-01-01

    Synthetic head related transfer functions (HRTF's) for imposing reprogrammable spatial cues to a plurality of audio input signals included, for example, in multiple narrow-band audio communications signals received simultaneously are generated and stored in interchangeable programmable read only memories (PROM's) which store both head related transfer function impulse response data and source positional information for a plurality of desired virtual source locations. The analog inputs of the audio signals are filtered and converted to digital signals from which synthetic head related transfer functions are generated in the form of linear phase finite impulse response filters. The outputs of the impulse response filters are subsequently reconverted to analog signals, filtered, mixed, and fed to a pair of headphones.
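    The spatialization the patent describes reduces, per virtual source position, to FIR filtering of the input with each ear's head-related impulse response and routing the two outputs to the headphone channels. A minimal numpy sketch with short made-up HRIRs (real measured HRIRs are far longer and position-dependent, stored per the patent in PROMs):

```python
import numpy as np

# Hypothetical head-related impulse responses (HRIRs) for one virtual source
# position: crudely, an interaural delay plus attenuation for a source on
# the listener's left. Illustrative taps only.
hrir_left = np.array([1.0, 0.3, 0.1])
hrir_right = np.array([0.0, 0.6, 0.2])   # delayed and attenuated

mono = np.random.default_rng(1).standard_normal(1000)  # mono input signal

# Spatialization = FIR filtering of the mono input with each ear's HRIR;
# the two outputs feed the left and right headphone channels.
left = np.convolve(mono, hrir_left)
right = np.convolve(mono, hrir_right)

print(left.shape, right.shape)  # each len(mono) + len(hrir) - 1 samples
```

    Swapping in HRIR pairs for other stored positions relocates the perceived source, which is the reprogrammable-spatial-cue mechanism of the patent.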

  3. Multi-channel spatialization system for audio signals

    NASA Technical Reports Server (NTRS)

    Begault, Durand R. (Inventor)

    1995-01-01

    Synthetic head related transfer functions (HRTF's) for imposing reprogrammable spatial cues to a plurality of audio input signals included, for example, in multiple narrow-band audio communications signals received simultaneously are generated and stored in interchangeable programmable read only memories (PROM's) which store both head related transfer function impulse response data and source positional information for a plurality of desired virtual source locations. The analog inputs of the audio signals are filtered and converted to digital signals from which synthetic head related transfer functions are generated in the form of linear phase finite impulse response filters. The outputs of the impulse response filters are subsequently reconverted to analog signals, filtered, mixed, and fed to a pair of headphones.

  4. CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset

    PubMed Central

    Cao, Houwei; Cooper, David G.; Keutmann, Michael K.; Gur, Ruben C.; Nenkova, Ani; Verma, Ragini

    2014-01-01

    People convey their emotional state in their face and voice. We present an audio-visual data set uniquely suited for the study of multi-modal emotion expression and perception. The data set consists of facial and vocal emotional expressions in sentences spoken in a range of basic emotional states (happy, sad, anger, fear, disgust, and neutral). 7,442 clips of 91 actors with diverse ethnic backgrounds were rated by multiple raters in three modalities: audio, visual, and audio-visual. Categorical emotion labels and real-valued intensity values for the perceived emotion were collected via crowd-sourcing from 2,443 raters. Human recognition of the intended emotion for the audio-only, visual-only, and audio-visual data is 40.9%, 58.2%, and 63.6%, respectively. Recognition rates are highest for neutral, followed by happy, anger, disgust, fear, and sad. Average intensity levels of emotion are rated highest for visual-only perception. The accurate recognition of disgust and fear requires simultaneous audio-visual cues, while anger and happiness can be well recognized based on evidence from a single modality. The large dataset we introduce can be used to probe other questions concerning the audio-visual perception of emotion. PMID:25653738

  5. StreaMorph: A Case for Synthesizing Energy-Efficient Adaptive Programs Using High-Level Abstractions

    DTIC Science & Technology

    2013-08-12

    technique when switching from using eight cores to one core. 1. Introduction: Real-time streaming of media data is growing in popularity. This includes...both capture and processing of real-time video and audio, and delivery of video and audio from servers; recent usage numbers show over 800 million...source of data, when that source is a real-time source, and it is generally not necessary to get ahead of the sink. Even with real-time sources and sinks

  6. Digitizing the Past: A History Book on CD-ROM.

    ERIC Educational Resources Information Center

    Rosenzweig, Roy

    1993-01-01

    Describes the development of an American history book with interactive CD-ROM technology that includes text, pictures, graphs and charts, audio, and film. Topics discussed include the use of HyperCard software to link information; access to primary sources of information; greater student control over learning; and the concept of collaborative…

  7. Regularized inversion of controlled source audio-frequency magnetotelluric data in horizontally layered transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Zhou, Jianmei; Wang, Jianxun; Shang, Qinglong; Wang, Hongnian; Yin, Changchun

    2014-04-01

    We present an algorithm for inverting controlled-source audio-frequency magnetotelluric (CSAMT) data in horizontally layered transversely isotropic (TI) media. The popular inversion approach parameterizes the media into a large number of layers of fixed thickness and reconstructs only the conductivities (e.g., Occam's inversion), which does not enable recovery of sharp interfaces between layers. In this paper, we simultaneously reconstruct all the model parameters: the horizontal and vertical conductivities as well as the layer depths. Applying the perturbation principle and the dyadic Green's function in TI media, we derive analytic expressions for the Fréchet derivatives of the CSAMT responses with respect to all the model parameters in the form of Sommerfeld integrals. A regularized iterative inversion method is established to simultaneously reconstruct all the model parameters. Numerical results show that including the depths of the layer interfaces among the inverted parameters significantly improves the inversion: the algorithm can not only reconstruct the sharp interfaces between layers but also obtain conductivities close to the true values.
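    The paper's Fréchet derivatives take the form of Sommerfeld integrals; setting the physics aside, the update at the heart of such a regularized iterative inversion is the familiar damped least-squares step, sketched here on a toy linear problem (the Jacobian and model values are illustrative stand-ins, not CSAMT quantities):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linearized problem: J plays the role of the Jacobian (matrix of Frechet
# derivatives of the forward response w.r.t. model parameters), r the data
# residual for a zero starting model.
J = rng.standard_normal((20, 5))
m_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
r = J @ m_true

# Regularized (Tikhonov-damped) least-squares update:
#   dm = (J^T J + lam * I)^-1 J^T r
lam = 1e-3
dm = np.linalg.solve(J.T @ J + lam * np.eye(J.shape[1]), J.T @ r)

print(np.round(dm, 3))  # close to m_true for small lam
```

    In the actual nonlinear inversion this step is repeated, with J and r recomputed at each iterate and lam controlling the trade-off between data fit and model stability.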

  8. Blind speech separation system for humanoid robot with FastICA for audio filtering and separation

    NASA Astrophysics Data System (ADS)

    Budiharto, Widodo; Santoso Gunawan, Alexander Agung

    2016-07-01

    Nowadays, there are many developments in building intelligent humanoid robots, mainly to handle voice and images. In this research, we propose a blind speech separation system using FastICA for audio filtering and separation that can be used in education or entertainment. Our main problem is to separate multiple speech sources and to filter out irrelevant noise. After the speech separation step, the results are integrated with our previous speech and face recognition system, which is based on the Bioloid GP robot with a Raspberry Pi 2 as controller. The experimental results show that the accuracy of our blind speech separation system is about 88% in command and query recognition cases.
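    FastICA itself can be sketched in a few lines of numpy: whiten the mixtures, then run the tanh fixed-point iteration with deflation. The sources and mixing matrix below are toy stand-ins (a square wave and a sine), not the robot's audio:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
t = np.linspace(0, 1, n)

# Two independent toy sources and an instantaneous 2x2 mixture.
s1 = np.sign(np.sin(2 * np.pi * 7 * t))   # square wave
s2 = np.sin(2 * np.pi * 13 * t)           # sine
S = np.vstack([s1, s2])
A = np.array([[0.8, 0.4], [0.3, 0.9]])    # mixing matrix
X = A @ S

# Whitening: center, then decorrelate and normalize the mixtures.
X = X - X.mean(axis=1, keepdims=True)
eigval, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(eigval ** -0.5) @ E.T @ X

# FastICA fixed-point iterations with the tanh nonlinearity, deflation scheme:
#   w <- E[z * g(w^T z)] - E[g'(w^T z)] * w, then orthogonalize and normalize.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        wx = w @ Z
        g, g_prime = np.tanh(wx), 1 - np.tanh(wx) ** 2
        w_new = (Z * g).mean(axis=1) - g_prime.mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)   # deflate previous components
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1) < 1e-10
        w = w_new
        if converged:
            break
    W[i] = w

Y = W @ Z  # separated sources (up to permutation and sign)
corr = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])
print(np.round(corr, 2))
```

    Each recovered component should correlate strongly with exactly one true source; in the robot pipeline the separated channels would then feed the speech recognizer.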

  9. Stochastic modeling of soundtrack for efficient segmentation and indexing of video

    NASA Astrophysics Data System (ADS)

    Naphade, Milind R.; Huang, Thomas S.

    1999-12-01

    Tools for efficient and intelligent management of digital content are essential for digital video data management. An extremely challenging research area in this context is that of multimedia analysis and understanding. The capabilities of audio analysis in particular for video data management are yet to be fully exploited. We present a novel scheme for indexing and segmentation of video by analyzing the audio track. This analysis is then applied to the segmentation and indexing of movies. We build models for some interesting events in the motion picture soundtrack. The models built include music, human speech and silence. We propose the use of hidden Markov models to model the dynamics of the soundtrack and detect audio-events. Using these models we segment and index the soundtrack. A practical problem in motion picture soundtracks is that the audio in the track is of a composite nature. This corresponds to the mixing of sounds from different sources. Speech in foreground and music in background are common examples. The coexistence of multiple individual audio sources forces us to model such events explicitly. Experiments reveal that explicit modeling gives better results than modeling individual audio events separately.
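HMM-based event detection of this kind ultimately decodes the most likely state (e.g. music, speech, silence) per audio frame; a minimal Viterbi decoder is sketched below, where the transition matrix and frame likelihoods are illustrative values, not the authors' trained models:

```python
import numpy as np

def viterbi(log_A, log_pi, log_emis):
    """Most likely state path for an HMM.
    log_A: (S,S) log transition probs, log_pi: (S,) log initial probs,
    log_emis: (T,S) per-frame log-likelihoods of each state."""
    T, S = log_emis.shape
    delta = log_pi + log_emis[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A          # scores[i, j]: come from i, go to j
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + log_emis[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):                # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

In a real system the per-frame likelihoods would come from acoustic models (e.g. Gaussian mixtures over spectral features) trained for each audio event class.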

  10. Investigation on the reproduction performance versus acoustic contrast control in sound field synthesis.

    PubMed

    Bai, Mingsian R; Wen, Jheng-Ciang; Hsu, Hoshen; Hua, Yi-Hsin; Hsieh, Yu-Hao

    2014-10-01

    A sound reconstruction system is proposed for audio reproduction with extended sweet spot and reduced reflections. An equivalent source method (ESM)-based sound field synthesis (SFS) approach, with the aid of dark zone minimization, is adopted in the study. Conventional SFS that is based on the free-field assumption suffers from synthesis error due to boundary reflections. To tackle the problem, the proposed system utilizes convex optimization in designing array filters with both reproduction performance and acoustic contrast taken into consideration. Control points are deployed in the dark zone to minimize the reflections from the walls. Two approaches are employed to constrain the pressure and velocity in the dark zone. Pressure matching error (PME) and acoustic contrast (AC) are used as performance measures in simulations and experiments for a rectangular loudspeaker array. Perceptual Evaluation of Audio Quality (PEAQ) is also used to assess the audio reproduction quality. The results show that the pressure-constrained (PC) method yields better acoustic contrast, but poorer reproduction performance than the pressure-velocity constrained (PVC) method. A subjective listening test also indicates that the PVC method is preferred in a live room.
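The trade-off between bright-zone reproduction and dark-zone suppression can be illustrated with a regularized least-squares filter design; this closed-form sketch is an unconstrained stand-in for the paper's convex-optimization formulation, and the transfer matrices and weights below are hypothetical:

```python
import numpy as np

def pressure_matching_weights(Gb, Gd, p_des, beta=1.0, lam=1e-6):
    """Loudspeaker weights minimizing ||Gb w - p_des||^2 + beta ||Gd w||^2.
    Gb: (bright points x speakers) transfer matrix, Gd: dark-zone transfer
    matrix, p_des: desired bright-zone pressures, beta: contrast weight."""
    A = Gb.conj().T @ Gb + beta * Gd.conj().T @ Gd + lam * np.eye(Gb.shape[1])
    return np.linalg.solve(A, Gb.conj().T @ p_des)
```

Raising `beta` trades pressure-matching accuracy in the bright zone for lower energy at the dark-zone control points, mirroring the PME-versus-AC tension reported in the study.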

  11. Audio Visual Integration with Competing Sources in the Framework of Audio Visual Speech Scene Analysis.

    PubMed

    Ganesh, Attigodu Chandrashekara; Berthommier, Frédéric; Schwartz, Jean-Luc

    2016-01-01

    We introduce "Audio-Visual Speech Scene Analysis" (AVSSA) as an extension of the two-stage Auditory Scene Analysis model towards audiovisual scenes made of mixtures of speakers. AVSSA assumes that a coherence index between the auditory and the visual input is computed prior to audiovisual fusion, enabling the system to determine whether the sensory inputs should be bound together. Previous experiments on the modulation of the McGurk effect by audiovisual coherent vs. incoherent contexts presented before the McGurk target have provided experimental evidence supporting AVSSA. Indeed, incoherent contexts appear to decrease the McGurk effect, suggesting that they produce lower audiovisual coherence hence less audiovisual fusion. The present experiments extend the AVSSA paradigm by creating contexts made of competing audiovisual sources and measuring their effect on McGurk targets. The competing audiovisual sources have respectively a high and a low audiovisual coherence (that is, large vs. small audiovisual comodulations in time). The first experiment involves contexts made of two auditory sources and one video source associated with either the first or the second audio source. It appears that the McGurk effect is smaller after the context made of the visual source associated with the auditory source with less audiovisual coherence. In the second experiment with the same stimuli, the participants are asked to attend to either one or the other source. The data show that the modulation of fusion depends on the attentional focus. Altogether, these two experiments shed light on audiovisual binding, the AVSSA process and the role of attention.

  12. Method for determining depth and shape of a sub-surface conductive object

    NASA Astrophysics Data System (ADS)

    Lee, D. O.; Montoya, P. C.; Wayland, J. R., Jr.

    1984-06-01

    The depth to and size of an underground object may be determined by sweeping a controlled source audio magnetotelluric (CSAMT) signal and locating a peak response as the receiver spans the edge of the object. The depth of the object is one quarter of the wavelength, in the subsurface medium, at the frequency of the peak.
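The quarter-wavelength rule can be made concrete under a plane-wave, homogeneous half-space assumption, where the in-ground wavelength follows from the electromagnetic skin depth; the helper below is an illustrative sketch, not the patent's procedure, and the resistivity and frequency values are hypothetical:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum magnetic permeability (H/m)

def csamt_depth(peak_freq_hz, resistivity_ohm_m):
    """Depth estimate from the quarter-wavelength rule for a homogeneous
    half-space: skin depth -> wavelength -> depth = wavelength / 4."""
    omega = 2.0 * math.pi * peak_freq_hz
    skin_depth = math.sqrt(2.0 * resistivity_ohm_m / (omega * MU0))
    wavelength = 2.0 * math.pi * skin_depth
    return wavelength / 4.0
```

For example, a peak at 1 kHz over 100 ohm-m ground gives a skin depth near 159 m and hence a depth estimate of about 250 m.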

  13. Effects of the Presence of Audio and Type of Game Controller on Learning of Rhythmic Accuracy

    ERIC Educational Resources Information Center

    Thomas, James William

    2017-01-01

    "Guitar Hero III" and similar games potentially offer a vehicle for improvement of musical rhythmic accuracy with training delivered in both visual and auditory formats and by use of its novel guitar-shaped interface; however, some theories regarding multimedia learning suggest sound is a possible source of extraneous cognitive load…

  14. Audio Control Handbook For Radio and Television Broadcasting. Third Revised Edition.

    ERIC Educational Resources Information Center

    Oringel, Robert S.

    Audio control is the operation of all the types of sound equipment found in the studios and control rooms of a radio or television station. Written in a nontechnical style for beginners, the book explains thoroughly the operation of all types of audio equipment. Diagrams and photographs of commercial consoles, microphones, turntables, and tape…

  15. Digital Multicasting of Multiple Audio Streams

    NASA Technical Reports Server (NTRS)

    Macha, Mitchell; Bullock, John

    2007-01-01

    The Mission Control Center Voice Over Internet Protocol (MCC VOIP) system (see figure) comprises hardware and software that effect simultaneous, nearly real-time transmission of as many as 14 different audio streams to authorized listeners via the MCC intranet and/or the Internet. The original version of the MCC VOIP system was conceived to enable flight-support personnel located in offices outside a spacecraft mission control center to monitor audio loops within the mission control center. Different versions of the MCC VOIP system could be used for a variety of public and commercial purposes - for example, to enable members of the general public to monitor one or more NASA audio streams through their home computers, to enable air-traffic supervisors to monitor communication between airline pilots and air-traffic controllers in training, and to monitor conferences among brokers in a stock exchange. At the transmitting end, the audio-distribution process begins with feeding the audio signals to analog-to-digital converters. The resulting digital streams are sent through the MCC intranet, using the User Datagram Protocol (UDP), to a server that converts them to encrypted data packets. The encrypted data packets are then routed to the personal computers of authorized users by use of multicasting techniques. The total data-processing load on the portion of the system upstream of and including the encryption server is the total load imposed by all of the audio streams being encoded, regardless of the number of listeners or the number of streams being monitored concurrently by the listeners. The personal computer of a user authorized to listen is equipped with special-purpose MCC audio-player software. When the user launches the program, the user is prompted to provide identification and a password.
In one of two access-control provisions, the program is hard-coded to validate the user's identity and password against a list maintained on a domain-controller computer at the MCC. In the other access-control provision, the program verifies that the user is authorized to have access to the audio streams. Once both access-control checks are completed, the audio software presents a graphical display that includes audio-stream-selection buttons and volume-control sliders. The user can select all or any subset of the available audio streams and can adjust the volume of each stream independently of that of the other streams. The audio-player program spawns a "read" process for the selected stream(s). The spawned process sends, to the router(s), a "multicast-join" request for the selected streams. The router(s) responds to the request by sending the encrypted multicast packets to the spawned process. The spawned process receives the encrypted multicast packets and sends a decryption packet to audio-driver software. As the volume or muting features are changed by the user, interrupts are sent to the spawned process to change the corresponding attributes sent to the audio-driver software. The total latency of this system - that is, the total time from the origination of the audio signals to generation of sound at a listener's computer - lies between four and six seconds.
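The "multicast-join" step can be sketched with standard socket calls; the group address and port below are hypothetical, and the encryption and authentication layers are omitted:

```python
import socket
import struct

MCAST_GROUP = "239.1.1.1"   # hypothetical stream group address
MCAST_PORT = 5004           # hypothetical stream port

def make_membership_request(group: str) -> bytes:
    # ip_mreq structure: multicast group address + local interface (INADDR_ANY)
    return struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton("0.0.0.0"))

def open_stream_socket(group: str, port: int) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # The "multicast-join": ask routers to forward this group's packets here
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    make_membership_request(group))
    return sock
```

A receive loop would then call `sock.recvfrom(...)` per packet, decrypt, and hand the payload to the audio driver; dropping membership (`IP_DROP_MEMBERSHIP`) corresponds to deselecting a stream.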

  16. Audio-visual interactions in environment assessment.

    PubMed

    Preis, Anna; Kociński, Jędrzej; Hafke-Dys, Honorata; Wrzosek, Małgorzata

    2015-08-01

    The aim of the study was to examine how visual and audio information influences audio-visual environment assessment. Original audio-visual recordings were made at seven different places in the city of Poznań. Participants of the psychophysical experiments were asked to rate, on a numerical standardized scale, the degree of comfort they would feel if they were in such an environment. The assessments of audio-visual comfort were carried out in a laboratory in four different conditions: (a) audio samples only, (b) original audio-visual samples, (c) video samples only, and (d) mixed audio-visual samples. The general results of this experiment showed a significant difference between the investigated conditions, but not for all the investigated samples. There was a significant improvement in comfort assessment when visual information was added (in only three out of seven cases) when conditions (a) and (b) were compared. On the other hand, the results show that the comfort assessment of audio-visual samples could be changed by manipulating the audio rather than the video part of the audio-visual sample. Finally, it seems that people could differentiate audio-visual representations of a given place in the environment based on the sound sources' composition rather than on the sound level. Object identification is responsible for both landscape and soundscape grouping. Copyright © 2015. Published by Elsevier B.V.

  17. Method for Reading Sensors and Controlling Actuators Using Audio Interfaces of Mobile Devices

    PubMed Central

    Aroca, Rafael V.; Burlamaqui, Aquiles F.; Gonçalves, Luiz M. G.

    2012-01-01

    This article presents a novel closed loop control architecture based on audio channels of several types of computing devices, such as mobile phones and tablet computers, but not restricted to them. The communication is based on an audio interface that relies on the exchange of audio tones, allowing sensors to be read and actuators to be controlled. As an application example, the presented technique is used to build a low cost mobile robot, but the system can also be used in a variety of mechatronics applications and sensor networks, where smartphones are the basic building blocks. PMID:22438726
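One simple way to exchange readings over an audio channel, in the spirit of the tone-based interface described, is to map a value to a tone frequency and recover it from the spectral peak; the mapping parameters below are an illustrative assumption, not the authors' actual protocol:

```python
import numpy as np

FS = 44100  # audio sample rate (Hz)

def encode_value(value, f0=1000.0, step=10.0, duration=0.1):
    """Emit a tone whose frequency encodes a sensor value:
    frequency = f0 + value * step (hypothetical mapping)."""
    t = np.arange(int(FS * duration)) / FS
    return np.sin(2 * np.pi * (f0 + value * step) * t)

def decode_value(tone, f0=1000.0, step=10.0):
    """Recover the value from the dominant frequency of the tone."""
    spectrum = np.abs(np.fft.rfft(tone))
    freq = np.fft.rfftfreq(len(tone), 1 / FS)[np.argmax(spectrum)]
    return round((freq - f0) / step)
```

With a 0.1 s tone the FFT bin spacing is 10 Hz, matching the step size, so integer values round-trip exactly in this noiseless sketch; a robust deployment would add framing, error checking, and tolerance to device frequency response.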

  18. Method for reading sensors and controlling actuators using audio interfaces of mobile devices.

    PubMed

    Aroca, Rafael V; Burlamaqui, Aquiles F; Gonçalves, Luiz M G

    2012-01-01

    This article presents a novel closed loop control architecture based on audio channels of several types of computing devices, such as mobile phones and tablet computers, but not restricted to them. The communication is based on an audio interface that relies on the exchange of audio tones, allowing sensors to be read and actuators to be controlled. As an application example, the presented technique is used to build a low cost mobile robot, but the system can also be used in a variety of mechatronics applications and sensor networks, where smartphones are the basic building blocks.

  19. Application discussion of source coding standard in voyage data recorder

    NASA Astrophysics Data System (ADS)

    Zong, Yonggang; Zhao, Xiandong

    2018-04-01

    This paper analyzes the disadvantages of the audio and video compression coding technology currently used in the voyage data recorder, taking into account the improved performance of audio and video acquisition equipment. An approach to improving the audio and video compression coding of the voyage data recorder is proposed, and the feasibility of adopting the new compression coding technology is analyzed from both economic and technical perspectives.

  20. Method for determining depth and shape of a sub-surface conductive object

    DOEpatents

    Lee, D.O.; Montoya, P.C.; Wayland, J.R., Jr.

    1984-06-27

    The depth to and size of an underground object may be determined by sweeping a controlled source audio magnetotelluric (CSAMT) signal and locating a peak response as the receiver spans the edge of the object. The depth of the object is one quarter of the wavelength, in the subsurface medium, at the frequency of the peak. 3 figures.

  1. Blind source separation and localization using microphone arrays

    NASA Astrophysics Data System (ADS)

    Sun, Longji

    The blind source separation and localization problem for audio signals is studied using microphone arrays. Pure delay mixtures of source signals typically encountered in outdoor environments are considered. Our proposed approach utilizes the subspace methods, including the multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms, to estimate the directions of arrival (DOAs) of the sources from the collected mixtures. Since audio signals are generally considered broadband, the DOA estimates at frequencies with the largest sums of squared amplitude values are combined to obtain the final DOA estimates. Using the estimated DOAs, the corresponding mixing and demixing matrices are computed, and the source signals are recovered using the inverse short time Fourier transform. Subspace methods take advantage of the spatial covariance matrix of the collected mixtures to achieve robustness to noise. While the subspace methods have been studied for localizing radio frequency signals, audio signals have special properties of their own: they are nonstationary, naturally broadband, and analog. All of these make separation and localization more challenging for audio signals. Moreover, our algorithm is essentially equivalent to the beamforming technique, which suppresses the signals in unwanted directions and only recovers the signals in the estimated DOAs. Several crucial issues related to our algorithm and their solutions have been discussed, including source number estimation, spatial aliasing, artifact filtering, different ways of mixture generation, and source coordinate estimation using multiple arrays. Additionally, comprehensive simulations and experiments have been conducted to examine various aspects of the algorithm. 
Unlike the existing blind source separation and localization methods, which are generally time consuming, our algorithm needs signal mixtures of only a short duration and therefore supports real-time implementation.
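The narrowband MUSIC step at a single frequency bin can be sketched for a uniform linear array as follows; the array geometry, source count, and angle grid are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def music_doa(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 181)):
    """MUSIC pseudospectrum for a uniform linear array.
    X: (n_mics, n_snapshots) complex snapshots at one frequency bin;
    d: element spacing in wavelengths. Returns (angles_deg, pseudospectrum)."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # spatial covariance matrix
    _, vecs = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = vecs[:, : m - n_sources]            # noise-subspace eigenvectors
    p = []
    for theta in np.deg2rad(angles):
        # Steering vector for a plane wave from angle theta
        a = np.exp(-2j * np.pi * d * np.arange(m) * np.sin(theta))
        p.append(1.0 / np.abs(a.conj() @ En @ En.conj().T @ a))
    return angles, np.array(p)
```

Peaks of the pseudospectrum give the DOA estimates; for broadband audio, per-bin spectra like this would then be combined across the high-energy frequency bins, as the abstract describes.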

  2. Audio Motor Training at the Foot Level Improves Space Representation.

    PubMed

    Aggius-Vella, Elena; Campus, Claudio; Finocchietti, Sara; Gori, Monica

    2017-01-01

    Spatial representation is developed thanks to the integration of visual signals with the other senses. It has been shown that the lack of vision compromises the development of some spatial representations. In this study we tested the effect of a new rehabilitation device called ABBI (Audio Bracelet for Blind Interaction) to improve space representation. ABBI produces an audio feedback linked to body movement. Previous studies from our group showed that this device improves the spatial representation of space in early blind adults around the upper part of the body. Here we evaluate whether the audio motor feedback produced by ABBI can also improve audio spatial representation of sighted individuals in the space around the legs. Forty-five blindfolded sighted subjects participated in the study, subdivided into three experimental groups. An audio space localization (front-back discrimination) task was performed twice by all groups of subjects, before and after different kinds of training conditions. One group (experimental) performed an audio-motor training with the ABBI device placed on their foot. Another group (control) performed a free motor activity without audio feedback associated with body movement. The third group (control) passively listened to the ABBI sound moved at foot level by the experimenter without producing any body movement. Results showed that only the experimental group, which performed the training with the audio-motor feedback, showed an improvement in accuracy for sound discrimination. No improvement was observed for the two control groups. These findings suggest that the audio-motor training with ABBI improves audio space perception also in the space around the legs in sighted individuals. This result provides important inputs for the rehabilitation of the space representations in the lower part of the body.

  3. Audio Motor Training at the Foot Level Improves Space Representation

    PubMed Central

    Aggius-Vella, Elena; Campus, Claudio; Finocchietti, Sara; Gori, Monica

    2017-01-01

    Spatial representation is developed thanks to the integration of visual signals with the other senses. It has been shown that the lack of vision compromises the development of some spatial representations. In this study we tested the effect of a new rehabilitation device called ABBI (Audio Bracelet for Blind Interaction) to improve space representation. ABBI produces an audio feedback linked to body movement. Previous studies from our group showed that this device improves the spatial representation of space in early blind adults around the upper part of the body. Here we evaluate whether the audio motor feedback produced by ABBI can also improve audio spatial representation of sighted individuals in the space around the legs. Forty-five blindfolded sighted subjects participated in the study, subdivided into three experimental groups. An audio space localization (front-back discrimination) task was performed twice by all groups of subjects, before and after different kinds of training conditions. One group (experimental) performed an audio-motor training with the ABBI device placed on their foot. Another group (control) performed a free motor activity without audio feedback associated with body movement. The third group (control) passively listened to the ABBI sound moved at foot level by the experimenter without producing any body movement. Results showed that only the experimental group, which performed the training with the audio-motor feedback, showed an improvement in accuracy for sound discrimination. No improvement was observed for the two control groups. These findings suggest that the audio-motor training with ABBI improves audio space perception also in the space around the legs in sighted individuals. This result provides important inputs for the rehabilitation of the space representations in the lower part of the body. PMID:29326564

  4. Information-Driven Active Audio-Visual Source Localization

    PubMed Central

    Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph

    2015-01-01

    We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source’s position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot’s mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system’s performance and discuss possible areas of application. PMID:26327619

  5. Design of control system based on SCM music fountain

    NASA Astrophysics Data System (ADS)

    Li, Biqing; Li, Zhao; Jiang, Suping

    2018-06-01

    This paper presents the design of a music fountain control system based on a single-chip microcomputer (SCM) with a simple circuit, introduces the components used in the design, and presents the main flow chart. The system uses an external music source: the intensity of the input audio signal switches the lights on and off, and the height of the fountain spray varies with the lights. The single-chip design is simple, powerful, reliable and low cost.

  6. Effectiveness and Comparison of Various Audio Distraction Aids in Management of Anxious Dental Paediatric Patients.

    PubMed

    Navit, Saumya; Johri, Nikita; Khan, Suleman Abbas; Singh, Rahul Kumar; Chadha, Dheera; Navit, Pragati; Sharma, Anshul; Bahuguna, Rachana

    2015-12-01

    Dental anxiety is a widespread phenomenon and a concern for paediatric dentistry. The inability of children to deal with threatening dental stimuli often manifests as behaviour management problems. Nowadays, the use of non-aversive behaviour management techniques is more advocated, as these are more acceptable to parents, patients and practitioners. Therefore, the present study was conducted to find out which audio aid was the most effective in managing anxious children. The aim of the present study was to compare the efficacy of audio-distraction aids in reducing the anxiety of paediatric patients while undergoing various stressful and invasive dental procedures. The objectives were to ascertain whether audio distraction is an effective means of anxiety management and which type of audio aid is the most effective. A total number of 150 children, aged between 6 and 12 years, randomly selected amongst the patients who came for their first dental check-up, were placed in five groups of 30 each. These groups were the control group, the instrumental music group, the musical nursery rhymes group, the movie songs group and the audio stories group. The control group was treated under a normal set-up, and the audio groups listened to the various audio presentations during treatment. Each child had four visits. In each visit, after the procedure was completed, the anxiety levels of the children were measured by Venham's Picture Test (VPT), Venham's Clinical Rating Scale (VCRS) and pulse rate measurement with the help of a pulse oximeter. A significant difference was seen between all the groups for the mean pulse rate, which increased in subsequent visits. However, no significant difference was seen in the VPT and VCRS scores between the groups. Audio aids in general reduced anxiety in comparison to the control group, and the most significant reduction in anxiety level was observed in the audio stories group. 
The conclusion derived from the present study was that audio distraction was effective in reducing anxiety and that audio stories were the most effective aid.

  7. High capacity reversible watermarking for audio by histogram shifting and predicted error expansion.

    PubMed

    Wang, Fei; Xie, Zhaoxin; Chen, Zuo

    2014-01-01

    Being reversible, the watermarking information embedded in audio signals can be extracted while the original audio data achieve lossless recovery. Currently, the few reversible audio watermarking algorithms are confronted with the following problems: relatively low SNR (signal-to-noise ratio) of the embedded audio; a large amount of auxiliary embedded location information; and the absence of accurate capacity control capability. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to output the stego data. Second, in order to reduce the location map bit length, we introduce a histogram shifting scheme. Meanwhile, the prediction error modification threshold for a given embedding capacity can be computed by the proposed scheme. Experiments show that this algorithm improves the SNR of the embedded audio signals and the embedding capacity, drastically reduces the location map bit length, and enhances capacity control capability.
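The core prediction-error-expansion idea can be sketched on integer samples with the simplest possible predictor (the previous sample); this deliberately omits the paper's optimized prediction coefficients, histogram shifting, and overflow handling:

```python
def pee_embed(samples, bits):
    """Embed bits by expanding prediction errors: e -> 2e + b.
    Predictor is simply the previous original sample (illustrative only)."""
    y = [samples[0]]
    for i, b in enumerate(bits, start=1):
        e = samples[i] - samples[i - 1]       # prediction error
        y.append(samples[i - 1] + 2 * e + b)  # expanded error carries the bit
    y.extend(samples[len(bits) + 1:])         # untouched tail
    return y

def pee_extract(y, n_bits):
    """Recover the bits and losslessly restore the original samples."""
    x, bits = [y[0]], []
    for i in range(1, n_bits + 1):
        e2 = y[i] - x[i - 1]                  # expanded error 2e + b
        b = e2 % 2
        bits.append(b)
        x.append(x[i - 1] + (e2 - b) // 2)    # original sample restored
    x.extend(y[n_bits + 1:])
    return x, bits
```

Extraction proceeds left to right so each restored sample supplies the prediction context for the next, which is what makes the scheme fully reversible; a practical scheme must also shift or flag large errors to prevent sample overflow.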

  8. Neural decoding of attentional selection in multi-speaker environments without access to clean sources

    NASA Astrophysics Data System (ADS)

    O'Sullivan, James; Chen, Zhuo; Herrero, Jose; McKhann, Guy M.; Sheth, Sameer A.; Mehta, Ashesh D.; Mesgarani, Nima

    2017-10-01

    Objective. People who suffer from hearing impairments can find it difficult to follow a conversation in a multi-speaker environment. Current hearing aids can suppress background noise; however, there is little that can be done to help a user attend to a single conversation amongst many without knowing which speaker the user is attending to. Cognitively controlled hearing aids that use auditory attention decoding (AAD) methods are the next step in offering help. Translating the successes in AAD research to real-world applications poses a number of challenges, including the lack of access to the clean sound sources in the environment with which to compare with the neural signals. We propose a novel framework that combines single-channel speech separation algorithms with AAD. Approach. We present an end-to-end system that (1) receives a single audio channel containing a mixture of speakers that is heard by a listener along with the listener’s neural signals, (2) automatically separates the individual speakers in the mixture, (3) determines the attended speaker, and (4) amplifies the attended speaker’s voice to assist the listener. Main results. Using invasive electrophysiology recordings, we identified the regions of the auditory cortex that contribute to AAD. Given appropriate electrode locations, our system is able to decode the attention of subjects and amplify the attended speaker using only the mixed audio. Our quality assessment of the modified audio demonstrates a significant improvement in both subjective and objective speech quality measures. Significance. Our novel framework for AAD bridges the gap between the most recent advancements in speech processing technologies and speech prosthesis research and moves us closer to the development of cognitively controlled hearable devices for the hearing impaired.
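The attention-decoding step in pipelines like this is often a correlation comparison between an envelope reconstructed from neural signals and the envelopes of the separated speakers; the sketch below assumes those envelopes are already computed and is not the authors' decoder:

```python
import numpy as np

def decode_attention(neural_envelope, speaker_envelopes):
    """Return the index of the speaker whose envelope correlates best with
    the envelope reconstructed from neural data (correlation-based AAD)."""
    corrs = [np.corrcoef(neural_envelope, s)[0, 1] for s in speaker_envelopes]
    return int(np.argmax(corrs))
```

In the end-to-end system described, `speaker_envelopes` would come from the single-channel separation stage and the winning speaker's audio would then be amplified for the listener.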

  9. Effects of Text, Audio and Learner Control on Text-Sound Association and Cognitive Load of EFL Learners

    ERIC Educational Resources Information Center

    Enciso Bernal, Ana Maria

    2014-01-01

    This study investigated the effects of concurrent audio and equivalent onscreen text on the ability of learners of English as a foreign language (EFL) to form associations between textual and aural forms of target vocabulary words. The study also looked at the effects of learner control over an audio sequence on the association of textual and…

  10. Comparing Learning Gains: Audio Versus Text-based Instructor Communication in a Blended Online Learning Environment

    NASA Astrophysics Data System (ADS)

    Shimizu, Dominique

    Though blended course audio feedback has been associated with several measures of course satisfaction at the postsecondary and graduate levels compared to text feedback, it may take longer to prepare, and positive results are largely unverified in K-12 literature. The purpose of this quantitative study was to investigate the time investment and learning impact of audio communications with 228 secondary students in a blended online learning biology unit at a central Florida public high school. A short, individualized audio message regarding the student's progress was given to each student in the audio group; similar text-based messages were given to each student in the text-based group on the same schedule; a control group received no feedback. A pretest and posttest were employed to measure learning gains in the three groups. To compare the learning gains from the two types of feedback with each other and with no feedback, a controlled, randomized, experimental design was implemented. In addition, the creation and posting of audio and text feedback communications were timed in order to assess whether audio feedback took longer to produce than text-only feedback. While audio feedback communications did take longer to create and post, there was no difference between learning gains as measured by posttest scores when students received audio, text-based, or no feedback. Future studies using a similar randomized, controlled experimental design are recommended to verify these results and test whether the trend holds in a broader range of subjects, over different time frames, and using a variety of assessment types to measure student learning.

  11. Audio-vocal interaction in single neurons of the monkey ventrolateral prefrontal cortex.

    PubMed

    Hage, Steffen R; Nieder, Andreas

    2015-05-06

    Complex audio-vocal integration systems depend on a strong interconnection between the auditory and the vocal motor system. To gain cognitive control over audio-vocal interaction during vocal motor control, the PFC needs to be involved. Neurons in the ventrolateral PFC (VLPFC) have been shown to separately encode the sensory perceptions and motor production of vocalizations. It is unknown, however, whether single neurons in the PFC reflect audio-vocal interactions. We therefore recorded single-unit activity in the VLPFC of rhesus monkeys (Macaca mulatta) while they produced vocalizations on command or passively listened to monkey calls. We found that 12% of randomly selected neurons in VLPFC modulated their discharge rate in response to acoustic stimulation with species-specific calls. Almost three-fourths of these auditory neurons showed an additional modulation of their discharge rates either before and/or during the monkeys' motor production of vocalization. Based on these audio-vocal interactions, the VLPFC might be well positioned to combine higher order auditory processing with cognitive control of the vocal motor output. Such audio-vocal integration processes in the VLPFC might constitute a precursor for the evolution of complex learned audio-vocal integration systems, ultimately giving rise to human speech. Copyright © 2015 the authors 0270-6474/15/357030-11$15.00/0.

  12. Audio Visual Technology and the Teaching of Foreign Languages.

    ERIC Educational Resources Information Center

    Halbig, Michael C.

    Skills in comprehending the spoken language source are becoming increasingly important due to the audio-visual orientation of our culture. It would seem natural, therefore, to adjust the learning goals and environment accordingly. The video-cassette machine is an ideal means for creating this learning environment and developing the listening…

  13. Implementing Audio-CASI on Windows’ Platforms

    PubMed Central

    Cooley, Philip C.; Turner, Charles F.

    2011-01-01

    Audio computer-assisted self-interviewing (Audio-CASI) technologies have recently been shown to provide important and sometimes dramatic improvements in the quality of survey measurements. This is particularly true for measurements requiring respondents to divulge highly sensitive information such as their sexual, drug use, or other sensitive behaviors. However, DOS-based Audio-CASI systems that were designed and adopted in the early 1990s have important limitations. Most salient is the poor control they provide for manipulating the video presentation of survey questions. This article reports our experiences adapting Audio-CASI to Microsoft Windows 3.1 and Windows 95 platforms. Overall, our Windows-based system provided the desired control over video presentation and afforded other advantages, including compatibility with a much wider array of audio devices than our DOS-based Audio-CASI technologies. These advantages came at the cost of increased system requirements, including the need for both more RAM and larger hard disks. While these costs will be an issue for organizations converting large inventories of PCs to Windows Audio-CASI today, they will not be a serious constraint for organizations and individuals with small inventories of machines to upgrade or those purchasing new machines today. PMID:22081743

  14. Satellite sound broadcasting system, portable reception

    NASA Technical Reports Server (NTRS)

    Golshan, Nasser; Vaisnys, Arvydas

    1990-01-01

    Studies are underway at JPL in the emerging area of Satellite Sound Broadcast Service (SSBS) for direct reception by low-cost portable, semi-portable, mobile, and fixed radio receivers. This paper addresses portable reception of digital broadcasts of monophonic audio with source material band-limited to 5 kHz (source audio comparable to commercial AM broadcasting). The proposed system provides transmission robustness, uniformity of performance over the coverage area, and excellent frequency reuse. Propagation problems associated with indoor portable reception are considered in detail, and innovative antenna concepts are suggested to mitigate them. It is shown that, with the marriage of the proper technologies, a single medium-power satellite can provide substantial direct satellite audio broadcast capability to CONUS in the UHF or L bands, for high-quality portable indoor reception by low-cost radio receivers.

  15. The information content of high-frequency seismograms and the near-surface geologic structure of "hard rock" recording sites

    USGS Publications Warehouse

    Cranswick, E.

    1988-01-01

    Due to hardware developments in the last decade, the high-frequency end of the frequency band of seismic waves analyzed for source mechanisms has been extended into the audio-frequency range (>20 Hz). In principle, the short wavelengths corresponding to these frequencies can provide information about the details of seismic sources, but in fact, much of the "signal" is the site response of the near-surface. Several examples of waveform data recorded at "hard rock" sites, which are generally assumed to have a "flat" transfer function, are presented to demonstrate the severe signal distortions, including fmax, produced by near-surface structures. Analysis of the geology of a number of sites indicates that the overall attenuation of high-frequency (>1 Hz) seismic waves is controlled by the whole-path Q between source and receiver, but the presence of distinct fmax site resonance peaks is controlled by the nature of the surface layer and the underlying near-surface structure. Models of vertical decoupling of the surface and near-surface and horizontal decoupling of adjacent sites on hard rock outcrops are proposed, and their behaviour is compared to observations of hard rock site response. The upper bound to the frequency band of the seismic waves that contain significant source information which can be deconvolved from a site response or an array response is discussed in terms of fmax and the correlation of waveform distortion with the outcrop-scale geologic structure of hard rock sites. It is concluded that although the velocity structures of hard rock sites, unlike those of alluvium sites, allow some audio-frequency seismic energy to propagate to the surface, the resulting signals are a highly distorted, limited subset of the source spectra. © 1988 Birkhäuser Verlag.
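
    The whole-path-Q control on high-frequency amplitude described above follows the standard anelastic attenuation relation A(f) = A0·exp(−πft/Q). A minimal sketch, with illustrative values that are not site-specific:

```python
import math

def attenuated_amplitude(a0, f, travel_time, q):
    """Anelastic attenuation: A(f) = A0 * exp(-pi * f * t / Q)."""
    return a0 * math.exp(-math.pi * f * travel_time / q)

# Illustrative comparison at an audio frequency of 30 Hz over a 2 s travel path:
hard_rock = attenuated_amplitude(1.0, 30.0, 2.0, 1000)  # high whole-path Q
soft_site = attenuated_amplitude(1.0, 30.0, 2.0, 50)    # low-Q surface layer
```

    The high-Q path passes most of the 30 Hz energy, while the low-Q path removes nearly all of it, consistent with hard rock sites admitting some audio-frequency energy that alluvium sites do not.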

  16. Single-sensor multispeaker listening with acoustic metamaterials

    PubMed Central

    Xie, Yangbo; Tsai, Tsung-Han; Konneker, Adam; Popa, Bogdan-Ioan; Brady, David J.; Cummer, Steven A.

    2015-01-01

    Designing a “cocktail party listener” that functionally mimics the selective perception of a human auditory system has been pursued over the past decades. By exploiting acoustic metamaterials and compressive sensing, we present here a single-sensor listening device that separates simultaneous overlapping sounds from different sources. The device with a compact array of resonant metamaterials is demonstrated to distinguish three overlapping and independent sources with 96.67% correct audio recognition. Segregation of the audio signals is achieved using physical layer encoding without relying on source characteristics. This hardware approach to multichannel source separation can be applied to robust speech recognition and hearing aids and may be extended to other acoustic imaging and sensing applications. PMID:26261314

  17. Flow control using audio tones in resonant microfluidic networks: towards cell-phone controlled lab-on-a-chip devices.

    PubMed

    Phillips, Reid H; Jain, Rahil; Browning, Yoni; Shah, Rachana; Kauffman, Peter; Dinh, Doan; Lutz, Barry R

    2016-08-16

    Fluid control remains a challenge in development of portable lab-on-a-chip devices. Here, we show that microfluidic networks driven by single-frequency audio tones create resonant oscillating flow that is predicted by equivalent electrical circuit models. We fabricated microfluidic devices with fluidic resistors (R), inductors (L), and capacitors (C) to create RLC networks with band-pass resonance in the audible frequency range available on portable audio devices. Microfluidic devices were fabricated from laser-cut adhesive plastic, and a "buzzer" was glued to a diaphragm (capacitor) to integrate the actuator on the device. The AC flow rate magnitude was measured by imaging oscillation of bead tracers to allow direct comparison to the RLC circuit model across the frequency range. We present a systematic build-up from single-channel systems to multi-channel (3-channel) networks, and show that RLC circuit models predict complex frequency-dependent interactions within multi-channel networks. Finally, we show that adding flow rectifying valves to the network creates pumps that can be driven by amplified and non-amplified audio tones from common audio devices (iPod and iPhone). This work shows that RLC circuit models predict resonant flow responses in multi-channel fluidic networks as a step towards microfluidic devices controlled by audio tones.
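
    The band-pass behaviour the authors model can be sketched with the standard series RLC electrical analogue; the component values below are illustrative placeholders, not values from the device:

```python
import math

def series_rlc_flow(f, r, l, c, drive=1.0):
    """Complex AC flow through a series RLC branch driven at f hertz;
    R plays fluidic resistance, L inertance, C (diaphragm) compliance."""
    w = 2.0 * math.pi * f
    z = r + 1j * w * l + 1.0 / (1j * w * c)  # series impedance
    return drive / z

def resonant_frequency(l, c):
    """Band-pass center, where the inertial and compliant terms cancel."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l * c))

# Illustrative values chosen so resonance lands near 1 kHz, in the audible range.
L_val, C_val, R_val = 1e-3, 2.533e-5, 1.0
f0 = resonant_frequency(L_val, C_val)
peak = abs(series_rlc_flow(f0, R_val, L_val, C_val))
off = abs(series_rlc_flow(2.0 * f0, R_val, L_val, C_val))
```

    At resonance the reactive terms cancel and flow magnitude peaks; an octave above, the inductive term dominates and flow drops, which is the frequency selectivity the multi-channel networks exploit.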

  18. Characterization of Clastic Dikes Using Controlled Source Audio Magnetotellurics

    NASA Astrophysics Data System (ADS)

    Persichetti, J. A.; Alumbaugh, D.

    2001-12-01

    A site consisting of 3D geology on the Hanford Reservation in Hanford, Washington, has been surveyed using Controlled Source Audio Magnetotellurics (CSAMT) to determine the method's ability to detect clastic dikes. The dikes are fine-grained, soft-sediment intrusions, formed by the buoyant rise of buried, unconsolidated, water-rich mud into overlying unconsolidated sediment. The dikes are of major importance because they may act as natural barriers inhibiting the spread of contaminants, or as conduits, allowing contaminants to be quickly wicked away from contaminant storage tanks located in close vicinity of the dikes. The field setup consisted of a 33 meter by 63 meter receiver grid with 3 meter spacing in all directions, with the transmitter positioned 71.5 meters from the center of the receiver grid. A total of 12 frequencies were collected, from 1.1 kHz to 66.2 kHz. The CSAMT data are being analyzed using a 2D CSAMT RRI code (Lu, Unsworth and Booker, 1999) and a 2D MT RRI code (Smith and Booker, 1991). Of interest are how well the 2D codes can map 3D geology, the level of resolution obtained, and the importance of including the 3D source in the solution. The ultimate goal is to determine the applicability of CSAMT for mapping these types of features at the Hanford Reservation site.

  19. The priming function of in-car audio instruction.

    PubMed

    Keyes, Helen; Whitmore, Antony; Naneva, Stanislava; McDermott, Daragh

    2018-05-01

    Studies to date have focused on the priming power of visual road signs, but not the priming potential of audio road scene instruction. Here, the relative priming power of visual, audio, and multisensory road scene instructions was assessed. In a lab-based study, participants responded to target road scene turns following visual, audio, or multisensory road-turn primes that were congruent or incongruent in direction with the target, or control primes. All types of instruction (visual, audio, and multisensory) were successful in priming responses to a road scene. Responses to multisensory-primed targets (both audio and visual) were faster than responses to either audio or visual primes alone. Incongruent audio primes did not affect performance negatively in the manner of incongruent visual or multisensory primes. Results suggest that audio instructions have the potential to prime drivers to respond quickly and safely to their road environment. Peak performance will be observed if audio and visual road instruction primes can be timed to co-occur.

  20. Effects of audio-visual aids on foreign language test anxiety, reading and listening comprehension, and retention in EFL learners.

    PubMed

    Lee, Shu-Ping; Lee, Shin-Da; Liao, Yuan-Lin; Wang, An-Chi

    2015-04-01

    This study examined the effects of audio-visual aids on anxiety, comprehension test scores, and retention in reading and listening to short stories in English as a Foreign Language (EFL) classrooms. Reading and listening tests, general and test anxiety, and retention were measured in English-major college students in an experimental group with audio-visual aids (n=83) and a control group without audio-visual aids (n=94) with similar general English proficiency. Lower reading test anxiety, unchanged reading comprehension scores, and better reading short-term and long-term retention after four weeks were evident in the audiovisual group relative to the control group. In addition, lower listening test anxiety, higher listening comprehension scores, and unchanged short-term and long-term retention were found in the audiovisual group relative to the control group after the intervention. Audio-visual aids may help to reduce EFL learners' listening test anxiety and enhance their listening comprehension scores without facilitating retention of such materials. Although audio-visual aids did not increase reading comprehension scores, they helped reduce EFL learners' reading test anxiety and facilitated retention of reading materials.

  1. Selective Listening Point Audio Based on Blind Signal Separation and Stereophonic Technology

    NASA Astrophysics Data System (ADS)

    Niwa, Kenta; Nishino, Takanori; Takeda, Kazuya

    A sound field reproduction method is proposed that uses blind source separation and a head-related transfer function. In the proposed system, multichannel acoustic signals captured at distant microphones are decomposed into a set of location/signal pairs of virtual sound sources based on frequency-domain independent component analysis. After estimating the locations and signals of the virtual sources, the spatial sound at the selected listening point is constructed by convolving the controlled acoustic transfer functions with each signal. In experiments, a sound field made by six sound sources is captured using 48 distant microphones and decomposed into sets of virtual sound sources. Since subjective evaluation shows no significant difference between natural and reconstructed sound when six virtual sources are used, the effectiveness of the decomposition algorithm and of the virtual source representation is confirmed.

  2. Comparing Audio and Video Data for Rating Communication

    PubMed Central

    Williams, Kristine; Herman, Ruth; Bontempo, Daniel

    2013-01-01

    Video recording has become increasingly popular in nursing research, adding rich nonverbal, contextual, and behavioral information. However, benefits of video over audio data have not been well established. We compared communication ratings of audio versus video data using the Emotional Tone Rating Scale. Twenty raters watched video clips of nursing care and rated staff communication on 12 descriptors that reflect dimensions of person-centered and controlling communication. Another group rated audio-only versions of the same clips. Interrater consistency was high within each group, with ICC (2,1) = .91 for audio and .94 for video. Interrater consistency for both groups combined was also high, with ICC (2,1) = .95 for audio and video. Communication ratings using audio and video data were highly correlated. The assumption that video is superior to audio-recorded data should be weighed when designing studies evaluating nursing care. PMID:23579475
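
    ICC(2,1), the two-way random-effects, absolute-agreement, single-rater form reported above, can be computed directly from the rating matrix. A minimal sketch; the example matrix is invented, not the study's data:

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` holds one row per rated clip and one column per rater."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    msr = ss_rows / (n - 1)                      # between-clip mean square
    msc = ss_cols / (k - 1)                      # between-rater mean square
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Invented 4-clip, 2-rater example (not the study's data):
example = icc_2_1([[1, 2], [2, 3], [3, 3], [4, 5]])
```

    Perfect rater agreement yields an ICC of 1; systematic rater disagreement lowers it because the absolute-agreement form charges rater bias to the denominator.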

  3. Using a new, free spectrograph program to critically investigate acoustics

    NASA Astrophysics Data System (ADS)

    Ball, Edward; Ruiz, Michael J.

    2016-11-01

    We have developed an online spectrograph program with a bank of over 30 audio clips to visualise a variety of sounds. Our audio library includes everyday sounds such as speech, singing, musical instruments, birds, a baby, cat, dog, sirens, a jet, thunder, and screaming. We provide a link to a video of the sound sources superimposed with their respective spectrograms in real time. Readers can use our spectrograph program to view our library, open their own desktop audio files, and use the program in real time with a computer microphone.
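
    A spectrograph of this kind reduces to a windowed short-time Fourier transform. A minimal sketch using a direct DFT (real tools use an FFT for speed; the frame size, hop, and 8 kHz test tone below are arbitrary choices for illustration):

```python
import cmath
import math

def spectrogram(signal, frame_len=64, hop=32):
    """Magnitude spectrogram via a Hann-windowed direct DFT per frame."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        # The Hann window tapers frame edges to reduce spectral leakage.
        windowed = [x * 0.5 * (1.0 - math.cos(2.0 * math.pi * i / (frame_len - 1)))
                    for i, x in enumerate(frame)]
        mags = []
        for k in range(frame_len // 2 + 1):          # non-negative frequency bins
            acc = sum(windowed[i] * cmath.exp(-2j * math.pi * k * i / frame_len)
                      for i in range(frame_len))
            mags.append(abs(acc))
        frames.append(mags)
    return frames

# A 1000 Hz tone at 8 kHz sampling lands in bin 8 of a 64-point frame
# (bin spacing 8000 / 64 = 125 Hz).
tone = [math.sin(2.0 * math.pi * 1000.0 * t / 8000.0) for t in range(256)]
spec = spectrogram(tone)
```

    Plotting each frame's magnitudes as a column, time left to right, gives the familiar spectrogram image the program displays in real time.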

  4. Neuromorphic audio-visual sensor fusion on a sound-localizing robot.

    PubMed

    Chan, Vincent Yue-Sek; Jin, Craig T; van Schaik, André

    2012-01-01

    This paper presents the first robotic system featuring audio-visual (AV) sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localization through self-motion and visual feedback, using an adaptive ITD-based sound localization algorithm. After training, the robot can localize sound sources (white or pink noise) in a reverberant environment with an RMS error of 4-5° in azimuth. We also investigate the AV source binding problem, and an experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset time. Despite the simplicity of this method and a large number of false visual events in the background, a correct match can be made 75% of the time during the experiment.
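
    The ITD cue underlying such a localization algorithm can be estimated by cross-correlating the two cochlear (microphone) channels; azimuth then follows from asin(c·ITD/(fs·d)) for microphone spacing d. A sketch with synthetic signals; the 3-sample delay is invented for illustration:

```python
import random

def estimate_itd(left, right, max_lag):
    """Lag (in samples) at which `right` best matches `left`, found by
    brute-force cross-correlation over lags in [-max_lag, max_lag]."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        lo = max(0, -lag)
        hi = min(len(left), len(right) - lag)
        score = sum(left[i] * right[i + lag] for i in range(lo, hi))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Synthetic test: the right channel hears the same noise 3 samples later.
random.seed(0)
noise = [random.uniform(-1.0, 1.0) for _ in range(500)]
left = noise
right = [0.0, 0.0, 0.0] + noise[:-3]
lag = estimate_itd(left, right, max_lag=10)
# Azimuth would follow as asin(c * lag / (fs * d)) for sample rate fs,
# mic spacing d, and speed of sound c.
```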

  5. Worldwide survey of direct-to-listener digital audio delivery systems development since WARC-1992

    NASA Technical Reports Server (NTRS)

    Messer, Dion D.

    1993-01-01

    Each country was allocated frequency band(s) for direct-to-listener digital audio broadcasting at WARC-92. These allocations were near 1500, 2300, and 2600 MHz. In addition, some countries are encouraging the development of digital audio broadcasting services for terrestrial delivery only, in the VHF bands (at frequencies from roughly 50 to 300 MHz) and in the medium-wave (AM) broadcasting band (from roughly 0.5 to 1.7 MHz). Development activity has since increased explosively. Current development, as known to the author in February 1993, is summarized. The information given includes the following characteristics, as appropriate, for each planned system: coverage areas, audio quality, number of audio channels, delivery via satellite, terrestrial, or both, carrier frequency bands, modulation methods, source coding, and channel coding. Most proponents claim that they will be operational in 3 or 4 years.

  6. TECHNICAL NOTE: Portable audio electronics for impedance-based measurements in microfluidics

    NASA Astrophysics Data System (ADS)

    Wood, Paul; Sinton, David

    2010-08-01

    We demonstrate the use of audio electronics-based signals to perform on-chip electrochemical measurements. Cell phones and portable music players are examples of consumer electronics that are easily operated and are ubiquitous worldwide. Audio output (play) and input (record) signals are voltage based and contain frequency and amplitude information. A cell phone, laptop soundcard and two compact audio players are compared with respect to frequency response; the laptop soundcard provides the most uniform frequency response, while the cell phone performance is found to be insufficient. The audio signals in the common portable music players and laptop soundcard operate in the range of 20 Hz to 20 kHz and are found to be applicable, as voltage input and output signals, to impedance-based electrochemical measurements in microfluidic systems. Validated impedance-based measurements of concentration (0.1-50 mM), flow rate (2-120 µL min⁻¹) and particle detection (32 µm diameter) are demonstrated. The prevalent lossless WAV audio file format is found to be suitable for data transmission to and from external sources, such as a centralized lab, and the cost of all hardware (in addition to audio devices) is ~10 USD. The utility demonstrated here, in combination with the ubiquitous nature of portable audio electronics, presents new opportunities for impedance-based measurements in portable microfluidic systems.
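
    Measuring impedance from an audio-band voltage signal amounts to recovering the amplitude (and phase) of a known-frequency component. A lock-in style synchronous demodulation sketch; the tone amplitudes, frequencies, and sample rate are invented for illustration:

```python
import math

def lockin_amplitude(samples, freq, sample_rate):
    """Amplitude of the `freq` component via synchronous demodulation:
    multiply by quadrature references and average over the record."""
    n = len(samples)
    i_acc = q_acc = 0.0
    for k, x in enumerate(samples):
        phase = 2.0 * math.pi * freq * k / sample_rate
        i_acc += x * math.cos(phase)
        q_acc += x * math.sin(phase)
    # Over an integer number of periods, all other tones average to zero.
    return 2.0 * math.hypot(i_acc / n, q_acc / n)

# Invented stimulus: a 0.5 V tone at 1 kHz plus an interfering 0.3 V tone
# at 1375 Hz, one second at an 8 kHz audio sample rate.
sr = 8000
samples = [0.5 * math.sin(2.0 * math.pi * 1000.0 * k / sr + 0.3)
           + 0.3 * math.sin(2.0 * math.pi * 1375.0 * k / sr) for k in range(sr)]
amp = lockin_amplitude(samples, 1000.0, sr)
```

    Dividing the recovered voltage amplitude by a measured current amplitude at the same frequency would give the impedance magnitude at that point in the 20 Hz to 20 kHz band.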

  7. Virtual environment display for a 3D audio room simulation

    NASA Technical Reports Server (NTRS)

    Chapin, William L.; Foster, Scott H.

    1992-01-01

    The development of a virtual environment simulation system integrating a 3D acoustic audio model with an immersive 3D visual scene is discussed. The system complements the acoustic model and is specified to: allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, are discussed through the development of four iterative configurations.

  8. Audio-Enhanced Computer Assisted Learning and Computer Controlled Audio-Instruction.

    ERIC Educational Resources Information Center

    Miller, K.; And Others

    1983-01-01

    Describes aspects of use of a microcomputer linked with a cassette recorder as a peripheral to enhance computer-assisted learning (CAL) and a microcomputer-controlled tape recorder linked with a microfiche reader in a commercially available teaching system. References and a listing of control programs are appended. (EJS)

  9. Comparing audio and video data for rating communication.

    PubMed

    Williams, Kristine; Herman, Ruth; Bontempo, Daniel

    2013-09-01

    Video recording has become increasingly popular in nursing research, adding rich nonverbal, contextual, and behavioral information. However, benefits of video over audio data have not been well established. We compared communication ratings of audio versus video data using the Emotional Tone Rating Scale. Twenty raters watched video clips of nursing care and rated staff communication on 12 descriptors that reflect dimensions of person-centered and controlling communication. Another group rated audio-only versions of the same clips. Interrater consistency was high within each group, with Intraclass Correlation Coefficient (ICC) (2,1) = .91 for audio and .94 for video. Interrater consistency for both groups combined was also high, with ICC (2,1) = .95 for audio and video. Communication ratings using audio and video data were highly correlated. The assumption that video is superior to audio-recorded data should be weighed when designing studies evaluating nursing care.

  10. Supervision in Language Teaching: A Supervisor's and Three Trainee Teachers' Perspectives

    ERIC Educational Resources Information Center

    Kahyalar, Eda; Yazici, İlkay Çelik

    2016-01-01

    This article reports on the findings from a study which investigated supervision in language teaching from a supervisor's and her three trainee teachers' perspectives. The data in the study were from three sources: 1) audio recordings of the supervisor's feedback sessions with each trainee teacher, 2) audio recording of an interview between the…

  11. Detection and characterization of lightning-based sources using continuous wavelet transform: application to audio-magnetotellurics

    NASA Astrophysics Data System (ADS)

    Larnier, H.; Sailhac, P.; Chambodut, A.

    2018-01-01

    Atmospheric electromagnetic waves created by global lightning activity contain information about electrical processes of the inner and the outer Earth. Large signal-to-noise ratio events are particularly interesting because they convey information about electromagnetic properties along their path. We introduce a new methodology to automatically detect and characterize lightning-based waves using a time-frequency decomposition obtained through the application of continuous wavelet transform. We focus specifically on three types of sources, namely, atmospherics, slow tails and whistlers, that cover the frequency range 10 Hz to 10 kHz. Each wave has distinguishable characteristics in the time-frequency domain due to source shape and dispersion processes. Our methodology allows automatic detection of each type of event in the time-frequency decomposition thanks to their specific signature. Horizontal polarization attributes are also recovered in the time-frequency domain. This procedure is first applied to synthetic extremely low frequency time-series with different signal-to-noise ratios to test for robustness. We then apply it on real data: three stations of audio-magnetotelluric data acquired in Guadeloupe, an overseas French territory. Most of the analysed atmospherics and slow tails display linear polarization, whereas the analysed whistlers are elliptically polarized. The diversity of lightning activity is finally analysed in an audio-magnetotelluric data processing framework, as used in subsurface prospecting, through estimation of the impedance response functions. We show that audio-magnetotelluric processing results depend mainly on the frequency content of electromagnetic waves observed in the processed time-series, with an emphasis on the difference between morning and afternoon acquisition.
Our new methodology based on the time-frequency signature of lightning-induced electromagnetic waves allows automatic detection and characterization of events in audio-magnetotelluric time-series, providing the means to assess quality of response functions obtained through processing.
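
    The scale-dependent signature that separates these event types can be probed with a continuous wavelet transform. A pure-Python sketch using a real Morlet wavelet; the scales, test tone, and ω0 value are illustrative, not the paper's processing chain:

```python
import math

def morlet(t, omega0=5.0):
    """Real-valued Morlet wavelet (admissibility correction omitted)."""
    return math.exp(-0.5 * t * t) * math.cos(omega0 * t)

def cwt_energy(signal, scales, omega0=5.0):
    """CWT energy per scale, summed over the central half of the signal
    to limit edge effects; it peaks at the scale matching the signal period."""
    n = len(signal)
    energies = []
    for s in scales:
        support = int(4 * s)           # truncate the Gaussian envelope
        norm = 1.0 / math.sqrt(s)      # L2 normalization across scales
        total = 0.0
        for center in range(n // 4, 3 * n // 4):
            acc = 0.0
            for k in range(-support, support + 1):
                idx = center + k
                if 0 <= idx < n:
                    acc += signal[idx] * norm * morlet(k / s, omega0)
            total += acc * acc
        energies.append(total)
    return energies

# A tone of 0.05 cycles/sample should peak near scale omega0 / (2*pi*0.05) ~ 16.
tone = [math.sin(2.0 * math.pi * 0.05 * t) for t in range(400)]
energies = cwt_energy(tone, [4, 8, 16, 32])
```

    Event detection then reduces to finding time-frequency regions whose energy distribution across scales matches the known signature of an atmospheric, slow tail, or whistler.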

  12. Fuzzy Logic-Based Audio Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Malcangi, M.

    2008-11-01

    Audio and audio-pattern recognition is becoming one of the most important technologies for automatically controlling embedded systems. Fuzzy logic may be the most important enabling methodology due to its ability to rapidly and economically model such applications. An audio and audio-pattern recognition engine based on fuzzy logic has been developed for use in very low-cost and deeply embedded systems to automate human-to-machine and machine-to-machine interaction. This engine consists of simple digital signal-processing algorithms for feature extraction and normalization, and a set of pattern-recognition rules tuned manually or automatically by a self-learning process.
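
    A fuzzy-rule engine of the kind described can be as small as a few membership functions and min-max inference; the features, ranges, and rules below are invented for illustration:

```python
def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(energy, zero_cross_rate):
    """Min-max fuzzy inference over two illustrative audio features."""
    # Fuzzify each normalized feature into linguistic terms.
    loud = tri(energy, 0.4, 1.0, 1.6)
    quiet = tri(energy, -0.6, 0.0, 0.6)
    noisy = tri(zero_cross_rate, 0.3, 0.8, 1.3)
    tonal = tri(zero_cross_rate, -0.5, 0.0, 0.5)
    # Rule 1: loud AND tonal -> speech-like.
    # Rule 2: quiet OR (loud AND noisy) -> background.
    speech = min(loud, tonal)
    background = max(quiet, min(loud, noisy))
    return "speech" if speech > background else "background"

label_a = classify(1.0, 0.0)    # loud, tonal input
label_b = classify(0.0, 0.8)    # quiet, noisy input
```

    The rule weights can be left to manual tuning or adjusted by a self-learning pass, which is what keeps such engines cheap enough for deeply embedded targets.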

  13. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding

    PubMed Central

    Desantis, Andrea; Haggard, Patrick

    2016-01-01

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063

  14. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-12-16

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events.

  15. Audio Tracking in Noisy Environments by Acoustic Map and Spectral Signature.

    PubMed

    Crocco, Marco; Martelli, Samuele; Trucco, Andrea; Zunino, Andrea; Murino, Vittorio

    2018-05-01

    A novel method is proposed for generic target tracking by audio measurements from a microphone array. To cope with noisy environments characterized by persistent and high energy interfering sources, a classification map (CM) based on spectral signatures is calculated by means of a machine learning algorithm. Next, the CM is combined with the acoustic map, describing the spatial distribution of sound energy, in order to obtain a cleaned joint map in which contributions from the disturbing sources are removed. A likelihood function is derived from this map and fed to a particle filter yielding the target location estimation on the acoustic image. The method is tested on two real environments, addressing both speaker and vehicle tracking. The comparison with a couple of trackers, relying on the acoustic map only, shows a sharp improvement in performance, paving the way to the application of audio tracking in real challenging environments.

  16. Assessment of current and proposed audio alarms in terminal air traffic control.

    DOT National Transportation Integrated Search

    2000-09-01

    The National Airspace System Human Factors Branch (ACT-530) has been engaged in research on the characteristics and use of audio : alerts and alarms in Air Traffic Control. In support of this program, Federal Data Corporation performed a comparative ...

  17. INSPIRE

    NASA Technical Reports Server (NTRS)

    Taylor, Bill; Pine, Bill

    2003-01-01

    INSPIRE (Interactive NASA Space Physics Ionosphere Radio Experiment - http://image.gsfc.nasa.gov/poetry/inspire) is a non-profit scientific, educational organization whose objective is to bring the excitement of observing natural and manmade radio waves in the audio region to high school students and others. The project consists of building an audio frequency radio receiver kit, making observations of natural and manmade radio waves and analyzing the data. Students also learn about NASA and our natural environment through the study of lightning, the source of many of the audio frequency waves, the atmosphere, the ionosphere, and the magnetosphere where the waves travel.

  18. Bayesian Tracking within a Feedback Sensing Environment: Estimating Interacting, Spatially Constrained Complex Dynamical Systems from Multiple Sources of Controllable Devices

    DTIC Science & Technology

    2014-07-25

    composition of simple temporal structures to a speaker diarization task with the goal of segmenting conference audio in the presence of an unknown number of ... application domains including neuroimaging, diverse document selection, speaker diarization, stock modeling, and target tracking. We detail each of ... recall performance than competing methods in a task of discovering articles preferred by the user ... a gold-standard speaker diarization method, as

  19. Design of batch audio/video conversion platform based on JavaEE

    NASA Astrophysics Data System (ADS)

    Cui, Yansong; Jiang, Lianpin

    2018-03-01

    With the rapid development of the digital publishing industry, audio/video publishing is marked by a diversity of audio and video coding standards and massive data volumes. Faced with such massive and diverse data, converting it quickly and efficiently to a unified coding format poses great difficulty for digital publishing organizations. In view of this demand, this paper proposes a distributed online audio and video format conversion platform with a B/S structure, based on the Spring+SpringMVC+MyBatis development architecture combined with the open-source FFmpeg format conversion tool. The key technologies and strategies of the platform architecture are analyzed, and an efficient audio and video format conversion system is designed and developed in Java, composed of a front-end display system, a core scheduling server, and conversion servers. Test results show that, compared with an ordinary audio and video conversion scheme, the batch conversion platform effectively improves the conversion efficiency of audio and video files and reduces the complexity of the work. Practice has proved that the key technologies discussed in this paper can be applied in the field of large-batch file processing and have practical application value.
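
    On each conversion server, invoking the open-source FFmpeg tool reduces to assembling one command line per file of the batch. A sketch; the file names and codec choices are hypothetical, not the platform's configuration:

```python
def build_convert_command(src, dst, audio_codec="aac", video_codec="libx264"):
    """Assemble one FFmpeg invocation of a batch job: '-y' overwrites any
    existing output, '-c:v'/'-c:a' select the target video/audio codecs."""
    return ["ffmpeg", "-y", "-i", src, "-c:v", video_codec, "-c:a", audio_codec, dst]

cmd = build_convert_command("lecture.avi", "lecture.mp4")
# A conversion server would execute it with subprocess.run(cmd, check=True);
# the core scheduling server only needs to hand out (src, dst, codec) tuples.
```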

  20. Use of Video and Audio Texts in EFL Listening Test

    ERIC Educational Resources Information Center

    Basal, Ahmet; Gülözer, Kaine; Demir, Ibrahim

    2015-01-01

    The study aims to discover whether audio or video modality in a listening test is more beneficial to test takers. In this study, the posttest-only control group design was utilized and quantitative data were collected in order to measure participant performances concerning two types of modality (audio or video) in a listening test. The…

  1. Comparative evaluation of the effectiveness of audio and audiovisual distraction aids in the management of anxious pediatric dental patients.

    PubMed

    Kaur, Rajwinder; Jindal, Ritu; Dua, Rohini; Mahajan, Sandeep; Sethi, Kunal; Garg, Sunny

    2015-01-01

    The aim of this study was to evaluate and compare audio and audiovisual distraction aids in the management of anxious pediatric dental patients of different age groups, and to study children's responses across sequential dental visits with the use of distraction aids. The study was conducted on two age groups, 4-6 years and 6-8 years, with 30 patients in each age group on their first dental visit. The children of each age group were divided into three subgroups (control, audio distraction, and audiovisual distraction) with 10 patients in each subgroup. Each child in every subgroup went through three dental visits. The child's anxiety level at each visit was assessed using a combination of anxiety-measuring parameters. The data collected were tabulated and subjected to statistical analysis. The Tukey honest significant difference post-hoc test at the 0.05 level of significance revealed that the audiovisual group differed highly significantly from the audio and control groups, whereas the audio group differed significantly from the control group. Audiovisual distraction was found to be a more effective mode of distraction in the management of anxious children in both age groups when compared to audio distraction. In both age groups, a significant effect of the visit type was also observed.

  2. Interactive MPEG-4 low-bit-rate speech/audio transmission over the Internet

    NASA Astrophysics Data System (ADS)

    Liu, Fang; Kim, JongWon; Kuo, C.-C. Jay

    1999-11-01

    The recently developed MPEG-4 technology enables the coding and transmission of natural and synthetic audio-visual data in the form of objects. In an effort to extend the object-based functionality of MPEG-4 to real-time Internet applications, architectural prototypes of the multiplex layer and transport layer tailored for transmission of MPEG-4 data over IP are under debate within the Internet Engineering Task Force (IETF) and the MPEG-4 Systems Ad Hoc group. In this paper, we present an architecture for an interactive MPEG-4 speech/audio transmission system over the Internet. It utilizes a framework of the Real Time Streaming Protocol (RTSP) over the Real-time Transport Protocol (RTP) to provide controlled, on-demand delivery of real-time speech/audio data. Based on a client-server model, a couple of low-bit-rate bit streams (real-time speech/audio and pre-encoded speech/audio) are multiplexed and transmitted via a single RTP channel to the receiver. The MPEG-4 Scene Description (SD) and Object Descriptor (OD) bit streams are securely sent through the RTSP control channel. Upon reception, an initial MPEG-4 audio-visual scene is constructed after de-multiplexing, decoding of bit streams, and scene composition. A receiver is allowed to manipulate the initial audio-visual scene presentation locally, or to interactively arrange scene changes by sending requests to the server. A server may also choose to update the client with new streams and a list of contents for user selection.
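
    For illustration, every packet on the single RTP channel mentioned above begins with the fixed 12-byte RTP header defined in RFC 3550. The sketch below packs that header; the payload type, sequence number, and SSRC are arbitrary placeholders, not values from the paper:

```python
import struct

def rtp_header(seq: int, timestamp: int, ssrc: int,
               payload_type: int, marker: bool = False) -> bytes:
    """Pack the fixed 12-byte RTP header (RFC 3550):
    V=2, P=0, X=0, CC=0 | M, PT | sequence | timestamp | SSRC."""
    byte0 = 2 << 6  # version 2; padding, extension, CSRC count all zero
    byte1 = (int(marker) << 7) | (payload_type & 0x7F)
    return struct.pack("!BBHII", byte0, byte1,
                       seq & 0xFFFF, timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)

# Example packet header for a dynamic payload type (96 is a placeholder).
hdr = rtp_header(seq=1, timestamp=160, ssrc=0x1234ABCD, payload_type=96)
```

    The multiplexed speech/audio payload would follow these 12 bytes in each datagram.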

  3. Full Mesh Audio Conferencing Using the Point-to-Multipoint On-Board Switching Capability of ACTS

    NASA Technical Reports Server (NTRS)

    Rivett, Mary L.; Sethna, Zubin H.

    1996-01-01

    The purpose of this paper is to describe an implementation of audio conferencing using the ACTS T1-VSAT network. In particular, this implementation evaluates the use of the on-board switching capability of the satellite as a viable alternative for providing the multipoint connectivity normally provided by terrestrial audio bridge equipment. The system that was implemented provides full mesh, full-duplex audio conferencing, with end-to-end voice paths between all participants requiring only a single hop (i.e., 250 ms delay). Moreover, it addresses the lack of spontaneity in current systems by allowing a user to easily start a conference from any standard telephone handset connected to an ACTS earth station, and to quickly add new members to the conference at any time using the 'hook flash' capability. No prior scheduling of resources is required and there is no central point of control, thereby providing the user with the spontaneity desired in audio conference control.

  4. Audio signal processor

    NASA Technical Reports Server (NTRS)

    Hymer, R. L.

    1970-01-01

    System provides automatic volume control for an audio amplifier or a voice communication system without introducing noise surges during pauses in the input, and without losing the initial signal when the input resumes.

  5. Power-output regularization in global sound equalization.

    PubMed

    Stefanakis, Nick; Sarris, John; Cambourakis, George; Jacobsen, Finn

    2008-01-01

    The purpose of equalization in room acoustics is to compensate for the undesired modification that an enclosure introduces to signals such as audio or speech. In this work, equalization in a large part of the volume of a room is addressed. The multiple point method is employed with an acoustic power-output penalty term instead of the traditional quadratic source effort penalty term. Simulation results demonstrate that this technique gives a smoother decline of the reproduction performance away from the control points.

  6. Acoustic Calibration of the Exterior Effects Room at the NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Faller, Kenneth J., II; Rizzi, Stephen A.; Klos, Jacob; Chapin, William L.; Surucu, Fahri; Aumann, Aric R.

    2010-01-01

    The Exterior Effects Room (EER) at the NASA Langley Research Center is a 39-seat auditorium built for psychoacoustic studies of aircraft community noise. The original reproduction system employed monaural playback and hence lacked sound localization capability. In an effort to more closely recreate field test conditions, a significant upgrade was undertaken to allow simulation of a three-dimensional audio and visual environment. The 3D audio system consists of 27 mid and high frequency satellite speakers and 4 subwoofers, driven by a real-time audio server running an implementation of Vector Base Amplitude Panning. The audio server is part of a larger simulation system, which controls the audio and visual presentation of recorded and synthesized aircraft flyovers. The focus of this work is on the calibration of the 3D audio system, including gains used in the amplitude panning algorithm, speaker equalization, and absolute gain control. Because the speakers are installed in an irregularly shaped room, the speaker equalization includes time delay and gain compensation due to different mounting distances from the focal point, filtering for color compensation due to different installations (half space, corner, baffled/unbaffled), and cross-over filtering.
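
    A minimal sketch of the amplitude-panning and distance-compensation steps described above, assuming Pulkki-style 2D VBAP for a single loudspeaker pair (the angles, distances, and speed of sound below are illustrative assumptions, not the EER's calibration values):

```python
import math

def vbap_pair_gains(src_deg: float, spk1_deg: float, spk2_deg: float):
    """2D vector base amplitude panning: solve L g = p for the active
    speaker pair, then normalize for constant power (||g|| = 1)."""
    p = (math.cos(math.radians(src_deg)), math.sin(math.radians(src_deg)))
    l1 = (math.cos(math.radians(spk1_deg)), math.sin(math.radians(spk1_deg)))
    l2 = (math.cos(math.radians(spk2_deg)), math.sin(math.radians(spk2_deg)))
    det = l1[0] * l2[1] - l2[0] * l1[1]
    g1 = (p[0] * l2[1] - l2[0] * p[1]) / det
    g2 = (l1[0] * p[1] - p[0] * l1[1]) / det
    norm = math.hypot(g1, g2)
    return g1 / norm, g2 / norm

def distance_compensation(d: float, d_ref: float, c: float = 343.0):
    """Delay (s) and linear gain aligning a speaker mounted at distance d
    with the reference distance d_ref (1/r spherical-spreading model)."""
    return (d - d_ref) / c, d / d_ref
```

    A source aimed exactly at one speaker yields gains (1, 0); a source midway between the pair yields equal gains of about 0.707 each, preserving total power.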

  7. Robot Command Interface Using an Audio-Visual Speech Recognition System

    NASA Astrophysics Data System (ADS)

    Ceballos, Alexánder; Gómez, Juan; Prieto, Flavio; Redarce, Tanneguy

    In recent years, audio-visual speech recognition has emerged as an active field of research thanks to advances in pattern recognition, signal processing and machine vision. Its ultimate goal is to allow human-computer communication using voice, taking into account the visual information contained in the audio-visual speech signal. This document presents an automatic command recognition system using audio-visual information, intended to control the da Vinci laparoscopic surgical robot. The audio signal is treated using the Mel Frequency Cepstral Coefficients parametrization method. In addition, features based on the points that define the mouth's outer contour according to the MPEG-4 standard are used in order to extract the visual speech information.
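
    As an illustrative sketch of one step in MFCC parametrization (not the authors' implementation), the mel-scale warping and placement of triangular-filter center frequencies can be computed as follows, using the common 2595/700 variant of the mel formula:

```python
import math

def hz_to_mel(f_hz: float) -> float:
    # Common MFCC warping: mel(1000 Hz) is ~1000 by construction.
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m: float) -> float:
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filter_centers(n_filters: int, f_low: float, f_high: float):
    """Center frequencies (Hz) of n_filters triangular filters spaced
    uniformly on the mel scale between f_low and f_high."""
    m_low, m_high = hz_to_mel(f_low), hz_to_mel(f_high)
    step = (m_high - m_low) / (n_filters + 1)
    return [mel_to_hz(m_low + step * (i + 1)) for i in range(n_filters)]
```

    The full MFCC pipeline would apply these filters to each frame's power spectrum, take logs, and decorrelate with a DCT to obtain the cepstral coefficients.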

  8. The impact of modality and working memory capacity on achievement in a multimedia environment

    NASA Astrophysics Data System (ADS)

    Stromfors, Charlotte M.

    This study explored the impact of modality and working memory capacity on student learning in a dual-modality multimedia environment titled Visualizing Topography. This computer-based instructional program focused on the basic skills of reading and interpreting topographic maps. Two versions of the program presented the same instructional content but varied the modality of verbal information: the audio-visual condition coordinated topographic maps and narration; the visual-visual condition provided the same topographic maps with readable text. An analysis of covariance was conducted to evaluate the effects of the two conditions in relation to working memory capacity, controlling for individual differences in spatial visualization and prior knowledge. Scores on the Figural Intersection Test were used to separate subjects into three levels of measured working memory capacity: low, medium, and high. Subjects accessed Visualizing Topography by way of the Internet and proceeded independently through the program. The program architecture was linear in format; subjects had a minimal amount of flexibility within each of five segments, but none between segments. One hundred and fifty-one subjects were randomly assigned to either the audio-visual or the visual-visual condition. The average time spent in the program was thirty-one minutes. The results of the ANCOVA revealed a small to moderate modality effect favoring the audio-visual condition. The results also showed that subjects with low and medium working memory capacity benefited more from the audio-visual condition than the visual-visual condition, while subjects with high working memory capacity did not benefit from either condition. Although splitting the data reduced group sizes, ANCOVA results by gender suggested that the audio-visual condition favored females with low working memory capacities.
The results have implications for designers of educational software, the teachers who select software, and the students themselves. Splitting information into two, non-redundant sources, one audio and one visual, may effectively extend working memory capacity. This is especially significant for the student population encountering difficult science concepts that require the formation and manipulation of mental representations. It is recommended that multimedia environments be designed or selected with attention to modality conditions that facilitate student learning.

  9. Design and implementation of an audio indicator

    NASA Astrophysics Data System (ADS)

    Zheng, Shiyong; Li, Zhao; Li, Biqing

    2017-04-01

    This paper proposes an audio indicator designed around a C9014 transistor amplifier, an operational-amplifier LED level indicator, and a CD4017 decimal counter/distributor. The circuit can drive neon and holiday lights audibly, in time with the input signal. The input audio signal is amplified by the power amplifier stage built around the C9014; an adjustment potentiometer taps the amplified signal and feeds it to the CD4017 distributor, driving its count, and the connected LEDs display the running state of the circuit. This simple audio indicator uses only a single IC (U1) and produces a two-color LED chasing effect that follows the audio signal, so the LED display reflects the variation in the amplitude and frequency of the audio signal and the corresponding level. The lights can achieve four display forms, including jumping and gradual change, and can be used in homes, hotels, discos, theaters, advertising and other fields, with a wide range of uses in modern life.

  10. Acoustic signal recovery by thermal demodulation

    NASA Astrophysics Data System (ADS)

    Boullosa, R. R.; Santillán, Arturo O.

    2006-10-01

    One operating mode of recently developed thermoacoustic transducers is as an audio speaker that uses an input superimposed on a direct current; as a result, the audio signal occurs at the same frequency as the input signal. To extend the potential applications of these kinds of sources, the authors propose an alternative driving mode in which a simple thermoacoustic device, consisting of a metal film over a substrate and a heat sink, is excited with a high-frequency sinusoid that is amplitude modulated by a lower-frequency signal. They show that the modulating signal is recovered in the radiated waves due to a mechanism that is inherent to this type of thermoacoustic process. If the frequency of the carrier is higher than 30 kHz and the modulating signal (the one of interest) is in the audio frequency range, only this signal will be heard. Thus, the thermoacoustic device operates as an audio-band, self-demodulating speaker.
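
    The self-demodulation can be illustrated numerically: the film's heating power, and hence the radiated sound, follows the square of the drive signal, so squaring an amplitude-modulated carrier recovers the modulating tone at baseband. The frequencies below are arbitrary placeholders chosen to land on exact DFT bins, not the experimental values:

```python
import math

def tone_magnitude(signal, freq_hz, fs):
    """Magnitude of the DFT bin of `signal` at freq_hz, by direct
    correlation (assumes freq_hz falls on an exact bin)."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq_hz * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq_hz * i / fs) for i, s in enumerate(signal))
    return math.hypot(re, im) / n

fs, n = 8000, 8000
fc, fm = 1000.0, 50.0  # placeholder carrier and modulating frequencies
drive = [(1 + 0.5 * math.cos(2 * math.pi * fm * i / fs))
         * math.cos(2 * math.pi * fc * i / fs) for i in range(n)]
heat = [s * s for s in drive]  # heating power ~ square of the drive

# The drive itself has no energy at fm (only sidebands at fc +/- fm),
# but the squared signal does: the modulating tone is recovered.
```

    The same square-law argument explains why only the audio-band modulating signal is heard when the carrier is ultrasonic.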

  11. Multi-geophysical approaches to detect karst channels underground - A case study in Mengzi of Yunnan Province, China

    NASA Astrophysics Data System (ADS)

    Gan, Fuping; Han, Kai; Lan, Funing; Chen, Yuling; Zhang, Wei

    2017-01-01

    Mengzi is located 20 km south of the outlet of the Nandong subsurface river and has suffered from water shortages in recent years. It is necessary to locate underground water resources according to geological characteristics such as the positions and buried depths of the underground river, in order to improve the civil and industrial water supply. Owing to adverse factors such as topographic relief and bare rock in karst terrains, geophysical approaches, namely Controlled Source Audio Magnetotellurics and Seismic Refraction Tomography, were first used to roughly identify faults and fracture zones from their geophysical signatures of low resistivity and low velocity; the mise-a-la-masse method was then used to judge which faults and fracture zones were likely channels of the subsurface river. Five anomalies were recognized along the 2.4-km-long profile, showing that the northeast river system has several branches. Drilling data proved that the first borehole intersected a water-bearing channel, indicated by rock cores of river sand and gravel deposits; the second encountered a water-filled fracture zone with abundant water; and the third exposed a mud-filled fracture zone without sustainable water. The results of this case study show that the combination of Controlled Source Audio Magnetotellurics, Seismic Refraction Tomography and mise-a-la-masse is an effective approach for detecting water-filled channels and fracture zones in karst terrains.

  12. Astronaut Garneau working with Audio Control System panel

    NASA Image and Video Library

    1996-06-05

    STS077-392-007 (19-29 May 1996) --- Inside the Spacehab Module onboard the Earth-orbiting Space Shuttle Endeavour, Canadian astronaut Marc Garneau, mission specialist, joins astronaut Curtis L. Brown, Jr., pilot, in checking out the audio control system for Spacehab. The two joined four other NASA astronauts for nine days of research and experimentation in Earth-orbit.

  13. Aural Communication in Aviation.

    DTIC Science & Technology

    1981-06-01

    of standards. f. Audio Warnings and Controls Voice versus tone warnings. Design of highly discriminative audio warnings. Optimum number of warnings to...EIGHT TABLE 1 Experimental Procedure The present studies were designed so that each subject served as his/her own control, i.e., each subject... controller is experienced and the message is unexpected, and especially if one or both of them are non-native speakers of English. This should be taken

  14. Supervisory Control of Unmanned Vehicles

    DTIC Science & Technology

    2010-04-01

    than-ideal video quality (Chen et al., 2007; Chen and Thropp, 2007). Simpson et al. (2004) proposed using a spatial audio display to augment UAV...operator’s SA and discussed its utility for each of the three SA levels. They recommended that both visual and spatial audio information should be...presented concurrently. They also suggested that presenting the audio information spatially may enhance UAV operator’s sense of presence (i.e

  15. Reduction in time-to-sleep through EEG based brain state detection and audio stimulation.

    PubMed

    Zhuo Zhang; Cuntai Guan; Ti Eu Chan; Juanhong Yu; Aung Aung Phyo Wai; Chuanchu Wang; Haihong Zhang

    2015-08-01

    We developed an EEG- and audio-based sleep sensing and enhancing system, called iSleep (interactive Sleep enhancement apparatus). The system adopts a closed-loop approach that optimizes the audio recording selection based on the user's sleep status, detected through our online EEG computing algorithm. The iSleep prototype comprises two major parts: 1) a sleeping mask integrated with a single-channel EEG electrode and amplifier, a pair of stereo earphones, and a microcontroller with wireless circuitry for control and data streaming; 2) a mobile app that receives EEG signals for online sleep monitoring and controls audio playback. In this study we attempt to validate our hypothesis that appropriate audio stimulation in relation to brain state can induce faster onset of sleep and improve the quality of a nap. We conducted experiments on 28 healthy subjects, each undergoing two nap sessions, one with a quiet background and one with our audio stimulation. We compared the time-to-sleep in both sessions between two groups of subjects, i.e., fast and slow sleep-onset groups. The p-value obtained from the Wilcoxon signed-rank test is 1.22e-04 for the slow-onset group, which demonstrates that iSleep can significantly reduce the time-to-sleep for people with difficulty falling asleep.
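
    The reported p-value comes from a Wilcoxon signed-rank test on paired time-to-sleep measurements. A self-contained sketch of that test using the large-sample normal approximation follows; the study's actual data are not reproduced here, and a production analysis would use a library routine with exact small-sample tables:

```python
import math

def wilcoxon_signed_rank(x, y):
    """Paired Wilcoxon signed-rank test, normal approximation.
    Returns (W+, two-sided p). Zero differences are dropped; ties in
    |difference| receive average ranks."""
    d = [a - b for a, b in zip(x, y) if a != b]
    n = len(d)
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:  # assign average ranks over runs of tied |d|
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_pos = sum(r for r, di in zip(ranks, d) if di > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_pos - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w_pos, p
```

    Applied to each subject's quiet-session versus stimulation-session times, a small p indicates a systematic reduction in time-to-sleep.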

  16. Reducing audio stimulus presentation latencies across studies, laboratories, and hardware and operating system configurations.

    PubMed

    Babjack, Destiny L; Cernicky, Brandon; Sobotka, Andrew J; Basler, Lee; Struthers, Devon; Kisic, Richard; Barone, Kimberly; Zuccolotto, Anthony P

    2015-09-01

    Using differing computer platforms and audio output devices to deliver audio stimuli often introduces (1) substantial variability across labs and (2) variable time between the intended and actual sound delivery (the sound onset latency). Fast, accurate audio onset latencies are particularly important when audio stimuli need to be delivered precisely as part of studies that depend on accurate timing (e.g., electroencephalographic, event-related potential, or multimodal studies), or in multisite studies in which standardization and strict control over the computer platforms used is not feasible. This research describes the variability introduced by using differing configurations and introduces a novel approach to minimizing audio sound latency and variability. A stimulus presentation and latency assessment approach is presented using E-Prime and Chronos (a new multifunction, USB-based data presentation and collection device). The present approach reliably delivers audio stimuli with low latencies that vary by ≤1 ms, independent of hardware and Windows operating system (OS)/driver combinations. The Chronos audio subsystem adopts a buffering, aborting, querying, and remixing approach to the delivery of audio, to achieve a consistent 1-ms sound onset latency for single-sound delivery, and precise delivery of multiple sounds that achieves standard deviations of 1/10th of a millisecond without the use of advanced scripting. Chronos's sound onset latencies are small, reliable, and consistent across systems. Testing of standard audio delivery devices and configurations highlights the need for careful attention to consistency between labs, experiments, and multiple study sites in their hardware choices, OS selections, and adoption of audio delivery systems designed to sidestep the audio latency variability issue.

  17. High-Resolution Audio with Inaudible High-Frequency Components Induces a Relaxed Attentional State without Conscious Awareness.

    PubMed

    Kuribayashi, Ryuma; Nittono, Hiroshi

    2017-01-01

    High-resolution audio has a higher sampling frequency and a greater bit depth than conventional low-resolution audio such as compact disks. The higher sampling frequency enables inaudible sound components (above 20 kHz) that are cut off in low-resolution audio to be reproduced. Previous studies of high-resolution audio have mainly focused on the effect of such high-frequency components. It is known that alpha-band power in a human electroencephalogram (EEG) is larger when the inaudible high-frequency components are present than when they are absent. Traditionally, alpha-band EEG activity has been associated with arousal level. However, no previous studies have explored whether sound sources with high-frequency components affect the arousal level of listeners. The present study examined this possibility by having 22 participants listen to two types of a 400-s musical excerpt of French Suite No. 5 by J. S. Bach (on cembalo, 24-bit quantization, 192 kHz A/D sampling), with or without inaudible high-frequency components, while performing a visual vigilance task. High-alpha (10.5-13 Hz) and low-beta (13-20 Hz) EEG powers were larger for the excerpt with high-frequency components than for the excerpt without them. Reaction times and error rates did not change during the task and were not different between the excerpts. The amplitude of the P3 component elicited by target stimuli in the vigilance task increased in the second half of the listening period for the excerpt with high-frequency components, whereas no such P3 amplitude change was observed for the other excerpt without them. The participants did not distinguish between these excerpts in terms of sound quality. Only a subjective rating of inactive pleasantness after listening was higher for the excerpt with high-frequency components than for the other excerpt. 
The present study shows that high-resolution audio that retains high-frequency components has an advantage over similar and indistinguishable digital sound sources in which such components are artificially cut off, suggesting that high-resolution audio with inaudible high-frequency components induces a relaxed attentional state without conscious awareness.

  18. Audio-video decision support for patients: the documentary genre as a basis for decision aids.

    PubMed

    Volandes, Angelo E; Barry, Michael J; Wood, Fiona; Elwyn, Glyn

    2013-09-01

    Decision support tools are increasingly using audio-visual materials. However, disagreement exists about the use of audio-visual materials as they may be subjective and biased. This is a literature review of the major texts for documentary film studies to extrapolate issues of objectivity and bias from film to decision support tools. The key features of documentary films are that they attempt to portray real events and that the attempted reality is always filtered through the lens of the filmmaker. The same key features can be said of decision support tools that use audio-visual materials. Three concerns arising from documentary film studies as they apply to the use of audio-visual materials in decision support tools include whose perspective matters (stakeholder bias), how to choose among audio-visual materials (selection bias) and how to ensure objectivity (editorial bias). Decision science needs to start a debate about how audio-visual materials are to be used in decision support tools. Simply because audio-visual materials may be subjective and open to bias does not mean that we should not use them. Methods need to be found to ensure consensus around balance and editorial control, such that audio-visual materials can be used. © 2011 John Wiley & Sons Ltd.

  19. Multimodal integration of micro-Doppler sonar and auditory signals for behavior classification with convolutional networks.

    PubMed

    Dura-Bernal, Salvador; Garreau, Guillaume; Georgiou, Julius; Andreou, Andreas G; Denham, Susan L; Wennekers, Thomas

    2013-10-01

    The ability to recognize the behavior of individuals is of great interest in the general field of safety (e.g. building security, crowd control, transport analysis, independent living for the elderly). Here we report a new real-time acoustic system for human action and behavior recognition that integrates passive audio and active micro-Doppler sonar signatures over multiple time scales. The system architecture is based on a six-layer convolutional neural network, trained and evaluated using a dataset of 10 subjects performing seven different behaviors. Probabilistic combination of system output through time for each modality separately yields 94% (passive audio) and 91% (micro-Doppler sonar) correct behavior classification; probabilistic multimodal integration increases classification performance to 98%. This study supports the efficacy of micro-Doppler sonar systems in characterizing human actions, which can then be efficiently classified using ConvNets. It also demonstrates that the integration of multiple sources of acoustic information can significantly improve the system's performance.
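
    The probabilistic combination of system output through time, and the multimodal integration across passive audio and sonar, can be sketched as log-posterior accumulation: sum log-probabilities over frames and modalities, then take the argmax. The class posteriors below are made-up placeholders, not the network's outputs:

```python
import math

def fuse(frame_posteriors_per_modality):
    """Combine per-frame class posteriors: sum log-probabilities over
    time within each modality, add across modalities (equivalent to a
    product of probabilities), and return the winning class index."""
    n_classes = len(frame_posteriors_per_modality[0][0])
    total = [0.0] * n_classes
    for frames in frame_posteriors_per_modality:   # one list per modality
        for post in frames:                        # one posterior per frame
            for c, p in enumerate(post):
                total[c] += math.log(max(p, 1e-12))  # floor avoids log(0)
    return max(range(n_classes), key=lambda c: total[c])

# Toy two-class example: audio mildly favors class 1, sonar strongly so.
audio = [[0.4, 0.6], [0.45, 0.55]]
sonar = [[0.2, 0.8], [0.3, 0.7]]
```

    Summing log-posteriors assumes conditional independence between modalities; the reported gain from 94%/91% to 98% is consistent with the modalities contributing complementary evidence.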

  20. Spatialized audio improves call sign recognition during multi-aircraft control.

    PubMed

    Kim, Sungbin; Miller, Michael E; Rusnock, Christina F; Elshaw, John J

    2018-07-01

    We investigated the impact of a spatialized audio display on response time, workload, and accuracy while monitoring auditory information for relevance. The human ability to differentiate sound direction implies that spatial audio may be used to encode information. Therefore, it is hypothesized that spatial audio cues can be applied to aid differentiation of critical versus noncritical verbal auditory information. We used a human performance model and a laboratory study involving 24 participants to examine the effect of applying a notional, automated parser to present audio in a particular ear depending on information relevance. Operator workload and performance were assessed while subjects listened for and responded to relevant audio cues associated with critical information among additional noncritical information. Encoding relevance through spatial location in a spatial audio display system, as opposed to monophonic binaural presentation, significantly reduced response time and workload, particularly for noncritical information. Future auditory displays employing spatial cues to indicate relevance have the potential to reduce workload and improve operator performance in similar task domains. Furthermore, these displays have the potential to reduce the dependence of workload and performance on the number of audio cues. Published by Elsevier Ltd.
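
    A notional parser of the kind studied here can be sketched as a hard stereo router that sends critical messages to one ear and noncritical ones to the other. This is a deliberately simplified stand-in for a full spatialized audio display; the relevance tag and mono sample format are assumptions:

```python
def route_stereo(samples, critical: bool):
    """Pan a mono audio message fully to the left ear when tagged
    critical, and to the right ear otherwise (hard spatial encoding;
    a real display would use finer azimuth cues)."""
    silence = [0.0] * len(samples)
    left = samples if critical else silence
    right = silence if critical else samples
    return list(zip(left, right))
```

    In practice the parser's relevance decision (e.g., matching the operator's call sign) would drive the `critical` flag per message.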

  1. Formal Verification of a Power Controller Using the Real-Time Model Checker UPPAAL

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Larsen, Kim Guldstrand; Skou, Arne

    1999-01-01

    A real-time system for power-down control in audio/video components is modeled and verified using the real-time model checker UPPAAL. The system is supposed to reside in an audio/video component and control (read from and write to) links to neighbor audio/video components such as TV, VCR and remote-control. In particular, the system is responsible for the powering up and down of the component in between the arrival of data, and in order to do so in a safe way without loss of data, it is essential that no link interrupts are lost. Hence, a component system is a multitasking system with hard real-time requirements, and we present techniques for modeling time consumption in such a multitasked, prioritized system. The work has been carried out in a collaboration between Aalborg University and the audio/video company B&O. By modeling the system, 3 design errors were identified and corrected, and the following verification confirmed the validity of the design but also revealed the necessity for an upper limit of the interrupt frequency. The resulting design has been implemented and it is going to be incorporated as part of a new product line.

  2. Modeling and analysis of CSAMT field source effect and its characteristics

    NASA Astrophysics Data System (ADS)

    Da, Lei; Xiaoping, Wu; Qingyun, Di; Gang, Wang; Xiangrong, Lv; Ruo, Wang; Jun, Yang; Mingxin, Yue

    2016-02-01

    Controlled-source audio-frequency magnetotellurics (CSAMT) has been a highly successful geophysical tool used in a variety of geological exploration studies for many years. However, because of the artificial source used in the CSAMT technique, two important factors must be considered during interpretation: non-plane-wave (geometric) effects and source overprint effects. Hence, in this paper we simulate the source overprint effect and analyze the rules and characteristics of its influence on CSAMT applications. Two-dimensional modeling of several typical models was carried out using an adaptive unstructured finite element method. We summarize the characteristics and rules of the source overprint effect and analyze its influence on data taken over several mining areas. The results show that the occurrence and strength of the source overprint effect depend on the location of the source dipole in relation to the receiver and the subsurface geology. To avoid source overprint effects, three principles are suggested for determining the best location for the grounded dipole source in the field.
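
    The non-plane-wave (geometric) effect is governed by the source-receiver offset measured in skin depths. A small sketch using the standard approximation delta ~ 503*sqrt(rho/f) and a rule-of-thumb far-field offset of several skin depths; the multiplier of 4 is an assumption, as quoted values commonly range from about 3 to 5:

```python
import math

def skin_depth_m(resistivity_ohm_m: float, freq_hz: float) -> float:
    """Electromagnetic skin depth, delta ~= 503 * sqrt(rho / f) meters."""
    return 503.0 * math.sqrt(resistivity_ohm_m / freq_hz)

def min_far_field_offset_m(resistivity_ohm_m: float, freq_hz: float,
                           n_skin_depths: float = 4.0) -> float:
    """Rule-of-thumb source-receiver offset for near-plane-wave CSAMT
    data (commonly quoted as 3-5 skin depths; 4 is used here)."""
    return n_skin_depths * skin_depth_m(resistivity_ohm_m, freq_hz)
```

    For 100 ohm-m ground at 100 Hz the skin depth is about 503 m, so the lowest survey frequencies set the required transmitter offset.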

  3. Energy Use of Home Audio Products in the U.S.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosen, K.B.; Meier, A.K.

    1999-12-01

    We conducted a bottom-up analysis using stock and usage estimates from secondary sources, and our own power measurements. We measured power levels of the most common audio products in their most commonly used operating modes. We found that the combined energy consumption of standby, idle, and play modes of clock radios, portable stereos, compact stereos, and component stereos was 20 TWh/yr, representing about 1.8% of the 1998 national residential electricity consumption.
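
    The bottom-up estimate multiplies product stock by per-mode power draw and annual hours in each mode, then sums across modes and product categories. The sketch below uses placeholder numbers, not the paper's measured stock or power data:

```python
def annual_twh(stock_millions: float, mode_profile) -> float:
    """Annual energy (TWh/yr) for one product category.
    mode_profile: iterable of (watts, hours_per_year) pairs, one per
    operating mode (e.g., standby, idle, play)."""
    wh_per_unit = sum(watts * hours for watts, hours in mode_profile)
    return stock_millions * 1e6 * wh_per_unit / 1e12  # Wh -> TWh

# Placeholder example: 100 million units drawing 3 W in standby for
# 7760 h/yr and 10 W in play for 1000 h/yr.
example = annual_twh(100, [(3, 7760), (10, 1000)])
```

    Summing such estimates over clock radios, portable stereos, compact stereos, and component stereos yields the kind of national total reported above.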

  4. Auditory cross-modal reorganization in cochlear implant users indicates audio-visual integration.

    PubMed

    Stropahl, Maren; Debener, Stefan

    2017-01-01

    There is clear evidence for cross-modal cortical reorganization in the auditory system of post-lingually deafened cochlear implant (CI) users. A recent report suggests that moderate sensorineural hearing loss is already sufficient to initiate corresponding cortical changes. To what extent these changes are deprivation-induced or related to sensory recovery is still debated. Moreover, the influence of cross-modal reorganization on CI benefit is also still unclear. While reorganization during deafness may impede speech recovery, reorganization also has beneficial influences on face recognition and lip-reading. As CI users were observed to show differences in multisensory integration, the question arises whether cross-modal reorganization is related to audio-visual integration skills. The current electroencephalography study investigated cortical reorganization in experienced post-lingually deafened CI users (n = 18), untreated mild to moderately hearing impaired individuals (n = 18) and normal hearing controls (n = 17). Cross-modal activation of the auditory cortex, measured by means of EEG source localization in response to human faces, and audio-visual integration, quantified with the McGurk illusion, were assessed. CI users revealed stronger cross-modal activations compared to age-matched normal hearing individuals. Furthermore, CI users showed a relationship between cross-modal activation and audio-visual integration strength. This may further support a beneficial relationship between cross-modal activation and daily-life communication skills that may not be fully captured by laboratory-based speech perception tests. Interestingly, hearing impaired individuals showed behavioral and neurophysiological results that were numerically between those of the other two groups, and they showed a moderate relationship between cross-modal activation and the degree of hearing loss.
This further supports the notion that auditory deprivation evokes a reorganization of the auditory system even at early stages of hearing loss.

  5. Fiber-channel audio video standard for military and commercial aircraft product lines

    NASA Astrophysics Data System (ADS)

    Keller, Jack E.

    2002-08-01

Fibre Channel is an emerging high-speed digital network technology that continues to make inroads into the avionics arena. The suitability of Fibre Channel for such applications is largely due to its flexibility in these key areas: Network topologies can be configured in point-to-point, arbitrated loop or switched fabric connections. The physical layer supports either copper or fiber optic implementations with a Bit Error Rate of less than 10^-12. Multiple Classes of Service are available. Multiple Upper Level Protocols are supported. Multiple high-speed data rates offer open-ended growth paths providing speed negotiation within a single network. Current speeds supported by commercially available hardware are 1 and 2 Gbps, providing effective data rates of 100 and 200 MBps, respectively. Such networks lend themselves well to the transport of digital video and audio data. This paper summarizes an ANSI standard currently in the final approval cycle of the InterNational Committee for Information Technology Standards (INCITS). This standard defines a flexible mechanism whereby digital video, audio and ancillary data are systematically packaged for transport over a Fibre Channel network. The basic mechanism, called a container, houses audio and video content functionally grouped as elements of the container called objects. Featured in this paper is a specific container mapping called Simple Parametric Digital Video (SPDV), developed particularly to address digital video in avionics systems. SPDV provides pixel-based video with associated ancillary data, typically sourced by various sensors, to be processed and/or distributed in the cockpit for presentation via high-resolution displays. Also highlighted in this paper is a streamlined Upper Level Protocol (ULP) called Frame Header Control Procedure (FHCP), targeted for avionics systems where the functionality of a more complex ULP is not required.

  6. Message Modality and Source Credibility Can Interact to Affect Argument Processing.

    ERIC Educational Resources Information Center

    Booth-Butterfield, Steve; Gutowski, Christine

    1993-01-01

    Extends previous modality and source cue studies by manipulating argument quality. Randomly assigned college students by class to an argument quality by source attribute by modality factorial experiment. Finds the print mode produces only argument main effects, and audio and video modes produce argument by cue interactions. Finds data inconsistent…

  7. Investigating health information needs of community radio stations and applying the World Wide Web to disseminate audio products.

    PubMed

    Snyders, Janus; van Wyk, Elmarie; van Zyl, Hendra

    2010-01-01

The Web and Media Technologies Platform (WMTP) of the South African Medical Research Council (MRC) conducted a pilot project amongst community radio stations in South Africa. Based on previous research done in Africa, WMTP investigated the following research question: How reliable is the content of health information broadcast by community radio stations? The main objectives of the project were to determine 1) the intervals of health slots on community radio stations, 2) the sources used by community radio stations for health slots, and 3) the type of audio products needed for health slots, and 4) to develop a user-friendly Web site in response to the stations' needs for easy access to audio material on health information.

  8. The role of laryngoscopy in the diagnosis of spasmodic dysphonia.

    PubMed

    Daraei, Pedram; Villari, Craig R; Rubin, Adam D; Hillel, Alexander T; Hapner, Edie R; Klein, Adam M; Johns, Michael M

    2014-03-01

    Spasmodic dysphonia (SD) can be difficult to diagnose, and patients often see multiple physicians for many years before diagnosis. Improving the speed of diagnosis for individuals with SD may decrease the time to treatment and improve patient quality of life more quickly. To assess whether the diagnosis of SD can be accurately predicted through auditory cues alone without the assistance of visual cues offered by laryngoscopic examination. Single-masked, case-control study at a specialized referral center that included patients who underwent laryngoscopic examination as part of a multidisciplinary workup for dysphonia. Twenty-two patients were selected in total: 10 with SD, 5 with vocal tremor, and 7 controls without SD or vocal tremor. The laryngoscopic examination was recorded, deidentified, and edited to make 3 media clips for each patient: video alone, audio alone, and combined video and audio. These clips were randomized and presented to 3 fellowship-trained laryngologist raters (A.D.R., A.T.H., and A.M.K.), who established the most probable diagnosis for each clip. Intrarater and interrater reliability were evaluated using repeat clips incorporated in the presentations. We measured diagnostic accuracy for video-only, audio-only, and combined multimedia clips. These measures were established before data collection. Data analysis was accomplished with analysis of variance and Tukey honestly significant differences. Of patients with SD, diagnostic accuracy was 10%, 73%, and 73% for video-only, audio-only, and combined, respectively (P < .001, df = 2). Of patients with vocal tremor, diagnostic accuracy was 93%, 73%, and 100% for video-only, audio-only, and combined, respectively (P = .05, df = 2). Of the controls, diagnostic accuracy was 81%, 19%, and 62% for video-only, audio-only, and combined, respectively (P < .001, df = 2). The diagnosis of SD during examination is based primarily on auditory cues. 
Viewing combined audio and video clips afforded no change in diagnostic accuracy compared with audio alone. Laryngoscopy serves an important role in the diagnosis of SD by excluding other pathologic causes and identifying vocal tremor.

  9. Bat detective-Deep learning tools for bat acoustic signal detection.

    PubMed

    Mac Aodha, Oisin; Gibb, Rory; Barlow, Kate E; Browning, Ella; Firman, Michael; Freeman, Robin; Harder, Briana; Kinsey, Libby; Mead, Gary R; Newson, Stuart E; Pandourski, Ivan; Parsons, Stuart; Russ, Jon; Szodoray-Paradi, Abigel; Szodoray-Paradi, Farkas; Tilova, Elena; Girolami, Mark; Brostow, Gabriel; Jones, Kate E

    2018-03-01

Passive acoustic sensing has emerged as a powerful tool for quantifying anthropogenic impacts on biodiversity, especially for echolocating bat species. To better assess bat population trends there is a critical need for accurate, reliable, and open source tools that allow the detection and classification of bat calls in large collections of audio recordings. The majority of existing tools are commercial or have focused on the species classification task, neglecting the important problem of first localizing echolocation calls in audio, which is particularly problematic in noisy recordings. We developed a convolutional neural network-based open-source pipeline for detecting ultrasonic, full-spectrum, search-phase calls produced by echolocating bats. Our deep learning algorithms were trained on full-spectrum ultrasonic audio collected along road-transects across Europe and labelled by citizen scientists from www.batdetective.org. When compared to other existing algorithms and commercial systems, we show significantly higher detection performance of search-phase echolocation calls with our test sets. As an example application, we ran our detection pipeline on bat monitoring data collected over five years from Jersey (UK), and compared results to a widely-used commercial system. Our detection pipeline can be used for the automatic detection and monitoring of bat populations, and further facilitates their use as indicator species on a large scale. Our proposed pipeline makes only a small number of bat-specific design decisions, and with appropriate training data it could be applied to detecting other species in audio. A crucial novelty of our work is showing that with careful, non-trivial, design and implementation considerations, state-of-the-art deep learning methods can be used for accurate and efficient monitoring in audio.

  10. Bat detective—Deep learning tools for bat acoustic signal detection

    PubMed Central

    Barlow, Kate E.; Firman, Michael; Freeman, Robin; Harder, Briana; Kinsey, Libby; Mead, Gary R.; Newson, Stuart E.; Pandourski, Ivan; Russ, Jon; Szodoray-Paradi, Abigel; Tilova, Elena; Girolami, Mark; Jones, Kate E.

    2018-01-01

    Passive acoustic sensing has emerged as a powerful tool for quantifying anthropogenic impacts on biodiversity, especially for echolocating bat species. To better assess bat population trends there is a critical need for accurate, reliable, and open source tools that allow the detection and classification of bat calls in large collections of audio recordings. The majority of existing tools are commercial or have focused on the species classification task, neglecting the important problem of first localizing echolocation calls in audio which is particularly problematic in noisy recordings. We developed a convolutional neural network based open-source pipeline for detecting ultrasonic, full-spectrum, search-phase calls produced by echolocating bats. Our deep learning algorithms were trained on full-spectrum ultrasonic audio collected along road-transects across Europe and labelled by citizen scientists from www.batdetective.org. When compared to other existing algorithms and commercial systems, we show significantly higher detection performance of search-phase echolocation calls with our test sets. As an example application, we ran our detection pipeline on bat monitoring data collected over five years from Jersey (UK), and compared results to a widely-used commercial system. Our detection pipeline can be used for the automatic detection and monitoring of bat populations, and further facilitates their use as indicator species on a large scale. Our proposed pipeline makes only a small number of bat specific design decisions, and with appropriate training data it could be applied to detecting other species in audio. A crucial novelty of our work is showing that with careful, non-trivial, design and implementation considerations, state-of-the-art deep learning methods can be used for accurate and efficient monitoring in audio. PMID:29518076

  11. Interface Design Implications for Recalling the Spatial Configuration of Virtual Auditory Environments

    NASA Astrophysics Data System (ADS)

    McMullen, Kyla A.

Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Moreover, until recently the concept of virtually walking through an auditory environment did not exist, even though such an interface has numerous potential applications. Spatial audio can be used in various ways, ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology into real-world systems, several concerns must be addressed. First, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group: users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments. Search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search-and-recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To assess the impact of practical scenarios, the present work assessed the performance effects of signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources. 
The present study also found that the presence of visual reference frames significantly increased recall accuracy. Additionally, the incorporation of drastic attenuation significantly improved environment recall accuracy. By investigating these concerns, the present study took initial steps toward guiding the design of virtual auditory environments that support spatial configuration recall.

  12. Stress Reduction through Audio Distraction in Anxious Pediatric Dental Patients: An Adjunctive Clinical Study.

    PubMed

    Singh, Divya; Samadi, Firoza; Jaiswal, Jn; Tripathi, Abhay Mani

    2014-01-01

The purpose of the present study was to evaluate the efficacy of 'audio distraction' in anxious pediatric dental patients. Sixty children were randomly selected and equally divided into two groups of thirty each: a control group (group A) and a music group (group B). The dental procedure employed was extraction for both groups. The children in the music group were allowed to hear an audio presentation throughout the treatment procedure. Anxiety was measured using Venham's picture test, pulse rate, blood pressure and oxygen saturation. 'Audio distraction' was found efficacious, decreasing anxiety in pediatric dental patients to a significant extent. How to cite this article: Singh D, Samadi F, Jaiswal JN, Tripathi AM. Stress Reduction through Audio Distraction in Anxious Pediatric Dental Patients: An Adjunctive Clinical Study. Int J Clin Pediatr Dent 2014;7(3):149-152.

  13. Consultation audio-recording reduces long-term decision regret after prostate cancer treatment: A non-randomised comparative cohort study.

    PubMed

    Good, Daniel W; Delaney, Harry; Laird, Alexander; Hacking, Belinda; Stewart, Grant D; McNeill, S Alan

    2016-12-01

The life expectancy of prostate cancer patients is long, and patients will spend many years carrying the burdens and benefits of the treatment decisions they have made; it is therefore vital that decisions on treatments are shared between patient and physician. The objective was to determine whether consultation audio-recording improves quality of life, reduces regret or improves patient satisfaction in comparison with standard counselling. In 2012 we initiated consultation audio-recordings, where patients are given a CD of their consultation to keep and replay at home. We conducted a prospective non-randomised study of patient satisfaction, quality of life (QOL) and decision regret at 12 months' follow-up using posted validated questionnaires for the audio-recording (AR) patients and a control cohort. Qualitative and thematic analyses were used. Forty of 59 patients in the AR group, and 27 of 45 patients in the control group, returned the questionnaires. Patient demographics were similar, with no statistically significant differences between the two groups. Decision regret was lower in the audio-recording group (11/100) vs the control group (19/100) (p = 0.04). The risk ratio for not having any long-term decision regret was 5.539 (CI 1.643-18.674), with a number needed to treat (NNT) of 4 to prevent regret. Regression analysis showed that receiving the audio-recording was the strongest predictor of absence of regret, greater even than potency and incontinence. The study has shown that audio-recording the clinic consultation reduces long-term decision regret and increases patient information recall, understanding and confidence in the decision. There is great potential for further expansion of this low-cost intervention. Copyright © 2014 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.

  14. The average direct current offset values for small digital audio recorders in an acoustically consistent environment.

    PubMed

    Koenig, Bruce E; Lacey, Douglas S

    2014-07-01

    In this research project, nine small digital audio recorders were tested using five sets of 30-min recordings at all available recording modes, with consistent audio material, identical source and microphone locations, and identical acoustic environments. The averaged direct current (DC) offset values and standard deviations were measured for 30-sec and 1-, 2-, 3-, 6-, 10-, 15-, and 30-min segments. The research found an inverse association between segment lengths and the standard deviation values and that lengths beyond 30 min may not meaningfully reduce the standard deviation values. This research supports previous studies indicating that measured averaged DC offsets should only be used for exclusionary purposes in authenticity analyses and exhibit consistent values when the general acoustic environment and microphone/recorder configurations were held constant. Measured average DC offset values from exemplar recorders may not be directly comparable to those of submitted digital audio recordings without exactly duplicating the acoustic environment and microphone/recorder configurations. © 2014 American Academy of Forensic Sciences.
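
The averaging procedure described above can be sketched in a few lines: split a recording into fixed-length segments, take each segment's mean sample value (its DC offset), then report the average and spread across segments. The sketch below uses a synthetic tone-plus-bias "recording" and illustrative sampling rate and segment length, not the study's actual recorders or protocol.

```python
import numpy as np

def segment_dc_offsets(samples, rate, seg_s=60.0):
    """Mean (DC) offset of consecutive fixed-length segments of a
    recording, plus the standard deviation across segments."""
    seg = int(seg_s * rate)
    n_seg = len(samples) // seg
    segments = samples[: n_seg * seg].reshape(n_seg, seg)
    offsets = segments.mean(axis=1)
    return offsets.mean(), offsets.std(ddof=1)

# A synthetic 'recording': a 440 Hz tone plus a small constant DC bias.
rate = 8000
t = np.arange(10 * 60 * rate) / rate  # ten minutes of audio
x = 0.1 * np.sin(2 * np.pi * 440.0 * t) + 0.003

mean_dc, sd = segment_dc_offsets(x, rate)
print(f"averaged DC offset: {mean_dc:.4f} (+/- {sd:.2e})")
```

Consistent with the study's exclusionary use, two recordings would only be compared via such averaged offsets under matched acoustic and microphone/recorder conditions.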

  15. Mobile Care (Moca) for Remote Diagnosis and Screening

    PubMed Central

    Celi, Leo Anthony; Sarmenta, Luis; Rotberg, Jhonathan; Marcelo, Alvin; Clifford, Gari

    2010-01-01

    Moca is a cell phone-facilitated clinical information system to improve diagnostic, screening and therapeutic capabilities in remote resource-poor settings. The software allows transmission of any medical file, whether a photo, x-ray, audio or video file, through a cell phone to (1) a central server for archiving and incorporation into an electronic medical record (to facilitate longitudinal care, quality control, and data mining), and (2) a remote specialist for real-time decision support (to leverage expertise). The open source software is designed as an end-to-end clinical information system that seamlessly connects health care workers to medical professionals. It is integrated with OpenMRS, an existing open source medical records system commonly used in developing countries. PMID:21822397

  16. Harmonic Characteristics of Rectifier Substations and Their Impact on Audio Frequency Track Circuits

    DOT National Transportation Integrated Search

    1982-05-01

    This report describes the basic operation of substation rectifier equipment and the modes of possible interference with audio frequency track circuits used for train detection, cab signalling, and vehicle speed control. It also includes methods of es...

  17. The MIT Lincoln Laboratory RT-04F Diarization Systems: Applications to Broadcast Audio and Telephone Conversations

    DTIC Science & Technology

    2004-11-01

In this paper we describe the systems developed by MITLL and used in the DARPA EARS Rich Transcription Fall 2004 (RT-04F) speaker diarization evaluation...many types of audio sources, the focus of the DARPA EARS project and the NIST Rich Transcription evaluations is primarily speaker diarization...present or samples of any of the speakers. An overview of the general diarization problem and approaches can be found in [1]. In this paper, we

  18. Multichannel audio monitor for detecting electrical signals.

    PubMed

    Friesen, W O; Stent, G S

    1978-12-01

    The multichannel audio monitor (MUCAM) permits the simultaneous auditory monitoring of concurrent trains of electrical signals generated by as many as eight different sources. The basic working principle of this device is the modulation of the amplitude of a given pure tone by the incoming signals of each input channel. The MUCAM thus converts a complex, multichannel, temporal signal sequence into a musical melody suitable for instant, subliminal pattern analysis by the human ear. Neurophysiological experiments requiring multi-electrode recordings have provided one useful application of the MUCAM.
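
The working principle described above, each input channel amplitude-modulating its own pure tone with the sum rendered audibly, can be sketched as follows. The channel envelopes, tone frequencies, and sampling rate are all hypothetical stand-ins, not the 1978 hardware's values.

```python
import numpy as np

rng = np.random.default_rng(1)
rate = 8000
n = rate  # one second of signal

# Eight hypothetical input channels (random positive envelopes here,
# standing in for e.g. rectified multi-electrode recordings).
channels = np.abs(rng.standard_normal((8, n)))

# One distinct pure tone per channel, roughly a musical scale (Hz).
tones_hz = [262, 294, 330, 349, 392, 440, 494, 523]

# MUCAM principle: each channel amplitude-modulates its own tone,
# and the sum carries all eight signal trains in one audio output.
t = np.arange(n) / rate
mix = sum(ch * np.sin(2 * np.pi * f * t) for ch, f in zip(channels, tones_hz))
mix /= np.abs(mix).max()  # normalize for playback
```

Because each source keeps a fixed pitch, a listener can attribute activity to a channel by ear, which is the "melody" effect the abstract describes.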

  19. Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot

    PubMed Central

    Tidoni, Emmanuele; Gergondet, Pierre; Kheddar, Abderrahmane; Aglioti, Salvatore M.

    2014-01-01

    Advancement in brain computer interfaces (BCI) technology allows people to actively interact in the world through surrogates. Controlling real humanoid robots using BCI as intuitively as we control our body represents a challenge for current research in robotics and neuroscience. In order to successfully interact with the environment the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may imply a gain with respect to a single modality and ultimately improve the overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, knowledge about whether audio-visual integration may improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while controlling a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between footsteps sound and actual humanoid's walk reduces the time required for steering the robot. Thus, auditory feedback congruent with the humanoid actions may improve motor decisions of the BCI's user and help in the feeling of control over it. Our results shed light on the possibility to increase robot's control through the combination of multisensory feedback to a BCI user. PMID:24987350

  20. DRACULA: Dynamic range control for broadcasting and other applications

    NASA Astrophysics Data System (ADS)

    Gilchrist, N. H. C.

    The BBC has developed a digital processor which is capable of reducing the dynamic range of audio in an unobtrusive manner. It is ideally suited to the task of controlling the level of musical programs. Operating as a self-contained dynamic range controller, the processor is suitable for controlling levels in conventional AM or FM broadcasting, or for applications such as the compression of program material for in-flight entertainment. It can, alternatively, be used to provide a supplementary signal in DAB (digital audio broadcasting) for optional dynamic compression in the receiver.

  1. A realization of sound focused personal audio system using acoustic contrast control.

    PubMed

    Chang, Ji-Ho; Lee, Chan-Hui; Park, Jin-Young; Kim, Yang-Hann

    2009-04-01

A personal audio system that does not require earphones or any wires would be of great interest and potential impact to the audio industry. In this study, a line array speaker system is used to localize sound in the listening zone. Acoustic contrast control [Choi, J.-W. and Kim, Y.-H. (2002). J. Acoust. Soc. Am. 111, 1695-1700] is applied; this method creates an acoustically bright zone around the user and an acoustically dark zone in other regions by maximizing the ratio of acoustic potential energy density between the bright and the dark zone. This ratio is regarded as acoustic contrast, analogous to what is used for optical devices. To evaluate the performance of acoustic contrast control, experiments are performed and the results are compared with those of the uncontrolled case and a time-reversal array.
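
The contrast-maximization step has a compact linear-algebra form: given spatial correlation matrices for the bright and dark zones, maximizing the energy ratio is a generalized eigenproblem, and the optimal source weights are the dominant generalized eigenvector. A minimal sketch, with randomly generated transfer functions standing in for measured ones (speaker count and control-point counts are illustrative):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n_src = 8  # loudspeakers in the line array (hypothetical count)

# Hypothetical complex transfer functions from each speaker to control
# points in the bright (listener) zone and the dark (quiet) zone.
H_bright = rng.standard_normal((16, n_src)) + 1j * rng.standard_normal((16, n_src))
H_dark = rng.standard_normal((64, n_src)) + 1j * rng.standard_normal((64, n_src))

# Spatial correlation matrices (proportional to acoustic potential
# energy density in each zone for source weights w).
R_b = H_bright.conj().T @ H_bright / len(H_bright)
R_d = H_dark.conj().T @ H_dark / len(H_dark)
R_d += 1e-6 * np.eye(n_src)  # regularize the dark-zone matrix

# Maximizing w^H R_b w / w^H R_d w is a generalized eigenproblem;
# the optimal weights are the eigenvector of the largest eigenvalue.
vals, vecs = eigh(R_b, R_d)
w = vecs[:, -1]
print(f"achievable acoustic contrast: {10 * np.log10(vals[-1]):.1f} dB")
```

The largest generalized eigenvalue itself is the achievable bright-to-dark energy ratio, which is why it is reported in decibels.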

  2. Active control of thermoacoustic amplification in a thermo-acousto-electric engine

    NASA Astrophysics Data System (ADS)

    Olivier, Come; Penelet, Guillaume; Poignand, Gaelle; Lotton, Pierrick

    2014-05-01

    In this paper, a new approach is proposed to control the operation of a thermoacoustic Stirling electricity generator. This control basically consists in adding an additional acoustic source to the device, connected through a feedback loop to a reference microphone, a phase-shifter, and an audio amplifier. Experiments are performed to characterize the impact of the feedback loop (and especially that of the controlled phase-shift) on the overall efficiency of the thermal to electric energy conversion performed by the engine. It is demonstrated that this external forcing of thermoacoustic self-sustained oscillations strongly impacts the performance of the engine, and that it is possible under some circumstances to improve the efficiency of the thermo-electric transduction, compared to the one reached without active control. Applicability and further directions of investigation are also discussed.

  3. The sweet-home project: audio technology in smart homes to improve well-being and reliance.

    PubMed

    Vacher, Michel; Istrate, Dan; Portet, François; Joubert, Thierry; Chevalier, Thierry; Smidtas, Serge; Meillon, Brigitte; Lecouteux, Benjamin; Sehili, Mohamed; Chahuara, Pedro; Méniard, Sylvain

    2011-01-01

    The Sweet-Home project aims at providing audio-based interaction technology that lets the user have full control over their home environment, at detecting distress situations and at easing the social inclusion of the elderly and frail population. This paper presents an overview of the project focusing on the multimodal sound corpus acquisition and labelling and on the investigated techniques for speech and sound recognition. The user study and the recognition performances show the interest of this audio technology.

  4. A prospective, randomised, controlled study examining binaural beat audio and pre-operative anxiety in patients undergoing general anaesthesia for day case surgery.

    PubMed

    Padmanabhan, R; Hildreth, A J; Laws, D

    2005-09-01

    Pre-operative anxiety is common and often significant. Ambulatory surgery challenges our pre-operative goal of an anxiety-free patient by requiring people to be 'street ready' within a brief period of time after surgery. Recently, it has been demonstrated that music can be used successfully to relieve patient anxiety before operations, and that audio embedded with tones that create binaural beats within the brain of the listener decreases subjective levels of anxiety in patients with chronic anxiety states. We measured anxiety with the State-Trait Anxiety Inventory questionnaire and compared binaural beat audio (Binaural Group) with an identical soundtrack but without these added tones (Audio Group) and with a third group who received no specific intervention (No Intervention Group). Mean [95% confidence intervals] decreases in anxiety scores were 26.3%[19-33%] in the Binaural Group (p = 0.001 vs. Audio Group, p < 0.0001 vs. No Intervention Group), 11.1%[6-16%] in the Audio Group (p = 0.15 vs. No Intervention Group) and 3.8%[0-7%] in the No Intervention Group. Binaural beat audio has the potential to decrease acute pre-operative anxiety significantly.
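
The "audio embedded with tones that create binaural beats" can be illustrated by synthesizing two pure tones, one per ear, offset by the desired beat frequency; the carrier and beat frequencies below are illustrative choices, not those used in the trial.

```python
import numpy as np

def binaural_beat(carrier_hz=200.0, beat_hz=10.0, seconds=5.0, rate=44100):
    """Stereo signal: a pure tone in one ear and a tone offset by
    beat_hz in the other, so the perceived 'beat' arises inside the
    listener's auditory system rather than in the air."""
    t = np.arange(int(seconds * rate)) / rate
    left = np.sin(2 * np.pi * carrier_hz * t)
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return np.stack([left, right], axis=1)  # shape: (samples, 2)

stereo = binaural_beat()
print(stereo.shape)
```

In practice such tones would be mixed at low level under a masking soundtrack, as in the Binaural Group's audio, rather than played bare.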

  5. The Effect of Visual Cueing and Control Design on Children's Reading Achievement of Audio E-Books with Tablet Computers

    ERIC Educational Resources Information Center

    Wang, Pei-Yu; Huang, Chung-Kai

    2015-01-01

    This study aims to explore the impact of learner grade, visual cueing, and control design on children's reading achievement of audio e-books with tablet computers. This research was a three-way factorial design where the first factor was learner grade (grade four and six), the second factor was e-book visual cueing (word-based, line-based, and…

  6. NFL Films audio, video, and film production facilities

    NASA Astrophysics Data System (ADS)

    Berger, Russ; Schrag, Richard C.; Ridings, Jason J.

    2003-04-01

    The new NFL Films 200,000 sq. ft. headquarters is home for the critically acclaimed film production that preserves the NFL's visual legacy week-to-week during the football season, and is also the technical plant that processes and archives football footage from the earliest recorded media to the current network broadcasts. No other company in the country shoots more film than NFL Films, and the inclusion of cutting-edge video and audio formats demands that their technical spaces continually integrate the latest in the ever-changing world of technology. This facility houses a staggering array of acoustically sensitive spaces where music and sound are equal partners with the visual medium. Over 90,000 sq. ft. of sound critical technical space is comprised of an array of sound stages, music scoring stages, audio control rooms, music writing rooms, recording studios, mixing theaters, video production control rooms, editing suites, and a screening theater. Every production control space in the building is designed to monitor and produce multi channel surround sound audio. An overview of the architectural and acoustical design challenges encountered for each sophisticated listening, recording, viewing, editing, and sound critical environment will be discussed.

  7. Multifunction waveform generator for EM receiver testing

    NASA Astrophysics Data System (ADS)

    Chen, Kai; Jin, Sheng; Deng, Ming

    2018-01-01

    In many electromagnetic (EM) methods - such as magnetotelluric, spectral-induced polarization (SIP), time-domain-induced polarization (TDIP), and controlled-source audio magnetotelluric (CSAMT) methods - it is important to evaluate and test the EM receivers during their development stage. To assess the performance of the developed EM receivers, controlled synthetic data that simulate the observed signals in different modes are required. In CSAMT and SIP mode testing, the waveform generator should use the GPS time as the reference for repeating schedule. Based on our testing, the frequency range, frequency precision, and time synchronization of the currently available function waveform generators on the market are deficient. This paper presents a multifunction waveform generator with three waveforms: (1) a wideband, low-noise electromagnetic field signal to be used for magnetotelluric, audio-magnetotelluric, and long-period magnetotelluric studies; (2) a repeating frequency sweep square waveform for CSAMT and SIP studies; and (3) a positive-zero-negative-zero signal that contains primary and secondary fields for TDIP studies. In this paper, we provide the principles of the above three waveforms along with a hardware design for the generator. Furthermore, testing of the EM receiver was conducted with the waveform generator, and the results of the experiment were compared with those calculated from the simulation and theory in the frequency band of interest.
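
Of the three waveforms above, the positive-zero-negative-zero (PZNZ) signal for TDIP is simple to sketch; the on/off durations, sampling rate, and amplitude below are placeholders, not the generator's actual timing.

```python
import numpy as np

def pznz_waveform(cycles=2, on_s=2.0, off_s=2.0, rate=1000, amp=1.0):
    """Positive-zero-negative-zero transmit current: on-times inject
    the primary field, off-times let the secondary (polarization)
    field decay so it can be measured."""
    on, off = int(on_s * rate), int(off_s * rate)
    cycle = np.concatenate([
        np.full(on, amp),    # positive on-time
        np.zeros(off),       # off-time (measure decay)
        np.full(on, -amp),   # negative on-time
        np.zeros(off),       # off-time
    ])
    return np.tile(cycle, cycles)

sig = pznz_waveform()
print(len(sig), sig.mean())
```

Alternating polarity keeps the transmitted current zero-mean, which is one reason the waveform suits induced-polarization measurements.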

  8. RadioSource.NET: Case-Study of a Collaborative Land-Grant Internet Audio Project.

    ERIC Educational Resources Information Center

    Sohar, Kathleen; Wood, Ashley M.; Ramirez, Roberto

    2002-01-01

    Provides a case study of RadioSource.NET, an Internet broadcasting venture developed collaboratively by land-grant university communication departments to share resources, increase online distribution, and promote access to agricultural and natural and life science research. Describes planning, marketing, and implementation processes. (Contains 18…

  9. Survey on the Sources of Information in Science, Technology and Commerce in the State of Penang, Malaysia

    ERIC Educational Resources Information Center

    Tee, Lim Huck; Fong, Tang Wan

    1973-01-01

    Penang, Malaysia is undergoing rapid industrialization to stimulate its economy. A survey was conducted to determine what technical, scientific, and commercial information sources were available. Areas covered in the survey were library facilities, journals, commercial reference works and audio-visual materials. (DH)

  10. A randomized controlled trial of an audio-based treatment program for child anxiety disorders.

    PubMed

    Infantino, Alyssa; Donovan, Caroline L; March, Sonja

    2016-04-01

    The aim of this study was to investigate the efficacy of an audio-based cognitive-behavioural therapy (CBT) program for child anxiety disorders. Twenty-four children aged 5-11 years were randomly allocated to either the audio-based CBT program condition (Audio; n = 12) or a waitlist control condition (WL; n = 12). Outcome measures included a clinical diagnostic interview, clinician-rated global assessment of functioning, and parent and child self-report ratings of anxiety and internalisation. Assessments were conducted prior to treatment, 12 weeks following treatment, and at 3-month follow-up. Results indicated that at post-assessment, 58.3% of children receiving treatment compared to 16.7% of waitlist children were free of their primary diagnosis, with this figure rising to 66.67% at the 3-month follow-up time point. Additionally, at post-assessment, 25.0% of children in the treatment condition compared to 0.0% of the waitlist condition were free of all anxiety diagnoses, with this figure rising to 41.67% for the treatment group at 3-month follow-up. Overall, the findings suggest that the audio program tested in this study has the potential to be an efficacious treatment alternative for anxious children. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Online Instructor's Use of Audio Feedback to Increase Social Presence and Student Satisfaction

    ERIC Educational Resources Information Center

    Portolese Dias, Laura; Trumpy, Robert

    2014-01-01

    This study investigates the impact of written group feedback, versus audio feedback, based upon four student satisfaction measures in the online classroom environment. Undergraduate students in the control group were provided both individual written feedback and group written feedback, while undergraduate students in the experimental treatment…

  12. Apollo 11 Mission Audio - Day 1

    NASA Image and Video Library

    1969-07-16

    Audio from mission control during the launch of Apollo 11, which was the United States' first lunar landing mission. While astronauts Armstrong and Aldrin descended in the Lunar Module "Eagle" to explore the Sea of Tranquility region of the moon, astronaut Collins remained with the Command and Service Modules "Columbia" in lunar orbit.

  13. A practical, low-noise coil system for magnetotellurics

    USGS Publications Warehouse

    Stanley, William D.; Tinkler, Richard D.

    1983-01-01

    Magnetotellurics is a geophysical technique which was developed by Cagniard (1953) and Tikhonov (1950) and later refined by other scientists worldwide. The technique is a method of electromagnetic sounding of the Earth and is based upon the skin depth effect in conductive media. The electric and magnetic fields arising from natural sources are measured at the surface of the earth over broad frequency bands. An excellent review of the technique is provided in the paper by Vozoff (1972). The natural fields arise from two basic mechanisms. At frequencies above a few hertz, most of the energy arises from lightning in thunderstorm belts around the equatorial regions. This energy is propagated in a waveguide formed by the earth-ionospheric cavity. Energy levels are higher at the fundamental modes of this cavity, but sufficient energy exists over most of the audio range to be useful for sounding at these frequencies, in which case the technique is generally referred to as audio-magnetotellurics or AMT. At frequencies lower than audio, and in general below 1 Hz, the source of naturally occurring electromagnetic energy is found in ionospheric currents. Current systems flowing in the ionosphere generate EM waves which can be used in sounding of the earth. These fields generate a relatively complete spectrum of electromagnetic energy that extends from around 1 Hz to periods of one day. Figure 1 shows an amplitude spectrum characteristic of both the ionospheric and lightning sources, covering a frequency range from 0.0001 Hz to 1000 Hz. It can be seen that there is a minimum in signal levels at about 1 Hz, in the gap between the two sources, and that signal level increases with decreasing frequency.
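
    The skin depth effect the record refers to is compact enough to sketch: for a uniform half-space, depth of penetration scales as the square root of resistivity over frequency. The resistivity and frequency below are illustrative values only:

    ```python
    import math

    def skin_depth_m(resistivity_ohm_m, freq_hz):
        """Electromagnetic skin depth in a uniform half-space.

        delta = sqrt(2 * rho / (mu0 * omega)), which reduces to the
        familiar rule of thumb delta ~ 503 * sqrt(rho / f) metres.
        """
        mu0 = 4e-7 * math.pi                      # free-space permeability
        omega = 2.0 * math.pi * freq_hz
        return math.sqrt(2.0 * resistivity_ohm_m / (mu0 * omega))

    # A 100 ohm-m half-space sounded at 1 Hz: penetration of ~5 km,
    # which is why the sub-1-Hz ionospheric band probes deep structure.
    print(round(skin_depth_m(100.0, 1.0)))  # → 5033
    ```

    Lowering the frequency by a factor of 100 deepens the sounding by a factor of 10, which is the reason MT surveys span such broad frequency bands.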

  14. Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion.

    PubMed

    Gebru, Israel D; Ba, Sileye; Li, Xiaofei; Horaud, Radu

    2018-05-01

    Speaker diarization consists of assigning speech signals to people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios that consist of several participants engaged in multi-party interaction while they move around and turn their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem. The latter is solved within a novel audio-visual fusion method on the following grounds: binaural spectral features are first extracted from a microphone pair, then a supervised audio-visual alignment technique maps these features onto an image, and finally a semi-supervised clustering method assigns binaural spectral features to visible persons. The main advantage of this method over previous work is that it processes in a principled way speech signals uttered simultaneously by multiple persons. The diarization itself is cast into a latent-variable temporal graphical model that infers speaker identities and speech turns, based on the output of an audio-visual association process, executed at each time slice, and on the dynamics of the diarization variable itself. The proposed formulation yields an efficient exact inference procedure. A novel dataset, that contains audio-visual training data as well as a number of scenarios involving several participants engaged in formal and informal dialogue, is introduced. The proposed method is thoroughly tested and benchmarked with respect to several state-of-the art diarization algorithms.

  15. The Sweet-Home project: audio processing and decision making in smart home to improve well-being and reliance.

    PubMed

    Vacher, Michel; Chahuara, Pedro; Lecouteux, Benjamin; Istrate, Dan; Portet, Francois; Joubert, Thierry; Sehili, Mohamed; Meillon, Brigitte; Bonnefond, Nicolas; Fabre, Sébastien; Roux, Camille; Caffiau, Sybille

    2013-01-01

    The Sweet-Home project aims at providing audio-based interaction technology that lets the user have full control over their home environment, at detecting distress situations, and at easing the social inclusion of the elderly and frail population. This paper presents an overview of the project, focusing on the implemented techniques for speech and sound recognition as well as on context-aware decision making under uncertainty. A user experiment in a smart home demonstrates the potential of this audio-based technology.

  16. Audio signal analysis for tool wear monitoring in sheet metal stamping

    NASA Astrophysics Data System (ADS)

    Ubhayaratne, Indivarie; Pereira, Michael P.; Xiang, Yong; Rolfe, Bernard F.

    2017-02-01

    Stamping tool wear can significantly degrade product quality, and hence, online tool condition monitoring is a timely need in many manufacturing industries. Even though a large amount of research has been conducted employing different sensor signals, there is still an unmet demand for a low-cost, easy-to-set-up condition monitoring system. Audio signal analysis is a simple method that has the potential to meet this demand, but has not previously been used for stamping process monitoring. Hence, this paper studies the existence and the significance of the correlation between emitted sound signals and the wear state of sheet metal stamping tools. The corrupting sources generated by the tooling of the stamping press and surrounding machinery have higher amplitudes than the sound emitted by the stamping operation itself. Therefore, a newly developed semi-blind signal extraction technique was employed as a pre-processing step to mitigate the contribution of these corrupting sources. The spectral analysis results of the raw and extracted signals demonstrate a significant qualitative relationship between wear progression and the emitted sound signature. This study lays the basis for employing low-cost audio signal analysis in the development of a real-time industrial tool condition monitoring system.

  17. Spatial domain entertainment audio decompression/compression

    NASA Astrophysics Data System (ADS)

    Chan, Y. K.; Tam, Ka Him K.

    2014-02-01

    The ARM7 NEON processor with 128-bit SIMD hardware accelerator requires a peak performance of 13.99 mega-cycles per second for MP3 stereo entertainment-quality decoding. For similar compression bit rates, OGG and AAC are preferred over MP3. The Patent Cooperation Treaty (PCT) application dated 28 August 2012 describes an audio decompression scheme producing a sequence of interleaving "min to Max" and "Max to min" rising and falling segments. The number of interior audio samples bound by "min to Max" or "Max to min" can be {0|1|…|N} audio samples. The magnitudes of samples, including the bounding min and Max, are distributed as normalized constants within the interval defined by the bounding magnitudes. The decompressed audio is then a "sequence of static segments" on a frame-by-frame basis. Some of these frames need to be post-processed to elevate high frequencies. The post-processing is compression-efficiency neutral, and the additional decoding complexity is only a small fraction of the overall decoding complexity, with no extra hardware needed. Compression efficiency can be expected to be very high, as the source audio has been decimated and converted to a set of data with only "segment length and corresponding segment magnitude" attributes. The PCT describes how these two attributes are efficiently coded by its innovative coding scheme. Decoding efficiency is accordingly very high, and decoding latency is essentially zero. Both the hardware requirement and run time are at least an order of magnitude better than MP3 variants. A side benefit is ultra-low power consumption on mobile devices. The acid test of whether such a simplistic waveform representation can reproduce authentic decompressed quality is benchmarked against OGG (aoTuv Beta 6.03) using three pairs of stereo audio frames and one broadcast-like voice audio frame, each frame consisting of 2,028 samples at a 44,100 Hz sampling frequency.

  18. All Source Sensor Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trease, Harold (PNNL)

    2012-10-10

    ASSA is a software application that processes binary data into summarized index tables that can be used to organize features contained within the data. These index tables can also be used to search for user-specified features. ASSA is designed to organize and search for patterns in unstructured binary data streams or archives, such as video, images, audio, and network traffic. In essence, it is a general-purpose search engine for patterns in binary data streams, with applications in video analytics, image analysis, audio analysis, searching hard drives, monitoring network traffic, etc.

  19. Source sparsity control of sound field reproduction using the elastic-net and the lasso minimizers.

    PubMed

    Gauthier, P-A; Lecomte, P; Berry, A

    2017-04-01

    Sound field reproduction is aimed at the reconstruction of a sound pressure field in an extended area using dense loudspeaker arrays. In some circumstances, sound field reproduction is targeted at the reproduction of a sound field captured using microphone arrays. Although methods and algorithms already exist to convert microphone array recordings to loudspeaker array signals, one remaining research question is how to control the spatial sparsity in the resulting loudspeaker array signals and what would be the resulting practical advantages. Sparsity is an interesting feature for spatial audio since it can drastically reduce the number of concurrently active reproduction sources and, therefore, increase the spatial contrast of the solution at the expense of a difference between the target and reproduced sound fields. In this paper, the application of the elastic-net cost function to sound field reproduction is compared to the lasso cost function. It is shown that the elastic-net can induce solution sparsity and overcomes limitations of the lasso: The elastic-net solves the non-uniqueness of the lasso solution, induces source clustering in the sparse solution, and provides a smoother solution within the activated source clusters.
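
    The sparsity behaviour discussed above can be illustrated on a toy underdetermined system; this is a rough sketch of the two cost functions using scikit-learn, not the authors' actual reproduction setup, and the array geometry and regularisation weights are arbitrary:

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso, ElasticNet

    # Toy analogue of sound field reproduction: columns of G play the
    # role of candidate loudspeaker transfer vectors, p is the target
    # pressure field sampled at microphone positions.
    rng = np.random.default_rng(0)
    n_mics, n_sources = 30, 80
    G = rng.standard_normal((n_mics, n_sources))
    q_true = np.zeros(n_sources)
    q_true[[5, 6, 7]] = [1.0, 0.8, 0.6]          # a small active source cluster
    p = G @ q_true

    # lasso: pure l1 penalty; elastic-net: mixed l1/l2 penalty
    lasso = Lasso(alpha=0.01, max_iter=50000).fit(G, p)
    enet = ElasticNet(alpha=0.01, l1_ratio=0.5, max_iter=50000).fit(G, p)

    active_lasso = int(np.sum(np.abs(lasso.coef_) > 1e-3))
    active_enet = int(np.sum(np.abs(enet.coef_) > 1e-3))
    print("active (lasso):", active_lasso)
    print("active (elastic-net):", active_enet)
    ```

    Both penalties drive most loudspeaker gains to exactly zero; the l2 term in the elastic-net is what encourages correlated sources to be activated together as clusters rather than picked arbitrarily.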

  20. Efficient techniques for wave-based sound propagation in interactive applications

    NASA Astrophysics Data System (ADS)

    Mehra, Ravish

    Sound propagation techniques model the effect of the environment on sound waves and predict their behavior from point of emission at the source to the final point of arrival at the listener. Sound is a pressure wave produced by mechanical vibration of a surface that propagates through a medium such as air or water, and the problem of sound propagation can be formulated mathematically as a second-order partial differential equation called the wave equation. Accurate techniques based on solving the wave equation, also called the wave-based techniques, are too expensive computationally and memory-wise. Therefore, these techniques face many challenges in terms of their applicability in interactive applications including sound propagation in large environments, time-varying source and listener directivity, and high simulation cost for mid-frequencies. In this dissertation, we propose a set of efficient wave-based sound propagation techniques that solve these three challenges and enable the use of wave-based sound propagation in interactive applications. Firstly, we propose a novel equivalent source technique for interactive wave-based sound propagation in large scenes spanning hundreds of meters. It is based on the equivalent source theory used for solving radiation and scattering problems in acoustics and electromagnetics. Instead of using a volumetric or surface-based approach, this technique takes an object-centric approach to sound propagation. The proposed equivalent source technique generates realistic acoustic effects and takes orders of magnitude less runtime memory compared to prior wave-based techniques. Secondly, we present an efficient framework for handling time-varying source and listener directivity for interactive wave-based sound propagation. The source directivity is represented as a linear combination of elementary spherical harmonic sources. 
This spherical harmonic-based representation of source directivity can support analytical, data-driven, rotating, or time-varying directivity functions at runtime. Unlike previous approaches, the listener directivity approach can be used to compute spatial audio (3D audio) for a moving, rotating listener at interactive rates. Lastly, we propose an efficient GPU-based time-domain solver for the wave equation that enables wave simulation up to the mid-frequency range in tens of minutes on a desktop computer. It is demonstrated that by carefully mapping all the components of the wave simulator to match the parallel processing capabilities of the graphics processors, significant improvement in performance can be achieved compared to CPU-based simulators, while maintaining numerical accuracy. We validate these techniques with offline numerical simulations and measured data recorded in an outdoor scene. We present results of preliminary user evaluations conducted to study the impact of these techniques on users' immersion in the virtual environment. We have integrated these techniques with the Half-Life 2 game engine, Oculus Rift head-mounted display, and Xbox game controller to enable users to experience high-quality acoustic effects and spatial audio in the virtual environment.

  1. 2D joint inversion of CSAMT and magnetic data based on cross-gradient theory

    NASA Astrophysics Data System (ADS)

    Wang, Kun-Peng; Tan, Han-Dong; Wang, Tao

    2017-06-01

    A two-dimensional forward-modeling and inversion algorithm for the controlled-source audio-frequency magnetotelluric (CSAMT) method is developed to invert data in the entire region (near, transition, and far) and deal with the effects of artificial sources. First, a regularization factor is introduced in the 2D magnetic inversion, and the magnetic susceptibility is updated in logarithmic form so that the inverted magnetic susceptibility is always positive. Second, the joint inversion of the CSAMT and magnetic methods is completed with the introduction of the cross gradient. By searching for the weight of the cross-gradient term in the objective function, the mutual influence between the two different physical properties at different locations is avoided. Model tests show that the joint inversion based on cross-gradient theory offers better results than the single-method inversions. The 2D forward and inverse algorithm for CSAMT with source can effectively deal with artificial sources and ensures the reliability of the final joint inversion algorithm.
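
    The cross-gradient coupling term itself is simple to sketch on a 2D grid: t = (∂m1/∂x)(∂m2/∂z) − (∂m1/∂z)(∂m2/∂x), which vanishes wherever the two model gradients are parallel. The model grids below are synthetic placeholders, not the paper's test models:

    ```python
    import numpy as np

    def cross_gradient(m1, m2, dx=1.0, dz=1.0):
        """Cross-gradient t of two 2D model grids (rows = z, cols = x).

        t is zero where the gradients of m1 and m2 are parallel, i.e.
        where the two models share structural boundaries - the
        similarity condition enforced by the joint inversion.
        """
        d1z, d1x = np.gradient(m1, dz, dx)
        d2z, d2x = np.gradient(m2, dz, dx)
        return d1x * d2z - d1z * d2x

    # Two models with identical structure (m2 is a scaled copy of m1):
    # the cross gradient is near zero everywhere.
    z, x = np.mgrid[0:20, 0:30]
    m1 = np.tanh((x - 15) / 3.0)       # a vertical boundary at x = 15
    m2 = 2.5 * m1
    t = cross_gradient(m1, m2)
    print(np.abs(t).max())
    ```

    In the joint objective function, minimizing the sum of t² over the grid pulls the resistivity and susceptibility models toward geometrically consistent structure without forcing any fixed petrophysical relationship between them.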

  2. 33 CFR 127.201 - Sensing and alarm systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... systems. (a) Fixed sensors must have audio and visual alarms in the control room and audio alarms nearby. (b) Fixed sensors that continuously monitor for LNG vapors must— (1) Be in each enclosed area where vapor or gas may accumulate; and (2) Meet Section 9-4 of NFPA 59A. (c) Fixed sensors that continuously...

  3. 33 CFR 127.201 - Sensing and alarm systems.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... systems. (a) Fixed sensors must have audio and visual alarms in the control room and audio alarms nearby. (b) Fixed sensors that continuously monitor for LNG vapors must— (1) Be in each enclosed area where vapor or gas may accumulate; and (2) Meet Section 9-4 of NFPA 59A. (c) Fixed sensors that continuously...

  4. 33 CFR 127.201 - Sensing and alarm systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... systems. (a) Fixed sensors must have audio and visual alarms in the control room and audio alarms nearby. (b) Fixed sensors that continuously monitor for LNG vapors must— (1) Be in each enclosed area where vapor or gas may accumulate; and (2) Meet Section 9-4 of NFPA 59A. (c) Fixed sensors that continuously...

  5. Audio-vocal responses of vocal fundamental frequency and formant during sustained vowel vocalizations in different noises.

    PubMed

    Lee, Shao-Hsuan; Hsiao, Tzu-Yu; Lee, Guo-She

    2015-06-01

    Sustained vocalizations of the vowels [a] and [i] and the syllable [mə] were collected from twenty normal-hearing individuals. During vocalization, five different audio-vocal feedback conditions were introduced separately to the speakers: no masking, wearing supra-aural headphones only, speech-noise masking, high-pass noise masking, and broad-band-noise masking. Power spectral analysis of the vocal fundamental frequency (F0) was used to evaluate the modulations of F0, and linear predictive coding was used to acquire the first two formants. The results showed that while the formant frequencies were not significantly shifted, low-frequency modulations (<3 Hz) of F0 significantly increased with reduced audio-vocal feedback across speech sounds and were significantly correlated with speakers' auditory awareness of their own voices. For sustained speech production, motor control of F0 may depend on a feedback mechanism, while articulation should rely more on a feedforward mechanism. Power spectral analysis of F0 might be applied to evaluate audio-vocal control in various hearing and neurological disorders in the future. Copyright © 2015 Elsevier B.V. All rights reserved.
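
    The kind of analysis described above, quantifying how much F0 modulation power lies below 3 Hz, can be sketched with a Welch power spectral density estimate. The F0 contour below is synthetic (a slow 2 Hz wobble plus jitter), not the study's data, and the sampling rate and segment length are illustrative:

    ```python
    import numpy as np
    from scipy.signal import welch

    # Synthetic F0 contour sampled at 100 Hz: a 200 Hz voice with a
    # 2 Hz low-frequency modulation and small random jitter.
    fs = 100.0
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(1)
    f0 = 200 + 3 * np.sin(2 * np.pi * 2.0 * t) + 0.5 * rng.standard_normal(t.size)

    # PSD of the demeaned contour; the <3 Hz band captures the slow drift
    freqs, psd = welch(f0 - f0.mean(), fs=fs, nperseg=512)
    low_share = psd[freqs < 3].sum() / psd.sum()
    print(f"low-frequency share of F0 modulation power: {low_share:.2f}")
    ```

    On this synthetic contour most of the modulation power falls below 3 Hz; in the study, it is this low-frequency band that grew when audio-vocal feedback was masked.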

  6. Auditory and audio-vocal responses of single neurons in the monkey ventral premotor cortex.

    PubMed

    Hage, Steffen R

    2018-03-20

    Monkey vocalization is a complex behavioral pattern, which is flexibly used in audio-vocal communication. A recently proposed dual neural network model suggests that cognitive control might be involved in this behavior, originating from a frontal cortical network in the prefrontal cortex and mediated via projections from the rostral portion of the ventral premotor cortex (PMvr) and motor cortex to the primary vocal motor network in the brainstem. For the rapid adjustment of vocal output to external acoustic events, strong interconnections between vocal motor and auditory sites are needed, which are present at cortical and subcortical levels. However, the role of the PMvr in audio-vocal integration processes remains unclear. In the present study, single neurons in the PMvr were recorded in rhesus monkeys (Macaca mulatta) while volitionally producing vocalizations in a visual detection task or passively listening to monkey vocalizations. Ten percent of randomly selected neurons in the PMvr modulated their discharge rate in response to acoustic stimulation with species-specific calls. More than four-fifths of these auditory neurons showed an additional modulation of their discharge rates either before and/or during the monkeys' motor production of the vocalization. Based on these audio-vocal interactions, the PMvr might be well positioned to mediate higher order auditory processing with cognitive control of the vocal motor output to the primary vocal motor network. Such audio-vocal integration processes in the premotor cortex might constitute a precursor for the evolution of complex learned audio-vocal integration systems, ultimately giving rise to human speech. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Achieving perceptually-accurate aural telepresence

    NASA Astrophysics Data System (ADS)

    Henderson, Paul D.

    Immersive multimedia requires not only realistic visual imagery but also a perceptually-accurate aural experience. A sound field may be presented simultaneously to a listener via a loudspeaker rendering system using the direct sound from acoustic sources as well as a simulation or "auralization" of room acoustics. Beginning with classical Wave-Field Synthesis (WFS), improvements are made to correct for asymmetries in loudspeaker array geometry. Presented is a new Spatially-Equalized WFS (SE-WFS) technique to maintain the energy-time balance of a simulated room by equalizing the reproduced spectrum at the listener for a distribution of possible source angles. Each reproduced source or reflection is filtered according to its incidence angle to the listener. An SE-WFS loudspeaker array of arbitrary geometry reproduces the sound field of a room with correct spectral and temporal balance, compared with classically-processed WFS systems. Localization accuracy of human listeners in SE-WFS sound fields is quantified by psychoacoustical testing. At a loudspeaker spacing of 0.17 m (equivalent to an aliasing cutoff frequency of 1 kHz), SE-WFS exhibits a localization blur of 3 degrees, nearly equal to real point sources. Increasing the loudspeaker spacing to 0.68 m (for a cutoff frequency of 170 Hz) results in a blur of less than 5 degrees. In contrast, stereophonic reproduction is less accurate with a blur of 7 degrees. The ventriloquist effect is psychometrically investigated to determine the effect of an intentional directional incongruence between audio and video stimuli. Subjects were presented with prerecorded full-spectrum speech and motion video of a talker's head as well as broadband noise bursts with a static image. The video image was displaced from the audio stimulus in azimuth by varying amounts, and the perceived auditory location measured. 
A strong bias was detectable for small angular discrepancies between audio and video stimuli for separations of less than 8 degrees for speech and less than 4 degrees with a pink noise burst. The results allow for the density of WFS systems to be selected from the required localization accuracy. Also, by exploiting the ventriloquist effect, the angular resolution of an audio rendering may be reduced when combined with spatially-accurate video.

  8. Remote listening and passive acoustic detection in a 3-D environment

    NASA Astrophysics Data System (ADS)

    Barnhill, Colin

    Teleconferencing environments are a necessity in business, education and personal communication. They allow for the communication of information to remote locations without the need for travel and the necessary time and expense required for that travel. Visual information can be communicated using cameras and monitors. The advantage of visual communication is that an image can capture multiple objects and convey them, using a monitor, to a large group of people regardless of the receiver's location. This is not the case for audio. Currently, most experimental teleconferencing systems' audio is based on stereo recording and reproduction techniques. The problem with this solution is that it is only effective for one or two receivers. To accurately capture a sound environment consisting of multiple sources and to recreate that for a group of people is an unsolved problem. This work will focus on new methods of multiple-source 3-D environment sound capture and applications using these captured environments. Using spherical microphone arrays, it is now possible to capture a true 3-D environment. A spherical harmonic transform on the array's surface allows us to determine the basis functions (spherical harmonics) for all spherical wave solutions (up to a fixed order). This spherical harmonic decomposition (SHD) allows us to examine not only the time and frequency characteristics of an audio signal but also its spatial characteristics. In this way, a spherical harmonic transform is analogous to a Fourier transform in that a Fourier transform transforms a signal into the frequency domain and a spherical harmonic transform transforms a signal into the spatial domain. The SHD also decouples the input signals from the microphone locations. 
Using the SHD of a soundfield, new algorithms are available for remote listening, acoustic detection, and signal enhancement. The new algorithms presented in this paper show distinct advantages over previous detection and listening algorithms, especially for multiple speech sources and room environments. The algorithms use high-order (spherical harmonic) beamforming and power signal characteristics for source localization and signal enhancement. These methods are applied to remote listening, surveillance, and teleconferencing.

  9. The Role of Corporate and Government Surveillance in Shifting Journalistic Information Security Practices

    ERIC Educational Resources Information Center

    Shelton, Martin L.

    2015-01-01

    Digital technologies have fundamentally altered how journalists communicate with their sources, enabling them to exchange information through social media as well as video, audio, and text chat. Simultaneously, journalists are increasingly concerned with corporate and government surveillance as a threat to their ability to speak with sources in…

  10. Exercise Black Skies 2008: Enhancing Live Training Through Virtual Preparation -- Part Two: An Evaluation of Tools and Techniques

    DTIC Science & Technology

    2009-06-01

    visualisation tool. These tools are currently in use at the Surveillance and Control Training Unit (SACTU) in Williamtown, New South Wales, and the School...itself by facilitating the brevity and sharpness of learning points. The playback of video and audio was considered an extremely useful method of...The task assessor’s comments were supported by wall projections and audio replays of relevant mission segments that were controlled by an AAR

  11. Synchronized personalized music audio-playlists to improve adherence to physical activity among patients participating in a structured exercise program: a proof-of-principle feasibility study.

    PubMed

    Alter, David A; O'Sullivan, Mary; Oh, Paul I; Redelmeier, Donald A; Marzolini, Susan; Liu, Richard; Forhan, Mary; Silver, Michael; Goodman, Jack M; Bartel, Lee R

    2015-01-01

    Preference-based tempo-pace synchronized music has been shown to reduce perceived physical activity exertion and improve exercise performance. The extent to which such strategies can improve adherence to physical activity remains unknown. The objective of the study is to explore the feasibility and efficacy of tempo-pace synchronized preference-based music audio-playlists on adherence to physical activity among cardiovascular disease patients participating in a cardiac rehabilitation. Thirty-four cardiac rehabilitation patients were randomly allocated to one of two strategies: (1) no music usual-care control and (2) tempo-pace synchronized audio-devices with personalized music playlists + usual-care. All songs uploaded onto audio-playlist devices took into account patient personal music genre and artist preferences. However, actual song selection was restricted to music whose tempos approximated patients' prescribed exercise walking/running pace (steps per minute) to achieve tempo-pace synchrony. Patients allocated to audio-music playlists underwent further randomization in which half of the patients received songs that were sonically enhanced with rhythmic auditory stimulation (RAS) to accentuate tempo-pace synchrony, whereas the other half did not. RAS was achieved through blinded rhythmic sonic-enhancements undertaken manually to songs within individuals' music playlists. The primary outcome consisted of the weekly volume of physical activity undertaken over 3 months as determined by tri-axial accelerometers. Statistical methods employed an intention to treat and repeated-measures design. Patients randomized to personalized audio-playlists with tempo-pace synchrony achieved higher weekly volumes of physical activity than did their non-music usual-care comparators (475.6 min vs. 370.2 min, P  < 0.001). 
Improvements in weekly physical activity volumes among audio-playlist recipients were driven by those randomized to the RAS group which attained weekly exercise volumes that were nearly twofold greater than either of the two other groups (average weekly minutes of physical activity of 631.3 min vs. 320 min vs. 370.2 min, personalized audio-playlists with RAS vs. personalized audio-playlists without RAS vs. non-music usual-care controls, respectively, P  < 0.001). Patients randomized to music with RAS utilized their audio-playlist devices more frequently than did non-RAS music counterparts ( P  < 0.001). The use of tempo-pace synchronized preference-based audio-playlists was feasibly implemented into a structured exercise program and efficacious in improving adherence to physical activity beyond the evidence-based non-music usual standard of care. Larger clinical trials are required to validate these findings. ClinicalTrials.gov ID (NCT01752595).

  12. Zipf's Law in Short-Time Timbral Codings of Speech, Music, and Environmental Sound Signals

    PubMed Central

    Haro, Martín; Serrà, Joan; Herrera, Perfecto; Corral, Álvaro

    2012-01-01

    Timbre is a key perceptual feature that allows discrimination between different sounds. Timbral sensations are highly dependent on the temporal evolution of the power spectrum of an audio signal. In order to quantitatively characterize such sensations, the shape of the power spectrum has to be encoded in a way that preserves certain physical and perceptual properties. Therefore, it is common practice to encode short-time power spectra using psychoacoustical frequency scales. In this paper, we study and characterize the statistical properties of such encodings, here called timbral code-words. In particular, we report on rank-frequency distributions of timbral code-words extracted from 740 hours of audio coming from disparate sources such as speech, music, and environmental sounds. Analogously to text corpora, we find a heavy-tailed Zipfian distribution with exponent close to one. Importantly, this distribution is found independently of different encoding decisions and regardless of the audio source. Further analysis of the intrinsic characteristics of the most and least frequent code-words reveals that the most frequent code-words tend to have a more homogeneous structure. We also find that speech and music databases have specific, distinctive code-words while, in the case of the environmental sounds, such database-specific code-words are not present. Finally, we find that a Yule-Simon process with memory provides a reasonable quantitative approximation for our data, suggesting the existence of a common simple generative mechanism for all considered sound sources. PMID:22479497
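
    The reported rank-frequency analysis can be sketched as follows. The synthetic code-word stream (drawn from an ideal 1/r distribution) and the least-squares fit over the top ranks are illustrative stand-ins for the paper's timbral encodings; for Zipfian data the fitted log-log slope is close to -1:

```python
import math, random
from collections import Counter

def rank_frequency_slope(tokens, top=100):
    """Least-squares slope of log(frequency) vs. log(rank) over the
    `top` most frequent tokens; the Zipf exponent is the negated slope."""
    counts = sorted(Counter(tokens).values(), reverse=True)[:top]
    xs = [math.log(r) for r in range(1, len(counts) + 1)]
    ys = [math.log(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic "code-words": 20,000 draws from a 500-symbol Zipfian source.
random.seed(0)
codewords = random.choices(range(500),
                           weights=[1 / r for r in range(1, 501)],
                           k=20_000)
slope = rank_frequency_slope(codewords)  # close to -1 for Zipfian data
```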

  13. Audio-visual speech cue combination.

    PubMed

    Arnold, Derek H; Tear, Morgan; Schindel, Ryan; Roseboom, Warrick

    2010-04-16

    Different sources of sensory information can interact, often shaping what we think we have seen or heard. This can enhance the precision of perceptual decisions relative to those made on the basis of a single source of information. From a computational perspective, there are multiple reasons why this might happen, and each predicts a different degree of enhanced precision. Relatively slight improvements can arise when perceptual decisions are made on the basis of multiple independent sensory estimates, as opposed to just one. These improvements can arise as a consequence of probability summation. Greater improvements can occur if two initially independent estimates are summated to form a single integrated code, especially if the summation is weighted in accordance with the variance associated with each independent estimate. This form of combination is often described as a Bayesian maximum likelihood estimate. Still greater improvements are possible if the two sources of information are encoded via a common physiological process. Here we show that the provision of simultaneous audio and visual speech cues can result in substantial sensitivity improvements, relative to single sensory modality based decisions. The magnitude of the improvements is greater than can be predicted on the basis of either a Bayesian maximum likelihood estimate or a probability summation. Our data suggest that primary estimates of speech content are determined by a physiological process that takes input from both visual and auditory processing, resulting in greater sensitivity than would be possible if initially independent audio and visual estimates were formed and then subsequently combined.
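
    The Bayesian maximum likelihood benchmark mentioned above has a closed form for two independent Gaussian estimates: each cue is weighted inversely to its variance, and the combined variance is smaller than either single-cue variance. A minimal sketch (the standard textbook formula, not code from the study):

```python
def fuse_mle(est_a, var_a, est_v, var_v):
    """Variance-weighted (maximum-likelihood) fusion of two independent
    Gaussian estimates, e.g. an auditory and a visual speech estimate."""
    w_a = var_v / (var_a + var_v)          # weight inversely proportional to variance
    est = w_a * est_a + (1 - w_a) * est_v  # combined estimate
    var = var_a * var_v / (var_a + var_v)  # combined variance < min(var_a, var_v)
    return est, var

# Noisy auditory estimate (var 4.0) fused with a sharper visual one (var 1.0):
est, var = fuse_mle(1.0, 4.0, 2.0, 1.0)
```

    Observed sensitivity exceeding this bound is what leads the authors to infer a common physiological encoding rather than late combination of initially independent estimates.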

  14. The Effect of Gloss Type and Mode on Iranian EFL Learners' Reading Comprehension

    ERIC Educational Resources Information Center

    Sadeghi, Karim; Ahmadi, Negar

    2012-01-01

    This study investigated the effects of three kinds of gloss conditions, that is, traditional non-CALL marginal gloss, computer-based audio gloss, and computer-based extended audio gloss, on the reading comprehension of Iranian EFL learners. To this end, three experimental groups and one control group, each comprising 15 participants, took part in this study.…

  15. Development and Exchange of Instructional Resources in Water Quality Control Programs, III: Selecting Audio-Visual Equipment.

    ERIC Educational Resources Information Center

    Moon, Donald K.

    This document is one in a series of reports which reviews instructional materials and equipment and offers suggestions about how to select equipment. Topics discussed include: (1) the general criteria for audio-visual equipment selection such as performance, safety, comparability, sturdiness and repairability; and (2) specific equipment criteria…

  16. Investigating the Effectiveness of Audio Input Enhancement on EFL Learners' Retention of Intensifiers

    ERIC Educational Resources Information Center

    Negari, Giti Mousapour; Azizi, Aliye; Arani, Davood Khedmatkar

    2018-01-01

    The present study attempted to investigate the effects of audio input enhancement on EFL learners' retention of intensifiers. To this end, two research questions were formulated. In order to address these research questions, this study attempted to reject two null hypotheses. Pretest-posttest control group quasi-experimental design was employed to…

  17. An Experimental Evaluation of the Effectiveness of an Audio-Tutorial Method in Teaching Vocational Agriculture.

    ERIC Educational Resources Information Center

    McVey, Gary C.

    To determine the effectiveness of an audio-tutorial technique in vocational agriculture, six treatment schools and six control schools were randomly selected from 48 Iowa high schools qualifying for participation in the study. While each school was provided the same reference material and teaching outline for the 14-day experimental period, the…

  18. Unsupervised real-time speaker identification for daily movies

    NASA Astrophysics Data System (ADS)

    Li, Ying; Kuo, C.-C. Jay

    2002-07-01

    The problem of identifying speakers for movie content analysis is addressed in this paper. While most previous work on speaker identification was carried out in a supervised mode using pure audio data, more robust results can be obtained in real-time by integrating knowledge from multiple media sources in an unsupervised mode. In this work, both audio and visual cues are employed and subsequently combined in a probabilistic framework to identify speakers. In particular, audio information is used to identify speakers with a maximum likelihood (ML)-based approach, while visual information is adopted to distinguish speakers by detecting and recognizing their talking faces based on face detection/recognition and mouth tracking techniques. Moreover, to accommodate speakers' acoustic variations over time, we update their models on the fly by adapting to their newly contributed speech data. Encouraging results have been achieved through extensive experiments, which show a promising future for the proposed audiovisual-based unsupervised speaker identification system.
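
    A minimal sketch of the audio side — maximum-likelihood classification with on-the-fly model adaptation — assuming one-dimensional features and single-Gaussian speaker models (the paper uses richer audio models plus the visual cues, which are omitted here; the names and adaptation rate are made up):

```python
import math

class SpeakerModel:
    """Single-Gaussian stand-in for an ML speaker model; the mean is
    adapted with each frame assigned to this speaker."""
    def __init__(self, mean, var, rate=0.05):
        self.mean, self.var, self.rate = mean, var, rate

    def log_likelihood(self, x):
        return -0.5 * (math.log(2 * math.pi * self.var)
                       + (x - self.mean) ** 2 / self.var)

    def adapt(self, x):
        # On-the-fly update toward newly contributed speech data.
        self.mean += self.rate * (x - self.mean)

def identify(models, feature):
    """Pick the speaker with maximum likelihood, then adapt that model."""
    name = max(models, key=lambda n: models[n].log_likelihood(feature))
    models[name].adapt(feature)
    return name

models = {"anna": SpeakerModel(0.0, 1.0), "ben": SpeakerModel(5.0, 1.0)}
who = identify(models, 4.2)  # closest to ben's model
```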

  19. Ultrasonic Leak Detection System

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C. (Inventor); Moerk, J. Steven (Inventor)

    1998-01-01

    A system for detecting ultrasonic vibrations, such as those generated by a small leak in a pressurized container, vessel, pipe, or the like, comprises an ultrasonic transducer assembly and a processing circuit for converting transducer signals into an audio frequency range signal. The audio frequency range signal can be used to drive a pair of headphones worn by an operator. A diode rectifier based mixing circuit provides a simple, inexpensive way to mix the transducer signal with a square wave signal generated by an oscillator, and thereby generate the audio frequency signal. The sensitivity of the system is greatly increased through proper selection and matching of the system components, and the use of noise rejection filters and elements. In addition, a parabolic collecting horn is preferably employed which is mounted on the transducer assembly housing. The collecting horn increases sensitivity of the system by amplifying the received signals, and provides directionality which facilitates easier location of an ultrasonic vibration source.
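
    The mixing principle — multiply the ultrasonic input by a locally generated square wave and low-pass the product so only the difference frequency remains in the audio band — can be demonstrated numerically. The 40 kHz/38 kHz frequencies and the moving-average filter below are illustrative choices, not the patented circuit values:

```python
import math

fs = 400_000                       # sample rate well above the ultrasonic band
t = [n / fs for n in range(4000)]  # 10 ms of signal

ultra = [math.sin(2 * math.pi * 40_000 * x) for x in t]   # 40 kHz "leak" tone
square = [1.0 if math.sin(2 * math.pi * 38_000 * x) >= 0 else -1.0
          for x in t]                                     # 38 kHz oscillator
mixed = [u * s for u, s in zip(ultra, square)]            # 2 kHz and 78 kHz products

# Crude moving-average low-pass: keeps the 2 kHz difference tone,
# suppresses the 78 kHz sum component.
N = 21
audio = [sum(mixed[i - N + 1:i + 1]) / N for i in range(N - 1, len(mixed))]

def mag(signal, f):
    """DFT magnitude (per sample) at frequency f."""
    re = sum(x * math.cos(2 * math.pi * f * n / fs) for n, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * f * n / fs) for n, x in enumerate(signal))
    return math.hypot(re, im) / len(signal)
```

    The 2 kHz difference tone survives the filter while the 78 kHz sum component is strongly attenuated, which is what makes the ultrasonic source audible in headphones.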

  20. Music Identification System Using MPEG-7 Audio Signature Descriptors

    PubMed Central

    You, Shingchern D.; Chen, Wei-Hwa; Chen, Woei-Kae

    2013-01-01

    This paper describes a multiresolution system based on MPEG-7 audio signature descriptors for music identification. Such an identification system may be used to detect illegally copied music circulated over the Internet. In the proposed system, low-resolution descriptors are used to search likely candidates, and then full-resolution descriptors are used to identify the unknown (query) audio. With this arrangement, the proposed system achieves both high speed and high accuracy. To deal with the problem that a piece of query audio may not be inside the system's database, we suggest two different methods to find the decision threshold. Simulation results show that the proposed method II can achieve an accuracy of 99.4% for query inputs both inside and outside the database. Overall, it is highly possible to use the proposed system for copyright control. PMID:23533359
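
    The coarse-to-fine idea can be sketched with toy fingerprints: rank the database by distance between low-resolution descriptors, then confirm the best shortlist entry at full resolution against a decision threshold so out-of-database queries are rejected. The vectors, distances, and threshold below are illustrative assumptions, not the MPEG-7 audio signature format:

```python
def coarsen(fp, factor=4):
    """Low-resolution descriptor: average groups of full-resolution values."""
    return tuple(sum(fp[i:i + factor]) / factor for i in range(0, len(fp), factor))

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def identify(query, database, shortlist=2, threshold=0.5):
    """Coarse-to-fine search: rank candidates by low-res distance, then
    decide on full-res distance; None means 'not in database'."""
    q_low = coarsen(query)
    ranked = sorted(database, key=lambda name: dist(q_low, coarsen(database[name])))
    best = min(ranked[:shortlist], key=lambda name: dist(query, database[name]))
    return best if dist(query, database[best]) <= threshold else None

db = {"song_a": [0.1, 0.2, 0.9, 0.8, 0.1, 0.0, 0.3, 0.4],
      "song_b": [0.9, 0.8, 0.1, 0.2, 0.7, 0.6, 0.2, 0.1]}
query = [0.1, 0.25, 0.85, 0.8, 0.1, 0.05, 0.3, 0.4]  # noisy copy of song_a
```

    The low-resolution pass gives the speed, the full-resolution pass the accuracy; the threshold plays the role of the paper's decision methods for out-of-database queries.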

  1. The enhancement of beneficial effects following audio feedback by cognitive preparation in the treatment of social anxiety: a single-session experiment.

    PubMed

    Nilsson, Jan-Erik; Lundh, Lars-Gunnar; Faghihi, Shahriar; Roth-Andersson, Gun

    2011-12-01

    According to cognitive models, negatively biased processing of the publicly observable self is an important aspect of social phobia; if this is true, effective methods for producing corrective feedback concerning the public self should be sought. Video feedback has proven effective, but since one's voice represents another aspect of the self, audio feedback should produce equivalent results. This is the first study to assess the enhancement of audio feedback by cognitive preparation in a single-session randomized controlled experiment. Forty socially anxious participants were asked to give a speech, then to listen to and evaluate a taped recording of their performance. Half of the sample was given cognitive preparation prior to the audio feedback and the remainder received audio feedback only. Cognitive preparation involved asking participants to (1) predict in detail what they would hear on the audiotape, (2) form an image of themselves giving the speech and (3) listen to the audio recording as though they were listening to a stranger. To assess generalization effects all participants were asked to give a second speech. Audio feedback with cognitive preparation was shown to produce less negative ratings after the first speech, and effects generalized to the evaluation of the second speech. More positive speech evaluations were associated with corresponding reductions of state anxiety. Social anxiety as indexed by the Implicit Association Test was reduced in participants given cognitive preparation. Limitations: small sample size; analogue study. Audio feedback with cognitive preparation may be utilized as a treatment intervention for social phobia. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Audio Visual Instructional Materials for Distributive Education; a Classified Bibliography. Final Report.

    ERIC Educational Resources Information Center

    Levendowski, Jerry C.

    The bibliography contains a list of 90 names and addresses of sources of audiovisual instructional materials. For each title a brief description of content, the source, purchase price, rental fee or free use for 16MM films, sound-slidefilms, tapes-records, and transparencies is given. Materials are listed separately by topics: (1) advertising and…

  3. Strike up Student Interest through Song: Technology and Westward Expansion

    ERIC Educational Resources Information Center

    Steele, Meg

    2014-01-01

    Sheet music, song lyrics, and audio recordings may not be the first primary sources that come to mind when considering ways to teach about changes brought about by technology during westward expansion, but these sources engage students in thought provoking ways. In this article the author presents a 1917 photograph of Mountain Chief, of the Piegan…

  4. Electromagnetic receiver with capacitive electrodes and triaxial induction coil for tunnel exploration

    NASA Astrophysics Data System (ADS)

    Kai, Chen; Sheng, Jin; Wang, Shun

    2017-09-01

    A new type of electromagnetic (EM) receiver has been developed by integrating four capacitive electrodes and a triaxial induction coil with an advanced data logger for tunnel exploration. The new EM receiver can conduct EM observations in tunnels, which is one of the principal goals of surface-tunnel-borehole EM detection for deep ore deposit mapping. The use of capacitive electrodes enables us to record electrical field (E-field) signals from hard rock surfaces, which are high-resistance terrains. A compact triaxial induction coil integrates three independent induction coils for narrow-tunnel exploration applications. A low-time-drift-error clock source is developed for tunnel applications where GPS signals are unavailable. The three main components of our tunnel EM receiver are: (1) four capacitive electrodes for measuring the E-field signal without digging in hard rock regions; (2) a triaxial induction coil sensor for audio-frequency magnetotelluric and controlled-source audio-frequency magnetotelluric signal measurements; and (3) a data logger that allows us to record five-component MT signals with low noise levels, low time-drift-error for the clock source, and high dynamic range. The proposed tunnel EM receiver was successfully deployed in a mine that exhibited typical noise characteristics.

  5. A real-time detector system for precise timing of audiovisual stimuli.

    PubMed

    Henelius, Andreas; Jagadeesan, Sharman; Huotilainen, Minna

    2012-01-01

    The successful recording of neurophysiologic signals, such as event-related potentials (ERPs) or event-related magnetic fields (ERFs), relies on precise information of stimulus presentation times. We have developed an accurate and flexible audiovisual sensor solution operating in real-time for on-line use in both auditory and visual ERP and ERF paradigms. The sensor functions independently of the used audio or video stimulus presentation tools or signal acquisition system. The sensor solution consists of two independent sensors; one for sound and one for light. The microcontroller-based audio sensor incorporates a novel approach to the detection of natural sounds such as multipart audio stimuli, using an adjustable dead time. This aids in producing exact markers for complex auditory stimuli and reduces the number of false detections. The analog photosensor circuit detects changes in light intensity on the screen and produces a marker for changes exceeding a threshold. The microcontroller software for the audio sensor is free and open source, allowing other researchers to customise the sensor for use in specific auditory ERP/ERF paradigms. The hardware schematics and software for the audiovisual sensor are freely available from the webpage of the authors' lab.
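
    The audio sensor's adjustable dead time can be sketched as a threshold detector that suppresses re-triggering for a fixed window, so a multipart stimulus yields a single marker. The values below are illustrative, not the published firmware:

```python
def detect_onsets(samples, threshold, dead_time):
    """Threshold detector with an adjustable dead time: after a trigger,
    further crossings are ignored for `dead_time` samples, reducing
    false detections on multipart (natural) sounds."""
    onsets, blocked_until = [], 0
    for i, x in enumerate(samples):
        if i >= blocked_until and abs(x) >= threshold:
            onsets.append(i)
            blocked_until = i + dead_time
    return onsets

# Two bursts 30 samples apart: a long dead time merges them into one marker,
# a short dead time marks each burst separately.
sig = [0] * 10 + [1, 1, 0, 0] + [0] * 26 + [1, 1] + [0] * 60
```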

  6. How Much Videos Win over Audios in Listening Instruction for EFL Learners

    ERIC Educational Resources Information Center

    Yasin, Burhanuddin; Mustafa, Faisal; Permatasari, Rizki

    2017-01-01

    This study aims at comparing the benefits of using videos instead of audios for improving students' listening skills. This experimental study used a pre-test and post-test control group design. The sample, selected by cluster random sampling, consisted of 32 second-year high school students for each group. The instruments used were…

  7. Toe pressure determination by audiophotoplethysmography.

    PubMed

    Fronek, A; Blazek, V; Curran, B

    1994-08-01

    The purpose of this study was to evaluate the performance of audiophotoplethysmography as a modality to measure toe pressure without the requirement of a recorder. A portable photoplethysmograph with an audio output was used to determine toe pressures, and the results were compared with those obtained by a commercial photoplethysmograph with a recorder. Thirty-one measurements in control subjects and 62 measurements in patients with arterial occlusive disease were performed. The average toe pressure recorded by oscillography with the standard photoplethysmograph was 103.5 mm Hg +/- 14.7 SD, versus 95.9 mm Hg +/- 13.4 SD with audiophotoplethysmography. In the patient group the pressure recorded with the commercial photoplethysmograph was 65.3 mm Hg +/- 34.9 SD, compared with 61.6 mm Hg +/- 34.8 SD obtained with audiophotoplethysmography. The difference in both groups was insignificant, and the correlation between both methods was very good. A portable hand-held photoplethysmograph equipped with an audio output was used to measure toe pressure in control subjects and in patients with arterial occlusive disease. The results were compared with the oscillometric method using a standard commercial photoplethysmograph connected to a recorder. The correlation was very good in the control and patient groups, and the difference between the methods was below the level of statistical significance. The fact that no recorder is needed may help introduce toe pressure measurement into everyday office diagnostic practice.

  8. Evaluation of a multi-methods approach to the collection and dissemination of feedback on OSCE performance in dental education.

    PubMed

    Wardman, M J; Yorke, V C; Hallam, J L

    2018-05-01

    Feedback is an essential part of the learning process, and students expect their feedback to be personalised, meaningful and timely. Objective Structured Clinical Examination (OSCE) assessments allow examiners to observe students carefully over the course of a number of varied station types, across a number of clinical knowledge and skill domains. They therefore present an ideal opportunity to record detailed feedback which allows students to reflect on and improve their performance. This article outlines two methods by which OSCE feedback was collected and then disseminated to undergraduate dental students across 2-year groups in a UK dental school: (i) Individual written feedback comments made by examiners during the examination, (ii) General audio feedback recorded by groups of examiners immediately following the examination. Evaluation of the feedback was sought from students and staff examiners. A multi-methods approach utilising Likert questionnaire items (quantitative) and open-ended feedback questions (qualitative) was used. Data analysis explored student and staff perceptions of the audio and written feedback. A total of 131 students (response rate 68%) and 52 staff examiners (response rate 83%) completed questionnaires. Quantitative data analysis showed that the written and audio formats were reported as a meaningful source of feedback for learning by both students (93% written, 89% audio) and staff (96% written, 92% audio). Qualitative data revealed the complementary nature of both types of feedback. Written feedback gives specific, individual information whilst audio shares general observations and allows students to learn from others. The advantages, limitations and challenges of the feedback methods are discussed, leading to the development of an informed set of implementation guidelines. Written and audio feedback methods are valued by students and staff. 
It is proposed that these may be very easily applied to OSCEs running in other dental schools. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  9. Active noise control for infant incubators.

    PubMed

    Yu, Xun; Gujjula, Shruthi; Kuo, Sen M

    2009-01-01

    This paper presents an active noise control (ANC) system for infant incubators. Experimental results show that global noise reduction can be achieved for infant incubator ANC systems. An audio-integration algorithm is presented that introduces a healthy audio (intrauterine) sound into the ANC system to mask the residual noise and soothe the infant. A carbon nanotube based transparent thin-film speaker is also introduced in this paper as the actuator for the ANC system to generate the destructive secondary sound; it significantly saves space in the congested incubator without blocking the view of doctors and nurses.
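
    A single-channel LMS noise canceller illustrates the adaptive core of such ANC systems. This reference/primary sketch with made-up signals only shows the error-driven weight update; the incubator system additionally involves secondary-path (loudspeaker-to-microphone) modelling and the audio-integration stage:

```python
import math

def lms_cancel(reference, primary, taps=8, mu=0.05):
    """Adaptive FIR driven by the LMS rule: the filter learns to predict
    the primary noise from the reference, and the residual e[n] is what
    remains after cancellation."""
    w = [0.0] * taps
    errors = []
    for n in range(len(primary)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))        # anti-noise estimate
        e = primary[n] - y                              # residual after cancellation
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]  # LMS weight update
        errors.append(e)
    return errors

# Toy scenario: the noise near the infant is a delayed, attenuated
# copy of the reference picked up outside the incubator.
ref = [math.sin(0.2 * n) for n in range(3000)]
prim = [0.5 * ref[n - 2] if n >= 2 else 0.0 for n in range(3000)]
err = lms_cancel(ref, prim)  # residual shrinks as the filter converges
```

    In the paper's system a soothing intrauterine sound would then be added on top of this small residual to mask it.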

  10. Impact of Audio-Coaching on the Position of Lung Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haasbeek, Cornelis J.A.; Spoelstra, Femke; Lagerwaard, Frank J.

    2008-07-15

    Purpose: Respiration-induced organ motion is a major source of positional, or geometric, uncertainty in thoracic radiotherapy. Interventions to mitigate the impact of motion include audio-coached respiration-gated radiotherapy (RGRT). To assess the impact of coaching on average tumor position during gating, we analyzed four-dimensional computed tomography (4DCT) scans performed both with and without audio-coaching. Methods and Materials: Our RGRT protocol requires that an audio-coached 4DCT scan is performed when the initial free-breathing 4DCT indicates a potential benefit with gating. We retrospectively analyzed 22 such paired scans in patients with well-circumscribed tumors. Changes in lung volume and position of internal target volumes (ITV) generated in three consecutive respiratory phases at both end-inspiration and end-expiration were analyzed. Results: Audio-coaching increased end-inspiration lung volumes by a mean of 10.2% (range, -13% to +43%) when compared with free breathing (p = 0.001). The mean three-dimensional displacement of the center of ITV was 3.6 mm (SD, 2.5; range, 0.3-9.6 mm), mainly caused by displacement in the craniocaudal direction. Displacement of ITV caused by coaching was more than 5 mm in 5 patients, all of whom were in the subgroup of 9 patients showing total tumor motion of 10 mm or more during both coached and uncoached breathing. Comparable ITV displacements were observed at end-expiration phases of the 4DCT. Conclusions: Differences in ITV position exceeding 5 mm between coached and uncoached 4DCT scans were detected in up to 56% of mobile tumors. Both end-inspiration and end-expiration RGRT were susceptible to displacements. This indicates that the method of audio-coaching should remain unchanged throughout the course of treatment.

  11. A two-stage approach to removing noise from recorded music

    NASA Astrophysics Data System (ADS)

    Berger, Jonathan; Goldberg, Maxim J.; Coifman, Ronald C.

    2004-05-01

    A two-stage algorithm for removing noise from recorded music signals (first proposed in Berger et al., ICMC, 1995) is described and updated. The first stage selects the "best" local trigonometric basis for the signal and models noise as the part having high entropy [see Berger et al., J. Audio Eng. Soc. 42(10), 808-818 (1994)]. In the second stage, the original source and the model of the noise obtained from the first stage are expanded into dyadic trees of smooth local sine bases. The best basis for the source signal is extracted using a relative entropy function (the Kullback-Leibler distance) to compare the sum of the costs of the children nodes to the cost of their parent node; energies of the noise in corresponding nodes of the model noise tree are used as weights. The talk will include audio examples of various stages of the method and proposals for further research.
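
    The split-versus-merge decision at one node of a best-basis search can be sketched with a Shannon entropy cost. This simplified version normalizes energies per node and omits the paper's noise-energy weights:

```python
import math

def entropy_cost(coeffs):
    """Shannon entropy of the normalized coefficient energies;
    lower cost means a sparser (more concentrated) representation."""
    total = sum(c * c for c in coeffs)
    if total == 0:
        return 0.0
    ps = [c * c / total for c in coeffs if c != 0]
    return -sum(p * math.log(p) for p in ps)

def choose(parent, child_left, child_right):
    """Keep the parent node if its cost beats the summed cost of its
    children, as in a best-basis split/merge decision."""
    parent_cost = entropy_cost(parent)
    child_cost = entropy_cost(child_left) + entropy_cost(child_right)
    return "parent" if parent_cost <= child_cost else "children"
```

    A coefficient vector concentrated in one coefficient has zero entropy, so the node whose representation is sparser wins the comparison.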

  12. Sound Fields in Complex Listening Environments

    PubMed Central

    2011-01-01

    The conditions of sound fields used in research, especially testing and fitting of hearing aids, are usually simplified or reduced to fundamental physical fields, such as the free or the diffuse sound field. The concepts of such ideal conditions are easily introduced in theoretical and experimental investigations and in models for directional microphones, for example. When it comes to real-world application of hearing aids, however, the field conditions are more complex with regard to specific stationary and transient properties in room transfer functions and the corresponding impulse responses and binaural parameters. Sound fields can be categorized into outdoor rural and urban and indoor environments. Furthermore, sound fields in closed spaces of various sizes and shapes and in situations of transport in vehicles, trains, and aircraft are compared with regard to the binaural signals. In laboratory tests, sources of uncertainty are individual differences in binaural cues and insufficiently controlled sound field conditions. Furthermore, laboratory sound fields do not cover the variety of complex sound environments. Spatial audio formats such as higher-order ambisonics are candidates for sound field references not only in room acoustics and audio engineering but also in audiology. PMID:21676999

  13. CSAMT Data Processing with Source Effect and Static Corrections, Application of Occam's Inversion, and Its Application in Geothermal System

    NASA Astrophysics Data System (ADS)

    Hamdi, H.; Qausar, A. M.; Srigutomo, W.

    2016-08-01

    Controlled source audio-frequency magnetotellurics (CSAMT) is a frequency-domain electromagnetic sounding technique which uses a fixed grounded dipole as an artificial signal source. Measuring CSAMT at a finite distance between transmitter and receiver produces a complex, non-plane wave field. The shift of the electric field due to the static effect moves the resistivity curve up or down and affects the measurement results. The objective of this study was to obtain data corrected for source and static effects so that they share the characteristics of MT data, which are assumed to exhibit plane-wave properties. The corrected CSAMT data were inverted to reveal a subsurface resistivity model. A source-effect correction method was applied to eliminate the effect of the signal source, and the static effect was corrected using a spatial filtering technique. The inversion method used in this study is Occam's 2D inversion. The inversion produces smooth models with small misfit values, meaning the models can describe subsurface conditions well. Based on the inversion results, the measurement area is predicted to consist of rock with high permeability values that is rich in hot fluid.
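
    The static correction by spatial filtering can be sketched as follows: each sounding curve is shifted in log-resistivity so that its level matches a moving average over neighbouring stations, removing station-wise static offsets while preserving curve shape. The window size and data layout are illustrative assumptions, not the paper's filter:

```python
def static_correct(log_rho_by_station, window=3):
    """Spatial-filter static correction sketch: shift each station's
    log-resistivity curve so its mean level matches the moving average
    of the mean levels over `window` neighbouring stations."""
    means = [sum(curve) / len(curve) for curve in log_rho_by_station]
    n = len(means)
    corrected = []
    for i, curve in enumerate(log_rho_by_station):
        lo, hi = max(0, i - window // 2), min(n, i + window // 2 + 1)
        local = sum(means[lo:hi]) / (hi - lo)   # spatially smoothed level
        shift = local - means[i]                # static shift for this station
        corrected.append([v + shift for v in curve])
    return corrected
```

    A vertical shift in log-resistivity corresponds to a multiplicative static factor in resistivity, which is why the correction operates on curve levels rather than curve shapes.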

  14. A Comparison of Two Teaching Methodologies for a Course in Basic Reference. Final Report.

    ERIC Educational Resources Information Center

    Gothberg, Helen M.

    The purpose of the investigation was to develop and test an audio-tutorial program for a course in Basic Reference. The design of the investigation was a posttest-only-control group design with 63 students randomly assigned to either an audio-tutorial or a lecture group. Data were collected and analyzed using a t-test for two groups and four…

  15. Processable English: The Theory Behind the PENG System

    DTIC Science & Technology

    2009-06-01

    implicit - is often buried amongst masses of irrelevant data. Heralding from unstructured sources such as natural language documents, email, audio ...estimation and prediction, data-mining, social network analysis, and semantic search and visualisation. This report describes the theoretical

  16. Comparing Economic Systems.

    ERIC Educational Resources Information Center

    Wolken, Lawrence C.

    1984-01-01

    Defines the predominate classifications of economic systems: traditional, command, market, capitalism, socialism, and communism. Considers property rights, role of government, economic freedom, incentives, market structure, economic goals and means of achieving those goals for each classification. Identifies 26 print and audio-visual sources for…

  17. NFL Films music scoring stage and control room space

    NASA Astrophysics Data System (ADS)

    Berger, Russ; Schrag, Richard C.; Ridings, Jason J.

    2003-04-01

    NFL Films' new 200,000 sq. ft. corporate headquarters is home to an orchestral scoring stage used to record custom music scores to support and enhance their video productions. Part of the 90,000 sq. ft. of sound-critical technical space, the music scoring stage and its associated control room are at the heart of the audio facilities. Driving the design were the owner's mandate for natural light, wood textures, and an acoustical environment that would support small rhythm sections, soloists, and a full orchestra. Being an industry leader in cutting-edge video and audio formats, the NFLF required that the technical spaces allow the latest in technology to be continually integrated into the infrastructure. Never was it more important for a project to hold true to the adage of "designing from the inside out." Each audio and video space within the facility had to stand on its own with regard to user functionality, acoustical accuracy, sound isolation, noise control, and monitor presentation. A detailed look at the architectural and acoustical design challenges encountered and the solutions developed for the performance studio and the associated control room space will be discussed.

  18. Audio-visual presentation of information for informed consent for participation in clinical trials.

    PubMed

    Ryan, R E; Prictor, M J; McLaughlin, K J; Hill, S J

    2008-01-23

    Informed consent is a critical component of clinical research. Different methods of presenting information to potential participants of clinical trials may improve the informed consent process. Audio-visual interventions (presented for example on the Internet, DVD, or video cassette) are one such method. To assess the effects of providing audio-visual information alone, or in conjunction with standard forms of information provision, to potential clinical trial participants in the informed consent process, in terms of their satisfaction, understanding and recall of information about the study, level of anxiety and their decision whether or not to participate. We searched: the Cochrane Consumers and Communication Review Group Specialised Register (searched 20 June 2006); the Cochrane Central Register of Controlled Trials (CENTRAL), The Cochrane Library, issue 2, 2006; MEDLINE (Ovid) (1966 to June week 1 2006); EMBASE (Ovid) (1988 to 2006 week 24); and other databases. We also searched reference lists of included studies and relevant review articles, and contacted study authors and experts. There were no language restrictions. Randomised and quasi-randomised controlled trials comparing audio-visual information alone, or in conjunction with standard forms of information provision (such as written or oral information as usually employed in the particular service setting), with standard forms of information provision alone, in the informed consent process for clinical trials. Trials involved individuals or their guardians asked to participate in a real (not hypothetical) clinical study. Two authors independently assessed studies for inclusion and extracted data. Due to heterogeneity no meta-analysis was possible; we present the findings in a narrative review. We included 4 trials involving data from 511 people. Studies were set in the USA and Canada. Three were randomised controlled trials (RCTs) and the fourth a quasi-randomised trial. 
Their quality was mixed and results should be interpreted with caution. Considerable uncertainty remains about the effects of audio-visual interventions, compared with standard forms of information provision (such as written or oral information normally used in the particular setting), for use in the process of obtaining informed consent for clinical trials. Audio-visual interventions did not consistently increase participants' levels of knowledge/understanding (assessed in four studies), although one study showed better retention of knowledge amongst intervention recipients. An audio-visual intervention may transiently increase people's willingness to participate in trials (one study), but this was not sustained at two to four weeks post-intervention. Perceived worth of the trial did not appear to be influenced by an audio-visual intervention (one study), but another study suggested that the quality of information disclosed may be enhanced by an audio-visual intervention. Many relevant outcomes including harms were not measured. The heterogeneity in results may reflect the differences in intervention design, content and delivery, the populations studied and the diverse methods of outcome assessment in included studies. The value of audio-visual interventions for people considering participating in clinical trials remains unclear. Evidence is mixed as to whether audio-visual interventions enhance people's knowledge of the trial they are considering entering, and/or the health condition the trial is designed to address; one study showed improved retention of knowledge amongst intervention recipients. The intervention may also have small positive effects on the quality of information disclosed, and may increase willingness to participate in the short-term; however the evidence is weak. There were no data for several primary outcomes, including harms. 
In the absence of clear results, trialists should continue to explore innovative methods of providing information to potential trial participants. Further research should take the form of high-quality randomised controlled trials, with clear reporting of methods. Studies should conduct content assessment of audio-visual and other innovative interventions for people of differing levels of understanding and education; also for different age and cultural groups. Researchers should assess systematically the effects of different intervention components and delivery characteristics, and should involve consumers in intervention development. Studies should assess additional outcomes relevant to individuals' decisional capacity, using validated tools, including satisfaction; anxiety; and adherence to the subsequent trial protocol.

  19. Hearing in three dimensions: Sound localization

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Kistler, Doris J.

    1990-01-01

The ability to localize a source of sound in space is a fundamental component of three-dimensional audio. For over a century, scientists have been trying to understand the physical and psychological processes and physiological mechanisms that subserve sound localization. This research has shown that important information about sound source position is provided by interaural differences in time of arrival, interaural differences in intensity, and the direction-dependent filtering provided by the pinnae. Progress has been slow, primarily because experiments on localization are technically demanding. Control of stimulus parameters and quantification of the subjective experience are quite difficult problems. Recent advances, such as the ability to simulate a three-dimensional sound field over headphones, seem to offer potential for rapid progress. Research using the new techniques has already produced new information. It now seems that interaural time differences are a much more salient and dominant localization cue than previously believed.
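The interaural-time-difference cue described above is commonly approximated with Woodworth's classic spherical-head formula; a minimal sketch, where the function name and the default head radius (8.75 cm) and speed of sound are illustrative assumptions, not values from this record:

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference (seconds) for a rigid
    spherical head: ITD = (r / c) * (theta + sin(theta)),
    where theta is the source azimuth in radians."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))
```

For a source directly ahead the ITD is zero; at 90 degrees azimuth the formula gives roughly 0.65 ms, consistent with the commonly cited maximum ITD for an adult head.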

  20. Functional Imaging of Audio-Visual Selective Attention in Monkeys and Humans: How do Lapses in Monkey Performance Affect Cross-Species Correspondences?

    PubMed

    Rinne, Teemu; Muers, Ross S; Salo, Emma; Slater, Heather; Petkov, Christopher I

    2017-06-01

    The cross-species correspondences and differences in how attention modulates brain responses in humans and animal models are poorly understood. We trained 2 monkeys to perform an audio-visual selective attention task during functional magnetic resonance imaging (fMRI), rewarding them to attend to stimuli in one modality while ignoring those in the other. Monkey fMRI identified regions strongly modulated by auditory or visual attention. Surprisingly, auditory attention-related modulations were much more restricted in monkeys than humans performing the same tasks during fMRI. Further analyses ruled out trivial explanations, suggesting that labile selective-attention performance was associated with inhomogeneous modulations in wide cortical regions in the monkeys. The findings provide initial insights into how audio-visual selective attention modulates the primate brain, identify sources for "lost" attention effects in monkeys, and carry implications for modeling the neurobiology of human cognition with nonhuman animals. © The Author 2017. Published by Oxford University Press.

  1. "Travelers In The Night" in the Old and New Media

    NASA Astrophysics Data System (ADS)

    Grauer, Albert D.

    2015-11-01

    "Travelers in the Night" is a series of 2 minute audio programs based on current research in astronomy and the space sciences.After more than a year of submitting “Travelers In The Night” 2 minute audio pieces to NPR and Community Radio stations with limited success, a parallel effort was initiated by posting the pieces as audio podcasts on Spreaker.com and iTunes.The classic media dispenses programming whose content and schedule is determined by editors and station managers. Riding the wave of new technology, people from every demographic group across the globe are selecting what, when, and how they receive information and entertainment. This change is significant with the Pew Research Center reporting that currently more than 60% of Facebook and Twitter users now get their news and/or links to stories from these sources. What remains constant is the public’s interest in astronomy and space.This poster presents relevant statistics and a discussion of the initial results of these two parallel efforts.

  2. Real time simulation using position sensing

    NASA Technical Reports Server (NTRS)

    Isbell, William B. (Inventor); Taylor, Jason A. (Inventor); Studor, George F. (Inventor); Womack, Robert W. (Inventor); Hilferty, Michael F. (Inventor); Bacon, Bruce R. (Inventor)

    2000-01-01

An interactive exercise system including exercise equipment having a resistance system, a speed sensor, a controller that varies the resistance setting of the exercise equipment, and a playback device for playing pre-recorded video and audio. The controller, operating in conjunction with speed information from the speed sensor and terrain information from media table files, dynamically varies the resistance setting of the exercise equipment in order to simulate varying degrees of difficulty while the playback device concurrently plays back the video and audio to create the impression that the user is exercising in a natural setting such as a real-world exercise course.
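The patent abstract does not specify how the controller maps terrain and speed to a resistance setting; the following is a hypothetical sketch of one such mapping, with made-up gain constants and clamping to an assumed equipment range:

```python
def resistance_setting(grade_percent, speed_mps, base=10.0,
                       k_grade=4.0, k_speed=0.5, lo=0.0, hi=100.0):
    """Illustrative controller rule: resistance rises with terrain grade
    (from the media table files) and with rider speed, clamped to the
    equipment's range. All constants are invented tuning parameters,
    not taken from the patent."""
    level = base + k_grade * grade_percent + k_speed * speed_mps
    return max(lo, min(hi, level))
```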

  3. Memory of a Nation: Effectively Using Artworks to Teach about the Assassination of President John F. Kennedy

    ERIC Educational Resources Information Center

    Eder, Elizabeth K.

    2011-01-01

    Artists today draw on a range of sources--newspapers, magazines, photographs, film, audio, and of course the Internet--to create artworks that serve as visual "texts" of a specific place and moment in time. Using artworks as sources and understanding how to decode them in the service of "drilling down" into difficult topics can create powerful…

  4. Processes in Increasing Participation of African American Women in Cancer Prevention Trials: Development and Pretesting of an Audio-Card.

    PubMed

    Kenerson, Donna; Fadeyi, Saudat; Liu, Jianguo; Weriwoh, Mirabel; Beard, Katina; Hargreaves, Margaret K

    2017-12-01

The enrollment of African American women into cancer prevention trials (CPTs) continues to be low despite their higher cancer mortality rates. Clinical trials are vital to the discovery of new prevention, diagnostic, and treatment methods that improve cancer outcomes. This study addressed attitudes and beliefs associated with the suboptimal participation of African American women in CPTs through the development and pretesting of an educational tool. The use of community-engaged research (CER) in the formative phase of this study was the basis for developing an audio-card. Cultural and linguistic elements were incorporated into the tool's audio and written messages, and visual images highlighted the importance of CPT participation among African American women. The CPT beliefs and behavioral intent of 30 African American women who received information from the audio-card were compared with 30 controls. Findings indicated statistically significant differences at posttest between the control and treatment groups in personal value (p = .03), social influence (p = .03), and personal barriers (p = .0001); personal barriers in the pretest group also demonstrated significant differences (p = .009). Consideration of the cultural context and language needs of populations is vital to the development and design of effective health-promoting tools.

  5. Advances in Audio-Based Systems to Monitor Patient Adherence and Inhaler Drug Delivery.

    PubMed

    Taylor, Terence E; Zigel, Yaniv; De Looze, Céline; Sulaiman, Imran; Costello, Richard W; Reilly, Richard B

    2018-03-01

    Hundreds of millions of people worldwide have asthma and COPD. Current medications to control these chronic respiratory diseases can be administered using inhaler devices, such as the pressurized metered dose inhaler and the dry powder inhaler. Provided that they are used as prescribed, inhalers can improve patient clinical outcomes and quality of life. Poor patient inhaler adherence (both time of use and user technique) is, however, a major clinical concern and is associated with poor disease control, increased hospital admissions, and increased mortality rates, particularly in low- and middle-income countries. There are currently limited methods available to health-care professionals to objectively and remotely monitor patient inhaler adherence. This review describes recent sensor-based technologies that use audio-based approaches that show promising opportunities for monitoring inhaler adherence in clinical practice. This review discusses how one form of sensor-based technology, audio-based monitoring systems, can provide clinically pertinent information regarding patient inhaler use over the course of treatment. Audio-based monitoring can provide health-care professionals with quantitative measurements of the drug delivery of inhalers, signifying a clear clinical advantage over other methods of assessment. Furthermore, objective audio-based adherence measures can improve the predictability of patient outcomes to treatment compared with current standard methods of adherence assessment used in clinical practice. Objective feedback on patient inhaler adherence can be used to personalize treatment to the patient, which may enhance precision medicine in the treatment of chronic respiratory diseases. Copyright © 2017 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.

  6. Attention to sound improves auditory reliability in audio-tactile spatial optimal integration.

    PubMed

    Vercillo, Tiziana; Gori, Monica

    2015-01-01

The role of attention in multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during integration of multiple sensory signals. In this study, we investigated the impact of attention on combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each consisting of a speaker and a tactile vibrator independently controllable by external software. We tested participants in an attentional and a non-attentional condition. In the attentional experiment, participants performed a dual-task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli, conflicting or non-conflicting sounds and vibrations arranged along the horizontal axis, were presented sequentially. In the primary task (space bisection), participants had to evaluate the position of the second stimulus (the probe) with respect to the others (the standards). In the secondary task they had to report occasional changes in the duration of the second auditory stimulus. In the non-attentional condition participants performed only the primary task. Our results showed enhanced auditory precision (and auditory weights) in the attentional condition with respect to the control non-attentional condition. The results of this study support the idea that modality-specific attention modulates multisensory integration.

  7. Apparatus for providing sensory substitution of force feedback

    NASA Technical Reports Server (NTRS)

    Massimino, Michael J. (Inventor); Sheridan, Thomas B. (Inventor)

    1995-01-01

    A feedback apparatus for an operator to control an effector that is remote from the operator to interact with a remote environment has a local input device to be manipulated by the operator. Sensors in the effector's environment are capable of sensing the amplitude of forces arising between the effector and its environment, the direction of application of such forces, or both amplitude and direction. A feedback signal corresponding to such a component of the force, is generated and transmitted to the environment of the operator. The signal is transduced into an auditory sensory substitution signal to which the operator is sensitive. Sound production apparatus present the auditory signal to the operator. The full range of the force amplitude may be represented by a single, audio speaker. Auditory display elements may be stereo headphones or free standing audio speakers, numbering from one to many more than two. The location of the application of the force may also be specified by the location of audio speakers that generate signals corresponding to specific forces. Alternatively, the location may be specified by the frequency of an audio signal, or by the apparent location of an audio signal, as simulated by a combination of signals originating at different locations.
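One variant described in the abstract specifies the location of the force by the frequency of an audio signal; a hypothetical linear force-to-frequency mapping, where all ranges are illustrative and not taken from the patent:

```python
def force_to_tone(force_n, f_min=200.0, f_max=2000.0, force_max=50.0):
    """Map a sensed force amplitude (newtons) to an audio frequency (Hz)
    for sensory substitution. The linear mapping and the frequency and
    force ranges are invented for illustration."""
    frac = min(max(force_n / force_max, 0.0), 1.0)  # clamp to [0, 1]
    return f_min + frac * (f_max - f_min)
```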

  8. Detection of goal events in soccer videos

    NASA Astrophysics Data System (ADS)

    Kim, Hyoung-Gook; Roeber, Steffen; Samour, Amjad; Sikora, Thomas

    2005-01-01

In this paper, we present an automatic extraction of goal events in soccer videos using audio track features alone, without relying on expensive-to-compute video track features. The extracted goal events can be used for high-level indexing and selective browsing of soccer videos. The detection of soccer video highlights using audio content comprises three steps: 1) extraction of audio features from a video sequence, 2) candidate detection of highlight events based on the information provided by the feature extraction methods and the Hidden Markov Model (HMM), 3) goal event selection to finally determine the video intervals to be included in the summary. For this purpose we compared the performance of the well-known Mel-scale Frequency Cepstral Coefficients (MFCC) feature extraction method vs. the MPEG-7 Audio Spectrum Projection (ASP) feature extraction method based on three different decomposition methods, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-Negative Matrix Factorization (NMF). To evaluate our system we collected five soccer game videos from various sources. In total we have seven hours of soccer games consisting of eight gigabytes of data. One of the five soccer games is used as the training data (e.g., announcers' excited speech, ambient audience noise, audience clapping, environmental sounds). Our goal event detection results are encouraging.
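Step 1 of the pipeline, MFCC extraction, can be sketched from first principles with NumPy: frame the signal, window it, take the power spectrum, apply a triangular mel filterbank, log-compress, and decorrelate with a DCT-II. The frame sizes and filterbank parameters below are common illustrative defaults, not the paper's settings:

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    """Minimal MFCC sketch: frame -> Hann window -> power spectrum ->
    mel filterbank -> log -> DCT-II (first n_ceps coefficients)."""
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)) ** 2

    # triangular filters spaced evenly on the mel scale
    hz2mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel2hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz2mel(0.0), hz2mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel2hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, lo:c] = (np.arange(lo, c) - lo) / max(c - lo, 1)
        fbank[m - 1, c:hi] = (hi - np.arange(c, hi)) / max(hi - c, 1)
    log_mel = np.log(spec @ fbank.T + 1e-10)

    # DCT-II basis, keeping the first n_ceps cepstral coefficients
    n = np.arange(n_mels)
    basis = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return log_mel @ basis.T
```

The resulting per-frame coefficient vectors are the kind of features that would then be fed to an HMM-based event-candidate detector.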

  9. The HomePlanet project: a HAVi multi-media network over POF

    NASA Astrophysics Data System (ADS)

    Roycroft, Brendan; Corbett, Brian; Kelleher, Carmel; Lambkin, John; Bareel, Baudouin; Goudeau, Jacques; Skiczuk, Peter

    2005-06-01

This project has developed a low cost in-home network compatible with network standard IEEE1394b. We have developed all components of the network, from the red resonant cavity LEDs and VCSELs as light sources, the driver circuitry, plastic optical fibres for transmission, up to the network management software. We demonstrate plug-and-play operation of S100 and S200 (125 and 250 Mbps) data streams using 650 nm RCLEDs, and S400 (500 Mbps) data streams using VCSELs. The network software incorporates Home Audio Video interoperability (HAVi), which allows any HAVi device to be hot-plugged into the network and be instantly recognised and controllable over the network.

  10. Development of a directivity controlled piezoelectric transducer for sound reproduction

    NASA Astrophysics Data System (ADS)

    Bédard, Magella; Berry, Alain

    2005-04-01

One of the inherent limitations of loudspeaker systems in audio reproduction is their inability to reproduce the possibly complex acoustic directivity patterns of real sound sources. For music reproduction, for example, it may be desirable to separate diffuse-field and direct sound components and project them with different directivity patterns. Because of their properties, poly(vinylidene fluoride) (PVDF) films offer many advantages for the development of electroacoustic transducers. A system of piezoelectric transducers made with PVDF that shows controllable directivity was developed. A cylindrical omnidirectional piezoelectric transducer is used to produce an ambient field, and a piezoelectric transducer system, consisting of a series of curved sources placed around a cylindrical frame, is used to produce a sound field with a given directivity. To develop the system, a numerical model was generated with ANSYS Multiphysics 8.1 and used to calculate the mechanical response of the piezoelectric transducer. The acoustic radiation of the driver was then computed using the Kirchhoff-Helmholtz theorem. Numerical and experimental results of the mechanical and acoustical response of the system will be shown.

  11. Original sound compositions reduce anxiety in emergency department patients: a randomised controlled trial.

    PubMed

    Weiland, Tracey J; Jelinek, George A; Macarow, Keely E; Samartzis, Philip; Brown, David M; Grierson, Elizabeth M; Winter, Craig

    2011-12-19

To determine whether emergency department (ED) patients' self-rated levels of anxiety are affected by exposure to purpose-designed music or sound compositions with and without the audio frequencies of embedded binaural beat. Randomised controlled trial in an ED between 1 February 2010 and 14 April 2010 among a convenience sample of adult patients who were rated as category 3 on the Australasian Triage Scale. All interventions involved listening to soundtracks of 20 minutes' duration that were purpose-designed by composers and sound-recording artists. Participants were allocated at random to one of five groups: headphones and iPod only, no soundtrack (control group); reconstructed ambient noise simulating an ED but free of clear verbalisations; electroacoustic musical composition; composed non-musical soundtracks derived from audio field recordings obtained from natural and constructed settings; sound composition of audio field recordings with embedded binaural beat. All soundtracks were presented on an iPod through headphones. Patients and researchers were blinded to allocation until interventions were administered. State-trait anxiety was self-assessed before the intervention and state anxiety was self-assessed again 20 minutes after the provision of the soundtrack. Spielberger State-Trait Anxiety Inventory. Of 291 patients assessed for eligibility, 170 patients completed the pre-intervention anxiety self-assessment and 169 completed the post-intervention assessment. Significant decreases (all P < 0.001) in anxiety level were observed among patients exposed to the electroacoustic musical composition (pre-intervention mean, 39; post-intervention mean, 34), audio field recordings (42; 35) or audio field recordings with embedded binaural beats (43; 37) when compared with those allocated to receive simulated ED ambient noise (40; 41) or headphones only (44; 44). 
In moderately anxious ED patients, state anxiety was reduced by 10%-15% following exposure to purpose-designed sound interventions. Australian New Zealand Clinical Trials Registry ACTRN 12608000444381.
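A binaural beat, as used in one of the soundtrack conditions above, is conventionally produced by presenting slightly different tone frequencies to each ear, so the listener perceives a slow beat at the difference frequency; a minimal synthesis sketch, where the carrier and beat frequencies are illustrative and not those of the trial's compositions:

```python
import numpy as np

def binaural_beat(f_carrier=220.0, f_beat=7.0, dur=5.0, sr=44100):
    """Return a stereo array (samples x 2) whose channels differ by
    f_beat Hz; the perceived beat arises in the auditory system, not
    in the acoustic signal itself. Values are illustrative."""
    t = np.arange(int(dur * sr)) / sr
    left = np.sin(2 * np.pi * f_carrier * t)
    right = np.sin(2 * np.pi * (f_carrier + f_beat) * t)
    return np.stack([left, right], axis=1)
```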

  12. Appendix.

    ERIC Educational Resources Information Center

    Naturescope, 1987

    1987-01-01

    Contains a glossary of terms related to endangered species and lists reference books, children's books, audio-visual materials, software, and activity sources on the topics. Also identifies wildlife laws and explains what they mean. An index of issues of "Ranger Rick," which includes articles on endangered species, is included. (ML)

  13. FastStats: Kidney Disease

    MedlinePlus


  14. Visual Aids Reviews.

    ERIC Educational Resources Information Center

    School Science Review, 1983

    1983-01-01

    Provided are reviews of science films, slides, audio cassettes, and wall charts. Each review includes title, source, country of origin, description of subject matter presented, appraisal, and target audience. Among the topics considered are smell/taste, grasshopper behavior, photography, bat behavior/flight, pond life, exploring planets, locusts,…

  15. Bimodal emotion congruency is critical to preverbal infants' abstract rule learning.

    PubMed

    Tsui, Angeline Sin Mei; Ma, Yuen Ki; Ho, Anna; Chow, Hiu Mei; Tseng, Chia-huei

    2016-05-01

Extracting general rules from specific examples is important, as the same challenge may be displayed in various formats. Previous studies have found that bimodal presentation of grammar-like rules (e.g. ABA) enhanced 5-month-olds' capacity to acquire a rule that infants failed to learn when the rule was presented with visual presentation of the shapes alone (circle-triangle-circle) or auditory presentation of the syllables (la-ba-la) alone. However, the mechanisms and constraints for this bimodal learning facilitation are still unknown. In this study, we used audio-visual relation congruency between bimodal stimulation to disentangle possible facilitation sources. We exposed 8- to 10-month-old infants to an AAB sequence consisting of visual faces with affective expressions and/or auditory voices conveying emotions. Our results showed that infants were able to distinguish the learned AAB rule from other novel rules under bimodal stimulation when the affects in audio and visual stimuli were congruently paired (Experiments 1A and 2A). Infants failed to acquire the same rule when audio-visual stimuli were incongruently matched (Experiment 2B) and when only the visual (Experiment 1B) or the audio (Experiment 1C) stimuli were presented. Our results highlight that bimodal facilitation in infant rule learning is not only dependent on better statistical probability and redundant sensory information, but also on the relational congruency of audio-visual information. A video abstract of this article can be viewed at https://m.youtube.com/watch?v=KYTyjH1k9RQ. © 2015 John Wiley & Sons Ltd.

  16. Simulation and testing of a multichannel system for 3D sound localization

    NASA Astrophysics Data System (ADS)

    Matthews, Edward Albert

    Three-dimensional (3D) audio involves the ability to localize sound anywhere in a three-dimensional space. 3D audio can be used to provide the listener with the perception of moving sounds and can provide a realistic listening experience for applications such as gaming, video conferencing, movies, and concerts. The purpose of this research is to simulate and test 3D audio by incorporating auditory localization techniques in a multi-channel speaker system. The objective is to develop an algorithm that can place an audio event in a desired location by calculating and controlling the gain factors of each speaker. A MATLAB simulation displays the location of the speakers and perceived sound, which is verified through experimentation. The scenario in which the listener is not equidistant from each of the speakers is also investigated and simulated. This research is envisioned to lead to a better understanding of human localization of sound, and will contribute to a more realistic listening experience.
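For the simplest two-speaker case, gain factors that place a phantom audio event while keeping perceived loudness constant can be computed with constant-power (sine/cosine) panning; a sketch of that standard technique only, not of the thesis's multi-channel or non-equidistant-listener algorithm:

```python
import math

def pan_gains(pan):
    """Constant-power stereo panning. pan is in [-1, 1]
    (-1 = hard left, 0 = centre, 1 = hard right). The returned
    gains satisfy gL^2 + gR^2 = 1, so total radiated power (and
    hence perceived loudness) stays constant as the image moves."""
    theta = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] -> [0, pi/2]
    return math.cos(theta), math.sin(theta)
```

Extending this idea to more speakers (e.g. pairwise or vector-base amplitude panning) is the kind of gain-factor computation the simulation described above would perform.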

  17. Comprehension and Motivation Levels in Conjunction with the Use of eBooks with Audio: A Quasi-Experimental Study of Post-Secondary Remedial Reading Students

    ERIC Educational Resources Information Center

    Wheeler, Kimberly W.

    2014-01-01

    This quasi-experimental pretest, posttest nonequivalent control group study investigated the comprehension scores and motivation levels of post-secondary remedial reading students in a two-year technical college in Northwest Georgia using an eBook, an eBook with audio, and a print book. After reading a module on Purpose and Tone in the three book…

  18. Subjective evaluation and electroacoustic theoretical validation of a new approach to audio upmixing

    NASA Astrophysics Data System (ADS)

    Usher, John S.

Audio signal processing systems for converting two-channel (stereo) recordings to four or five channels are increasingly relevant. These audio upmixers can be used with conventional stereo sound recordings and reproduced with multichannel home theatre or automotive loudspeaker audio systems to create a more engaging and natural-sounding listening experience. This dissertation discusses existing approaches to audio upmixing for recordings of musical performances and presents specific design criteria for a system to enhance spatial sound quality. A new upmixing system is proposed and evaluated according to these criteria, and a theoretical model for its behavior is validated using empirical measurements. The new system removes short-term correlated components from two electronic audio signals using a pair of adaptive filters, updated according to a frequency-domain implementation of the normalized least-mean-square (NLMS) algorithm. The major difference between the new system and all extant audio upmixers is that unsupervised time-alignment of the input signals (typically by up to ±10 ms) as a function of frequency (typically using a 1024-band equalizer) is made possible by the non-minimum-phase adaptive filter. Two new signals are created from the weighted difference of the inputs, and are then radiated with two loudspeakers behind the listener. According to the consensus in the literature on the effect of interaural correlation on auditory image formation, the self-orthogonalizing properties of the algorithm ensure minimal distortion of the frontal source imagery and natural-sounding, enveloping reverberance (ambiance) imagery. Performance evaluation of the new upmix system was accomplished in two ways: first, using empirical electroacoustic measurements which validate a theoretical model of the system; and second, with formal listening tests which investigated auditory spatial imagery with a graphical mapping tool and a preference experiment. 
Both electroacoustic and subjective methods investigated system performance with a variety of test stimuli for solo musical performances reproduced using a loudspeaker in an orchestral concert-hall and recorded using different microphone techniques. The objective and subjective evaluations combined with a comparative study with two commercial systems demonstrate that the proposed system provides a new, computationally practical, high sound quality solution to upmixing.
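The dissertation's adaptive filters use a frequency-domain NLMS update; the time-domain form of the same normalized update rule can be sketched as below (tap count and step size are illustrative, and this omits the frequency-domain blocking the thesis actually uses):

```python
import numpy as np

def nlms(x, d, n_taps=16, mu=0.5, eps=1e-8):
    """Time-domain NLMS adaptive filter sketch.
    x: reference input, d: desired signal.
    Returns the final tap weights and the error signal; the error is
    the component of d not linearly predictable from x (the kind of
    'uncorrelated' residual an upmixer routes to the rear channels)."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # x[n], x[n-1], ...
        y = w @ u                            # filter output
        e[n] = d[n] - y                      # estimation error
        w += mu * e[n] * u / (u @ u + eps)   # normalized step
    return w, e
```

On a noise-free system-identification toy problem the weights converge to the true impulse response, which is the property the thesis's electroacoustic validation relies on.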

  19. Spatio-temporal distribution of brain activity associated with audio-visually congruent and incongruent speech and the McGurk Effect.

    PubMed

    Pratt, Hillel; Bleich, Naomi; Mittelman, Nomi

    2015-11-01

    Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels and the effects of audio-visual congruence/incongruence, with emphasis on the McGurk effect, were studied. The McGurk effect occurs when a clearly audible syllable with one consonant, is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant and the resulting percept is a syllable with a consonant other than the auditorily presented one. Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated and effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (<200 msec) processing of the consonant was overall prominent in the left hemisphere, except right hemisphere prominence in superior parietal cortex and secondary visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, increased activity around 100 msec which decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and was only increased to congruent audio-visual stimuli 30-70 msec following consonant onset. The results suggest left hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration. 
Initially (30-70 msec) subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join in the processing. Resolution of incongruent audio-visual inputs is then attempted, and if successful, McGurk perception occurs and cortical activity in left hemisphere further increases between 170 and 260 msec.

  20. Quality of audio-assisted versus video-assisted dispatcher-instructed bystander cardiopulmonary resuscitation: A systematic review and meta-analysis.

    PubMed

    Lin, Yu-You; Chiang, Wen-Chu; Hsieh, Ming-Ju; Sun, Jen-Tang; Chang, Yi-Chung; Ma, Matthew Huei-Ming

    2018-02-01

This study aimed to conduct a systematic review and meta-analysis comparing the effect of video assistance and audio assistance on the quality of dispatcher-instructed cardiopulmonary resuscitation (DI-CPR) for bystanders. Five databases were searched, including PubMed, the Cochrane Library, Embase, Scopus and the NIH clinical trials registry, to find randomized controlled trials published before June 2017. Qualitative analysis and meta-analysis were undertaken to examine the difference between the quality of video-instructed and audio-instructed dispatcher-instructed bystander CPR. The database search yielded 929 records, resulting in the inclusion of 9 relevant articles in this study. Of these, 6 were included in the meta-analysis. Initiation of chest compressions was slower in the video-instructed group than in the audio-instructed group (median delay 31.5 s; 95% CI: 10.94-52.09). The difference in the number of chest compressions per minute between the groups was 19.9 (95% CI: 10.50-29.38), with significantly faster compressions in the video-instructed group than in the audio-instructed group (104.8 vs. 80.6). The odds ratio (OR) for correct hand positioning was 0.8 (95% CI: 0.53-1.30) when comparing the audio-instructed and video-instructed groups. The differences in chest compression depth (mm) and time to first ventilation (seconds) between the video-instructed group and audio-instructed group were 1.6 mm (95% CI: -8.75, 5.55) and 7.5 s (95% CI: -56.84, 71.80), respectively. Video-instructed DI-CPR significantly improved the chest compression rate compared to the audio-instructed method, and a trend for correctness of hand position was also observed. However, this method caused a delay in the commencement of bystander-initiated CPR in the simulation setting. Copyright © 2017 Elsevier B.V. All rights reserved.
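Pooled differences such as the 19.9 compressions/min above are conventionally obtained by inverse-variance weighting of per-study effects; a minimal fixed-effect sketch (the paper's actual model, weights, and data are not reproduced here):

```python
import math

def pooled_mean_difference(effects):
    """Fixed-effect inverse-variance pooling of mean differences.
    effects: list of (mean_difference, standard_error) per study.
    Returns the pooled estimate and its 95% confidence interval."""
    weights = [1.0 / se ** 2 for _, se in effects]
    pooled = sum(w * md for (md, _), w in zip(effects, weights)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci
```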

  1. Analysis of the Sources of Islamic Extremism

    DTIC Science & Technology

    2007-06-15

UN 59th General Assembly and Turkish Prime Minister Erdogan was named the cosponsor. The AOC initiative analyzed the rise in cross-cultural...the founder of al-Qaida, has identified his grievances through audio and video tapes, statements released to the media and through interviews. Bruce

  2. An Overview of Audacity

    ERIC Educational Resources Information Center

    Thompson, Douglas Earl

    2014-01-01

    This article is an overview of the open source audio-editing and -recording program, Audacity. Key features are noted, along with significant features not included in the program. A number of music and music technology concepts are identified that could be taught and/or reinforced through using Audacity.

  3. Scabies: Workplace Frequently Asked Questions (FAQs)

    MedlinePlus


  4. A compact electroencephalogram recording device with integrated audio stimulation system.

    PubMed

    Paukkunen, Antti K O; Kurttio, Anttu A; Leminen, Miika M; Sepponen, Raimo E

    2010-06-01

    A compact (96 x 128 x 32 mm(3), 374 g), battery-powered, eight-channel electroencephalogram recording device with an integrated audio stimulation system and a wireless interface is presented. The recording device is capable of producing high-quality data, while the operating time is also reasonable for evoked potential studies. The effective measurement resolution is about 4 nV at a 200 Hz sample rate, the typical noise level is below 0.7 microV(rms) at 0.16-70 Hz, and the estimated operating time is 1.5 h. An embedded audio decoder circuit reads and plays wave sound files stored on a memory card. The activities are controlled by an 8-bit main control unit which allows accurate timing of the stimuli. The measured interstimulus interval jitter is less than 1 ms. Wireless communication is made through Bluetooth and the data recorded are transmitted to an external personal computer (PC) interface in real time. The PC interface is implemented with LABVIEW and in addition to data acquisition it also allows online signal processing, data storage, and control of measurement activities such as contact impedance measurement, for example. The practical application of the device is demonstrated in a mismatch negativity experiment with three test subjects.

  5. A compact electroencephalogram recording device with integrated audio stimulation system

    NASA Astrophysics Data System (ADS)

    Paukkunen, Antti K. O.; Kurttio, Anttu A.; Leminen, Miika M.; Sepponen, Raimo E.

    2010-06-01

    A compact (96×128×32 mm3, 374 g), battery-powered, eight-channel electroencephalogram recording device with an integrated audio stimulation system and a wireless interface is presented. The recording device is capable of producing high-quality data, while the operating time is also reasonable for evoked potential studies. The effective measurement resolution is about 4 nV at a 200 Hz sample rate, the typical noise level is below 0.7 μVrms at 0.16-70 Hz, and the estimated operating time is 1.5 h. An embedded audio decoder circuit reads and plays wave sound files stored on a memory card. The activities are controlled by an 8-bit main control unit which allows accurate timing of the stimuli. The measured interstimulus interval jitter is less than 1 ms. Wireless communication is made through Bluetooth and the data recorded are transmitted to an external personal computer (PC) interface in real time. The PC interface is implemented with LABVIEW® and in addition to data acquisition it also allows online signal processing, data storage, and control of measurement activities such as contact impedance measurement, for example. The practical application of the device is demonstrated in a mismatch negativity experiment with three test subjects.

  6. Pre-recorded instructional audio vs. dispatchers' conversational assistance in telephone cardiopulmonary resuscitation: A randomized controlled simulation study.

    PubMed

    Birkun, Alexei; Glotov, Maksim; Ndjamen, Herman Franklin; Alaiye, Esther; Adeleke, Temidara; Samarin, Sergey

    2018-01-01

    To assess the effectiveness of telephone chest-compression-only cardiopulmonary resuscitation (CPR) guided by a pre-recorded instructional audio when compared with dispatcher-assisted resuscitation. It was a prospective, blind, randomised controlled study involving 109 medical students without previous CPR training. In a standardized mannequin scenario, after the step of dispatcher-assisted cardiac arrest recognition, the participants performed compression-only resuscitation guided over the telephone by either: (1) the pre-recorded instructional audio (n=57); or (2) verbal dispatcher assistance (n=52). The simulation video records were reviewed to assess the CPR performance using a 13-item checklist. The interval from call reception to the first compression, total number and rate of compressions, and total number and duration of pauses after the first compression were also recorded. There were no significant differences between the recording-assisted and dispatcher-assisted groups based on the overall performance score (5.6±2.2 vs. 5.1±1.9, P>0.05) or individual criteria of the CPR performance checklist. The recording-assisted group demonstrated a significantly shorter time interval from call receipt to the first compression (86.0±14.3 vs. 91.2±14.2 s, P<0.05), a higher compression rate (94.9±26.4 vs. 89.1±32.8 min-1) and a higher number of compressions provided (170.2±48.0 vs. 156.2±60.7). When provided by untrained persons in simulated settings, compression-only resuscitation guided by pre-recorded instructional audio is no less efficient than dispatcher-assisted CPR. Future studies are warranted to further assess the feasibility of using an instructional audio aid as a potential alternative to dispatcher assistance.

  7. Pre-recorded instructional audio vs. dispatchers’ conversational assistance in telephone cardiopulmonary resuscitation: A randomized controlled simulation study

    PubMed Central

    Birkun, Alexei; Glotov, Maksim; Ndjamen, Herman Franklin; Alaiye, Esther; Adeleke, Temidara; Samarin, Sergey

    2018-01-01

    BACKGROUND: To assess the effectiveness of the telephone chest-compression-only cardiopulmonary resuscitation (CPR) guided by a pre-recorded instructional audio when compared with dispatcher-assisted resuscitation. METHODS: It was a prospective, blind, randomised controlled study involving 109 medical students without previous CPR training. In a standardized mannequin scenario, after the step of dispatcher-assisted cardiac arrest recognition, the participants performed compression-only resuscitation guided over the telephone by either: (1) the pre-recorded instructional audio (n=57); or (2) verbal dispatcher assistance (n=52). The simulation video records were reviewed to assess the CPR performance using a 13-item checklist. The interval from call reception to the first compression, total number and rate of compressions, total number and duration of pauses after the first compression were also recorded. RESULTS: There were no significant differences between the recording-assisted and dispatcher-assisted groups based on the overall performance score (5.6±2.2 vs. 5.1±1.9, P>0.05) or individual criteria of the CPR performance checklist. The recording-assisted group demonstrated significantly shorter time interval from call receipt to the first compression (86.0±14.3 vs. 91.2±14.2 s, P<0.05), higher compression rate (94.9±26.4 vs. 89.1±32.8 min-1) and number of compressions provided (170.2±48.0 vs. 156.2±60.7). CONCLUSION: When provided by untrained persons in the simulated settings, the compression-only resuscitation guided by the pre-recorded instructional audio is no less efficient than dispatcher-assisted CPR. Future studies are warranted to further assess feasibility of using instructional audio aid as a potential alternative to dispatcher assistance.

  8. A system to simulate and reproduce audio-visual environments for spatial hearing research.

    PubMed

    Seeber, Bernhard U; Kerber, Stefan; Hafter, Ervin R

    2010-02-01

    The article reports the experience gained from two implementations of the "Simulated Open-Field Environment" (SOFE), a setup that allows sounds to be played at calibrated levels over a wide frequency range from multiple loudspeakers in an anechoic chamber. Playing sounds from loudspeakers in the free-field has the advantage that each participant listens with their own ears, and individual characteristics of the ears are captured in the sound they hear. This makes an easy and accurate comparison between various listeners with and without hearing devices possible. The SOFE uses custom calibration software to assure individual equalization of each loudspeaker. Room simulation software creates the spatio-temporal reflection pattern of sound sources in rooms which is played via the SOFE loudspeakers. The sound playback system is complemented by a video projection facility which can be used to collect or give feedback or to study auditory-visual interaction. The article discusses acoustical and technical requirements for accurate sound playback against the specific needs in hearing research. An introduction to software concepts is given which allow easy, high-level control of the setup and thus fast experimental development, turning the SOFE into a "Swiss army knife" tool for auditory, spatial hearing and audio-visual research. Crown Copyright 2009. Published by Elsevier B.V. All rights reserved.

  9. A System to Simulate and Reproduce Audio-Visual Environments for Spatial Hearing Research

    PubMed Central

    Seeber, Bernhard U.; Kerber, Stefan; Hafter, Ervin R.

    2009-01-01

    The article reports the experience gained from two implementations of the “Simulated Open-Field Environment” (SOFE), a setup that allows sounds to be played at calibrated levels over a wide frequency range from multiple loudspeakers in an anechoic chamber. Playing sounds from loudspeakers in the free-field has the advantage that each participant listens with their own ears, and individual characteristics of the ears are captured in the sound they hear. This makes an easy and accurate comparison between various listeners with and without hearing devices possible. The SOFE uses custom calibration software to assure individual equalization of each loudspeaker. Room simulation software creates the spatio-temporal reflection pattern of sound sources in rooms which is played via the SOFE loudspeakers. The sound playback system is complemented by a video projection facility which can be used to collect or give feedback or to study auditory-visual interaction. The article discusses acoustical and technical requirements for accurate sound playback against the specific needs in hearing research. An introduction to software concepts is given which allow easy, high-level control of the setup and thus fast experimental development, turning the SOFE into a “Swiss army knife” tool for auditory, spatial hearing and audio-visual research. PMID:19909802

  10. Validation of air traffic controller workload models

    DOT National Transportation Integrated Search

    1979-09-01

    During the past several years, computer models have been developed for off-site estimation of controllers' workload. The inputs to these models are audio and digital data normally recorded at an Air Route Traffic Control Center (ARTCC). This ...

  11. Audio-Visual Perception System for a Humanoid Robotic Head

    PubMed Central

    Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M.; Bandera, Juan P.; Romero-Garces, Adrian; Reche-Lopez, Pedro

    2014-01-01

    One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus, they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there is a lack of evaluation of the benefits of audio-visual attention mechanisms, compared to audio-only or visual-only approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with Bayesian inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared by considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework. PMID:24878593

  12. When patients take the initiative to audio-record a clinical consultation.

    PubMed

    van Bruinessen, Inge Renske; Leegwater, Brigit; van Dulmen, Sandra

    2017-08-01

    To get insight into healthcare professionals' current experience with, and views on, consultation audio-recordings made on patients' initiative. 215 Dutch healthcare professionals (123 physicians and 92 nurses) working in oncology care completed a survey inquiring about their experiences and views. 71% of the respondents had experience with consultation audio-recordings. Healthcare professionals who are in favour of the use of audio-recordings seem to embrace the evidence-based benefits for patients of listening back to a consultation, and mention the positive influence on their patients. Opposing arguments relate to the belief that it is confusing for patients or that it increases the chance that information is misinterpreted. Also mentioned were the lack of control they have over the recording (fear of misuse), uncertainty about its medico-legal status, an inhibiting influence on the communication process and a feeling of distrust. For almost one quarter of respondents these arguments and concerns were reason enough not to cooperate at all (9%), to cooperate only in certain cases (4%) or led to doubts about cooperation (9%). The many concerns that exist among healthcare professionals need to be tackled in order to increase transparency, as audio-recordings are expected to be used increasingly. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Efficient Geometric Sound Propagation Using Visibility Culling

    NASA Astrophysics Data System (ADS)

    Chandak, Anish

    2011-07-01

    Simulating propagation of sound can improve the sense of realism in interactive applications such as video games and can lead to better designs in engineering applications such as architectural acoustics. In this thesis, we present geometric sound propagation techniques which are faster than prior methods and map well to upcoming parallel multi-core CPUs. We model specular reflections by using the image-source method and model finite-edge diffraction by using the well-known Biot-Tolstoy-Medwin (BTM) model. We accelerate the computation of specular reflections by applying novel visibility algorithms, FastV and AD-Frustum, which compute visibility from a point. We accelerate finite-edge diffraction modeling by applying a novel visibility algorithm which computes visibility from a region. Our visibility algorithms are based on frustum tracing and exploit recent advances in fast ray-hierarchy intersections, data-parallel computations, and scalable, multi-core algorithms. The AD-Frustum algorithm adapts its computation to the scene complexity and allows small errors in computing specular reflection paths for higher computational efficiency. FastV and our visibility algorithm from a region are general, object-space, conservative visibility algorithms that together significantly reduce the number of image sources compared to other techniques while preserving the same accuracy. Our geometric propagation algorithms are an order of magnitude faster than prior approaches for modeling specular reflections and two to ten times faster for modeling finite-edge diffraction. Our algorithms are interactive, scale almost linearly on multi-core CPUs, and can handle large, complex, and dynamic scenes. We also compare the accuracy of our sound propagation algorithms with other methods. Once sound propagation is performed, it is desirable to listen to the propagated sound in interactive and engineering applications. 
We can generate smooth, artifact-free output audio signals by applying efficient audio-processing algorithms. We also present the first efficient audio-processing algorithm for scenarios with simultaneously moving source and moving receiver (MS-MR) which incurs less than 25% overhead compared to static source and moving receiver (SS-MR) or moving source and static receiver (MS-SR) scenario.
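
The image-source method named above models each specular reflection as a mirror image of the source across the reflecting surface; the distance from an image to the receiver gives that echo's delay. As a minimal illustration only (a first-order shoebox-room sketch with hypothetical function names, not the thesis' accelerated FastV/AD-Frustum algorithms):

```python
import math

def first_order_image_sources(src, room):
    """First-order image sources in a shoebox room [Lx, Ly, Lz]:
    mirror the source position across each of the six walls."""
    images = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = list(src)
            img[axis] = 2 * wall - src[axis]   # reflect across the wall plane
            images.append(tuple(img))
    return images

def echo_delays(src, rcv, room, c=343.0):
    """Arrival times (s) of the six first-order specular echoes."""
    return sorted(math.dist(img, rcv) / c
                  for img in first_order_image_sources(src, room))

delays = echo_delays((1.0, 2.0, 1.5), (4.0, 3.0, 1.5), (6.0, 5.0, 3.0))
```

Higher-order reflections repeat the mirroring recursively, which is why visibility culling of image sources matters for performance.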

  14. FastStats: Chronic Liver Disease and Cirrhosis

    MedlinePlus


  15. 78 FR 37785 - Foreign-Trade Zone (FTZ) 196-Fort Worth, Texas; Notification of Proposed Production Activity...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-24

    ... carrying cases, wrist straps, screws, power supplies, nickel/ cadmium batteries, lithium/ion batteries, other batteries, antenna assemblies, audio flex assemblies, bridge flex assemblies, interplex assembly... components and materials sourced from abroad include: labels, battery adhesives, decals, Kevlar protective...

  16. Improved Pseudo-section Representation for CSAMT Data in Geothermal Exploration

    NASA Astrophysics Data System (ADS)

    Grandis, Hendra; Sumintadireja, Prihadi

    2017-04-01

    Controlled-Source Audio-frequency Magnetotellurics (CSAMT) is a frequency-domain sounding technique typically employing a grounded electric dipole as the primary electromagnetic (EM) source to infer the subsurface resistivity distribution. The use of an artificial source provides coherent signals with a higher signal-to-noise ratio and overcomes the problems with the randomness and fluctuation of the natural EM fields used in MT. However, being an extension of MT, CSAMT data still use apparent resistivity and phase for data representation. The finite transmitter-receiver distance in CSAMT leads to a somewhat “distorted” response of the subsurface compared to MT data. We propose a simple technique to present CSAMT data as an apparent resistivity pseudo-section with more meaningful information for qualitative interpretation. Tests with synthetic and field CSAMT data showed that the simple technique is valid only for sounding curves exhibiting a transition from high to low to high resistivity (i.e. H-type), prevailing in data from a geothermal prospect. For quantitative interpretation, we recommend the use of the full solution of CSAMT modelling, since our technique is not valid for more general cases.
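
The apparent resistivity plotted in such pseudo-sections is, in the far field where the source behaves as a plane wave, the Cagniard definition carried over from MT: rho_a = |E/H|^2 / (omega * mu0) for orthogonal horizontal E and H components. A minimal numerical sketch (function names are illustrative):

```python
import cmath
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def cagniard_apparent_resistivity(e_field, h_field, freq):
    """Far-field apparent resistivity (ohm-m) from orthogonal
    E (V/m) and H (A/m) components: rho_a = |E/H|^2 / (omega*mu0)."""
    omega = 2 * math.pi * freq
    return abs(e_field / h_field) ** 2 / (omega * MU0)

# Sanity check on a uniform half-space: the plane-wave surface
# impedance is Z = sqrt(i*omega*mu0*rho), so rho_a recovers rho.
rho, freq = 100.0, 8.0
z = cmath.sqrt(1j * 2 * math.pi * freq * MU0 * rho)
rho_a = cagniard_apparent_resistivity(z, 1.0, freq)
```

In the near and transition zones of CSAMT this definition is biased by source effects, which is exactly the "distortion" the abstract refers to.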

  17. Loudspeaker line array educational demonstration.

    PubMed

    Anderson, Brian E; Moser, Brad; Gee, Kent L

    2012-03-01

    This paper presents a physical demonstration of an audio-range line array used to teach interference of multiple sources in a classroom or laboratory exercise setting. Software has been developed that permits real-time control and steering of the array. The graphical interface permits a user to vary the frequency, the angular response by phase shading, and reduce sidelobes through amplitude shading. An inexpensive, eight-element loudspeaker array has been constructed to test the control program. Directivity measurements of this array in an anechoic chamber and in a large classroom are presented. These measurements have good agreement with theoretical directivity predictions, thereby allowing its use as a quantitative learning tool for advanced students as well as a qualitative demonstration of arrays in other settings. Portions of this paper are directed toward educators who may wish to implement a similar demonstration for their advanced undergraduate or graduate level course in acoustics. © 2012 Acoustical Society of America
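
The interference the demonstration teaches can be sketched numerically: the far-field array factor of an N-element line array is a sum of per-element phasors, where phase shading steers the main lobe and amplitude shading (tapered weights) reduces sidelobes. A minimal sketch under idealized point-source assumptions (all names and parameter values are illustrative, not the authors' software):

```python
import cmath
import math

def array_factor(theta, n_elems=8, spacing=0.1, freq=2000.0,
                 steer_deg=20.0, weights=None, c=343.0):
    """Normalized far-field array factor of a uniform line array.
    Phase shading steers the main lobe to steer_deg; optional
    amplitude weights trade main-lobe width for lower sidelobes."""
    k = 2 * math.pi * freq / c                 # acoustic wavenumber
    w = weights or [1.0] * n_elems
    s0 = math.sin(math.radians(steer_deg))     # steering term
    af = sum(w[n] * cmath.exp(1j * k * n * spacing *
                              (math.sin(theta) - s0))
             for n in range(n_elems))
    return abs(af) / sum(w)

peak = array_factor(math.radians(20.0))    # phasors align at the steer angle
side = array_factor(math.radians(-40.0))   # off-axis response is reduced
```

Passing Hann-like weights for `weights` lowers the sidelobes at the cost of a wider main lobe, which is the amplitude-shading trade-off the graphical interface exposes.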

  18. Finding the Correspondence of Audio-Visual Events by Object Manipulation

    NASA Astrophysics Data System (ADS)

    Nishibori, Kento; Takeuchi, Yoshinori; Matsumoto, Tetsuya; Kudo, Hiroaki; Ohnishi, Noboru

    A human being understands the objects in the environment by integrating information obtained by the senses of sight, hearing and touch. In this integration, active manipulation of objects plays an important role. We propose a method for finding the correspondence of audio-visual events by manipulating an object. The method uses the general grouping rules of Gestalt psychology, i.e. “simultaneity” and “similarity” among the motion command, sound onsets and the motion of the object in images. In experiments, we used a microphone, a camera, and a robot which has a hand manipulator. The robot grasps an object like a bell and shakes it, or grasps an object like a stick and beats a drum, in a periodic or non-periodic motion. The object then emits periodic/non-periodic events. To create a more realistic scenario, we put another event source (a metronome) in the environment. As a result, we had a success rate of 73.8 percent in finding the correspondence between audio-visual events (afferent signal) relating to robot motion (efferent signal).

  19. Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach

    NASA Astrophysics Data System (ADS)

    Feldbauer, Christian; Kubin, Gernot; Kleijn, W. Bastiaan

    2005-12-01

    Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.

  20. Judging the similarity of soundscapes does not require categorization: evidence from spliced stimuli.

    PubMed

    Aucouturier, Jean-Julien; Defreville, Boris

    2009-04-01

    This study uses an audio signal transformation, splicing, to create an experimental situation where human listeners judge the similarity of audio signals, which they cannot easily categorize. Splicing works by segmenting audio signals into 50-ms frames, then shuffling and concatenating these frames back in random order. Splicing a signal masks the identification of the categories that it normally elicits: For instance, human participants cannot easily identify the sound of cars in a spliced recording of a city street. This study compares human performance on both normal and spliced recordings of soundscapes and music. Splicing is found to degrade human similarity performance significantly less for soundscapes than for music: When two spliced soundscapes are judged similar to one another, the original recordings also tend to sound similar. This establishes that humans are capable of reconstructing consistent similarity relations between soundscapes without relying much on the identification of the natural categories associated with such signals, such as their constituent sound sources. This finding contradicts previous literature and points to new ways to conceptualize the different ways in which humans perceive soundscapes and music.
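
The splicing transformation described above is simple to reproduce: segment the signal into 50-ms frames, shuffle the frames, and concatenate them back. A minimal sketch (function name and parameters are illustrative, not the authors' code):

```python
import random

def splice(samples, sr=44100, frame_ms=50, seed=0):
    """Cut an audio signal into 50-ms frames, shuffle them,
    and concatenate the frames back in random order."""
    n = int(sr * frame_ms / 1000)          # samples per frame (2205 at 44.1 kHz)
    frames = [samples[i:i + n] for i in range(0, len(samples), n)]
    random.Random(seed).shuffle(frames)    # deterministic shuffle for the demo
    return [x for f in frames for x in f]  # flatten back into one signal

sig = list(range(441000))                  # 10 s of dummy "audio"
out = splice(sig)
```

Because only the order of frames changes, the spliced signal keeps the same sample values and similar short-term statistics while destroying the temporal structure that supports source identification, which is what lets the study separate similarity judgements from categorization.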

  1. Forward modeling and inversion of tensor CSAMT in 3D anisotropic media

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Wang, Kun-Peng; Tan, Han-Dong

    2017-12-01

    Tensor controlled-source audio-frequency magnetotellurics (CSAMT) can yield information about electric and magnetic fields owing to its multi-transmitter configuration, compared with common scalar CSAMT. Most current theories, numerical simulations, and inversions of tensor CSAMT are based on far-field measurements and the assumption that underground media have isotropic resistivity. We adopt a three-dimensional (3D) staggered-grid finite-difference numerical simulation method to analyze the resistivity in axially anisotropic and isotropic media. We further adopt the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) method to perform 3D tensor CSAMT axially anisotropic inversion. The inversion results suggest that when the underground structure is anisotropic, an isotropic inversion will introduce errors into the interpretation.

  2. Where is the hot rock and where is the ground water – Using CSAMT to map beneath and around Mount St. Helens

    USGS Publications Warehouse

    Wynn, Jeff; Mosbrucker, Adam; Pierce, Herbert; Spicer, Kurt R.

    2016-01-01

    We have observed several new features in recent controlled-source audio-frequency magnetotelluric (CSAMT) soundings on and around Mount St. Helens, Washington State, USA. We have identified the approximate location of a strong electrical conductor at the edges of and beneath the 2004–08 dome. We interpret this conductor to be hot brine at the hot-intrusive-cold-rock interface. This contact can be found within 50 meters of the receiver station on Spine 5, which extruded between April and July of 2005. We have also mapped separate regional and glacier-dome aquifers, which lie one atop the other, out to considerable distances from the volcano.

  3. Field test of electromagnetic geophysical techniques for locating simulated in situ mining leach solution. Report of investigations/1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tweeton, D.R.; Hanson, J.C.; Friedel, M.J.

    1994-01-01

    The U.S. Bureau of Mines, the University of Arizona, Sandia National Laboratory, and Zonge Engineering and Research, Inc., conducted cooperative field tests of six electromagnetic geophysical methods to compare their effectiveness in locating a brine solution simulating in situ leach solution or a high-conductivity plume of contamination. The brine was approximately 160 meters below the surface. The test site was the University's San Xavier experimental mine near Tucson, Arizona. Geophysical surveys using surface and surface-borehole time-domain electromagnetics (TEM), surface controlled-source audio-frequency magnetotellurics (CSAMT), surface-borehole frequency-domain electromagnetics (FEM), crosshole FEM and surface magnetic field ellipticity were conducted before and during brine injection.

  4. Lagrange constraint neural network for audio varying BSS

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Hsu, Charles C.

    2002-03-01

    Lagrange Constraint Neural Network (LCNN) is a statistical-mechanical ab-initio model that does not assume the artificial neural network (ANN) model at all but derives it from the first principle of the Hamilton and Lagrange methodology: H(S,A) = f(S) - λC(S,A(x,t)), which incorporates the measurement constraint C(S,A(x,t)) = λ([A]S - X) + (λ0 - 1)(Σi si - 1) using the vector Lagrange multiplier λ and the a priori Shannon entropy f(S) = -Σi si log si as the contrast function of an unknown number of independent sources si. Szu et al. first solved in 1997 the general Blind Source Separation (BSS) problem for a spatial-temporal varying mixing matrix in real-world remote sensing, where a large pixel footprint implies that the mixing matrix [A(x,t)] is necessarily filled with diurnal and seasonal variations. Because the ground truth is difficult to ascertain in remote sensing, we illustrate in this paper each step of the LCNN algorithm for simulated spatial-temporal varying BSS in speech and music audio mixing. We review and compare LCNN with other popular a-posteriori maximum entropy methodologies defined by the ANN weight matrix [W] and sigmoid σ post-processing, H(Y = σ([W]X)), by Bell-Sejnowski, Amari and Oja (BSAO), called Independent Component Analysis (ICA). Both are mirror-symmetric MaxEnt methodologies and work for a constant unknown mixing matrix [A], but the major difference is whether the ensemble average is taken over neighborhood pixel data X in BSAO or over the a priori source variables S in LCNN, which dictates which method works for a spatial-temporal varying [A(x,t)] that would not allow the neighborhood pixel average. We expect the success of sharper de-mixing by the LCNN method in terms of a controlled ground-truth experiment in the simulation of a variant mixture of two pieces of music of similar kurtosis (15 seconds composed of Saint-Saëns' Swan and Rachmaninov's cello concerto).

  5. An assessment of individualized technical ear training for audio production.

    PubMed

    Kim, Sungyoung

    2015-07-01

    An individualized technical ear training method is compared to a non-individualized method. The efficacy of the individualized method is assessed using a standardized test conducted before and after the training period. Participants who received individualized training improved more on the test than the control group. Results indicate the importance of individualized training for the acquisition of spectrum-identification and spectrum-matching skills. Individualized training, therefore, should be implemented by default in the technical ear training programs used in the audio production industry and in education.

  6. Maximization of the directivity ratio with the desired audible gain level for broadband design of near field loudspeaker arrays

    NASA Astrophysics Data System (ADS)

    Kim, Daesung; Kim, Kihyun; Wang, Semyung; Lee, Sung Q.; Crocker, Malcolm J.

    2011-11-01

    This paper mainly addresses design methods for near field loudspeaker arrays. These methods have been studied recently since they can be used to realize a personal audio space without the use of headphones. From a practical view point, they can also be used to form a directional sound beam within a short distance from the sources especially using a linear loudspeaker array. In this regard, we re-analyzed the previous near field beamforming methods in order to obtain a comprehensive near field beamforming formulation. Broadband directivity control is proposed for multi-objective optimization, which maximizes the directivity with the desired gain, where both the directivity and the gain are commonly used array performance measures. This method of control aims to form a directive sound beam within a short distance while widening the frequency range of the beamforming. Simulation and experimental results demonstrate that broadband directivity control achieves higher directivity and gain over our whole frequency range of interest compared with previous beamforming methods.

  7. Telecommunications in Higher Education: Creating New Information Sources.

    ERIC Educational Resources Information Center

    Brown, Fred D.

    1986-01-01

    Discusses the telecommunications systems in operation at Buena Vista College in Iowa. Describes the systems' uses in linking all offices and classrooms on the campus, downlinking satellite communications through a dish, transmitting audio and video information to any set of defined studio or classroom space, and teleconferencing. (TW)

  8. Synthetic Modeling of A Geothermal System Using Audio-magnetotelluric (AMT) and Magnetotelluric (MT)

    NASA Astrophysics Data System (ADS)

    Mega Saputra, Rifki; Widodo

    2017-04-01

    Indonesia has 40% of the world’s potential geothermal resources, with an estimated capacity of 28,910 MW. Geothermal systems in Indonesia are generally liquid-dominated and driven by volcanic activity. In geothermal exploration, electromagnetic methods are used to map structures that could host potential reservoirs and source rocks. We investigated the responses of a geothermal system using synthetic audio-magnetotelluric (AMT) and magnetotelluric (MT) data. Owing to their different frequency ranges, AMT and MT data resolve the shallow and the deeper structure, respectively. 1-D models were computed from the AMT and MT data. The results indicate that AMT and MT data together give a detailed conductivity distribution of the geothermal structure.
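    The complementary depth ranges of AMT and MT follow from the electromagnetic skin depth, approximately 503·sqrt(ρ/f) metres for resistivity ρ in Ω·m and frequency f in Hz. A quick illustration; the resistivity and frequency values here are hypothetical, not taken from the study:

```python
import math

def skin_depth_m(resistivity_ohm_m, freq_hz):
    """Approximate EM skin depth in metres: 503 * sqrt(rho / f)."""
    return 503.0 * math.sqrt(resistivity_ohm_m / freq_hz)

# For a hypothetical 10 ohm-m subsurface: an AMT-band frequency (1 kHz)
# senses tens of metres, while an MT-band frequency (0.01 Hz) senses
# many kilometres -- hence AMT for shallow, MT for deep structure.
amt_depth = skin_depth_m(10.0, 1000.0)  # ~50 m
mt_depth = skin_depth_m(10.0, 0.01)     # ~16 km
```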

  9. APPLICATION OF AUDIO-MAGNETOTELLURIC SURVEYS ON SAO MIGUEL ISLAND, AZORES PORTUGAL.

    USGS Publications Warehouse

    Hoover, Donald; Rodrigues Da Silva, A.; Pierce, Herbert A.; Amaral, Roberto

    1984-01-01

    Geothermal exploration and development has been under way on Sao Miguel Island, Azores since 1975. This work had been restricted to the Fogo volcano, one of three dormant silicic volcanic centers on the island. The USGS in 1982 and 1983 conducted reconnaissance natural-source audio-magnetotelluric (AMT) surveys of all three silicic centers to evaluate the potential for geothermal systems at each and to demonstrate the utility of the method in areas of difficult terrain. Results on Fogo showed a low resistivity trend extending from the present production area upslope to the caldera boundary. The upper part of this trend is the upwelling zone of a thermal plume which supplies the production area. Further exploration and drilling are now planned for this area.

  10. Recognition and characterization of unstructured environmental sounds

    NASA Astrophysics Data System (ADS)

    Chu, Selina

    2011-12-01

    Environmental sounds are what we hear every day or, more generally, the ambient or background audio that surrounds us. Humans utilize both vision and hearing to respond to their surroundings, a capability still quite limited in machine processing. The first step toward achieving multimodal input applications is the ability to process unstructured audio and recognize audio scenes (or environments). Such ability would have applications in content analysis and mining of multimedia data, and in improving robustness in context-aware applications through multi-modality, such as assistive robotics, surveillance, or mobile device-based services. The goal of this thesis is the characterization of unstructured environmental sounds for understanding and predicting the context surrounding an agent or device. Most research on audio recognition has focused primarily on speech and music; less attention has been paid to the challenges and opportunities of using audio to characterize unstructured environments. My research focuses on investigating the challenging issues in characterizing unstructured environmental audio and on developing novel algorithms for modeling the variations of the environment. The first step in building a recognition system for unstructured auditory environments was to investigate techniques and audio features suited to such audio data. We begin with a study exploring suitable features and the feasibility of designing an automatic environment recognition system using audio information.
In this initial investigation, I found that traditional recognition and feature-extraction techniques for audio were not suitable for environmental sound, as such sounds lack structure, unlike speech and music with their formant and harmonic structures; this dispels the notion that traditional speech and music recognition techniques can simply be reused for realistic environmental sound. Natural unstructured environments contain a large variety of sounds that are in fact noise-like and are not effectively modeled by Mel-frequency cepstral coefficients (MFCCs) or other commonly used audio features, e.g. energy or zero-crossing rate. Given the lack of features appropriate for environmental audio, and to achieve a more effective representation, I proposed a specialized feature-extraction algorithm for environmental sounds that uses the matching pursuit (MP) algorithm to learn the inherent structure of each type of sound; we call these MP-features. MP-features have been shown to capture and represent sounds from different sources and different ranges where frequency-domain features (e.g., MFCCs) fail, and they can be advantageous when combined with MFCCs to improve overall performance. The third component of this work concerns modeling and detecting the background audio. One of the goals of this research is to characterize an environment. Since many events blend into the background, I sought a general model for any particular environment: once we have a model of the background, we can identify foreground events even if we haven't seen those events before. The next step is therefore to learn an audio background model for each environment type, despite the occurrence of different foreground events.
In this work, I presented a framework for robust audio background modeling, which includes learning models for prediction, data knowledge, and the persistent characteristics of the environment. This approach can model the background and detect foreground events, and it can verify whether the predicted background is indeed the background or a foreground event that persists for a longer period of time. I also investigated the use of a semi-supervised learning technique to exploit and label new unlabeled audio data. The final components of my thesis involve learning sound structures for generalization and applying the proposed ideas to context-aware applications. Environmental sound is inherently noisy and contains relatively large amounts of overlap between different environments. Environmental sounds show large variance even within a single environment type, and frequently there are no clear boundaries between some types. Traditional classification methods are generally not robust enough to handle classes with such overlap, so this audio requires representation by more complex models. A deep learning architecture provides a generative, model-based method for classification. Specifically, I considered Deep Belief Networks (DBNs) to model environmental audio and investigated their applicability to noisy data to improve robustness and generalization. A framework was proposed using composite DBNs to discover high-level representations and to learn a hierarchical structure for different acoustic environments in a data-driven fashion. Experimental results on real data sets demonstrate its effectiveness over traditional methods, with over 90% recognition accuracy for a large number of environmental sound types.
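The matching pursuit step behind MP-features can be sketched as a greedy loop: project the residual onto every dictionary atom, keep the best-matching atom, subtract its contribution, and repeat. A minimal pure-Python sketch over a generic unit-norm dictionary (the thesis uses time-frequency atoms; the names and the toy dictionary below are illustrative):

```python
def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy matching pursuit over a list of unit-norm atoms.

    At each step, pick the atom with the largest inner product with the
    residual, record (atom index, coefficient), and subtract it out.
    Returns the selected atoms and the final residual.
    """
    residual = list(signal)
    atoms = []
    for _ in range(n_iter):
        best_i, best_c = None, 0.0
        for i, atom in enumerate(dictionary):
            c = sum(r * a for r, a in zip(residual, atom))
            if abs(c) > abs(best_c):
                best_i, best_c = i, c
        if best_i is None or abs(best_c) < 1e-12:
            break  # residual is (numerically) orthogonal to the dictionary
        atoms.append((best_i, best_c))
        residual = [r - best_c * a for r, a in zip(residual, dictionary[best_i])]
    return atoms, residual
```

The selected (atom, coefficient) pairs are what an MP-style feature vector would be built from.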

  11. A Prospective, Randomized Trial in the Emergency Department of Suggestive Audio-Therapy under Deep Sedation for Smoking Cessation.

    PubMed

    Rodriguez, Robert M; Taylor, Opal; Shah, Sushma; Urstein, Susan

    2007-08-01

    In a sample of patients undergoing procedural deep sedation in the emergency department (ED), we conducted a prospective, randomized, single-blinded trial of audio-therapy for smoking cessation. We asked subjects about their smoking, including desire to quit (0-10 numerical scale) and number of cigarettes smoked per day. Subjects were randomized to either a control tape (music alone) or a tape with repeated smoking-cessation messages over music. Tapes were started with first doses of sedation and stopped with patient arousal. Telephone follow-up occurred between two weeks and three months to assess the number of cigarettes smoked per day. Study endpoints were self-reported complete cessation and decrease of half or more in total cigarettes smoked per day. One hundred eleven patients were enrolled in the study, 54 to intervention and 57 to control. Mean desire to quit was 7.15 +/- 2.6 and mean cigarettes per day was 17.5 +/- 12.1. We successfully contacted 69 (62%) patients. Twenty-seven percent of intervention and 26% of control patients quit (mean difference = 1%; 95% CI: -22.0% to 18.8%). Thirty-seven percent of intervention and 51% of control patients decreased smoking by half or more (mean difference = 14.6%; 95% CI: -8.7% to 35.6%). Suggestive audio-therapy delivered during deep sedation in the ED did not significantly decrease self-reported smoking behavior.

  12. 47 CFR 95.669 - External controls.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) Audio frequency power amplifier output connector and selector switch. (5) On-off switch for primary power to transmitter. This switch may be combined with receiver controls such as the receiver on-off switch and volume control. (6) Upper/lower sideband selector switch (for a transmitter that transmits...

  13. Enlarged temporal integration window in schizophrenia indicated by the double-flash illusion.

    PubMed

    Haß, Katharina; Sinke, Christopher; Reese, Tanya; Roy, Mandy; Wiswede, Daniel; Dillo, Wolfgang; Oranje, Bob; Szycik, Gregor R

    2017-03-01

    In the present study we were interested in the processing of audio-visual integration in schizophrenia compared to healthy controls. The amount of sound-induced double-flash illusions served as an indicator of audio-visual integration. We expected an altered integration as well as a different window of temporal integration for patients. Fifteen schizophrenia patients and 15 healthy volunteers matched for age and gender were included in this study. We used stimuli with eight different temporal delays (stimulus onset asynchronies (SOAs) of 25, 50, 75, 100, 125, 150, 200 and 300 ms) to induce a double-flash illusion. Group differences and the widths of temporal integration windows were calculated on percentages of reported double-flash illusions. Patients showed significantly more illusions (ca. 36-44% vs. 9-16% in control subjects) for SOAs 150-300. The temporal integration window for control participants went from SOAs 25 to 200, whereas for patients integration was found across all included temporal delays. We found no significant relationship between the amount of illusions and either illness severity, chlorpromazine equivalent doses or duration of illness in patients. Our results are interpreted in favour of an enlarged temporal integration window for audio-visual stimuli in schizophrenia patients, which is consistent with previous research.

  14. Grouping and Segregation of Sensory Events by Actions in Temporal Audio-Visual Recalibration.

    PubMed

    Ikumi, Nara; Soto-Faraco, Salvador

    2016-01-01

    Perception in multi-sensory environments involves both grouping and segregation of events across sensory modalities. Temporal coincidence between events is considered a strong cue to resolve multisensory perception. However, differences in physical transmission and neural processing times amongst modalities complicate this picture. This is illustrated by cross-modal recalibration, whereby adaptation to audio-visual asynchrony produces shifts in perceived simultaneity. Here, we examined whether voluntary actions might serve as a temporal anchor to cross-modal recalibration in time. Participants were tested on an audio-visual simultaneity judgment task after an adaptation phase where they had to synchronize voluntary actions with audio-visual pairs presented at a fixed asynchrony (vision leading or vision lagging). Our analysis focused on the magnitude of cross-modal recalibration to the adapted audio-visual asynchrony as a function of the nature of the actions during adaptation, putatively fostering cross-modal grouping or segregation. We found larger temporal adjustments when actions promoted grouping than segregation of sensory events. However, a control experiment suggested that additional factors, such as attention to planning/execution of actions, could have an impact on recalibration effects. Contrary to the view that cross-modal temporal organization is mainly driven by external factors related to the stimulus or environment, our findings add supporting evidence for the idea that perceptual adjustments strongly depend on the observer's inner states induced by motor and cognitive demands.

  15. Grouping and Segregation of Sensory Events by Actions in Temporal Audio-Visual Recalibration

    PubMed Central

    Ikumi, Nara; Soto-Faraco, Salvador

    2017-01-01

    Perception in multi-sensory environments involves both grouping and segregation of events across sensory modalities. Temporal coincidence between events is considered a strong cue to resolve multisensory perception. However, differences in physical transmission and neural processing times amongst modalities complicate this picture. This is illustrated by cross-modal recalibration, whereby adaptation to audio-visual asynchrony produces shifts in perceived simultaneity. Here, we examined whether voluntary actions might serve as a temporal anchor to cross-modal recalibration in time. Participants were tested on an audio-visual simultaneity judgment task after an adaptation phase where they had to synchronize voluntary actions with audio-visual pairs presented at a fixed asynchrony (vision leading or vision lagging). Our analysis focused on the magnitude of cross-modal recalibration to the adapted audio-visual asynchrony as a function of the nature of the actions during adaptation, putatively fostering cross-modal grouping or segregation. We found larger temporal adjustments when actions promoted grouping than segregation of sensory events. However, a control experiment suggested that additional factors, such as attention to planning/execution of actions, could have an impact on recalibration effects. Contrary to the view that cross-modal temporal organization is mainly driven by external factors related to the stimulus or environment, our findings add supporting evidence for the idea that perceptual adjustments strongly depend on the observer's inner states induced by motor and cognitive demands. PMID:28154529

  16. Automated Categorization Scheme for Digital Libraries in Distance Learning: A Pattern Recognition Approach

    ERIC Educational Resources Information Center

    Gunal, Serkan

    2008-01-01

    Digital libraries play a crucial role in distance learning. Nowadays, they are one of the fundamental information sources for the students enrolled in this learning system. These libraries contain huge amount of instructional data (text, audio and video) offered by the distance learning program. Organization of the digital libraries is…

  17. Key Challenges of Using Video When Investigating Social Practices in Education: Contextualization, Magnification, and Representation

    ERIC Educational Resources Information Center

    Blikstad-Balas, Marte

    2017-01-01

    Audio- and video-recordings are increasingly popular data sources in contemporary qualitative research, making discussions about methodological implications of such recordings timelier than ever. This article goes beyond discussing practical issues and issues of "camera effect" and reactivity to identify three major challenges of using…

  18. "The Source": An Alternate Reality Game to Spark STEM Interest and Learning among Underrepresented Youth

    ERIC Educational Resources Information Center

    Gilliam, Melissa; Bouris, Alida; Hill, Brandon; Jagoda, Patrick

    2016-01-01

    Alternate Reality Games (ARGs) are multiplayer role-playing games that use the real world as their primary platform and incorporate a range of media, including video, audio, email, mobile technologies, websites, live performance, and social networks. This paper describes the development, implementation, and player reception of "The…

  19. Exploring 21st Century Literacy through Writing: Urban Educators' Use of Digital Storytelling with Struggling Writers

    ERIC Educational Resources Information Center

    Phillips, Sarah A.

    2017-01-01

    This qualitative study explored the lived experiences of urban educators' integration of digital storytelling into supporting instruction, developing students' writing potential, and engaging students that struggle with writing literacy. Interviews, focus group interviews, review of documents, and audio-visual resources were the sources of data…

  20. Audiovisual Materials for the Engineering Technologies.

    ERIC Educational Resources Information Center

    O'Brien, Janet S., Comp.

    A list of audiovisual materials suitable for use in engineering technology courses is provided. This list includes titles of 16mm films, 8mm film loops, slidetapes, transparencies, audio tapes, and videotapes. Given for each title are: source, format, length of film or tape or number of slides or transparencies, whether color or black-and-white,…

  1. Soprano and source: A laryngographic analysis

    NASA Astrophysics Data System (ADS)

    Bateman, Laura Anne

    2005-04-01

    Popular music in the 21st century uses a particular singing quality for the female voice that is quite different from the trained classical singing quality. Classical quality has been the subject of a vast body of research, whereas research that deals with non-classical qualities is limited. In order to learn more about these issues, the author chose to do research on singing qualities using a variety of standard voice quality tests. This paper looks at voice qualities found in various styles of singing: Classical, Belt, Legit, R&B, Jazz, Country, and Pop. The data were elicited from a professional soprano and the voice qualities reflect industry standards. The data set for this paper is limited to samples using the vowel [i]. Laryngographic (LGG) data were generated simultaneously with the audio samples. This paper focuses on the results of the LGG analysis; however, an audio analysis was also performed using Spectrogram, LPC, and FFT. Data from the LGG are used to calculate the contact quotient, speed quotient, and ascending slope. The LGG waveform is also visually assessed. The LGG analysis gives insights into the source vibration for the different singing styles.

  2. ''1/f noise'' in music: Music from 1/f noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voss, R.F.; Clarke, J.

    1978-01-01

    The spectral density of fluctuations in the audio power of many musical selections and of English speech varies approximately as 1/f (f is the frequency) down to a frequency of 5 × 10⁻⁴ Hz. This result implies that the audio-power fluctuations are correlated over all times in the same manner as ''1/f noise'' in electronic components. The frequency fluctuations of music also have a 1/f spectral density at frequencies down to the inverse of the length of the piece of music. The frequency fluctuations of English speech have a quite different behavior, with a single characteristic time of about 0.1 s, the average length of a syllable. The observations on music suggest that 1/f noise is a good choice for stochastic composition. Compositions in which the frequency and duration of each note were determined by 1/f noise sources sounded pleasing. Those generated by white-noise sources sounded too random, while those generated by 1/f² noise sounded too correlated.
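    The stochastic-composition idea above is commonly realized with the Voss(-McCartney) algorithm: sum several random sources, redrawing source k only every 2^k samples, which yields an approximately 1/f spectrum. A small sketch; the pentatonic pitch mapping is an illustrative assumption, not the authors' procedure:

```python
import random

def voss_pink(n_samples, n_sources=8, seed=42):
    """Voss-McCartney pink noise: sum of n_sources uniform values, where
    source k is redrawn every 2**k samples (source 0 every sample)."""
    rng = random.Random(seed)
    sources = [rng.random() for _ in range(n_sources)]
    out = []
    for t in range(n_samples):
        for k in range(n_sources):
            if t % (2 ** k) == 0:  # slower sources change less often
                sources[k] = rng.random()
        out.append(sum(sources))
    return out

def to_scale(values, scale=("C", "D", "E", "G", "A")):
    """Map each sample onto a (hypothetical) pentatonic scale to pick pitches."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [scale[min(int((v - lo) / span * len(scale)), len(scale) - 1)]
            for v in values]
```

Feeding `voss_pink` output through `to_scale` gives a note sequence whose pitch fluctuations are correlated across many time scales, unlike a white-noise (fully random) sequence.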

  3. Wavelet-based audio embedding and audio/video compression

    NASA Astrophysics Data System (ADS)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.
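    The reported median PSNR of nearly 33 dB is the standard peak signal-to-noise ratio, PSNR = 10·log10(MAX²/MSE). A minimal sketch for 8-bit samples (illustrative, not the authors' code):

```python
import math

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length sample lists."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical signals
    return 10.0 * math.log10(max_val ** 2 / mse)
```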

  4. Establishing a gold standard for manual cough counting: video versus digital audio recordings

    PubMed Central

    Smith, Jaclyn A; Earis, John E; Woodcock, Ashley A

    2006-01-01

    Background Manual cough counting is time-consuming and laborious; however it is the standard to which automated cough monitoring devices must be compared. We have compared manual cough counting from video recordings with manual cough counting from digital audio recordings. Methods We studied 8 patients with chronic cough, overnight in laboratory conditions (diagnoses were 5 asthma, 1 rhinitis, 1 gastro-oesophageal reflux disease and 1 idiopathic cough). Coughs were recorded simultaneously using a video camera with infrared lighting and digital sound recording. The numbers of coughs in each 8 hour recording were counted manually, by a trained observer, in real time from the video recordings and using audio-editing software from the digital sound recordings. Results The median cough frequency was 17.8 (IQR 5.9–28.7) cough sounds per hour in the video recordings and 17.7 (6.0–29.4) coughs per hour in the digital sound recordings. There was excellent agreement between the video and digital audio cough rates; mean difference of -0.3 coughs per hour (SD ± 0.6), 95% limits of agreement -1.5 to +0.9 coughs per hour. Video recordings had poorer sound quality even in controlled conditions and can only be analysed in real time (8 hours per recording). Digital sound recordings required 2–4 hours of analysis per recording. Conclusion Manual counting of cough sounds from digital audio recordings has excellent agreement with simultaneous video recordings in laboratory conditions. We suggest that ambulatory digital audio recording is therefore ideal for validating future cough monitoring devices, as this can be performed in the patient's own environment. PMID:16887019
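    The agreement statistics quoted (mean difference with 95% limits of agreement = mean ± 1.96 SD of the paired differences) are the standard Bland-Altman calculation, which can be reproduced as follows; the sample data in the test are made up for illustration, not the study's counts:

```python
import math

def limits_of_agreement(method_a, method_b):
    """Bland-Altman agreement between two paired measurement methods.

    Returns (mean difference, lower 95% limit, upper 95% limit), where the
    limits are mean +/- 1.96 * SD of the paired differences.
    """
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean, mean - 1.96 * sd, mean + 1.96 * sd
```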

  5. Three-Dimensional Audio Client Library

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.

    2005-01-01

    The Three-Dimensional Audio Client Library (3DAudio library) is a group of software routines written to facilitate development of both stand-alone (audio only) and immersive virtual-reality application programs that utilize three-dimensional audio displays. The library is intended to enable the development of three-dimensional audio client application programs by use of a code base common to multiple audio server computers. The 3DAudio library calls vendor-specific audio client libraries and currently supports the AuSIM Gold-Server and Lake Huron audio servers. 3DAudio library routines contain common functions for (1) initiation and termination of a client/audio server session, (2) configuration-file input, (3) positioning functions, (4) coordinate transformations, (5) audio transport functions, (6) rendering functions, (7) debugging functions, and (8) event-list-sequencing functions. The 3DAudio software is written in the C++ programming language and currently operates under the Linux, IRIX, and Windows operating systems.

  6. A microcomputer interface for a digital audio processor-based data recording system.

    PubMed

    Croxton, T L; Stump, S J; Armstrong, W M

    1987-10-01

    An inexpensive interface is described that performs direct transfer of digitized data from the digital audio processor and video cassette recorder based data acquisition system designed by Bezanilla (1985, Biophys. J., 47:437-441) to an IBM PC/XT microcomputer. The FORTRAN callable software that drives this interface is capable of controlling the video cassette recorder and starting data collection immediately after recognition of a segment of previously collected data. This permits piecewise analysis of long intervals of data that would otherwise exceed the memory capability of the microcomputer.

  7. A microcomputer interface for a digital audio processor-based data recording system.

    PubMed Central

    Croxton, T L; Stump, S J; Armstrong, W M

    1987-01-01

    An inexpensive interface is described that performs direct transfer of digitized data from the digital audio processor and video cassette recorder based data acquisition system designed by Bezanilla (1985, Biophys. J., 47:437-441) to an IBM PC/XT microcomputer. The FORTRAN callable software that drives this interface is capable of controlling the video cassette recorder and starting data collection immediately after recognition of a segment of previously collected data. This permits piecewise analysis of long intervals of data that would otherwise exceed the memory capability of the microcomputer. PMID:3676444

  8. Fundamentals of dielectric properties measurements and agricultural applications.

    PubMed

    Nelson, Stuart O

    2010-01-01

    Dielectrics and dielectric properties are defined generally, and dielectric measurement methods and equipment are described for various frequency ranges, from audio frequencies through microwave frequencies. These include impedance and admittance bridges, resonant-frequency, transmission-line, and free-space methods in the frequency domain, as well as time-domain and broadband techniques. Many references are cited that describe the methods in detail and give sources of dielectric properties data. Finally, a few applications for such data are presented, and sources of tabulated dielectric properties and databases are identified.

  9. Speed control for synchronous motors

    NASA Technical Reports Server (NTRS)

    Packard, H.; Schott, J.

    1981-01-01

    Feedback circuit controls fluctuations in speed of synchronous ac motor. Voltage proportional to phase angle is developed by phase detector, rectified, amplified, compared to threshold, and reapplied positively or negatively to motor excitation circuit. Speed control reduces wow and flutter of audio turntables and tape recorders, and enhances hunting in gyroscope motors.

  10. A Precision, Low-Cost GPS-Based Transmitter Synchronization Scheme for Improved AM Reception

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Stephen Fulton; Moore, Anthony

    2009-01-01

    This paper describes a highly accurate carrier-frequency synchronization scheme for actively, automatically locking multiple, remotely located AM broadcast transmitters to a common frequency/timing reference source such as GPS. The extremely tight frequency lock (to ≈1 part in 10⁹ or better) permits the effective elimination of audible and even sub-audible beats between the local (desired) station's carrier signal and the distant stations' carriers, usually received via skywave propagation during the evening and nighttime hours. These carrier-beat components cause annoying modulations of the desired station's audio at the receiver and concurrent distortion of the audio modulation from the distant station(s), and often cause listeners to "tune out" due to the low reception quality. Significant reduction or elimination of the beats and related effects will greatly enlarge the effective (interference-limited) listening area of the desired station (from 4 to 10 times as indicated in our tests) and simultaneously reduce the corresponding interference of the local transmitter to the distant stations as well. In addition, AM stereo (CQUAM) reception will be particularly improved by minimizing the phase shifts induced by co-channel interfering signals; hybrid digital (HD) signals will also benefit via reduction in beats from analog signals. The automatic frequency-control hardware described is inexpensive ($1000-$2000), requires no periodic recalibration, has essentially zero long-term drift, and could employ alternate wide-area frequency references of suitable accuracy, including broadcasts from WWVB, LORAN-C, and equivalent sources. The basic configuration of the GPS-disciplined oscillator which solves this problem is extremely simple. The main oscillator is a conventional high-stability quartz-crystal type.
To counter long-term drifts, the oscillator is slightly adjusted to track a high-precision source of standard frequency obtained from a specialized GPS receiver (or other source), usually at 10.000 MHz. This very stable local reference frequency is then used as a clock for a standard digitally implemented frequency synthesizer, which is programmed to generate the specific carrier frequency desired. The stability of the disciplining source, typically ≈1 part in 10⁹ to 10¹¹, is thus transferred to the final AM transmitter carrier output frequency.
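The "standard digitally implemented frequency synthesizer" step amounts to loading a direct digital synthesizer with a frequency tuning word: f_out = FTW · f_clk / 2^N for an N-bit phase accumulator. A sketch of the arithmetic; the register width, clock, and carrier frequency below are illustrative assumptions, not values from the paper:

```python
def dds_tuning_word(f_out_hz, f_clk_hz, n_bits=32):
    """Frequency tuning word for an N-bit DDS: round(f_out * 2**N / f_clk)."""
    return round(f_out_hz * 2 ** n_bits / f_clk_hz)

def dds_actual_freq(ftw, f_clk_hz, n_bits=32):
    """Carrier frequency actually produced by a given tuning word."""
    return ftw * f_clk_hz / 2 ** n_bits

# Example: a 1000 kHz AM carrier from a 10 MHz GPS-disciplined reference.
ftw = dds_tuning_word(1_000_000, 10_000_000)
actual = dds_actual_freq(ftw, 10_000_000)
# The quantization error here is under a millihertz (~1 part in 10**9),
# so the reference's stability carries over to the carrier.
```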

  11. Comparing perceived auditory width to the visual image of a performing ensemble in contrasting bi-modal environments

    PubMed Central

    Valente, Daniel L.; Braasch, Jonas; Myrbeck, Shane A.

    2012-01-01

    Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audiovisual environment in which participants were instructed to make auditory width judgments in dynamic bi-modal settings. The results of these psychophysical tests suggest the importance of congruent audio-visual presentation to the ecological interpretation of an auditory scene. Supporting data were accumulated in five rooms of ascending volumes and varying reverberation times. Participants were given an audiovisual matching test in which they were instructed to pan the auditory width of a performing ensemble to a varying set of audio and visual cues in rooms. Results show that both auditory and visual factors affect the collected responses and that the two sensory modalities coincide in distinct interactions. The greatest differences between the panned audio stimuli given a fixed visual width were found in the physical space with the largest volume and the greatest source distance. These results suggest, in this specific instance, a predominance of auditory cues in the spatial analysis of the bi-modal scene. PMID:22280585

  12. Audio-Tactile Integration in Congenitally and Late Deaf Cochlear Implant Users

    PubMed Central

    Nava, Elena; Bottari, Davide; Villwock, Agnes; Fengler, Ineke; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte

    2014-01-01

    Several studies conducted in mammals and humans have shown that multisensory processing may be impaired following congenital sensory loss and in particular if no experience is achieved within specific early developmental time windows known as sensitive periods. In this study we investigated whether basic multisensory abilities are impaired in hearing-restored individuals with deafness acquired at different stages of development. To this aim, we tested congenitally and late deaf cochlear implant (CI) recipients, age-matched with two groups of hearing controls, on an audio-tactile redundancy paradigm, in which reaction times to unimodal and crossmodal redundant signals were measured. Our results showed that both congenitally and late deaf CI recipients were able to integrate audio-tactile stimuli, suggesting that congenital and acquired deafness does not prevent the development and recovery of basic multisensory processing. However, we found that congenitally deaf CI recipients had a lower multisensory gain compared to their matched controls, which may be explained by their faster responses to tactile stimuli. We discuss this finding in the context of reorganisation of the sensory systems following sensory loss and the possibility that these changes cannot be “rewired” through auditory reafferentation. PMID:24918766

  13. Audio-tactile integration in congenitally and late deaf cochlear implant users.

    PubMed

    Nava, Elena; Bottari, Davide; Villwock, Agnes; Fengler, Ineke; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte

    2014-01-01

    Several studies conducted in mammals and humans have shown that multisensory processing may be impaired following congenital sensory loss and in particular if no experience is achieved within specific early developmental time windows known as sensitive periods. In this study we investigated whether basic multisensory abilities are impaired in hearing-restored individuals with deafness acquired at different stages of development. To this aim, we tested congenitally and late deaf cochlear implant (CI) recipients, age-matched with two groups of hearing controls, on an audio-tactile redundancy paradigm, in which reaction times to unimodal and crossmodal redundant signals were measured. Our results showed that both congenitally and late deaf CI recipients were able to integrate audio-tactile stimuli, suggesting that congenital and acquired deafness does not prevent the development and recovery of basic multisensory processing. However, we found that congenitally deaf CI recipients had a lower multisensory gain compared to their matched controls, which may be explained by their faster responses to tactile stimuli. We discuss this finding in the context of reorganisation of the sensory systems following sensory loss and the possibility that these changes cannot be "rewired" through auditory reafferentation.
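
The redundancy paradigm above compares reaction times to crossmodal stimuli against the faster unimodal condition. As a toy illustration of that comparison (the reaction times below are invented placeholders, not the study's data), a minimal sketch of the multisensory-gain computation:

```python
# Toy redundancy-gain computation with made-up reaction times in ms;
# the formula, not the numbers, is the point of this sketch.

def multisensory_gain(rt_audio, rt_tactile, rt_bimodal):
    """Percent speed-up of the bimodal response over the best unimodal one."""
    best_unimodal = min(rt_audio, rt_tactile)
    return 100.0 * (best_unimodal - rt_bimodal) / best_unimodal

# Hypothetical group means: faster tactile responses shrink the room for gain,
# mirroring the interpretation offered for the congenitally deaf CI group.
print(multisensory_gain(420, 380, 350))   # ~7.9% gain
print(multisensory_gain(420, 355, 350))   # faster tactile RTs -> smaller gain
```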

  14. Audio in Courseware: Design Knowledge Issues.

    ERIC Educational Resources Information Center

    Aarntzen, Diana

    1993-01-01

    Considers issues that need to be addressed when incorporating audio in courseware design. Topics discussed include functions of audio in courseware; the relationship between auditive and visual information; learner characteristics in relation to audio; events of instruction; and audio characteristics, including interactivity and speech technology.…

  15. A Virtual Audio Guidance and Alert System for Commercial Aircraft Operations

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.; Shrum, Richard; Miller, Joel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    Our work in virtual reality systems at NASA Ames Research Center includes the area of aurally-guided visual search, using specially-designed audio cues and spatial audio processing (also known as virtual or "3-D audio") techniques (Begault, 1994). Previous studies at Ames had revealed that use of 3-D audio for Traffic Collision Avoidance System (TCAS) advisories significantly reduced head-down time, compared to a head-down map display (0.5 sec advantage) or no display at all (2.2 sec advantage) (Begault, 1993, 1995; Begault & Pittman, 1994; see Wenzel, 1994, for an audio demo). Since the crew must keep their head up and looking out the window as much as possible when taxiing under low-visibility conditions, and the potential for "blunder" is increased under such conditions, it was sensible to evaluate the audio spatial cueing for a prototype audio ground collision avoidance warning (GCAW) system, and a 3-D audio guidance system. Results were favorable for GCAW, but not for the audio guidance system.

  16. Acoustic Network Localization and Interpretation of Infrasonic Pulses from Lightning

    NASA Astrophysics Data System (ADS)

    Arechiga, R. O.; Johnson, J. B.; Badillo, E.; Michnovicz, J. C.; Thomas, R. J.; Edens, H. E.; Rison, W.

    2011-12-01

    We improve on the localization accuracy of thunder sources and identify infrasonic pulses that are correlated across a network of acoustic arrays. We attribute these pulses to electrostatic charge relaxation (collapse of the electric field) and attempt to model their spatial extent and acoustic source strength. Toward this objective we have developed a single audio range (20-15,000 Hz) acoustic array and a 4-station network of broadband (0.01-500 Hz) microphone arrays with aperture of ~45 m. The network has an aperture of 1700 m and was installed during the summers of 2009-2011 in the Magdalena mountains of New Mexico, an area that is subject to frequent lightning activity. We are exploring a new technique based on inverse theory that integrates information from the audio range and the network of broadband acoustic arrays to locate thunder sources more accurately than can be achieved with a single array. We evaluate the performance of the technique by comparing the location of thunder sources with RF sources located by the lightning mapping array (LMA) of Langmuir Laboratory at New Mexico Tech. We will show results of this technique for lightning flashes that occurred in the vicinity of our network of acoustic arrays and over the LMA. We will use acoustic network detection of infrasonic pulses together with LMA data and electric field measurements to estimate the spatial distribution of the charge (within the cloud) that is used to produce a lightning flash, and will try to quantify volumetric charges (charge magnitude) within clouds.
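
The inverse approach described above locates a source from relative arrival times across the network. A minimal sketch of that idea, assuming illustrative station coordinates, a near-surface sound speed, and a simple grid search (the paper's actual inversion is more sophisticated):

```python
import numpy as np

C = 343.0  # speed of sound in m/s (near-surface value; an assumption)

def locate_source(stations, arrival_times, grid):
    """Grid-search the source position that best explains relative arrival times."""
    t0 = arrival_times - arrival_times[0]          # station 0 as time reference
    best, best_err = None, np.inf
    for p in grid:
        d = np.linalg.norm(stations - p, axis=1) / C
        d = d - d[0]                               # predicted time differences
        err = np.sum((d - t0) ** 2)
        if err < best_err:
            best, best_err = p, err
    return best

# Four stations of a ~1700 m aperture network (illustrative coordinates, m).
stations = np.array([[0.0, 0.0], [1700.0, 0.0], [0.0, 1700.0], [850.0, 850.0]])
true_src = np.array([400.0, 900.0])
times = np.linalg.norm(stations - true_src, axis=1) / C   # noise-free synthetic data

xs = np.linspace(0.0, 1700.0, 69)                 # 25 m grid spacing
grid = np.array([[x, y] for x in xs for y in xs])
est = locate_source(stations, times, grid)
print(est)   # [400. 900.]
```

With noise-free synthetic times the grid search recovers the source exactly; real thunder data would add timing noise and a third (altitude) coordinate.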

  17. Using an Acoustic System to Estimate the Timing and Magnitude of Ebullition Release from Wetland Ecosystems

    NASA Astrophysics Data System (ADS)

    Varner, R. K.; Palace, M. W.; Lennartz, J. M.; Crill, P. M.; Wik, M.; Amante, J.; Dorich, C.; Harden, J. W.; Ewing, S. A.; Turetsky, M. R.

    2011-12-01

    Knowledge of the magnitude and frequency of methane release through ebullition (bubbling) in water-saturated ecosystems such as bogs, fens and lakes is important to both the atmospheric and ecosystems science community. The controls on episodic bubble releases must be identified in order to understand the response of these ecosystems to future climate forcing. We have developed and field tested an inexpensive array of sampling/monitoring instruments to identify the frequency and magnitude of bubbling events, which allows us to correlate bubble data with potential drivers such as changes in hydrostatic pressure, wind and temperature. A prototype ebullition sensor has been developed and field tested at Sallie's Fen in New Hampshire, USA. The instrument consists of a nested, inverted funnel design with a hydrophone that detects bubbles rising through the peat as they hit the microphone. The design also offers a way to sample the gases collected from the funnels to determine the concentration of CH4. Laboratory calibration of the instrument resulted in an equation that relates frequency of bubbles hitting the microphone with bubble volume. After calibration in the laboratory, the prototype was deployed in Sallie's Fen in late August 2010. An additional four instruments were deployed the following month. Audio data was recorded continuously using a digital audio recorder attached to two ebullition sensors. Audio was recorded as an mp3 compressed audio file at a bit rate of 160 kbit/s. Using this format and stereo input, allowing for two sensors to be recorded with each device, we were able to record continuously for 20 days. Audio was converted to uncompressed audio files for speed in computation. Audio data was processed using MATLAB, searching in 0.5-second incremental sections for specific fundamental frequencies that are related to our calibrated audio events. 
Time, fundamental frequency, and estimated bubble size were output to a text file for analysis in statistical software. In addition, each event was cut out of the longer audio file and placed in a directory labeled with the ebullition event number, sensor number, and time, allowing for manual interpretation of the ebullition event. After successful laboratory and local field testing, our instruments were deployed in summer 2011 at a temperate fen (Sallie's Fen, NH, USA), a subarctic mire and lake (Stordalen, Abisko, Sweden) and two locations in subarctic Alaska (APEX Research Site, Fairbanks, AK and Innoko National Wildlife Refuge). Ebullition occurred at regular intervals. Our results indicate that this is a useful method for monitoring CH4 ebullitive flux at high temporal frequencies.
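
The windowed spectral scan described above can be sketched in a few lines. This is an illustrative reimplementation, not the authors' MATLAB code: the sample rate, the calibrated frequency band, and the amplitude threshold are all assumptions, and the input is a synthetic stream with one tonal "ping".

```python
import numpy as np

FS = 8000                 # sample rate in Hz (assumption; field audio was mp3)
WIN = FS // 2             # 0.5 s analysis window, as in the abstract
BAND = (900.0, 1100.0)    # hypothetical calibrated bubble-frequency band, Hz

def detect_events(signal):
    """Scan non-overlapping 0.5 s sections and flag in-band dominant frequencies."""
    events = []
    for start in range(0, len(signal) - WIN + 1, WIN):
        seg = signal[start:start + WIN]
        spec = np.abs(np.fft.rfft(seg * np.hanning(WIN)))
        if spec.max() < 50.0:             # skip windows with no strong tonal peak
            continue
        f0 = np.fft.rfftfreq(WIN, 1.0 / FS)[np.argmax(spec)]
        if BAND[0] <= f0 <= BAND[1]:
            events.append((start / FS, f0))   # (event time in s, fundamental in Hz)
    return events

# Synthetic 3 s stream: quiet noise plus a 1 kHz ping in the second half-second.
rng = np.random.default_rng(0)
t = np.arange(3 * FS) / FS
sig = 0.01 * rng.standard_normal(t.size)
ping = slice(WIN, 2 * WIN)
sig[ping] += np.sin(2 * np.pi * 1000.0 * t[ping])

events = detect_events(sig)
print(events)   # one event at t = 0.5 s near 1000 Hz
```

In the real pipeline, each detected (time, frequency) pair would also be mapped to a bubble volume via the laboratory calibration equation.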

  18. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Digital audio broadcasting service requirements... SERVICES RADIO BROADCAST SERVICES Digital Audio Broadcasting § 73.403 Digital audio broadcasting service requirements. (a) Broadcast radio stations using IBOC must transmit at least one over-the-air digital audio...

  19. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Digital audio broadcasting service requirements... SERVICES RADIO BROADCAST SERVICES Digital Audio Broadcasting § 73.403 Digital audio broadcasting service requirements. (a) Broadcast radio stations using IBOC must transmit at least one over-the-air digital audio...

  20. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Digital audio broadcasting service requirements... SERVICES RADIO BROADCAST SERVICES Digital Audio Broadcasting § 73.403 Digital audio broadcasting service requirements. (a) Broadcast radio stations using IBOC must transmit at least one over-the-air digital audio...

  1. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Digital audio broadcasting service requirements... SERVICES RADIO BROADCAST SERVICES Digital Audio Broadcasting § 73.403 Digital audio broadcasting service requirements. (a) Broadcast radio stations using IBOC must transmit at least one over-the-air digital audio...

  2. PBF Control Building (PER619). Interior detail of control room's severe ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    PBF Control Building (PER-619). Interior detail of control room's severe fuel damage instrument panel. Indicators provided real-time information about test underway in PBF reactor. Note audio speaker. Date: May 2004. INEEL negative no. HD-41-7-4 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID

  3. AH-64D Apache Longbow Aircrew Workload Assessment for Unmanned Aerial System (UAS) Employment

    DTIC Science & Technology

    2009-01-01

    ...a head-eye tracker), audio-video, and tactics, techniques and procedures data were collected and analyzed. Pilot workload was found to be...

  4. Growing the Good Stuff: One Literacy Coach's Approach to Support Teachers with High-Stakes Testing

    ERIC Educational Resources Information Center

    Zoch, Melody

    2015-01-01

    This ethnographic study reports on one elementary literacy coach's response to high-stakes testing and her approach to support third- through fifth-grade teachers in a Title I school in Texas. Sources of data included field notes and observations of classes and meetings, audio/video recordings, and transcribed interviews. The findings illustrate…

  5. A Methodological Approach to Support Collaborative Media Creation in an E-Learning Higher Education Context

    ERIC Educational Resources Information Center

    Ornellas, Adriana; Muñoz Carril, Pablo César

    2014-01-01

    This article outlines a methodological approach to the creation, production and dissemination of online collaborative audio-visual projects, using new social learning technologies and open-source video tools, which can be applied to any e-learning environment in higher education. The methodology was developed and used to design a course in the…

  6. Lexicogrammar in the International Construction Industry: A Corpus-Based Case Study of Japanese-Hong-Kongese On-Site Interactions in English

    ERIC Educational Resources Information Center

    Handford, Michael; Matous, Petr

    2011-01-01

    The purpose of this research is to identify and interpret statistically significant lexicogrammatical items that are used in on-site spoken communication in the international construction industry, initially through comparisons with reference corpora of everyday spoken and business language. Several data sources, including audio and video…

  7. Score-Informed Musical Source Separation and Reconstruction

    ERIC Educational Resources Information Center

    Han, Yushen

    2013-01-01

    A systematic approach to retrieve individual parts in a monaural music recording with its score is introduced. We are interested in isolating the accompaniment part by removing the solo part from a recording of concerto music in which a solo instrument is accompanied by an orchestra. We require the music audio, the score, and optionally a sample…

  8. The Influence of Argumentation on Understanding Nature of Science

    ERIC Educational Resources Information Center

    Boran, Gül Hanim; Bag, Hüseyin

    2016-01-01

    The aim in conducting this study is to explore the effects of argumentation on pre-service science teachers' views of the nature of science. This study used a qualitative case study and conducted with 20 pre-service science teachers. Data sources include an open-ended questionnaire and audio-taped interviews. According to pretest and posttest…

  9. The effect of music with and without binaural beat audio on operative anxiety in patients undergoing cataract surgery: a randomized controlled trial

    PubMed Central

    Wiwatwongwana, D; Vichitvejpaisal, P; Thaikruea, L; Klaphajone, J; Tantong, A; Wiwatwongwana, A

    2016-01-01

    Purpose To investigate the anxiolytic effects of binaural beat embedded audio in patients undergoing cataract surgery under local anesthesia. Methods This prospective RCT included 141 patients undergoing cataract surgery under local anesthesia. The patients were randomized into three groups: the Binaural beat music group (BB), the plain music intervention group (MI), and a control group (earphones with no music). Blood pressure (BP) and heart rate were measured on admission, at the beginning of and 20 min after the start of the operation. Peri-operative anxiety level was assessed using the State-Trait Anxiety Inventory questionnaire (STAI). Results The BB and MI groups comprised 44 patients each and the control group 47. Patients in the MI group and BB group showed significant reduction of STAI state scores after music intervention compared with the control group (P<0.001) but the difference was not significant between the MI and BB group (STAI-S score MI group −7.0, BB group −9.0, P=0.085). Systolic BP was significantly lower in both MI (P=0.043) and BB (P=0.040) groups although there was no difference between the two groups (P=1.000). A significant reduction in heart rate was seen only in the BB group (BB vs control P=0.004, BB vs MI P=0.050, MI vs control P=0.303). Conclusion Music, both with and without binaural beat, was proven to decrease anxiety level and lower systolic BP. Patients who received binaural beat audio showed an additional decrease in heart rate. Binaural beat embedded musical intervention may have benefit over musical intervention alone in decreasing operative anxiety. PMID:27740618

  10. The effect of music with and without binaural beat audio on operative anxiety in patients undergoing cataract surgery: a randomized controlled trial.

    PubMed

    Wiwatwongwana, D; Vichitvejpaisal, P; Thaikruea, L; Klaphajone, J; Tantong, A; Wiwatwongwana, A

    2016-11-01

    Purpose: To investigate the anxiolytic effects of binaural beat embedded audio in patients undergoing cataract surgery under local anesthesia. Methods: This prospective RCT included 141 patients undergoing cataract surgery under local anesthesia. The patients were randomized into three groups: the Binaural beat music group (BB), the plain music intervention group (MI), and a control group (earphones with no music). Blood pressure (BP) and heart rate were measured on admission, at the beginning of and 20 min after the start of the operation. Peri-operative anxiety level was assessed using the State-Trait Anxiety Inventory questionnaire (STAI). Results: The BB and MI groups comprised 44 patients each and the control group 47. Patients in the MI group and BB group showed significant reduction of STAI state scores after music intervention compared with the control group (P<0.001) but the difference was not significant between the MI and BB group (STAI-S score MI group -7.0, BB group -9.0, P=0.085). Systolic BP was significantly lower in both MI (P=0.043) and BB (P=0.040) groups although there was no difference between the two groups (P=1.000). A significant reduction in heart rate was seen only in the BB group (BB vs control P=0.004, BB vs MI P=0.050, MI vs control P=0.303). Conclusion: Music, both with and without binaural beat, was proven to decrease anxiety level and lower systolic BP. Patients who received binaural beat audio showed an additional decrease in heart rate. Binaural beat embedded musical intervention may have benefit over musical intervention alone in decreasing operative anxiety.
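
The binaural-beat stimulus used in the BB arm is built by sending each ear a pure tone at a slightly different frequency, so the brain perceives their difference as a slow beat. A minimal sketch of that construction, with illustrative carrier and beat frequencies (the study does not report these values here):

```python
import numpy as np

FS = 44100          # CD-quality sample rate
DUR = 2.0           # stimulus length in seconds (illustrative)
CARRIER = 250.0     # left-ear tone in Hz (assumption)
BEAT = 6.0          # intended beat frequency in Hz (assumption)

t = np.arange(int(FS * DUR)) / FS
left = np.sin(2 * np.pi * CARRIER * t)
right = np.sin(2 * np.pi * (CARRIER + BEAT) * t)
stereo = np.stack([left, right], axis=1)   # shape (samples, 2), ready for a WAV writer

# Summed monaurally, the two tones amplitude-modulate at the beat rate;
# presented dichotically over earphones, the beat is constructed centrally.
envelope = np.abs(left + right)
print(stereo.shape)   # (88200, 2)
```

The key design point is that each channel alone is a steady pure tone: the 6 Hz beat only exists as an interaural difference, which is why earphone delivery matters.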

  11. [Intermodal timing cues for audio-visual speech recognition].

    PubMed

    Hashimoto, Masahiro; Kumashiro, Masaharu

    2004-06-01

    The purpose of this study was to investigate the limitations of lip-reading advantages for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio-alone, and audio-visually with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker saying long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delays in sixteen untrained young subjects. Speech intelligibility under the audio-delay condition of less than 120 ms was significantly better than that under the audio-alone condition. On the other hand, the delay of 120 ms corresponded to the mean mora duration measured for the audio stimuli. The results implied that audio delays of up to 120 ms would not disrupt lip-reading advantage, because visual and auditory information in speech seemed to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which a worker must extract relevant speech from all the other competing noises.

  12. Secure videoconferencing equipment switching system and method

    DOEpatents

    Dirks, David H; Gomes, Diane; Stewart, Corbin J; Fischer, Robert A

    2013-04-30

    Examples of systems described herein include videoconferencing systems having audio/visual components coupled to a codec. The codec may be configured by a control system. Communication networks having different security levels may be alternately coupled to the codec following appropriate configuration by the control system. The control system may also be coupled to the communication networks.

  13. The power of digital audio in interactive instruction: An unexploited medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pratt, J.; Trainor, M.

    1989-01-01

    Widespread use of audio in computer-based training (CBT) occurred with the advent of interactive videodisc technology. This paper discusses the alternative of digital audio, which, unlike videodisc audio, enables one to rapidly revise the audio used in the CBT and which may be used in nonvideo CBT applications as well. We also discuss techniques used in audio script writing, editing, and production. Results from evaluations indicate a high degree of user satisfaction. 4 refs.

  14. 47 CFR 11.51 - EAS code and Attention Signal Transmission requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Message (EOM) codes using the EAS Protocol. The Attention Signal must precede any emergency audio message... audio messages. No Attention Signal is required for EAS messages that do not contain audio programming... EAS messages in the main audio channel. All DAB stations shall also transmit EAS messages on all audio...

  15. 47 CFR 11.51 - EAS code and Attention Signal Transmission requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Message (EOM) codes using the EAS Protocol. The Attention Signal must precede any emergency audio message... audio messages. No Attention Signal is required for EAS messages that do not contain audio programming... EAS messages in the main audio channel. All DAB stations shall also transmit EAS messages on all audio...

  16. 47 CFR 11.51 - EAS code and Attention Signal Transmission requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Message (EOM) codes using the EAS Protocol. The Attention Signal must precede any emergency audio message... audio messages. No Attention Signal is required for EAS messages that do not contain audio programming... EAS messages in the main audio channel. All DAB stations shall also transmit EAS messages on all audio...

  17. Communicative Competence in Audio Classrooms: A Position Paper for the CADE 1991 Conference.

    ERIC Educational Resources Information Center

    Burge, Liz

    Classroom practitioners need to move their attention away from the technological and logistical competencies required for audio conferencing (AC) to the required communicative competencies in order to advance their skills in handling the psychodynamics of audio virtual classrooms which include audio alone and audio with graphics. While the…

  18. The Audio Description as a Physics Teaching Tool

    ERIC Educational Resources Information Center

    Cozendey, Sabrina; Costa, Maria da Piedade

    2016-01-01

    This study analyses the use of audio description in teaching physics concepts, aiming to determine the variables that influence the understanding of the concept. One educational resource was audio-described; to make the audio description, the screen was frozen. The video, with and without audio description, should be presented to students, so that…

  19. Sound reproduction in personal audio systems using the least-squares approach with acoustic contrast control constraint.

    PubMed

    Cai, Yefeng; Wu, Ming; Yang, Jun

    2014-02-01

    This paper describes a method for focusing the reproduced sound in the bright zone without disturbing other people in the dark zone in personal audio systems. The proposed method combines the least-squares and acoustic contrast criteria. A constrained parameter is introduced to tune the balance between two performance indices, namely, the acoustic contrast and the spatial average error. An efficient implementation of this method using convex optimization is presented. Offline simulations and real-time experiments using a linear loudspeaker array are conducted to evaluate the performance of the presented method. Results show that compared with the traditional acoustic contrast control method, the proposed method can improve the flatness of response in the bright zone by sacrificing the level of acoustic contrast.
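
The combined criterion described above can be sketched numerically: least-squares matching of a target pressure in the bright zone plus a weighted penalty on dark-zone energy, with one parameter trading spatial-average error against acoustic contrast. This is a hedged stand-in, not the paper's implementation: the transfer matrices below are random placeholders for measured loudspeaker-to-microphone responses.

```python
import numpy as np

rng = np.random.default_rng(1)
L_SPK, M_B, M_D = 8, 6, 6                # loudspeakers, bright mics, dark mics
Gb = rng.standard_normal((M_B, L_SPK))   # plant: sources -> bright-zone mics
Gd = rng.standard_normal((M_D, L_SPK))   # plant: sources -> dark-zone mics
p = np.ones(M_B)                         # desired flat bright-zone pressure

def solve(lam):
    """Minimise ||Gb q - p||^2 + lam * ||Gd q||^2 via the normal equations."""
    return np.linalg.solve(Gb.T @ Gb + lam * Gd.T @ Gd, Gb.T @ p)

def bright_error(q):                     # spatial-average matching error
    return np.linalg.norm(Gb @ q - p)

def dark_energy(q):                      # energy leaking into the dark zone
    return np.linalg.norm(Gd @ q)

q_soft, q_hard = solve(0.01), solve(10.0)
# A heavier dark-zone penalty lowers leakage but degrades bright-zone accuracy:
print(dark_energy(q_soft), dark_energy(q_hard))
print(bright_error(q_soft), bright_error(q_hard))
```

Sweeping the penalty weight traces out exactly the trade-off the abstract describes: response flatness in the bright zone is bought by giving up some level of acoustic contrast.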

  20. Visual communication and the content and style of conversation.

    PubMed

    Rutter, D R; Stephenson, G M; Dewey, M E

    1981-02-01

    Previous research suggests that visual communication plays a number of important roles in social interaction. In particular, it appears to influence the content of what people say in discussions, the style of their speech, and the outcomes they reach. However, the findings are based exclusively on comparisons between face-to-face conversations and audio conversations, in which subjects sit in separate rooms and speak over a microphone-headphone intercom which precludes visual communication. Interpretation is difficult, because visual communication is confounded with physical presence, which itself makes available certain cues denied to audio subjects. The purpose of this paper is to report two experiments in which the variables were separated and content and style were re-examined. The first made use of blind subjects, and again compared the face-to-face and audio conditions. The second returned to sighted subjects, and examined four experimental conditions: face-to-face; audio; a curtain condition in which subjects sat in the same room but without visual communication; and a video condition in which they sat in separate rooms and communicated over a television link. Neither visual communication nor physical presence proved to be the critical variable. Instead, the two sources of cues combined, such that content and style were influenced by the aggregate of available cues. The more cueless the settings, the more task-oriented, depersonalized and unspontaneous the conversation. The findings also suggested that the primary effect of cuelessness is to influence verbal content, and that its influence on both style and outcome occurs indirectly, through the mediation of content.

  1. Multi-modal gesture recognition using integrated model of motion, audio and video

    NASA Astrophysics Data System (ADS)

    Goutsu, Yusuke; Kobayashi, Takaki; Obara, Junya; Kusajima, Ikuo; Takeichi, Kazunari; Takano, Wataru; Nakamura, Yoshihiko

    2015-07-01

    Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation and sign language. With increasing motion sensor development, multiple data sources have become available, which leads to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. In order to solve this problem, a novel approach which integrates motion, audio and video models is proposed by using dataset captured by Kinect. The proposed system can recognize observed gestures by using three models. Recognition results of three models are integrated by using the proposed framework and the output becomes the final result. The motion and audio models are learned by using Hidden Markov Model. Random Forest which is the video classifier is used to learn the video model. In the experiments to test the performances of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All the experiments are conducted on dataset provided by the competition organizer of MMGRC, which is a workshop for Multi-Modal Gesture Recognition Challenge. The comparison results show that the multi-modal model composed of three models scores the highest recognition rate. This improvement of recognition accuracy means that the complementary relationship among three models improves the accuracy of gesture recognition. The proposed system provides the application technology to understand human actions of daily life more precisely.
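
The integration step above combines the outputs of three unimodal recognizers into one decision. A schematic late-fusion sketch, assuming each model emits class posteriors (the probabilities below are made-up placeholders, not outputs of the paper's HMM or Random Forest models):

```python
import numpy as np

def fuse(posteriors, rule="product"):
    """Combine per-modality posterior rows into one fused class distribution."""
    P = np.asarray(posteriors, dtype=float)
    fused = P.prod(axis=0) if rule == "product" else P.mean(axis=0)
    return fused / fused.sum()

# Hypothetical posteriors over three gesture classes from each modality.
motion = [0.6, 0.3, 0.1]   # e.g. P(class | motion features)
audio  = [0.5, 0.4, 0.1]
video  = [0.2, 0.7, 0.1]

fused = fuse([motion, audio, video])
print(int(np.argmax(fused)))   # 1: class 1 wins once the video evidence is multiplied in
```

Note that motion and audio alone would favor class 0; the video model's strong evidence flips the fused decision, which is the complementary behavior the abstract credits for the accuracy gain.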

  2. The Memory Jog Service

    NASA Astrophysics Data System (ADS)

    Dimakis, Nikolaos; Soldatos, John; Polymenakos, Lazaros; Sturm, Janienke; Neumann, Joachim; Casas, Josep R.

    The CHIL Memory Jog service focuses on facilitating the collaboration of participants in meetings, lectures, presentations, and other human interactive events occurring in indoor CHIL spaces. It exploits the whole set of the perceptual components that have been developed by the CHIL Consortium partners (e.g., person tracking, face identification, audio source localization, etc.) along with a wide range of actuating devices such as projectors, displays, targeted audio devices, speakers, etc. The underlying set of perceptual components provides a constant flow of elementary contextual information, such as “person at location x0,y0”, “speech at location x0,y0”, information that alone is not of significant use. However, the CHIL Memory Jog service is accompanied by powerful situation identification techniques that fuse all the incoming information and create complex states that drive the actuating logic.

  3. Virtual classroom

    NASA Astrophysics Data System (ADS)

    Carlowicz, Michael

    After four decades of perfecting techniques for communication with spacecraft on the way to other worlds, space scientists are now working on new ways to reach students in this one. In a partnership between NASA and the University of North Dakota (UND), scientists and engineers from both institutions will soon lead an experiment in Internet learning. Starting January 22, UND will offer a three-month computerized course in telerobotics. Using RealAudio and CU-SeeMe channels of the Internet to allow real-time transmission of video and audio, instructors will teach college- and graduate-level students the fundamentals of the remote operation and control of a robot.

  4. Secure videoconferencing equipment switching system and method

    DOEpatents

    Hansen, Michael E [Livermore, CA

    2009-01-13

    A switching system and method are provided to facilitate use of videoconference facilities over a plurality of security levels. The system includes a switch coupled to a plurality of codecs and communication networks. Audio/Visual peripheral components are connected to the switch. The switch couples control and data signals between the Audio/Visual peripheral components and one but not both of the plurality of codecs. The switch additionally couples communication networks of the appropriate security level to each of the codecs. In this manner, a videoconferencing facility is provided for use on both secure and non-secure networks.

  5. Constant-current control method of multi-function electromagnetic transmitter.

    PubMed

    Xue, Kaichang; Zhou, Fengdao; Wang, Shuang; Lin, Jun

    2015-02-01

    Based on the requirements of controlled source audio-frequency magnetotelluric, DC resistivity, and induced polarization, a constant-current control method is proposed. Using the required current waveforms in prospecting as a standard, the causes of current waveform distortion and current waveform distortion's effects on prospecting are analyzed. A cascaded topology is adopted to achieve 40 kW constant-current transmitter. The responsive speed and precision are analyzed. According to the power circuit of the transmitting system, the circuit structure of the pulse width modulation (PWM) constant-current controller is designed. After establishing the power circuit model of the transmitting system and the PWM constant-current controller model, analyzing the influence of ripple current, and designing an open-loop transfer function according to the amplitude-frequency characteristic curves, the parameters of the PWM constant-current controller are determined. The open-loop transfer function indicates that the loop gain is no less than 28 dB below 160 Hz, which assures the responsive speed of the transmitting system; the phase margin is 45°, which assures the stabilization of the transmitting system. Experimental results verify that the proposed constant-current control method can keep the control error below 4% and can effectively suppress load change caused by the capacitance of earth load.

  6. Constant-current control method of multi-function electromagnetic transmitter

    NASA Astrophysics Data System (ADS)

    Xue, Kaichang; Zhou, Fengdao; Wang, Shuang; Lin, Jun

    2015-02-01

    Based on the requirements of controlled source audio-frequency magnetotelluric, DC resistivity, and induced polarization, a constant-current control method is proposed. Using the required current waveforms in prospecting as a standard, the causes of current waveform distortion and current waveform distortion's effects on prospecting are analyzed. A cascaded topology is adopted to achieve 40 kW constant-current transmitter. The responsive speed and precision are analyzed. According to the power circuit of the transmitting system, the circuit structure of the pulse width modulation (PWM) constant-current controller is designed. After establishing the power circuit model of the transmitting system and the PWM constant-current controller model, analyzing the influence of ripple current, and designing an open-loop transfer function according to the amplitude-frequency characteristic curves, the parameters of the PWM constant-current controller are determined. The open-loop transfer function indicates that the loop gain is no less than 28 dB below 160 Hz, which assures the responsive speed of the transmitting system; the phase margin is 45°, which assures the stabilization of the transmitting system. Experimental results verify that the proposed constant-current control method can keep the control error below 4% and can effectively suppress load change caused by the capacitance of earth load.
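
The two loop-shaping requirements quoted above (loop gain of at least 28 dB below 160 Hz for responsive tracking, and a 45° phase margin for stability) can be checked numerically for a candidate open-loop transfer function. The transfer function below, L(s) = K / (s(1 + s/wp)), is an illustrative stand-in, not the paper's PWM controller loop; its constants are chosen so the crossover sits at the pole, which fixes the phase margin at 45°.

```python
import numpy as np

WP = 2 * np.pi * 30e3    # illustrative pole frequency, rad/s (assumption)
K = np.sqrt(2) * WP      # gain placing crossover at WP, giving PM = 45 deg

def L(w):
    """Open-loop frequency response of the stand-in loop at w rad/s."""
    s = 1j * w
    return K / (s * (1 + s / WP))

# Requirement 1: loop gain of at least 28 dB at all frequencies up to 160 Hz.
w_low = 2 * np.pi * np.linspace(1.0, 160.0, 200)
gain_db = 20 * np.log10(np.abs(L(w_low)))
print(gain_db.min())     # worst-case low-frequency loop gain, in dB

# Requirement 2: phase margin at the gain-crossover frequency |L(jwc)| = 1.
w = np.logspace(2, 7, 200000)
wc = w[np.argmin(np.abs(np.abs(L(w)) - 1.0))]
pm = 180.0 + np.degrees(np.angle(L(wc)))
print(pm)                # close to 45 degrees
```

The same two checks, applied to the measured amplitude-frequency characteristic of the real transmitting system, are what the abstract's 28 dB and 45° figures summarize.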

  7. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap.

    PubMed

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.

  8. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap

    PubMed Central

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap. PMID:23658549

  9. 47 CFR 73.322 - FM stereophonic sound transmission standards.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... transmission, modulation of the carrier by audio components within the baseband range of 50 Hz to 15 kHz shall... the carrier by audio components within the audio baseband range of 23 kHz to 99 kHz shall not exceed... method described in (a), must limit the modulation of the carrier by audio components within the audio...

  10. 47 CFR 73.322 - FM stereophonic sound transmission standards.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... transmission, modulation of the carrier by audio components within the baseband range of 50 Hz to 15 kHz shall... the carrier by audio components within the audio baseband range of 23 kHz to 99 kHz shall not exceed... method described in (a), must limit the modulation of the carrier by audio components within the audio...

  11. 47 CFR 73.322 - FM stereophonic sound transmission standards.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... transmission, modulation of the carrier by audio components within the baseband range of 50 Hz to 15 kHz shall... the carrier by audio components within the audio baseband range of 23 kHz to 99 kHz shall not exceed... method described in (a), must limit the modulation of the carrier by audio components within the audio...

  12. 47 CFR 73.322 - FM stereophonic sound transmission standards.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... transmission, modulation of the carrier by audio components within the baseband range of 50 Hz to 15 kHz shall... the carrier by audio components within the audio baseband range of 23 kHz to 99 kHz shall not exceed... method described in (a), must limit the modulation of the carrier by audio components within the audio...

  13. Video content parsing based on combined audio and visual information

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1999-08-01

    While previous research on audiovisual data segmentation and indexing primarily focuses on the pictorial part, significant clues contained in the accompanying audio flow are often ignored. A fully functional system for video content parsing can be achieved more successfully through a proper combination of audio and visual information. By investigating the data structure of different video types, we present tools for both audio and visual content analysis and a scheme for video segmentation and annotation in this research. In the proposed system, video data are segmented into audio scenes and visual shots by detecting abrupt changes in audio and visual features, respectively. Then, the audio scene is categorized and indexed as one of the basic audio types while a visual shot is presented by keyframes and associate image features. An index table is then generated automatically for each video clip based on the integration of outputs from audio and visual analysis. It is shown that the proposed system provides satisfying video indexing results.
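
The boundary-detection step described above (declaring a scene or shot change wherever audio or visual features change abruptly) can be sketched as follows; the feature vectors and threshold are illustrative placeholders, not values from the paper.

```python
# Minimal sketch: mark a segment boundary wherever the distance between
# consecutive feature vectors exceeds a threshold (hypothetical values).

def detect_boundaries(features, threshold=0.5):
    """Return frame indices where an abrupt change (> threshold) occurs."""
    boundaries = []
    for t in range(1, len(features)):
        dist = sum((a - b) ** 2 for a, b in zip(features[t], features[t - 1])) ** 0.5
        if dist > threshold:
            boundaries.append(t)
    return boundaries

# Two stable "shots" with an abrupt change at frame 3.
frames = [(0.1, 0.1), (0.12, 0.1), (0.11, 0.09), (0.9, 0.8), (0.88, 0.82)]
cuts = detect_boundaries(frames)
```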

  14. Localization with Sparse Acoustic Sensor Network Using UAVs as Information-Seeking Data Mules

    DTIC Science & Technology

    2013-05-01

    technique to differentiate among several sources. 2.2. AoA Estimation AoA Models. The kth of N_AOA AoA sensors produces an angular measurement modeled...squares sense. θ̂ = arg min_φ Σ_{i=1}^{3} (τ̂_{i0} − e_φ^T r_i)²  (9) The minimization was done by gridding the one-dimensional angular space and finding the optimum...Latitude E5500 laptop running FreeBSD and custom Java applications to process and store the raw audio signals. Power Source: The laptop was powered for an
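
The gridded least-squares search in the quoted Eq. (9) can be sketched as follows; the sensor offsets `r_i` and the noiseless delay measurements are hypothetical illustration values.

```python
import math

# Sketch of the 1-D angular grid search: e_phi is the unit vector for
# candidate angle phi, r_i are (hypothetical) sensor offsets, and taus are
# delay measurements generated noiselessly from a known true angle.

def aoa_grid_search(taus, sensors, n_grid=3600):
    best_phi, best_cost = 0.0, float("inf")
    for k in range(n_grid):
        phi = 2 * math.pi * k / n_grid
        e = (math.cos(phi), math.sin(phi))
        cost = sum((t - (e[0] * r[0] + e[1] * r[1])) ** 2
                   for t, r in zip(taus, sensors))
        if cost < best_cost:
            best_phi, best_cost = phi, cost
    return best_phi

sensors = [(1.0, 0.0), (0.0, 1.0), (-1.0, -1.0)]     # hypothetical offsets
true_phi = math.radians(40.0)
e = (math.cos(true_phi), math.sin(true_phi))
taus = [e[0] * r[0] + e[1] * r[1] for r in sensors]  # noiseless measurements
estimate = aoa_grid_search(taus, sensors)
```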

  15. 47 CFR 301.7 - Waiver of household eligibility.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... and audio segments to evaluate the performance as perceived by a human observer. For subjective...Remote control may have dedicated keys to provide direct access to closed captioning and descriptive...

  16. The Impact of Audio Book on the Elderly Mental Health.

    PubMed

    Ameri, Fereshteh; Vazifeshenas, Naser; Haghparast, Abbas

    2017-01-01

    The growing elderly population calls on mental health professionals to take measures concerning the treatment of mental disorders in the elderly. Today, in developed countries, bibliotherapy is used to treat the most prevalent psychiatric disorders. This study therefore investigated the effects of audio books on the mental health of elderly members of the Retirement Center of Shahid Beheshti University of Medical Sciences. This experimental study was conducted on 60 elderly people who participated in 8 audio book presentation sessions, and their mental health was evaluated with the SCL-90-R mental health questionnaire. Data were analyzed using SPSS 24. The analysis revealed that the mean pretest-posttest difference in the control group was less than 5.0, so no significant change in mental health was observed there, whereas the difference in the experimental group was significant (more than 5.0). A significant improvement in mental health and its dimensions was thus observed in the elderly people who participated in the audio book sessions. The intervention was effective on the dimensions of paranoid ideation, psychosis, phobia, aggression, depression, interpersonal sensitivity, anxiety, obsessive-compulsive symptoms, and somatic complaints. Given that the population is moving toward aging, these results could be useful for policy makers and health and social planners seeking to improve the health status of the elderly.

  17. Young children's recall and reconstruction of audio and audiovisual narratives.

    PubMed

    Gibbons, J; Anderson, D R; Smith, R; Field, D E; Fischer, C

    1986-08-01

    It has been claimed that the visual component of audiovisual media dominates young children's cognitive processing. This experiment examines the effects of input modality while controlling the complexity of the visual and auditory content and while varying the comprehension task (recall vs. reconstruction). 4- and 7-year-olds were presented brief stories through either audio or audiovisual media. The audio version consisted of narrated character actions and character utterances. The narrated actions were matched to the utterances on the basis of length and propositional complexity. The audiovisual version depicted the actions visually by means of stop animation instead of by auditory narrative statements. The character utterances were the same in both versions. Audiovisual input produced superior performance on explicit information in the 4-year-olds and produced more inferences at both ages. Because performance on utterances was superior in the audiovisual condition as compared to the audio condition, there was no evidence that visual input inhibits processing of auditory information. Actions were more likely to be produced by the younger children than utterances, regardless of input medium, indicating that prior findings of visual dominance may have been due to the salience of narrative action. Reconstruction, as compared to recall, produced superior depiction of actions at both ages as well as more constrained relevant inferences and narrative conventions.

  18. The Impact of Audio Book on the Elderly Mental Health

    PubMed Central

    Ameri, Fereshteh; Vazifeshenas, Naser; Haghparast, Abbas

    2017-01-01

    Introduction: The growing elderly population calls on mental health professionals to take measures concerning the treatment of mental disorders in the elderly. Today, in developed countries, bibliotherapy is used to treat the most prevalent psychiatric disorders. This study therefore investigated the effects of audio books on the mental health of elderly members of the Retirement Center of Shahid Beheshti University of Medical Sciences. Methods: This experimental study was conducted on 60 elderly people who participated in 8 audio book presentation sessions, and their mental health was evaluated with the SCL-90-R mental health questionnaire. Data were analyzed using SPSS 24. Results: The analysis revealed that the mean pretest-posttest difference in the control group was less than 5.0, so no significant change in mental health was observed there, whereas the difference in the experimental group was significant (more than 5.0). A significant improvement in mental health and its dimensions was thus observed in the elderly people who participated in the audio book sessions. The intervention was effective on the dimensions of paranoid ideation, psychosis, phobia, aggression, depression, interpersonal sensitivity, anxiety, obsessive-compulsive symptoms, and somatic complaints. Conclusion: Given that the population is moving toward aging, these results could be useful for policy makers and health and social planners seeking to improve the health status of the elderly. PMID:29167723

  19. AUDIO-VISUAL TECHNIQUES IN LANGUAGE TEACHING.

    ERIC Educational Resources Information Center

    NEWCOMER, DONALD S.

    RECORDED LESSONS OF TWO TYPES ARE DISCUSSED, DISCS AND TAPES. TAPE LESSONS CAN BE MADE FROM OUTSIDE SOURCES SUCH AS RADIO, OR READ FROM A BOOK BY THE TEACHER. METHODS FOR MAKING SUCH LESSONS ARE DISCUSSED. 16MM TEACHING FILMS ARE DISCUSSED, AND SUGGESTIONS ARE GIVEN FOR THEIR USE. FOR EXAMPLE, THEY MAY BE RUN SILENTLY, WITH THE SOUND ADDED BY THE…

  20. Direct Measurement of the Speed of Sound Using a Microphone and a Speaker

    ERIC Educational Resources Information Center

    Gómez-Tejedor, José A.; Castro-Palacio, Juan C.; Monsoriu, Juan A.

    2014-01-01

    We present a simple and accurate experiment to obtain the speed of sound in air using a conventional speaker and a microphone connected to a computer. A free open source digital audio editor and recording computer software application allows determination of the time-of-flight of the wave for different distances, from which the speed of sound is…
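
The analysis described above reduces to a linear fit of distance against measured time-of-flight, whose slope is the speed of sound. A minimal sketch with synthetic, noise-free (distance, time) pairs generated for c = 343 m/s:

```python
# Least-squares slope of distance vs. time-of-flight gives the speed of
# sound. The data below are synthetic placeholders, not real measurements.

def fit_speed(distances, times):
    """Return the least-squares slope of distance vs. time (m/s)."""
    n = len(times)
    mt = sum(times) / n
    md = sum(distances) / n
    num = sum((t - mt) * (d - md) for t, d in zip(times, distances))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

c_true = 343.0
distances = [0.5, 1.0, 1.5, 2.0, 2.5]        # metres
times = [d / c_true for d in distances]      # ideal time-of-flight (s)
c_est = fit_speed(distances, times)
```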

  1. More than Just Playing Outside: A Self-Study on Finding My Identity as an Environmental Educator in Science Education

    ERIC Educational Resources Information Center

    Gatzke, Jenna M.; Buck, Gayle A.; Akerson, Valarie L.

    2015-01-01

    The purpose of this study was to investigate the identity conflicts I was experiencing as an environmental educator entering a doctoral program in science education. My inquiry used self-study methodology with a variety of data sources, including sixteen weeks of personal journal entries, audio-recordings of four critical friend meetings, and…

  2. Environmental impact of the MV CITA on the foreshore of Porth Hellick, Isles of Scilly.

    PubMed

    Smith, Nicola A

    2004-12-01

    The grounding of the container feeder vessel MV CITA on Newfoundland Rocks, Isles of Scilly, had an effect on the surrounding biota and benthic environment. Included in the CITA's cargo were five 40 ft containers holding pallets of polyester film used in the production of audio and visual recording tapes. The wreckage presented a minor but potentially chronic source of pollution through the delayed release of polythene film, which was left on the seabed as it was considered insufficiently valuable to warrant salvage. The polythene disintegrated and was washed upon the foreshore of Porth Hellick in minute shreds. The adjacent foreshore and two control sites within the islands were analysed using a 5-strand line and vegetation survey with 10 random quadrats within each line to determine the environmental impact of the polythene.

  3. Predicting the Overall Spatial Quality of Automotive Audio Systems

    NASA Astrophysics Data System (ADS)

    Koya, Daisuke

    The spatial quality of automotive audio systems is often compromised due to their unideal listening environments. Automotive audio systems need to be developed quickly due to industry demands. A suitable perceptual model could evaluate the spatial quality of automotive audio systems with similar reliability to formal listening tests but take less time. Such a model is developed in this research project by adapting an existing model of spatial quality for automotive audio use. The requirements for the adaptation were investigated in a literature review. A perceptual model called QESTRAL was reviewed, which predicts the overall spatial quality of domestic multichannel audio systems. It was determined that automotive audio systems are likely to be impaired in terms of the spatial attributes that were not considered in developing the QESTRAL model, but metrics are available that might predict these attributes. To establish whether the QESTRAL model in its current form can accurately predict the overall spatial quality of automotive audio systems, MUSHRA listening tests using headphone auralisation with head tracking were conducted to collect results to be compared against predictions by the model. Based on guideline criteria, the model in its current form could not accurately predict the overall spatial quality of automotive audio systems. To improve prediction performance, the QESTRAL model was recalibrated and modified using existing metrics of the model, those that were proposed from the literature review, and newly developed metrics. The most important metrics for predicting the overall spatial quality of automotive audio systems included those that were interaural cross-correlation (IACC) based, relate to localisation of the frontal audio scene, and account for the perceived scene width in front of the listener. Modifying the model for automotive audio systems did not invalidate its use for domestic audio systems. 
The resulting model predicts the overall spatial quality of 2- and 5-channel automotive audio systems with a cross-validation performance of R² = 0.85 and a root-mean-square error (RMSE) of 11.03%.
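
The two figures of merit quoted above, R² and RMSE between listening-test scores and model predictions, can be computed as follows; the score/prediction pairs are invented for illustration only.

```python
# Sketch of the reported metrics. The "scores" and "preds" values are
# hypothetical, not data from the QESTRAL study.

def r_squared(y, yhat):
    my = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - my) ** 2 for a in y)
    return 1 - ss_res / ss_tot

def rmse(y, yhat):
    return (sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)) ** 0.5

scores = [20.0, 35.0, 50.0, 65.0, 80.0]   # hypothetical listening-test scores
preds  = [22.0, 33.0, 52.0, 63.0, 81.0]   # hypothetical model predictions
r2, err = r_squared(scores, preds), rmse(scores, preds)
```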

  4. Exploring the Implementation of Steganography Protocols on Quantum Audio Signals

    NASA Astrophysics Data System (ADS)

    Chen, Kehan; Yan, Fei; Iliyasu, Abdullah M.; Zhao, Jianping

    2018-02-01

    Two quantum audio steganography (QAS) protocols are proposed, each of which manipulates or modifies the least significant qubit (LSQb) of the host quantum audio signal that is encoded as an FRQA (flexible representation of quantum audio) audio content. The first protocol (i.e. the conventional LSQb QAS protocol or simply the cLSQ stego protocol) is built on the exchanges between qubits encoding the quantum audio message and the LSQb of the amplitude information in the host quantum audio samples. In the second protocol, the embedding procedure implants information from a quantum audio message deep into the constraint-imposed most significant qubit (MSQb) of the host quantum audio samples; we refer to it as the pseudo-MSQb QAS protocol or simply the pMSQ stego protocol. The cLSQ stego protocol is designed to guarantee high imperceptibility between the host quantum audio and its stego version, whereas the pMSQ stego protocol ensures that the resulting stego quantum audio signal is better immune to illicit tampering and copyright violations (a.k.a. robustness). Built on the circuit model of quantum computation, the circuit networks to execute the embedding and extraction algorithms of both QAS protocols are determined, and simulation-based experiments are conducted to demonstrate their implementation. The outcomes attest that both protocols offer promising trade-offs in terms of imperceptibility and robustness.
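
The cLSQ embedding has a well-known classical analogue: overwrite the least significant bit of each audio sample with one message bit. The sketch below is that classical analogue, not an implementation of the quantum circuit construction; the sample values are hypothetical.

```python
# Classical LSB audio steganography (analogue of the LSQb idea).
# Host samples and message bits below are made up for illustration.

def embed_lsb(samples, bits):
    """Hide one message bit in the LSB of each sample; pass the rest through."""
    return [(s & ~1) | b for s, b in zip(samples, bits)] + samples[len(bits):]

def extract_lsb(samples, n_bits):
    """Read the message back out of the first n_bits sample LSBs."""
    return [s & 1 for s in samples[:n_bits]]

host = [200, 13, 97, 254, 31, 64]   # hypothetical 8-bit amplitudes
message = [1, 0, 1, 1]
stego = embed_lsb(host, message)
recovered = extract_lsb(stego, len(message))
```

Each sample changes by at most 1, which is the classical counterpart of the imperceptibility property claimed for the cLSQ protocol.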

  5. Detection of Mild Cognitive Impairment and early stage dementia with an audio-recorded cognitive scale

    PubMed Central

    Sewell, Margaret C.; Luo, Xiaodong; Neugroschl, Judith; Sano, Mary

    2014-01-01

    BACKGROUND Physicians often miss a diagnosis of Mild Cognitive Impairment (MCI) or early dementia, and screening measures can be insensitive to very mild impairments. Other cognitive assessments may take too much time or be frustrating to seniors. This study examined the ability of an audio-recorded scale, developed in Australia, to detect MCI or mild Alzheimer’s disease and compared cognitive domain-specific performance on the audio-recorded scale to an in-person battery and common cognitive screens. METHOD Seventy-six subjects from the Mount Sinai Alzheimer’s Disease Research Center were recruited. Subjects were 75 years or older, with a clinical diagnosis of AD or MCI (n=51) or normal control (n=25). Participants underwent in-person neuropsychological testing followed by testing with the Audio-recorded Cognitive Screen (ARCS). RESULTS The ARCS provided better discrimination between normal and impaired elders than either the Mini-Mental Status Exam (MMSE) or the clock drawing test. The in-person battery and analogous ARCS variables were significantly correlated, most in the .4 to .7 range, including verbal memory, executive function/attention, naming, and verbal fluency. The area under the curve generated from ROC curves indicated high and equivalent discrimination for the ARCS and the in-person battery (0.972 vs. 0.988; p=0.23). CONCLUSION The ARCS demonstrated better discrimination between normal controls and those with mild deficits than typical screening measures. Performance on cognitive domains within the ARCS was well correlated with the in-person battery. Even very elderly subjects completed the ARCS despite mild difficulty hearing the instructions, indicating that it may be a useful measure in primary care settings. PMID:23635663
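
The discrimination statistic reported above, the area under the ROC curve, has a simple rank-based formulation: it equals the probability that a randomly chosen control outscores a randomly chosen impaired subject (ties counted half). The test scores below are hypothetical.

```python
# Mann-Whitney formulation of AUC; the score lists are made-up examples,
# not data from the ARCS study.

def auc(impaired, controls):
    """P(control score > impaired score), counting ties as 0.5."""
    wins = sum(1.0 if c > i else 0.5 if c == i else 0.0
               for i in impaired for c in controls)
    return wins / (len(impaired) * len(controls))

impaired_scores = [10, 12, 15, 11, 14]   # hypothetical screening scores
control_scores  = [18, 20, 15, 19, 22]
discrimination = auc(impaired_scores, control_scores)
```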

  6. Multisensory and modality specific processing of visual speech in different regions of the premotor cortex

    PubMed Central

    Callan, Daniel E.; Jones, Jeffery A.; Callan, Akiko

    2014-01-01

    Behavioral and neuroimaging studies have demonstrated that brain regions involved with speech production also support speech perception, especially under degraded conditions. The premotor cortex (PMC) has been shown to be active during both observation and execution of action (“Mirror System” properties), and may facilitate speech perception by mapping unimodal and multimodal sensory features onto articulatory speech gestures. For this functional magnetic resonance imaging (fMRI) study, participants identified vowels produced by a speaker in audio-visual (saw the speaker's articulating face and heard her voice), visual only (only saw the speaker's articulating face), and audio only (only heard the speaker's voice) conditions with varying audio signal-to-noise ratios in order to determine the regions of the PMC involved with multisensory and modality specific processing of visual speech gestures. The task was designed so that identification could be made with a high level of accuracy from visual only stimuli to control for task difficulty and differences in intelligibility. The results of the functional magnetic resonance imaging (fMRI) analysis for visual only and audio-visual conditions showed overlapping activity in inferior frontal gyrus and PMC. The left ventral inferior premotor cortex (PMvi) showed properties of multimodal (audio-visual) enhancement with a degraded auditory signal. The left inferior parietal lobule and right cerebellum also showed these properties. The left ventral superior and dorsal premotor cortex (PMvs/PMd) did not show this multisensory enhancement effect, but there was greater activity for the visual only over audio-visual conditions in these areas. 
The results suggest that the inferior regions of the ventral premotor cortex are involved with integrating multisensory information, whereas more superior and dorsal regions of the PMC are involved with mapping unimodal (in this case visual) sensory features of the speech signal onto articulatory speech gestures. PMID:24860526

  7. Revealing the ecological content of long-duration audio-recordings of the environment through clustering and visualisation.

    PubMed

    Phillips, Yvonne F; Towsey, Michael; Roe, Paul

    2018-01-01

    Audio recordings of the environment are an increasingly important technique to monitor biodiversity and ecosystem function. While the acquisition of long-duration recordings is becoming easier and cheaper, the analysis and interpretation of that audio remains a significant research area. The issue addressed in this paper is the automated reduction of environmental audio data to facilitate ecological investigations. We describe a method that first reduces environmental audio to vectors of acoustic indices, which are then clustered. This can reduce the audio data by six to eight orders of magnitude yet retain useful ecological information. We describe techniques to visualise sequences of cluster occurrence (using, for example, diel plots and rose plots) that assist interpretation of environmental audio. Colour coding acoustic clusters allows months and years of audio data to be visualised in a single image. These techniques are useful in identifying and indexing the contents of long-duration audio recordings. They could also play an important role in monitoring long-term changes in species abundance brought about by habitat degradation and/or restoration.
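
The reduce-then-cluster pipeline described above can be sketched with a toy k-means over invented acoustic-index vectors; each "minute" of audio becomes a small vector, and the cluster label is what would be colour-coded in a long-duration visualisation. All values below are hypothetical.

```python
import random

# Toy sketch of the pipeline: index vectors per recording minute, clustered
# with a tiny k-means. Vectors are invented (complexity, entropy) pairs.

def kmeans(vectors, k=2, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    for _ in range(iters):
        labels = [min(range(k),
                      key=lambda c: sum((v - u) ** 2
                                        for v, u in zip(vec, centroids[c])))
                  for vec in vectors]
        for c in range(k):
            members = [v for v, l in zip(vectors, labels) if l == c]
            if members:
                centroids[c] = tuple(sum(x) / len(members) for x in zip(*members))
    return labels

minutes = [(0.9, 0.8), (0.85, 0.82), (0.88, 0.79),   # dawn-chorus-like minutes
           (0.1, 0.2), (0.12, 0.18), (0.09, 0.22)]   # quiet-night-like minutes
labels = kmeans(minutes, k=2)
```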

  8. Revealing the ecological content of long-duration audio-recordings of the environment through clustering and visualisation

    PubMed Central

    Phillips, Yvonne F; Towsey, Michael; Roe, Paul

    2018-01-01

    Audio recordings of the environment are an increasingly important technique to monitor biodiversity and ecosystem function. While the acquisition of long-duration recordings is becoming easier and cheaper, the analysis and interpretation of that audio remains a significant research area. The issue addressed in this paper is the automated reduction of environmental audio data to facilitate ecological investigations. We describe a method that first reduces environmental audio to vectors of acoustic indices, which are then clustered. This can reduce the audio data by six to eight orders of magnitude yet retain useful ecological information. We describe techniques to visualise sequences of cluster occurrence (using, for example, diel plots and rose plots) that assist interpretation of environmental audio. Colour coding acoustic clusters allows months and years of audio data to be visualised in a single image. These techniques are useful in identifying and indexing the contents of long-duration audio recordings. They could also play an important role in monitoring long-term changes in species abundance brought about by habitat degradation and/or restoration. PMID:29494629

  9. A preliminary categorization of end-of-life electrical and electronic equipment as secondary metal resources.

    PubMed

    Oguchi, Masahiro; Murakami, Shinsuke; Sakanakura, Hirofumi; Kida, Akiko; Kameya, Takashi

    2011-01-01

    End-of-life electrical and electronic equipment (EEE) has recently received attention as a secondary source of metals. This study examined characteristics of end-of-life EEE as secondary metal resources to consider efficient collection and metal recovery systems according to the specific metals and types of EEE. We constructed an analogy between natural resource development and metal recovery from end-of-life EEE and found that metal content and total annual amount of metal contained in each type of end-of-life EEE should be considered in secondary resource development, as well as the collectability of the end-of-life products. We then categorized 21 EEE types into five groups and discussed their potential as secondary metal resources. Refrigerators, washing machines, air conditioners, and CRT TVs were evaluated as the most important sources of common metals, and personal computers, mobile phones, and video games were evaluated as the most important sources of precious metals. Several types of small digital equipment were also identified as important sources of precious metals; however, mid-size information and communication technology (ICT) equipment (e.g., printers and fax machines) and audio/video equipment were shown to be more important as a source of a variety of less common metals. The physical collectability of each type of EEE was roughly characterized by unit size and number of end-of-life products generated annually. Current collection systems in Japan were examined and potentially appropriate collection methods were suggested for equipment types that currently have no specific collection systems in Japan, particularly for video games, notebook computers, and mid-size ICT and audio/video equipment. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Effects of a theory-based audio HIV/AIDS intervention for illiterate rural females in Amhara, Ethiopia.

    PubMed

    Bogale, Gebeyehu W; Boer, Henk; Seydel, Erwin R

    2011-02-01

    In Ethiopia, the level of illiteracy in rural areas is very high. In this study, we investigated the effects of an audio HIV/AIDS prevention intervention targeted at rural illiterate females. In the intervention we used socially oriented presentation formats, such as discussion between similar females and role-play. In a pretest-posttest experimental study with an intervention group (n = 210) and control group (n = 210), we investigated the effects on HIV/AIDS knowledge and social cognitions. The intervention led to significant and relevant increases in HIV/AIDS knowledge, self-efficacy, perceived vulnerability to HIV/AIDS infection, response efficacy of condoms, and condom use intention. In the intervention group, self-efficacy at posttest was the main determinant of condom use intention, along with a significant contribution of perceived vulnerability. We conclude that audio HIV/AIDS prevention interventions can play an important role in empowering rural illiterate females in the prevention of HIV/AIDS.

  11. Meet David, Our Teacher's Helper.

    ERIC Educational Resources Information Center

    Newell, William; And Others

    1984-01-01

    DAVID, Dynamic Audio Video Instructional Device, is composed of a conventional videotape recorder, a microcomputer, and a video controller, and has been successfully used for speech reading and sign language instruction with deaf students. (CL)

  12. Development of the ISS EMU Dashboard Software

    NASA Technical Reports Server (NTRS)

    Bernard, Craig; Hill, Terry R.

    2011-01-01

    The EMU (Extra-Vehicular Mobility Unit) Dashboard was developed at NASA's Johnson Space Center to aid in real-time mission support for the ISS (International Space Station) and Shuttle EMU space suit by time-synchronizing down-linked video, space suit data, and audio from the mission control audio loops. Once the input streams are synchronized and recorded, the data can be replayed almost instantly; this capability has proven invaluable in understanding in-flight hardware anomalies and in playing back information conveyed by the crew to mission control and the back-room support. This paper walks through the development, from an engineer's idea brought to life by an intern to real-time mission support, and describes how this tool is evolving today and its challenges in supporting EVAs (Extra-Vehicular Activities) and human exploration in the 21st century.

  13. Holographic disk with high data transfer rate: its application to an audio response memory.

    PubMed

    Kubota, K; Ono, Y; Kondo, M; Sugama, S; Nishida, N; Sakaguchi, M

    1980-03-15

    This paper describes a memory that achieves a high data transfer rate using the holographic parallel-processing function, and its application to an audio response system that supplies many audio messages to many terminals simultaneously. Digitized audio messages are recorded as tiny 1-D Fourier transform holograms on a holographic disk. A hologram recorder and a hologram reader were constructed to test and demonstrate the feasibility of the holographic audio response memory. Experimental results indicate the potential of an audio response system with a 2000-word vocabulary and a 250-Mbit/sec bit transfer rate.

  14. Electrophysiological evidence for Audio-visuo-lingual speech integration.

    PubMed

    Treille, Avril; Vilain, Coriandre; Schwartz, Jean-Luc; Hueber, Thomas; Sato, Marc

    2018-01-31

    Recent neurophysiological studies demonstrate that audio-visual speech integration partly operates through temporal expectations and speech-specific predictions. From these results, one common view is that the binding of auditory and visual, lipread, speech cues relies on their joint probability and prior associative audio-visual experience. The present EEG study examined whether visual tongue movements integrate with relevant speech sounds, despite little associative audio-visual experience between the two modalities. A second objective was to determine possible similarities and differences of audio-visual speech integration between unusual audio-visuo-lingual and classical audio-visuo-labial modalities. To this aim, participants were presented with auditory, visual, and audio-visual isolated syllables, with the visual presentation related to either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, with lingual and facial movements previously recorded by an ultrasound imaging system and a video camera. In line with previous EEG studies, our results revealed an amplitude decrease and a latency facilitation of P2 auditory evoked potentials in both audio-visual-lingual and audio-visuo-labial conditions compared to the sum of unimodal conditions. These results argue against the view that auditory and visual speech cues solely integrate based on prior associative audio-visual perceptual experience. Rather, they suggest that dynamic and phonetic informational cues are sharable across sensory modalities, possibly through a cross-modal transfer of implicit articulatory motor knowledge. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. 78 FR 38093 - Seventh Meeting: RTCA Special Committee 226, Audio Systems and Equipment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-25

    ... Committee 226, Audio Systems and Equipment AGENCY: Federal Aviation Administration (FAA), U.S. Department of Transportation (DOT). ACTION: Meeting Notice of RTCA Special Committee 226, Audio Systems and Equipment. SUMMARY... 226, Audio Systems and Equipment

  16. Diagnostic accuracy of sleep bruxism scoring in absence of audio-video recording: a pilot study.

    PubMed

    Carra, Maria Clotilde; Huynh, Nelly; Lavigne, Gilles J

    2015-03-01

    Based on the most recent polysomnographic (PSG) research diagnostic criteria, sleep bruxism is diagnosed when more than two rhythmic masticatory muscle activity (RMMA) episodes per hour of sleep are scored on the masseter and/or temporalis muscles. These criteria have not yet been validated for portable PSG systems. This pilot study aimed to assess the diagnostic accuracy of scoring sleep bruxism in the absence of audio-video recordings. Ten subjects (mean age 24.7 ± 2.2 years) with a clinical diagnosis of sleep bruxism spent one night in the sleep laboratory. PSG was performed with a portable system (type 2) while audio-video was recorded. Sleep studies were scored by the same examiner three times: (1) without, (2) with, and (3) without audio-video, in order to test the intra-scoring and intra-examiner reliability of RMMA scoring. The RMMA event-by-event concordance rate between scoring without audio-video and with audio-video was 68.3%. Overall, the RMMA index was overestimated by 23.8% without audio-video. However, the intra-class correlation coefficient (ICC) between scorings with and without audio-video was good (ICC = 0.91; p < 0.001), and the intra-examiner reliability was high (ICC = 0.97; p < 0.001). The clinical diagnosis of sleep bruxism was confirmed in 8/10 subjects based on scoring without audio-video and in 6/10 subjects with audio-video. Despite the absence of audio-video recording, the diagnostic accuracy of assessing RMMA with portable PSG systems appeared to remain good, supporting their use for both research and clinical purposes. However, the risk of moderate overestimation in the absence of audio-video must be taken into account.
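
    The agreement statistics above (ICC between scorings with and without audio-video) can be reproduced from a subjects-by-scorings rating matrix. A minimal sketch of a two-way ICC for absolute agreement, ICC(2,1) — the exact ICC variant used by the study is an assumption, and the `ratings` values are illustrative, not study data:

```python
import numpy as np

def icc_2_1(y):
    """Two-way random-effects ICC for absolute agreement, ICC(2,1).
    y: (n subjects x k raters/scorings) matrix of ratings."""
    y = np.asarray(y, dtype=float)
    n, k = y.shape
    grand = y.mean()
    row = y.mean(axis=1)                             # per-subject means
    col = y.mean(axis=0)                             # per-scoring means
    msr = k * np.sum((row - grand) ** 2) / (n - 1)   # between-subject mean square
    msc = n * np.sum((col - grand) ** 2) / (k - 1)   # between-scoring mean square
    sse = np.sum((y - row[:, None] - col[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                  # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two scorings of an RMMA index for five subjects (illustrative numbers only)
ratings = [[2.1, 2.4], [3.0, 3.1], [1.2, 1.5], [4.0, 4.6], [2.8, 3.2]]
print(round(icc_2_1(ratings), 2))
```

    Note that a systematic offset between the two scorings (such as the overestimation without audio-video reported above) lowers this absolute-agreement ICC even when subjects are ranked identically.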

  17. Effect of audio instruction on tracking errors using a four-dimensional image-guided radiotherapy system.

    PubMed

    Nakamura, Mitsuhiro; Sawada, Akira; Mukumoto, Nobutaka; Takahashi, Kunio; Mizowaki, Takashi; Kokubo, Masaki; Hiraoka, Masahiro

    2013-09-06

    The Vero4DRT (MHI-TM2000) is capable of performing X-ray image-based tracking (X-ray Tracking) that directly tracks the target or fiducial markers under continuous kV X-ray imaging. Previously, we have shown that irregular respiratory patterns increased X-ray Tracking errors. Thus, we assumed that audio instruction, which generally improves the periodicity of respiration, should reduce tracking errors. The purpose of this study was to assess the effect of audio instruction on X-ray Tracking errors. Anterior-posterior abdominal skin-surface displacements obtained from ten lung cancer patients under free breathing and simple audio instruction were used as an alternative to tumor motion in the superior-inferior direction. First, a sequential predictive model based on the Levinson-Durbin algorithm was created to estimate the future three-dimensional (3D) target position under continuous kV X-ray imaging while moving a steel ball target 9.5 mm in diameter. After creating the predictive model, the future 3D target position was sequentially calculated from the current and past 3D target positions based on the predictive model every 70 ms under continuous kV X-ray imaging. Simultaneously, the system controller of the Vero4DRT calculated the corresponding pan and tilt rotational angles of the gimbaled X-ray head, which then adjusted its orientation to the target. The calculated and current rotational angles of the gimbaled X-ray head were recorded every 5 ms. The target position measured by the laser displacement gauge was synchronously recorded every 10 ms. Total tracking system errors (ET) were compared between free breathing and audio instruction. Audio instruction significantly improved breathing regularity (p < 0.01). The mean ± standard deviation of the 95th percentile of ET (E95T) was 1.7 ± 0.5 mm (range: 1.1-2.6 mm) under free breathing (E95T,FB) and 1.9 ± 0.5 mm (range: 1.2-2.7 mm) under audio instruction (E95T,AI). E95T,AI was larger than E95T,FB for five patients; no significant difference was found between E95T,FB and E95T,AI (p = 0.21). Correlation analysis revealed that rapid respiratory velocity significantly increased E95T. Although audio instruction improved breathing regularity, it also increased the respiratory velocity, which did not necessarily reduce tracking errors.
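
    The sequential predictive model described above rests on the Levinson-Durbin recursion, which solves the autoregressive normal equations from a signal's autocorrelation. A minimal sketch of the recursion and the resulting one-step prediction — the function names and the AR(1) usage are illustrative, not the Vero4DRT implementation:

```python
import numpy as np

def levinson_durbin(r, order):
    """Fit AR(order) coefficients from an autocorrelation sequence r[0..order]
    via the Levinson-Durbin recursion. Returns (a, err), where a[0] = 1 and
    the one-step prediction is x_hat[n] = -sum(a[1:] * recent past samples)."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        # reflection coefficient from the current prediction residual
        k = -(r[m] + np.dot(a[1:m], r[m - 1:0:-1])) / err
        a[1:m] = a[1:m] + k * a[m - 1:0:-1]
        a[m] = k
        err *= 1.0 - k * k
    return a, err

def predict_next(x, a):
    """One-step-ahead linear prediction from the most recent samples of x."""
    order = len(a) - 1
    return -np.dot(a[1:], x[: -order - 1: -1])  # x reversed over the last `order` lags
```

    For example, with a normalized autocorrelation r = [1.0, 0.5] an AR(1) model gives a = [1, -0.5], so the next sample is predicted as half the current one.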

  18. Effect of audio instruction on tracking errors using a four‐dimensional image‐guided radiotherapy system

    PubMed Central

    Sawada, Akira; Mukumoto, Nobutaka; Takahashi, Kunio; Mizowaki, Takashi; Kokubo, Masaki; Hiraoka, Masahiro

    2013-01-01

    The Vero4DRT (MHI‐TM2000) is capable of performing X‐ray image‐based tracking (X‐ray Tracking) that directly tracks the target or fiducial markers under continuous kV X‐ray imaging. Previously, we have shown that irregular respiratory patterns increased X‐ray Tracking errors. Thus, we assumed that audio instruction, which generally improves the periodicity of respiration, should reduce tracking errors. The purpose of this study was to assess the effect of audio instruction on X‐ray Tracking errors. Anterior‐posterior abdominal skin‐surface displacements obtained from ten lung cancer patients under free breathing and simple audio instruction were used as an alternative to tumor motion in the superior‐inferior direction. First, a sequential predictive model based on the Levinson‐Durbin algorithm was created to estimate the future three‐dimensional (3D) target position under continuous kV X‐ray imaging while moving a steel ball target of 9.5 mm in diameter. After creating the predictive model, the future 3D target position was sequentially calculated from the current and past 3D target positions based on the predictive model every 70 ms under continuous kV X‐ray imaging. Simultaneously, the system controller of the Vero4DRT calculated the corresponding pan and tilt rotational angles of the gimbaled X‐ray head, which then adjusted its orientation to the target. The calculated and current rotational angles of the gimbaled X‐ray head were recorded every 5 ms. The target position measured by the laser displacement gauge was synchronously recorded every 10 msec. Total tracking system errors (ET) were compared between free breathing and audio instruction. Audio instruction significantly improved breathing regularity (p < 0.01). The mean ± standard deviation of the 95th percentile of ET (E95T) was 1.7 ± 0.5 mm (range: 1.1–2.6 mm) under free breathing (E95T,FB) and 1.9 ± 0.5 mm (range: 1.2–2.7 mm) under audio instruction (E95T,AI). 
E95T,AI was larger than E95T,FB for five patients; no significant difference was found between E95T,FB and E95T,AI (p = 0.21). Correlation analysis revealed that rapid respiratory velocity significantly increased E95T. Although audio instruction improved breathing regularity, it also increased the respiratory velocity, which did not necessarily reduce tracking errors. PACS number: 87.55.ne, 87.57.N‐, 87.59.C‐, PMID:24036880

  19. Robustness of a compact endfire personal audio system against scattering effects (L).

    PubMed

    Tu, Zhen; Lu, Jing; Qiu, Xiaojun

    2016-10-01

    Compact loudspeaker arrays have wide potential applications as portable personal audio systems that can project sound energy to specified regions. It is meaningful to investigate the scattering effects on the array performance since the scattering of the users' heads is inevitable in practice. A five-channel compact endfire array is established and the regularized acoustic contrast control method is evaluated for the scenarios of one moving listener and one listener fixed in the bright zone while another listener moves along the evaluation region. Both simulations and experiments verify that the scattering has limited influence on the directivity of the endfire array.
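
    The regularized acoustic contrast control method evaluated above chooses loudspeaker weights that maximize the ratio of bright-zone to dark-zone acoustic energy, which reduces to a regularized generalized eigenvalue problem. A minimal sketch, assuming the transfer matrices `Gb` and `Gd` are known (the names and the regularization value are illustrative, not the paper's five-channel setup):

```python
import numpy as np

def acc_weights(Gb, Gd, reg=1e-3):
    """Loudspeaker weights maximizing bright-zone over dark-zone energy
    (regularized acoustic contrast control).
    Gb: (bright-zone mics x speakers) transfer matrix at one frequency;
    Gd: (dark-zone mics x speakers) transfer matrix."""
    L = Gb.shape[1]
    A = Gb.conj().T @ Gb                      # bright-zone energy matrix
    B = Gd.conj().T @ Gd + reg * np.eye(L)    # dark-zone energy + regularization
    # dominant eigenvector of B^-1 A solves the generalized eigenproblem
    vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
    q = vecs[:, np.argmax(vals.real)]
    return q / np.linalg.norm(q)
```

    In the degenerate case where one speaker only reaches the bright zone and another only the dark zone, the solution puts all weight on the first speaker, as expected.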

  20. Informed spectral analysis: audio signal parameter estimation using side information

    NASA Astrophysics Data System (ADS)

    Fourer, Dominique; Marchand, Sylvain

    2013-12-01

    Parametric models are of great interest for representing and manipulating sounds. However, the quality of the resulting signals depends on the precision of the parameters. When the signals are available, these parameters can be estimated, but the presence of noise decreases the resulting precision of the estimation. Furthermore, the Cramér-Rao bound shows the minimal error reachable with the best estimator, which can be insufficient for demanding applications. These limitations can be overcome by using the coding approach, which consists of directly transmitting the parameters with the best precision using the minimal bitrate. However, this approach does not take advantage of the information provided by estimation from the signal, and it may require a larger bitrate and entail a loss of compatibility with existing file formats. The purpose of this article is to propose a compromise approach, called the 'informed approach,' which combines analysis with (coded) side information in order to increase the precision of parameter estimation using a lower bitrate than pure coding approaches, the audio signal being known. Thus, the analysis problem is presented in a coder/decoder configuration where the side information is computed and inaudibly embedded into the mixture signal at the coder. At the decoder, the extra information is extracted and used to assist the analysis process. This study proposes applying this approach to audio spectral analysis using sinusoidal modeling, a well-known model with practical applications for which theoretical bounds have been calculated. This work aims to uncover new approaches for audio quality-based applications. It provides a solution for challenging problems such as active listening of music, source separation, and realistic sound transformations.
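
    Sinusoidal modeling of the kind discussed above starts from estimates of each partial's frequency, amplitude, and phase. A common non-informed frequency estimator — shown here as a sketch of the baseline analysis step, not the paper's coder/decoder scheme — picks an FFT magnitude peak and refines it by parabolic interpolation on the log-magnitude spectrum:

```python
import numpy as np

def estimate_frequency(x, fs):
    """Estimate the frequency of a dominant sinusoid in x (sampled at fs Hz):
    windowed FFT peak refined by parabolic interpolation on log magnitude."""
    n = len(x)
    mag = np.abs(np.fft.rfft(x * np.hanning(n)))
    k = int(np.argmax(mag))
    if 0 < k < len(mag) - 1:
        a, b, c = np.log(mag[k - 1]), np.log(mag[k]), np.log(mag[k + 1])
        delta = 0.5 * (a - c) / (a - 2 * b + c)   # sub-bin offset in [-0.5, 0.5]
    else:
        delta = 0.0
    return (k + delta) * fs / n
```

    With a Hann window, this brings the frequency error well below the FFT bin spacing; the informed approach in the article aims to push such estimates closer to (or beyond) the noise-limited bounds by spending a few embedded side-information bits.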

  1. 47 CFR 73.403 - Digital audio broadcasting service requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... programming stream at no direct charge to listeners. In addition, a broadcast radio station must simulcast its analog audio programming on one of its digital audio programming streams. The DAB audio programming... analog programming service currently provided to listeners. (b) Emergency information. The emergency...

  2. High-Fidelity Piezoelectric Audio Device

    NASA Technical Reports Server (NTRS)

    Woodward, Stanley E.; Fox, Robert L.; Bryant, Robert G.

    2003-01-01

    ModalMax is a very innovative means of harnessing the vibration of a piezoelectric actuator to produce an energy-efficient, low-profile device with high-bandwidth, high-fidelity audio response. The piezoelectric audio device outperforms many commercially available speakers made using speaker cones. The piezoelectric device weighs substantially less (4 g) than speaker cones that use magnets (10 g). ModalMax devices are extremely simple to fabricate: the entire audio device is made by lamination. The simplicity of the design lends itself to lower cost. The piezoelectric audio device can be used without its acoustic chambers, resulting in a very low thickness of 0.023 in. (0.58 mm). The device can be completely encapsulated, which makes it very attractive for use in wet environments; encapsulation does not significantly alter the audio response. Its small size (see Figure 1) makes it applicable to many consumer electronic products, such as pagers, portable radios, headphones, laptop computers, computer monitors, toys, and electronic games. The audio device can also be used in automobile or aircraft sound systems.

  3. Communication competence, self-care behaviors and glucose control in patients with type 2 diabetes.

    PubMed

    Parchman, Michael L; Flannagan, Dorothy; Ferrer, Robert L; Matamoras, Mike

    2009-10-01

    To examine the relationship between physician communication competence and A1c control among Hispanics and non-Hispanics seen in primary care practices. Observational. Direct observation and audio-recording of patient-physician encounters by 155 Hispanic and non-Hispanic white patients seen by 40 physicians in 20 different primary care clinics. Audio-recordings were transcribed and coded to derive an overall communication competence score for the physician. An exit survey was administered to each patient to assess self-care activities and their medical record was abstracted for the most recent glycosylated hemoglobin (A1c) level. Higher levels of communication competence were associated with lower levels of A1c for Hispanics, but not non-Hispanic white patients. Although communication competence was associated with better self-reported diet behaviors, diet was not associated with A1c control. Across all patients, higher levels of communication competence were associated with improved A1c control after controlling for age, ethnicity and diet adherence. Physician's communication competence may be more important for promoting clinical success in disadvantaged patients. Acquisition of communication competence skills may be an important component in interventions to eliminate Hispanic disparities in glucose control. Published by Elsevier Ireland Ltd.

  4. Speech to Text Translation for Malay Language

    NASA Astrophysics Data System (ADS)

    Al-khulaidi, Rami Ali; Akmeliawati, Rini

    2017-11-01

    A speech recognition system is a front-end and back-end process that receives an audio signal uttered by a speaker and converts it into a text transcription. Speech systems can be used in several fields, including therapeutic technology, education, social robotics, and computer entertainment. Control tasks, which are the purpose of the proposed system, place particular demands on speed of performance and response, since the system should integrate with other control platforms such as voice-controlled robots. This creates a need for flexible platforms that can be easily edited to match the functionality of their surroundings, unlike software such as MATLAB and Phoenix, which requires recorded audio and multiple training passes for every entry. In this paper, a speech recognition system for the Malay language is implemented using Microsoft Visual Studio C#. Ninety Malay phrases were tested by ten speakers of both genders in different contexts. The results show that the overall accuracy (calculated from the confusion matrix) is a satisfactory 92.69%.
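
    The overall accuracy figure above is obtained from a confusion matrix as the proportion of correctly recognized utterances (the diagonal) over all test utterances. A minimal sketch — the 3-phrase matrix is illustrative, not the paper's 90-phrase data:

```python
import numpy as np

def overall_accuracy(confusion):
    """Overall recognition accuracy from a confusion matrix:
    correctly classified items (diagonal) over all items."""
    cm = np.asarray(confusion, dtype=float)
    return np.trace(cm) / cm.sum()

# Toy 3-phrase confusion matrix (rows: spoken phrase, cols: recognized phrase)
cm = [[9, 1, 0],
      [0, 10, 0],
      [1, 0, 9]]
print(f"{overall_accuracy(cm):.2%}")   # 28 of 30 utterances correct
```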

  5. Cardiovascular fitness strengthening using portable device.

    PubMed

    Alqudah, Hamzah; Kai Cao; Tao Zhang; Haddad, Azzam; Su, Steven; Celler, Branko; Nguyen, Hung T

    2016-08-01

    The paper describes a reliable and valid Portable Exercise Monitoring system, developed using the TI eZ430-Chronos watch, which can control exercise intensity through audio stimulation in order to strengthen cardiovascular fitness.

  6. Design Issues for Producing Effective Multimedia Presentations.

    ERIC Educational Resources Information Center

    Mason, Lisa D.

    1997-01-01

    Discusses design issues for interactive multimedia. Notes that technical communication instructors must consider navigational aids, the degree of control a user should have, audio cues, color and typographical elements, visual elements, and copyright issues. (RS)

  7. Subjective audio quality evaluation of embedded-optimization-based distortion precompensation algorithms.

    PubMed

    Defraene, Bruno; van Waterschoot, Toon; Diehl, Moritz; Moonen, Marc

    2016-07-01

    Subjective audio quality evaluation experiments have been conducted to assess the performance of embedded-optimization-based precompensation algorithms for mitigating perceptible linear and nonlinear distortion in audio signals. It is concluded with statistical significance that the perceived audio quality is improved by applying an embedded-optimization-based precompensation algorithm, both in case (i) nonlinear distortion and (ii) a combination of linear and nonlinear distortion is present. Moreover, a significant positive correlation is reported between the collected subjective and objective PEAQ audio quality scores, supporting the validity of using PEAQ to predict the impact of linear and nonlinear distortion on the perceived audio quality.

  8. Validation of a digital audio recording method for the objective assessment of cough in the horse.

    PubMed

    Duz, M; Whittaker, A G; Love, S; Parkin, T D H; Hughes, K J

    2010-10-01

    To validate the use of digital audio recording and analysis for quantification of coughing in horses. Part A: Nine simultaneous digital audio and video recordings were collected individually from seven stabled horses over a 1 h period using a digital audio recorder attached to the halter. Audio files were analysed using audio analysis software. Video and audio recordings were analysed for cough count and timing by two blinded operators on two occasions using a randomised study design for determination of intra-operator and inter-operator agreement. Part B: Seventy-eight hours of audio recordings obtained from nine horses were analysed once by two blinded operators to assess inter-operator repeatability on a larger sample. Part A: There was complete agreement between audio and video analyses and inter- and intra-operator analyses. Part B: There was >97% agreement between operators on number and timing of 727 coughs recorded over 78 h. The results of this study suggest that the cough monitor methodology used has excellent sensitivity and specificity for the objective assessment of cough in horses and intra- and inter-operator variability of recorded coughs is minimal. Crown Copyright 2010. Published by Elsevier India Pvt Ltd. All rights reserved.

  9. JPRS Report - China.

    DTIC Science & Technology

    1989-11-27

    drive against pornography, and it has also achieved new breakthroughs and progress in eradicating pornographic materials in certain localities...September, more than 45,000 law enforcement personnel in the province made more than 5,900 inspections of bookstores and audio and video shops and stalls...on 3 October. Second, the sources of Shishi City’s illegal and pornographic videotapes have been ascertained. Third, the channels through which

  10. Pathfinder. Volume 9, Number 2, March/April 2011

    DTIC Science & Technology

    2011-03-01

    provides audio, video, desktop sharing and chat.” The platform offers a real-time, Web-based presentation tool to create information and general...collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources...gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or

  11. Increased sensorimotor network activity in DYT1 dystonia: a functional imaging study

    PubMed Central

    Argyelan, Miklos; Habeck, Christian; Ghilardi, M. Felice; Fitzpatrick, Toni; Dhawan, Vijay; Pourfar, Michael; Bressman, Susan B.; Eidelberg, David

    2010-01-01

    Neurophysiological studies have provided evidence of primary motor cortex hyperexcitability in primary dystonia, but several functional imaging studies suggest otherwise. To address this issue, we measured sensorimotor activation at both the regional and network levels in carriers of the DYT1 dystonia mutation and in control subjects. We used ¹⁵O-labelled water and positron emission tomography to scan nine manifesting DYT1 carriers, 10 non-manifesting DYT1 carriers and 12 age-matched controls while they performed a kinematically controlled motor task; they were also scanned in a non-motor audio-visual control condition. Within- and between-group contrasts were analysed with statistical parametric mapping. For network analysis, we first identified a normal motor-related activation pattern in a set of 39 motor and audio-visual scans acquired in an independent cohort of 18 healthy volunteer subjects. The expression of this pattern was prospectively quantified in the motor and control scans acquired in each of the gene carriers and controls. Network values for the three groups were compared with ANOVA and post hoc contrasts. Voxel-wise comparison of DYT1 carriers and controls revealed abnormally increased motor activation responses in the former group (P < 0.05, corrected; statistical parametric mapping), localized to the sensorimotor cortex, dorsal premotor cortex, supplementary motor area and the inferior parietal cortex. Network analysis of the normative derivation cohort revealed a significant normal motor-related activation pattern topography (P < 0.0001) characterized by covarying neural activity in the sensorimotor cortex, dorsal premotor cortex, supplementary motor area and cerebellum. In the study cohort, normal motor-related activation pattern expression measured during movement was abnormally elevated in the manifesting gene carriers (P < 0.001) but not in their non-manifesting counterparts.
In contrast, in the non-motor control condition, abnormal increases in network activity were present in both groups of gene carriers (P < 0.001). In this condition, normal motor-related activation pattern expression in non-manifesting carriers was greater than in controls, but lower than in affected carriers. In the latter group, measures of normal motor-related activation pattern expression in the audio-visual condition correlated with independent dystonia clinical ratings (r = 0.70, P = 0.04). These findings confirm that overexcitability of the sensorimotor system is a robust feature of dystonia. The presence of elevated normal motor-related activation pattern expression in the non-motor condition suggests that abnormal integration of audio-visual input with sensorimotor network activity is an important trait feature of this disorder. Lastly, quantification of normal motor-related activation pattern expression in individual cases may have utility as an objective descriptor of therapeutic response in trials of new treatments for dystonia and related disorders. PMID:20207699

  12. 47 CFR 73.9005 - Compliance requirements for covered demodulator products: Audio.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... products: Audio. 73.9005 Section 73.9005 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED....9005 Compliance requirements for covered demodulator products: Audio. Except as otherwise provided in §§ 73.9003(a) or 73.9004(a), covered demodulator products shall not output the audio portions of...

  13. 36 CFR 1002.12 - Audio disturbances.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 36 Parks, Forests, and Public Property 3 2014-07-01 2014-07-01 false Audio disturbances. 1002.12... RECREATION § 1002.12 Audio disturbances. (a) The following are prohibited: (1) Operating motorized equipment or machinery such as an electric generating plant, motor vehicle, motorized toy, or an audio device...

  14. 36 CFR 1002.12 - Audio disturbances.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false Audio disturbances. 1002.12... RECREATION § 1002.12 Audio disturbances. (a) The following are prohibited: (1) Operating motorized equipment or machinery such as an electric generating plant, motor vehicle, motorized toy, or an audio device...

  15. 50 CFR 27.72 - Audio equipment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 6 2010-10-01 2010-10-01 false Audio equipment. 27.72 Section 27.72 Wildlife and Fisheries UNITED STATES FISH AND WILDLIFE SERVICE, DEPARTMENT OF THE INTERIOR (CONTINUED) THE... Audio equipment. The operation or use of audio devices including radios, recording and playback devices...

  16. 36 CFR 1002.12 - Audio disturbances.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false Audio disturbances. 1002.12... RECREATION § 1002.12 Audio disturbances. (a) The following are prohibited: (1) Operating motorized equipment or machinery such as an electric generating plant, motor vehicle, motorized toy, or an audio device...

  17. 36 CFR 1002.12 - Audio disturbances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Audio disturbances. 1002.12... RECREATION § 1002.12 Audio disturbances. (a) The following are prohibited: (1) Operating motorized equipment or machinery such as an electric generating plant, motor vehicle, motorized toy, or an audio device...

  18. 50 CFR 27.72 - Audio equipment.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 8 2011-10-01 2011-10-01 false Audio equipment. 27.72 Section 27.72 Wildlife and Fisheries UNITED STATES FISH AND WILDLIFE SERVICE, DEPARTMENT OF THE INTERIOR (CONTINUED) THE... Audio equipment. The operation or use of audio devices including radios, recording and playback devices...

  19. 50 CFR 27.72 - Audio equipment.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 9 2012-10-01 2012-10-01 false Audio equipment. 27.72 Section 27.72 Wildlife and Fisheries UNITED STATES FISH AND WILDLIFE SERVICE, DEPARTMENT OF THE INTERIOR (CONTINUED) THE... Audio equipment. The operation or use of audio devices including radios, recording and playback devices...

  20. 47 CFR 87.483 - Audio visual warning systems.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 5 2014-10-01 2014-10-01 false Audio visual warning systems. 87.483 Section 87... AVIATION SERVICES Stations in the Radiodetermination Service § 87.483 Audio visual warning systems. An audio visual warning system (AVWS) is a radar-based obstacle avoidance system. AVWS activates...

  1. Quality Control for Interviews to Obtain Dietary Recalls from Children for Research Studies

    PubMed Central

    SHAFFER, NICOLE M.; THOMPSON, WILLIAM O.; BAGLIO, MICHELLE L.; GUINN, CAROLINE H.; FRYE, FRANCESCA H. A.

    2005-01-01

    Quality control is an important aspect of a study because the quality of data collected provides a foundation for the conclusions drawn from the study. For studies that include interviews, establishing quality control for interviews is critical in ascertaining whether interviews are conducted according to protocol. Despite the importance of quality control for interviews, few studies adequately document the quality control procedures used during data collection. This article reviews quality control for interviews and describes methods and results of quality control for interviews from two of our studies regarding the accuracy of children's dietary recalls; the focus is on quality control regarding interviewer performance during the interview, and examples are provided from studies with children. For our two studies, every interview was audio recorded and transcribed. The audio recording and typed transcript from one interview conducted by each research dietitian either weekly or daily were randomly selected and reviewed by another research dietitian, who completed a standardized quality control for interviews checklist. Major strengths of the methods of quality control for interviews in our two studies include: (a) interviews obtained for data collection were randomly selected for quality control for interviews, and (b) quality control for interviews was assessed on a regular basis throughout data collection. The methods of quality control for interviews described may help researchers design appropriate methods of quality control for interviews for future studies. PMID:15389417

  2. Semantic Context Detection Using Audio Event Fusion

    NASA Astrophysics Data System (ADS)

    Chu, Wei-Ta; Cheng, Wen-Huang; Wu, Ja-Ling

    2006-12-01

    Semantic-level content analysis is a crucial issue in achieving efficient content retrieval and management. We propose a hierarchical approach that models audio events over a time series in order to accomplish semantic context detection. Two levels of modeling, audio event and semantic context modeling, are devised to bridge the gap between physical audio features and semantic concepts. In this work, hidden Markov models (HMMs) are used to model four representative audio events, that is, gunshot, explosion, engine, and car braking, in action movies. At the semantic context level, generative (ergodic hidden Markov model) and discriminative (support vector machine (SVM)) approaches are investigated to fuse the characteristics and correlations among audio events, which provide cues for detecting gunplay and car-chasing scenes. The experimental results demonstrate the effectiveness of the proposed approaches and provide a preliminary framework for information mining by using audio characteristics.
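
    Scoring a sequence of audio-feature frames against an event HMM, as in the first modeling level above, is typically done with the forward algorithm in the log domain; the event model with the highest log-likelihood wins. A minimal sketch (the matrices are illustrative, not the paper's trained gunshot/explosion models):

```python
import numpy as np

def forward_loglik(log_pi, log_A, log_B):
    """Log-likelihood of an observation sequence under an HMM (forward algorithm).
    log_pi: (N,) initial state log-probs; log_A: (N, N) transition log-probs;
    log_B: (T, N) per-frame observation log-likelihoods under each state."""
    def lse(v, axis):
        m = np.max(v, axis=axis)   # log-sum-exp, stable in the log domain
        return m + np.log(np.sum(np.exp(v - np.expand_dims(m, axis)), axis=axis))

    alpha = log_pi + log_B[0]
    for t in range(1, log_B.shape[0]):
        # alpha_t(j) = log_B[t, j] + logsum_i exp(alpha_{t-1}(i) + log_A[i, j])
        alpha = log_B[t] + lse(alpha[:, None] + log_A, axis=0)
    return lse(alpha, axis=0)
```

    Event classification then reduces to evaluating `forward_loglik` once per event model (gunshot, explosion, engine, car braking) and taking the argmax.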

  3. Effect of Audio Coaching on Correlation of Abdominal Displacement With Lung Tumor Motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakamura, Mitsuhiro; Narita, Yuichiro; Matsuo, Yukinori

    2009-10-01

    Purpose: To assess the effect of audio coaching on the time-dependent behavior of the correlation between abdominal motion and lung tumor motion and the corresponding lung tumor position mismatches. Methods and Materials: Six patients who had a lung tumor with a motion range >8 mm were enrolled in the present study. Breathing-synchronized fluoroscopy was performed initially without audio coaching, followed by fluoroscopy with recorded audio coaching for multiple days. Two different measurements, anteroposterior abdominal displacement using the real-time positioning management system and superoinferior (SI) lung tumor motion by X-ray fluoroscopy, were performed simultaneously. Their sequential images were recorded using one display system. The lung tumor position was automatically detected with a template matching technique. The relationship between the abdominal and lung tumor motion was analyzed with and without audio coaching. Results: The mean SI tumor displacement was 10.4 mm without audio coaching and increased to 23.0 mm with audio coaching (p < .01). The correlation coefficients ranged from 0.89 to 0.97 with free breathing. Applying audio coaching, the correlation coefficients improved significantly (range, 0.93-0.99; p < .01), and the SI lung tumor position mismatches became larger in 75% of all sessions. Conclusion: Audio coaching served to increase the degree of correlation and make it more reproducible. In addition, the phase shifts between tumor motion and abdominal displacement were improved; however, all patients breathed more deeply, and the SI lung tumor position mismatches became slightly larger with audio coaching than without audio coaching.
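
    The correlation coefficients reported above quantify how linearly the abdominal surrogate tracks the SI tumor position. A minimal Pearson-correlation sketch on synthetic traces (the signals are illustrative, not patient data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two synchronized motion traces."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(xm @ ym / np.sqrt((xm @ xm) * (ym @ ym)))

# Illustrative traces: abdominal displacement vs. SI tumor position (mm),
# with the tumor amplitude larger and slightly phase-shifted, as in the study
t = np.linspace(0.0, 10.0, 200)
abdomen = 5.0 * np.sin(2 * np.pi * 0.25 * t)
tumor = 11.5 * np.sin(2 * np.pi * 0.25 * t - 0.3)
print(round(pearson_r(abdomen, tumor), 3))
```

    A phase shift between the two traces lowers the correlation even when amplitudes scale perfectly, which is why improving phase shifts with audio coaching raised the reported coefficients.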

  4. New Communications Technology and Distance Education: Implications for Commonwealth Countries of the South. Papers on Information Technology No. 239.

    ERIC Educational Resources Information Center

    Bates, A. W.

    This review of the technical possibilities of audio, television, computing, and combination media addresses the main factors influencing decisions about each technology's suitability for distance teaching, including access, costs, symbolic representation, student control, teacher control, existing structures, learning skills to be developed, and…

  5. Audio Recording for Independent Confirmation of Clinical Assessments in Generalized Anxiety Disorder.

    PubMed

    Targum, Steven D; Murphy, Christopher; Khan, Jibran; Zumpano, Laura; Whitlock, Mark; Simen, Arthur A; Binneman, Brendon

    2018-04-01

    Objective: The assessment of patients with generalized anxiety disorder (GAD) to determine whether a medication intervention is necessary is not always clear-cut and might benefit from a second opinion. However, second opinions are time consuming, expensive, and not practical in most settings. We obtained independent, second-opinion reviews of the primary clinician's assessment via audio-digital recording. Design: An audio-digital recording of key site-based assessments was used to generate site-independent "dual" reviews of the clinical presentation, symptom severity, and medication requirements of patients with GAD as part of the screening procedures for a clinical trial (ClinicalTrials.gov: NCT02310568). Results: Site-independent reviewers affirmed the diagnosis, symptom severity metrics, and treatment requirements of 90 moderately ill patients with GAD. The patients endorsed excessive worry that was hard to control and essentially all six of the associated DSM-IV-TR anxiety symptoms. The Hamilton Rating Scale for Anxiety scores revealed moderately severe anxiety with a high Pearson's correlation (r = 0.852) between site-based and independent raters and minimal scoring discordance on each scale item. Based upon their independent reviews, these "second" opinions confirmed that these GAD patients warranted a new medication intervention. Thirty patients (33.3%) reported a previous history of a major depressive episode (MDE) and had significantly more depressive symptoms than patients without a history of MDE. Conclusion: The audio-digital recording method provides a useful second opinion that can affirm the need for a different treatment intervention in these anxious patients. A second live assessment would have required additional clinic time and added patient burden. The audio-digital recording method is less burdensome than live second-opinion assessments and might have utility in both research and clinical practice settings.
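    The inter-rater agreement reported above (Pearson's r = 0.852) is the standard product-moment correlation between the two raters' scores. A minimal sketch with hypothetical rating data (NOT the study's data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation between two raters' scores."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

# Hypothetical HAM-A totals for six patients, scored by the site rater
# and by the independent (audio-recording) reviewer.
site = [22, 25, 19, 28, 24, 21]
indep = [21, 26, 18, 27, 25, 20]
r = pearson_r(site, indep)
```

    With these made-up scores r is roughly 0.96; identical score vectors give exactly 1.0, so high r indicates the independent reviewer tracks the site rater closely.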

  6. Overdrive and Edge as Refiners of "Belting"?: An Empirical Study Qualifying and Categorizing "Belting" Based on Audio Perception, Laryngostroboscopic Imaging, Acoustics, LTAS, and EGG.

    PubMed

    McGlashan, Julian; Thuesen, Mathias Aaen; Sadolin, Cathrine

    2017-05-01

    We aimed to study the categorizations "Overdrive" and "Edge" from the pedagogical method Complete Vocal Technique as refiners of the often ill-defined concept of "belting" by means of audio perception, laryngostroboscopic imaging, acoustics, long-term average spectrum (LTAS), and electroglottography (EGG). This is a case-control study. Twenty singers were recorded singing sustained vowels in a "belting" quality refined by audio perception as "Overdrive" and "Edge." Two studies were performed: (1) a laryngostroboscopic examination using a videonasoendoscopic camera system (Olympus) and the Laryngostrobe program (Laryngograph); (2) a simultaneous recording of the EGG and acoustic signals using Speech Studio (Laryngograph). The images were analyzed based on consensus agreement. Statistical analysis of the acoustic, LTAS, and EGG parameters was undertaken using the Student paired t test. The two modes of singing determined by audio perception have visibly different laryngeal gestures: Edge has a more constricted setting than that of Overdrive, where the ventricular folds seem to cover more of the vocal folds, the aryepiglottic folds show a sharper edge in Edge, and the cuneiform cartilages are rolled in anteromedially. LTAS analysis shows a statistical difference, particularly after the ninth harmonic, with a coinciding first formant. The combined group showed statistical differences in shimmer, harmonics-to-noise ratio, normalized noise energy, and mean sound pressure level (P ≤ 0.05). "Belting" sounds can be categorized using audio perception into two modes of singing: "Overdrive" and "Edge." This study demonstrates consistent visibly different laryngeal gestures between these modes and with some correspondingly significant differences in LTAS, EGG, and acoustic measures. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
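    The long-term average spectrum (LTAS) used above is conventionally the average power spectrum over windowed frames of the recording. A sketch under the usual STFT conventions; the frame length, hop, and Hann window are generic choices, not the study's settings:

```python
import numpy as np

def ltas(signal, fs, frame_len=1024, hop=512):
    """Long-term average spectrum: mean power over Hann-windowed frames.

    Returns (frequencies in Hz, mean power spectrum in dB).
    """
    win = np.hanning(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        seg = signal[start:start + frame_len] * win
        frames.append(np.abs(np.fft.rfft(seg)) ** 2)   # per-frame power
    mean_power = np.mean(frames, axis=0)                # average over time
    freqs = np.fft.rfftfreq(frame_len, 1.0 / fs)
    return freqs, 10 * np.log10(mean_power + 1e-12)     # dB, floored

# Sanity check: a 440 Hz tone should peak near 440 Hz in the LTAS.
fs = 8000
t = np.arange(fs) / fs
freqs, spec = ltas(np.sin(2 * np.pi * 440 * t), fs)
```

    Averaging over frames is what makes LTAS a stable descriptor of sung phrases: harmonics that persist across the take stand out, while transient noise averages down.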

  7. A Novel Method for Real-Time Audio Recording With Intraoperative Video.

    PubMed

    Sugamoto, Yuji; Hamamoto, Yasuyoshi; Kimura, Masayuki; Fukunaga, Toru; Tasaki, Kentaro; Asai, Yo; Takeshita, Nobuyoshi; Maruyama, Tetsuro; Hosokawa, Takashi; Tamachi, Tomohide; Aoyama, Hiromichi; Matsubara, Hisahiro

    2015-01-01

    Although laparoscopic surgery has become widespread, effective and efficient education in laparoscopic surgery is difficult. Instructive laparoscopy videos with appropriate annotations are ideal for initial training in laparoscopic surgery; however, the method we use at our institution for creating laparoscopy videos with audio is not generalized, and there have been no detailed explanations of any such method. Our objectives were to demonstrate the feasibility of low-cost, simple methods for recording surgical videos with audio and to perform a preliminary safety evaluation when obtaining these recordings during operations. We devised a method for the synchronous recording of surgical video with real-time audio in which we connected an amplifier and a wireless microphone to an existing endoscopy system and its equipped video-recording device. We tested this system in 209 cases of laparoscopic surgery in operating rooms between August 2010 and July 2011 and prospectively investigated the results of the audiovisual recording method and examined intraoperative problems. Setting: Numazu City Hospital in Numazu City, Japan. Participants: Surgeons, instrument nurses, and medical engineers. In all cases, the synchronous input of audio and video was possible. The recording system did not cause any inconvenience to the surgeon, assistants, instrument nurse, sterilized equipment, or electrical medical equipment. Statistically significant differences were not observed between the audiovisual group and the control group regarding the operating time, which had been divided into two slots (performed by the instructors or by trainees) (p > 0.05). This recording method is feasible and considerably safe while posing minimal difficulty in terms of technology, time, and expense. We recommend this method both for surgical trainees who wish to acquire surgical skills effectively and for medical instructors who wish to teach surgical skills effectively. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  8. 47 CFR 10.520 - Common audio attention signal.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Common audio attention signal. 10.520 Section... Equipment Requirements § 10.520 Common audio attention signal. A Participating CMS Provider and equipment manufacturers may only market devices for public use under part 10 that include an audio attention signal that...

  9. 36 CFR 2.12 - Audio disturbances.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 1 2012-07-01 2012-07-01 false Audio disturbances. 2.12... RESOURCE PROTECTION, PUBLIC USE AND RECREATION § 2.12 Audio disturbances. (a) The following are prohibited..., motorized toy, or an audio device, such as a radio, television set, tape deck or musical instrument, in a...

  10. 36 CFR 2.12 - Audio disturbances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Audio disturbances. 2.12... RESOURCE PROTECTION, PUBLIC USE AND RECREATION § 2.12 Audio disturbances. (a) The following are prohibited..., motorized toy, or an audio device, such as a radio, television set, tape deck or musical instrument, in a...

  11. 37 CFR 202.22 - Acquisition and deposit of unpublished audio and audiovisual transmission programs.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... unpublished audio and audiovisual transmission programs. 202.22 Section 202.22 Patents, Trademarks, and... REGISTRATION OF CLAIMS TO COPYRIGHT § 202.22 Acquisition and deposit of unpublished audio and audiovisual... and copies of unpublished audio and audiovisual transmission programs by the Library of Congress under...

  12. 36 CFR § 1002.12 - Audio disturbances.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 3 2013-07-01 2012-07-01 true Audio disturbances. § 1002.12... RECREATION § 1002.12 Audio disturbances. (a) The following are prohibited: (1) Operating motorized equipment or machinery such as an electric generating plant, motor vehicle, motorized toy, or an audio device...

  13. 47 CFR 10.520 - Common audio attention signal.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false Common audio attention signal. 10.520 Section... Equipment Requirements § 10.520 Common audio attention signal. A Participating CMS Provider and equipment manufacturers may only market devices for public use under part 10 that include an audio attention signal that...

  14. 37 CFR 202.22 - Acquisition and deposit of unpublished audio and audiovisual transmission programs.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... unpublished audio and audiovisual transmission programs. 202.22 Section 202.22 Patents, Trademarks, and... REGISTRATION OF CLAIMS TO COPYRIGHT § 202.22 Acquisition and deposit of unpublished audio and audiovisual... and copies of unpublished audio and audiovisual transmission programs by the Library of Congress under...

  15. 36 CFR 2.12 - Audio disturbances.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 1 2013-07-01 2013-07-01 false Audio disturbances. 2.12... RESOURCE PROTECTION, PUBLIC USE AND RECREATION § 2.12 Audio disturbances. (a) The following are prohibited..., motorized toy, or an audio device, such as a radio, television set, tape deck or musical instrument, in a...

  16. 37 CFR 202.22 - Acquisition and deposit of unpublished audio and audiovisual transmission programs.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... unpublished audio and audiovisual transmission programs. 202.22 Section 202.22 Patents, Trademarks, and... REGISTRATION OF CLAIMS TO COPYRIGHT § 202.22 Acquisition and deposit of unpublished audio and audiovisual... and copies of unpublished audio and audiovisual transmission programs by the Library of Congress under...

  17. ENERGY STAR Certified Audio Video

    EPA Pesticide Factsheets

    Certified models meet all ENERGY STAR requirements as listed in the Version 3.0 ENERGY STAR Program Requirements for Audio Video Equipment that are effective as of May 1, 2013. A detailed listing of key efficiency criteria is available at http://www.energystar.gov/index.cfm?c=audio_dvd.pr_crit_audio_dvd

  18. 36 CFR 2.12 - Audio disturbances.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 36 Parks, Forests, and Public Property 1 2014-07-01 2014-07-01 false Audio disturbances. 2.12... RESOURCE PROTECTION, PUBLIC USE AND RECREATION § 2.12 Audio disturbances. (a) The following are prohibited..., motorized toy, or an audio device, such as a radio, television set, tape deck or musical instrument, in a...

  19. 47 CFR 11.33 - EAS Decoder.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...: (1) Inputs. Decoders must have the capability to receive at least two audio inputs from EAS... externally, at least two minutes of audio or text messages. A decoder manufactured without an internal means to record and store audio or text must be equipped with a means (such as an audio or digital jack...

  20. 47 CFR 11.33 - EAS Decoder.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...: (1) Inputs. Decoders must have the capability to receive at least two audio inputs from EAS... externally, at least two minutes of audio or text messages. A decoder manufactured without an internal means to record and store audio or text must be equipped with a means (such as an audio or digital jack...

  1. 47 CFR 10.520 - Common audio attention signal.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false Common audio attention signal. 10.520 Section... Equipment Requirements § 10.520 Common audio attention signal. A Participating CMS Provider and equipment manufacturers may only market devices for public use under part 10 that include an audio attention signal that...

  2. 37 CFR 202.22 - Acquisition and deposit of unpublished audio and audiovisual transmission programs.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... unpublished audio and audiovisual transmission programs. 202.22 Section 202.22 Patents, Trademarks, and... REGISTRATION OF CLAIMS TO COPYRIGHT § 202.22 Acquisition and deposit of unpublished audio and audiovisual... and copies of unpublished audio and audiovisual transmission programs by the Library of Congress under...

  3. 47 CFR 10.520 - Common audio attention signal.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false Common audio attention signal. 10.520 Section... Equipment Requirements § 10.520 Common audio attention signal. A Participating CMS Provider and equipment manufacturers may only market devices for public use under part 10 that include an audio attention signal that...

  4. 47 CFR 11.33 - EAS Decoder.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...: (1) Inputs. Decoders must have the capability to receive at least two audio inputs from EAS... externally, at least two minutes of audio or text messages. A decoder manufactured without an internal means to record and store audio or text must be equipped with a means (such as an audio or digital jack...

  5. 36 CFR 2.12 - Audio disturbances.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Audio disturbances. 2.12... RESOURCE PROTECTION, PUBLIC USE AND RECREATION § 2.12 Audio disturbances. (a) The following are prohibited..., motorized toy, or an audio device, such as a radio, television set, tape deck or musical instrument, in a...

  6. Controlling Within-Field Sheep Movement Using Virtual Fencing.

    PubMed

    Marini, Danila; Llewellyn, Rick; Belson, Sue; Lee, Caroline

    2018-02-26

    Virtual fencing has the potential to greatly improve livestock movement, grazing efficiency, and land management by farmers; however, relatively little work has been done to test the potential of virtual fencing with sheep. Commercial dog-training equipment, comprising a collar and a GPS hand-held unit, was used to implement a virtual fence in a commercial setting. Six 5-6-year-old Merino wethers, naïve to virtual fencing, were GPS-tracked for their use of a paddock (80 × 20 m) throughout the experiment. The virtual fence was effective at preventing a small group of sheep from entering the exclusion zone. The probability of a sheep receiving an electrical stimulus following an audio cue was low (19%) and declined over the testing period. It took an average of eight interactions with the fence for an association to be made between the audio cue and the stimulus, with all of the animals responding to the audio alone by the third day. Following the removal of the virtual fence, sheep were willing to cross its previous location after 30 min in the paddock. This is an important aspect of implementing virtual fencing as a grazing management tool and reinforces the finding that the sheep in this study associated the audio cue with the virtual fence rather than with the physical location itself.
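    The audio-then-stimulus escalation described above can be modeled as a simple per-fix state machine: an animal crossing the virtual line first receives an audio cue, and only if it persists into the exclusion zone is the electrical stimulus applied. The 1-D geometry, fence position, and grace period below are illustrative assumptions, not the study's protocol:

```python
def fence_response(positions, fence_x=60.0, grace=2):
    """Cue issued at each GPS fix: None, 'audio', or 'stimulus'.

    positions: successive x-coordinates (m) along the paddock's long axis;
    fence_x and grace (audio-only steps before escalation) are made up.
    """
    cues, steps_past = [], 0
    for x in positions:
        if x < fence_x:
            steps_past = 0            # back on the allowed side: no cue
            cues.append(None)
        else:
            steps_past += 1           # audio first; escalate if it persists
            cues.append('audio' if steps_past <= grace else 'stimulus')
    return cues

# A hypothetical track: approach, cross, persist, then retreat.
track = [55.0, 58.0, 61.0, 62.0, 63.0, 59.0]
cues = fence_response(track)
```

    The point of the audio-first design, mirrored here, is that animals learn to turn back on the cue alone, which is why the stimulus probability fell to 19% and kept declining.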

  7. A Cough-Based Algorithm for Automatic Diagnosis of Pertussis.

    PubMed

    Pramono, Renard Xaviero Adhi; Imtiaz, Syed Anas; Rodriguez-Villegas, Esther

    2016-01-01

    Pertussis is a contagious respiratory disease which mainly affects young children and can be fatal if left untreated. The World Health Organization estimates 16 million pertussis cases annually worldwide, resulting in over 200,000 deaths. It is prevalent mainly in developing countries, where it is difficult to diagnose due to the lack of healthcare facilities and medical professionals. Hence, a low-cost, quick, and easily accessible solution is needed to provide pertussis diagnosis in such areas to contain an outbreak. In this paper we present an algorithm for automated diagnosis of pertussis using audio signals, by analyzing cough and whoop sounds. The algorithm consists of three main blocks to perform automatic cough detection, cough classification, and whooping sound detection. Each of these blocks extracts relevant features from the audio signal and subsequently classifies them using a logistic regression model. The output from these blocks is collated to provide a pertussis likelihood diagnosis. The performance of the proposed algorithm is evaluated using audio recordings from 38 patients. The algorithm is able to diagnose pertussis successfully from all audio recordings without any false diagnoses. It can also automatically detect individual cough sounds with 92% accuracy and a PPV of 97%. The low complexity of the proposed algorithm, coupled with its high accuracy, demonstrates that it can be readily deployed using smartphones and can be extremely useful for quick identification or early screening of pertussis and for the control of infection outbreaks.
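    The classifier family named above is logistic regression. A minimal from-scratch sketch on synthetic two-dimensional "features" standing in for audio descriptors; the data, learning rate, and feature choice are illustrative, not the paper's:

```python
import numpy as np

def train_logistic(X, y, lr=0.1, steps=5000):
    """Logistic regression via plain gradient descent on mean log-loss.

    X: (n, d) feature matrix, y: (n,) 0/1 labels. Returns weights + bias.
    """
    Xb = np.hstack([X, np.ones((len(X), 1))])           # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30)))  # sigmoid
        w -= lr * Xb.T @ (p - y) / len(y)               # mean gradient step
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30))) > 0.5).astype(int)

# Synthetic separable "features" (e.g. energy, spectral centroid) for
# cough vs. non-cough segments -- purely illustrative data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
w = train_logistic(X, y)
acc = (predict(w, X) == y).mean()
```

    A model this small is what makes the "low complexity, runs on smartphones" claim plausible: inference is one dot product and a threshold per segment.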

  8. A Cough-Based Algorithm for Automatic Diagnosis of Pertussis

    PubMed Central

    Pramono, Renard Xaviero Adhi; Imtiaz, Syed Anas; Rodriguez-Villegas, Esther

    2016-01-01

    Pertussis is a contagious respiratory disease which mainly affects young children and can be fatal if left untreated. The World Health Organization estimates 16 million pertussis cases annually worldwide, resulting in over 200,000 deaths. It is prevalent mainly in developing countries, where it is difficult to diagnose due to the lack of healthcare facilities and medical professionals. Hence, a low-cost, quick, and easily accessible solution is needed to provide pertussis diagnosis in such areas to contain an outbreak. In this paper we present an algorithm for automated diagnosis of pertussis using audio signals, by analyzing cough and whoop sounds. The algorithm consists of three main blocks to perform automatic cough detection, cough classification, and whooping sound detection. Each of these blocks extracts relevant features from the audio signal and subsequently classifies them using a logistic regression model. The output from these blocks is collated to provide a pertussis likelihood diagnosis. The performance of the proposed algorithm is evaluated using audio recordings from 38 patients. The algorithm is able to diagnose pertussis successfully from all audio recordings without any false diagnoses. It can also automatically detect individual cough sounds with 92% accuracy and a PPV of 97%. The low complexity of the proposed algorithm, coupled with its high accuracy, demonstrates that it can be readily deployed using smartphones and can be extremely useful for quick identification or early screening of pertussis and for the control of infection outbreaks. PMID:27583523

  9. Effect of Spinal Manipulative Therapy on the Singing Voice.

    PubMed

    Fachinatto, Ana Paula A; Duprat, André de Campos; Silva, Marta Andrada E; Bracher, Eduardo Sawaya Botelho; Benedicto, Camila de Carvalho; Luz, Victor Botta Colangelo; Nogueira, Maruan Nogueira; Fonseca, Beatriz Suster Gomes

    2015-09-01

    This study investigated the effect of spinal manipulative therapy (SMT) on the singing voice of male individuals. Randomized, controlled, case-crossover trial. Twenty-nine subjects were selected among male members of the Heralds of the Gospel. This association was chosen because it is a group of persons with similar singing activities. Participants were randomly assigned to two groups: (A) chiropractic SMT procedure and (B) nontherapeutic transcutaneous electrical nerve stimulation (TENS) procedure. Recordings of the singing voice of each participant were taken immediately before and after the procedures. After a 14-day period, procedures were switched between groups: participants who underwent SMT on the first day were subjected to TENS and vice versa. Recordings were subjected to perceptual audio and acoustic evaluations. The same recording segment of each participant was selected. Perceptual audio evaluation was performed by a specialist panel (SP). Recordings of each participant were randomly presented thus making the SP blind to intervention type and recording session (before/after intervention). Recordings compiled in a randomized order were also subjected to acoustic evaluation. No differences in the quality of the singing on perceptual audio evaluation were observed between TENS and SMT. No differences in the quality of the singing voice of asymptomatic male singers were observed on perceptual audio evaluation or acoustic evaluation after a single spinal manipulative intervention of the thoracic and cervical spine. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  10. Frequency and time domain three-dimensional inversion of electromagnetic data for a grounded-wire source

    NASA Astrophysics Data System (ADS)

    Sasaki, Yutaka; Yi, Myeong-Jong; Choi, Jihyang; Son, Jeong-Sul

    2015-01-01

    We present frequency- and time-domain three-dimensional (3-D) inversion approaches that can be applied to transient electromagnetic (TEM) data from a grounded-wire source using a PC. In the direct time-domain approach, the forward solution and sensitivity were obtained in the frequency domain using a finite-difference technique, and the frequency response was then Fourier-transformed using a digital filter technique. In the frequency-domain approach, TEM data were Fourier-transformed using a smooth-spectrum inversion method, and the recovered frequency response was then inverted. The synthetic examples show that for the time derivative of magnetic field, frequency-domain inversion of TEM data performs almost as well as time-domain inversion, with a significant reduction in computational time. In our synthetic studies, we also compared the resolution capabilities of the ground and airborne TEM and controlled-source audio-frequency magnetotelluric (CSAMT) data resulting from a common grounded wire. An airborne TEM survey at 200-m elevation achieved a resolution for buried conductors almost comparable to that of the ground TEM method. It is also shown that the inversion of CSAMT data was able to detect a 3-D resistivity structure better than the TEM inversion, suggesting an advantage of electric-field measurements over magnetic-field-only measurements.
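    The frequency-to-time step described above (inverse Fourier transform of a computed frequency response to recover the transient response) can be illustrated on a first-order system with a known analytic answer. The system, constants, and 1/dt scaling are illustrative assumptions, not the authors' finite-difference EM solver or digital-filter transform:

```python
import numpy as np

# Frequency response of a first-order system, H(f) = 1 / (1 + 2j*pi*f*tau),
# standing in for a computed EM frequency response. Its inverse Fourier
# transform is the transient (impulse) response h(t) = exp(-t/tau) / tau,
# so the discrete route below can be checked against the analytic answer.
tau = 0.05                        # decay constant in seconds (illustrative)
n, dt = 4096, 0.001               # number of time samples, sample interval
freqs = np.fft.rfftfreq(n, dt)
H = 1.0 / (1.0 + 2j * np.pi * freqs * tau)
h = np.fft.irfft(H, n) / dt       # 1/dt turns the DFT sum into an integral
t = np.arange(n) * dt
analytic = np.exp(-t / tau) / tau
```

    Away from the t = 0 discontinuity the discrete h tracks the analytic decay closely; production codes replace the plain inverse FFT with digital-filter transforms precisely to handle the wide dynamic range of TEM decays.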

  11. Sounding ruins: reflections on the production of an 'audio drift'.

    PubMed

    Gallagher, Michael

    2015-07-01

    This article is about the use of audio media in researching places, which I term 'audio geography'. The article narrates some episodes from the production of an 'audio drift', an experimental environmental sound work designed to be listened to on a portable MP3 player whilst walking in a ruinous landscape. Reflecting on how this work functions, I argue that, as well as representing places, audio geography can shape listeners' attention and bodily movements, thereby reworking places, albeit temporarily. I suggest that audio geography is particularly apt for amplifying the haunted and uncanny qualities of places. I discuss some of the issues raised for research ethics, epistemology and spectral geographies.

  12. Sounding ruins: reflections on the production of an ‘audio drift’

    PubMed Central

    Gallagher, Michael

    2014-01-01

    This article is about the use of audio media in researching places, which I term ‘audio geography’. The article narrates some episodes from the production of an ‘audio drift’, an experimental environmental sound work designed to be listened to on a portable MP3 player whilst walking in a ruinous landscape. Reflecting on how this work functions, I argue that, as well as representing places, audio geography can shape listeners’ attention and bodily movements, thereby reworking places, albeit temporarily. I suggest that audio geography is particularly apt for amplifying the haunted and uncanny qualities of places. I discuss some of the issues raised for research ethics, epistemology and spectral geographies. PMID:29708107

  13. DETECTOR FOR MODULATED AND UNMODULATED SIGNALS

    DOEpatents

    Patterson, H.H.; Webber, G.H.

    1959-08-25

    An r-f signal-detecting device is described, which is embodied in a compact coaxial circuit principally comprising a detecting crystal diode and a modulating crystal diode connected in parallel. Incoming modulated r-f signals are demodulated by the detecting crystal diode to furnish an audio input to an audio amplifier. The detecting diode will not, however, produce an audio signal from an unmodulated r-f signal. In order that unmodulated signals may be detected, such incoming signals have a locally produced audio signal superimposed on them at the modulating crystal diode, and then the "induced" or artificially modulated signal is reflected toward the detecting diode, which in the process of demodulation produces an audio signal for the audio amplifier.

  14. Speech information retrieval: a review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hafen, Ryan P.; Henry, Michael J.

    Audio is an information-rich component of multimedia. Information can be extracted from audio in a number of different ways, and thus there are several established audio signal analysis research fields. These fields include speech recognition, speaker recognition, audio segmentation and classification, and audio fingerprinting. The information that can be extracted with the tools and methods developed in these fields can greatly enhance multimedia systems. In this paper, we present the current state of research in each of the major audio analysis fields. The goal is to introduce enough background for someone new in the field to quickly gain a high-level understanding and to provide direction for further study.

  15. Audio-visual presentation of information for informed consent for participation in clinical trials.

    PubMed

    Synnot, Anneliese; Ryan, Rebecca; Prictor, Megan; Fetherstonhaugh, Deirdre; Parker, Barbara

    2014-05-09

    Informed consent is a critical component of clinical research. Different methods of presenting information to potential participants of clinical trials may improve the informed consent process. Audio-visual interventions (presented, for example, on the Internet or on DVD) are one such method. We updated a 2008 review of the effects of these interventions for informed consent for trial participation. To assess the effects of audio-visual information interventions regarding informed consent compared with standard information or placebo audio-visual interventions regarding informed consent for potential clinical trial participants, in terms of their understanding, satisfaction, willingness to participate, and anxiety or other psychological distress. We searched: the Cochrane Central Register of Controlled Trials (CENTRAL), The Cochrane Library, issue 6, 2012; MEDLINE (OvidSP) (1946 to 13 June 2012); EMBASE (OvidSP) (1947 to 12 June 2012); PsycINFO (OvidSP) (1806 to June week 1 2012); CINAHL (EbscoHOST) (1981 to 27 June 2012); Current Contents (OvidSP) (1993 Week 27 to 2012 Week 26); and ERIC (Proquest) (searched 27 June 2012). We also searched reference lists of included studies and relevant review articles, and contacted study authors and experts. There were no language restrictions. We included randomised and quasi-randomised controlled trials comparing audio-visual information alone, or in conjunction with standard forms of information provision (such as written or verbal information), with standard forms of information provision or placebo audio-visual information, in the informed consent process for clinical trials. Trials involved individuals or their guardians asked to consider participating in a real or hypothetical clinical study. (In the earlier version of this review we only included studies evaluating informed consent interventions for real studies.) Two authors independently assessed studies for inclusion and extracted data. We synthesised the findings using meta-analysis, where possible, and narrative synthesis of results. We assessed the risk of bias of individual studies and considered the impact of the quality of the overall evidence on the strength of the results. We included 16 studies involving data from 1884 participants. Nine studies included participants considering real clinical trials, and eight included participants considering hypothetical clinical trials, with one including both. All studies were conducted in high-income countries. There is still much uncertainty about the effect of audio-visual informed consent interventions on a range of patient outcomes. However, when considered across comparisons, we found low to very low quality evidence that such interventions may slightly improve knowledge or understanding of the parent trial, but may make little or no difference to the rate of participation or willingness to participate. Audio-visual presentation of informed consent may improve participant satisfaction with the consent information provided. However, its effect on satisfaction with other aspects of the process is not clear. There is insufficient evidence to draw conclusions about anxiety arising from audio-visual informed consent. We found conflicting, very low quality evidence about whether audio-visual interventions took more or less time to administer. No study measured researcher satisfaction with the informed consent process, nor ease of use. The evidence from real clinical trials was rated as low quality for most outcomes, and for hypothetical studies, very low. We note, however, that this was in large part due to poor study reporting, the hypothetical nature of some studies, and low participant numbers, rather than inconsistent results between studies or confirmed poor trial quality. We do not believe that any studies were funded by organisations with a vested interest in the results. The value of audio-visual interventions as a tool for helping to enhance the informed consent process for people considering participating in clinical trials remains largely unclear, although trends are emerging with regard to improvements in knowledge and satisfaction. Many relevant outcomes have not been evaluated in randomised trials. Triallists should continue to explore innovative methods of providing information to potential trial participants during the informed consent process, mindful of the range of outcomes that the intervention should be designed to achieve, and balancing the resource implications of intervention development and delivery against the purported benefits of any intervention. More trials, adhering to CONSORT standards and conducted in settings and populations underserved in this review (i.e. low- and middle-income countries and people with low literacy), would strengthen the results of this review and broaden its applicability. Assessing process measures, such as time taken to administer the intervention and researcher satisfaction, would inform the implementation of audio-visual consent materials.

  16. Storyboard Development for Interactive Multimedia Training.

    ERIC Educational Resources Information Center

    Orr, Kay L.; And Others

    1994-01-01

    Discusses procedures for storyboard development and provides guidelines for designing interactive multimedia courseware, including interactivity, learner control, feedback, visual elements, motion video, graphics/animation, text, audio, and programming. A topical bibliography that lists 98 items is included. (LRW)

  17. A digital audio/video interleaving system [for Shuttle Orbiter]

    NASA Technical Reports Server (NTRS)

    Richards, R. W.

    1978-01-01

    A method of interleaving an audio signal with its associated video signal for simultaneous transmission or recording, and the subsequent separation of the two signals, is described. Comparisons are made between the new audio signal interleaving system and the Skylab PAM audio/video interleaving system, pointing out improvements gained by using the digital audio/video interleaving system. It was found that the digital technique is the simplest, most effective and most reliable method for interleaving audio and/or other types of data into the video signal for the Shuttle Orbiter application. Details are given of the design of a multiplexer capable of accommodating two basic data channels, each consisting of a single 31.5-kb/s digital bit stream. An adaptive slope delta modulation system is introduced to digitize audio signals, producing high immunity of word intelligibility to channel errors, primarily due to the robust nature of the delta-modulation algorithm.
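    The adaptive slope delta modulation mentioned above can be sketched in a few lines. This is a generic illustration, not the Orbiter flight design: the encoder emits one bit per sample (is the input above a running estimate?), and both sides grow the step size after a run of three identical bits, which is what makes word intelligibility robust to isolated channel errors (a flipped bit perturbs the estimate only briefly).

```python
def adm_encode(samples, step=1.0, growth=1.5, min_step=1.0):
    """One bit per sample: 1 if the input is at or above the running estimate."""
    bits, estimate, history = [], 0.0, []
    for x in samples:
        bit = 1 if x >= estimate else 0
        bits.append(bit)
        history = (history + [bit])[-3:]
        if len(history) == 3 and len(set(history)) == 1:
            step *= growth                    # three equal bits: slope overload, grow step
        else:
            step = max(min_step, step / growth)
        estimate += step if bit else -step
    return bits

def adm_decode(bits, step=1.0, growth=1.5, min_step=1.0):
    """Rebuild the waveform by applying the encoder's step-adaptation rules."""
    out, estimate, history = [], 0.0, []
    for bit in bits:
        history = (history + [bit])[-3:]
        if len(history) == 3 and len(set(history)) == 1:
            step *= growth
        else:
            step = max(min_step, step / growth)
        estimate += step if bit else -step
        out.append(estimate)
    return out
```

    At 31.5 kb/s such a scheme sends one bit per audio sample; encoder and decoder stay in step because they apply identical adaptation rules to the same bit stream.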

  18. Characteristics of audio and sub-audio telluric signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Telford, W.M.

    1977-06-01

    Telluric current measurements in the audio and sub-audio frequency range, made in various parts of Canada and South America over the past four years, indicate that the signal amplitude is relatively uniform over 6 to 8 midday hours (LMT) except in Chile and that the signal anisotropy is reasonably constant in azimuth.

  19. 43 CFR 8365.2-2 - Audio devices.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 43 Public Lands: Interior 2 2013-10-01 2013-10-01 false Audio devices. 8365.2-2 Section 8365.2-2..., DEPARTMENT OF THE INTERIOR RECREATION PROGRAMS VISITOR SERVICES Rules of Conduct § 8365.2-2 Audio devices. On... audio device such as a radio, television, musical instrument, or other noise producing device or...

  20. 43 CFR 8365.2-2 - Audio devices.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 43 Public Lands: Interior 2 2012-10-01 2012-10-01 false Audio devices. 8365.2-2 Section 8365.2-2..., DEPARTMENT OF THE INTERIOR RECREATION PROGRAMS VISITOR SERVICES Rules of Conduct § 8365.2-2 Audio devices. On... audio device such as a radio, television, musical instrument, or other noise producing device or...

  1. 43 CFR 8365.2-2 - Audio devices.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 43 Public Lands: Interior 2 2011-10-01 2011-10-01 false Audio devices. 8365.2-2 Section 8365.2-2..., DEPARTMENT OF THE INTERIOR RECREATION PROGRAMS VISITOR SERVICES Rules of Conduct § 8365.2-2 Audio devices. On... audio device such as a radio, television, musical instrument, or other noise producing device or...

  2. 43 CFR 8365.2-2 - Audio devices.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 43 Public Lands: Interior 2 2014-10-01 2014-10-01 false Audio devices. 8365.2-2 Section 8365.2-2..., DEPARTMENT OF THE INTERIOR RECREATION PROGRAMS VISITOR SERVICES Rules of Conduct § 8365.2-2 Audio devices. On... audio device such as a radio, television, musical instrument, or other noise producing device or...

  3. 78 FR 18416 - Sixth Meeting: RTCA Special Committee 226, Audio Systems and Equipment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ... 226, Audio Systems and Equipment AGENCY: Federal Aviation Administration (FAA), U.S. Department of Transportation (DOT). ACTION: Meeting Notice of RTCA Special Committee 226, Audio Systems and Equipment. SUMMARY... 226, Audio Systems and Equipment. DATES: The meeting will be held April 15-17, 2013 from 9:00 a.m.-5...

  4. Could Audio-Described Films Benefit from Audio Introductions? An Audience Response Study

    ERIC Educational Resources Information Center

    Romero-Fresco, Pablo; Fryer, Louise

    2013-01-01

    Introduction: Time constraints limit the quantity and type of information conveyed in audio description (AD) for films, in particular the cinematic aspects. Inspired by introductory notes for theatre AD, this study developed audio introductions (AIs) for "Slumdog Millionaire" and "Man on Wire." Each AI comprised 10 minutes of…

  5. Audio-Vision: Audio-Visual Interaction in Desktop Multimedia.

    ERIC Educational Resources Information Center

    Daniels, Lee

    Although sophisticated multimedia authoring applications are now available to amateur programmers, the use of audio in these programs has been inadequate. Due to the lack of research in the use of audio in instruction, there are few resources to assist the multimedia producer in using sound effectively and efficiently. This paper addresses the…

  6. Audio Frequency Analysis in Mobile Phones

    ERIC Educational Resources Information Center

    Aguilar, Horacio Munguía

    2016-01-01

    A new experiment using mobile phones is proposed in which the phone's audio frequency response is analyzed by using the audio port to input an external signal and obtain a measurable output. This experiment shows how the limited audio bandwidth used in mobile telephony is the main cause of the poor speech quality in this service. A brief discussion is…

  7. A Longitudinal, Quantitative Study of Student Attitudes towards Audio Feedback for Assessment

    ERIC Educational Resources Information Center

    Parkes, Mitchell; Fletcher, Peter

    2017-01-01

    This paper reports on the findings of a three-year longitudinal study investigating the experiences of postgraduate level students who were provided with audio feedback for their assessment. Results indicated that students positively received audio feedback. Overall, students indicated a preference for audio feedback over written feedback. No…

  8. Audio-Tutorial Instruction: A Strategy For Teaching Introductory College Geology.

    ERIC Educational Resources Information Center

    Fenner, Peter; Andrews, Ted F.

    The rationale of audio-tutorial instruction is discussed, and the history and development of the audio-tutorial botany program at Purdue University is described. Audio-tutorial programs in geology at eleven colleges and one school are described, illustrating several ways in which programs have been developed and integrated into courses. Programs…

  9. Informal interpreting in general practice: Are interpreters' roles related to perceived control, trust, and satisfaction?

    PubMed

    Zendedel, Rena; Schouten, Barbara C; van Weert, Julia C M; van den Putte, Bas

    2018-06-01

    The aim of this observational study was twofold. First, we examined how often and which roles informal interpreters performed during consultations between Turkish-Dutch migrant patients and general practitioners (GPs). Second, relations between these roles and patients' and GPs' perceived control, trust in informal interpreters and satisfaction with the consultation were assessed. A coding instrument was developed to quantitatively code informal interpreters' roles from transcripts of 84 audio-recorded interpreter-mediated consultations in general practice. Patients' and GPs' perceived control, trust and satisfaction were assessed in a post-consultation questionnaire. Informal interpreters most often performed the conduit role (almost 25% of all coded utterances), and also frequently acted as replacers and excluders of patients and GPs by asking and answering questions on their own behalf, and by ignoring and omitting patients' and GPs' utterances. The role of information source was negatively related to patients' trust, and the role of GP excluder was negatively related to patients' perceived control. Patients and GPs are possibly insufficiently aware of the roles performed by informal interpreters, as these were barely related to patients' and GPs' perceived trust, control and satisfaction. Patients and GPs should be educated about the possible negative consequences of informal interpreting.

  10. Radio frequency analog electronics based on carbon nanotube transistors

    PubMed Central

    Kocabas, Coskun; Kim, Hoon-sik; Banks, Tony; Rogers, John A.; Pesetski, Aaron A.; Baumgardner, James E.; Krishnaswamy, S. V.; Zhang, Hong

    2008-01-01

    The potential to exploit single-walled carbon nanotubes (SWNTs) in advanced electronics represents a continuing, major source of interest in these materials. However, scalable integration of SWNTs into circuits is challenging because of difficulties in controlling the geometries, spatial positions, and electronic properties of individual tubes. We have implemented solutions to some of these challenges to yield radio frequency (RF) SWNT analog electronic devices, such as narrow band amplifiers operating in the VHF frequency band with power gains as high as 14 dB. As a demonstration, we fabricated nanotube transistor radios, in which SWNT devices provide all of the key functions, including resonant antennas, fixed RF amplifiers, RF mixers, and audio amplifiers. These results represent important first steps to practical implementation of SWNTs in high-speed analog circuits. Comparison studies indicate certain performance advantages over silicon and capabilities that complement those in existing compound semiconductor technologies. PMID:18227509

  11. Predictive motor control of sensory dynamics in Auditory Active Sensing

    PubMed Central

    Morillon, Benjamin; Hackett, Troy A.; Kajikawa, Yoshinao; Schroeder, Charles E.

    2016-01-01

    Neuronal oscillations present potential physiological substrates for brain operations that require temporal prediction. We review this idea in the context of auditory perception. Using speech as an exemplar, we illustrate how hierarchically organized oscillations can be used to parse and encode complex input streams. We then consider the motor system as a major source of rhythms (temporal priors) in auditory processing, that act in concert with attention to sharpen sensory representations and link them across areas. We discuss the anatomo-functional pathways that could mediate this audio-motor interaction, and notably the potential role of the somatosensory cortex. Finally, we reposition temporal predictions in the context of internal models, discussing how they interact with feature-based or spatial predictions. We argue that complementary predictions interact synergistically according to the organizational principles of each sensory system, forming multidimensional filters crucial to perception. PMID:25594376

  12. Relationship between volcanic activity and shallow hydrothermal system at Meakandake volcano, Japan, inferred from geomagnetic and audio-frequency magnetotelluric measurements

    NASA Astrophysics Data System (ADS)

    Takahashi, Kosuke; Takakura, Shinichi; Matsushima, Nobuo; Fujii, Ikuko

    2018-01-01

    Hydrothermal activity at Meakandake volcano, Japan, from 2004 to 2014 was investigated by using long-term geomagnetic field observations and audio-frequency magnetotelluric (AMT) surveys. The total intensity of the geomagnetic field has been measured around the summit crater Ponmachineshiri since 1992 by Kakioka Magnetic Observatory. We reanalyzed an 11-year dataset of the geomagnetic total intensity distribution and used it to estimate the thermomagnetic source models responsible for the surface geomagnetic changes during four time periods (2004-2006, 2006-2008, 2008-2009 and 2013-2014). The modeled sources suggest that the first two periods correspond to a cooling phase after a phreatic eruption in 1998, the third one to a heating phase associated with a phreatic eruption in 2008, and the last one to a heating phase accompanying minor internal activity in 2013. All of the thermomagnetic sources were beneath a location on the south side of Ponmachineshiri crater. In addition, we conducted AMT surveys in 2013 and 2014 at Meakandake and constructed a two-dimensional model of the electrical resistivity structure across the volcano. Combined, the resistivity information and thermomagnetic models revealed that the demagnetization source associated with the 2008 eruptive activity, causing a change in magnetic moment about 30 to 50 times greater than the other sources, was located about 1000 m beneath Ponmachineshiri crater, within or below a zone of high conductivity (a few ohm meters), whereas the other three sources were near each other and above this zone. We interpret the conductive zone as either a hydrothermal reservoir or an impermeable clay-rich layer acting as a seal above the hydrothermal reservoir. Along with other geophysical observations, our models suggest that the 2008 phreatic eruption was triggered by a rapid influx of heat into the hydrothermal reservoir through fluid-rich fractures developed during recent seismic swarms. 
The hydrothermal reservoir remained hot after the 2008 eruption, and heat was sporadically transported upward through its low permeability ceiling.

  13. YouTube as a patient-information source for root canal treatment.

    PubMed

    Nason, K; Donnelly, A; Duncan, H F

    2016-12-01

    To assess the content and completeness of YouTube™ as an information source for patients undergoing root canal treatment procedures. YouTube™ (https://www.youtube.com/) was searched for information using three relevant treatment search terms ('endodontics', 'root canal' and 'root canal treatment'). After exclusions (language, no audio, >15 min, duplicates), 20 videos per search term were selected. General video assessment included duration, ownership, views, age, likes/dislikes, target audience and video/audio quality, whilst content was analysed under six categories ('aetiology', 'anatomy', 'symptoms', 'procedure', 'postoperative course' and 'prognosis'). Content was scored for completeness level and statistically analysed using ANOVA and post hoc Tukey's test (P < 0.05). To obtain 60 acceptable videos, 124 were assessed. Depending on the search term employed, the video content and ownership differed markedly. There was wide variation in both the number of video views and 'likes/dislikes'. The average video age was 788 days. In total, 46% of videos were 'posted' by a dentist/specialist source; however, this was search-term specific, rising to 70% of uploads for the search 'endodontic', whilst laypersons contributed 18% of uploads for the search 'root canal treatment'. Every video lacked content in the designated six categories, although 'procedure' details were covered more frequently and in better detail than other categories. Videos posted by dental professionals (P = 0.046) and commercial sources (P = 0.009) were significantly more complete than videos posted by laypeople. YouTube™ videos for endodontic search terms varied significantly by source and content and were generally incomplete. The danger of patient reliance on YouTube™ is highlighted, as is the need for endodontic professionals to play an active role in directing patients towards alternative high-quality information sources.
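    The completeness comparison above rests on a one-way ANOVA across uploader types. As a minimal sketch with hypothetical scores (not the study's data), the F statistic can be computed directly from between- and within-group sums of squares:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of groups of numeric scores."""
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)        # between-group mean square
    ms_within = ss_within / (n_total - k)    # within-group mean square
    return ms_between / ms_within

# Hypothetical completeness scores by uploader type (illustrative only):
professional = [4, 5, 4, 5, 4]
commercial = [3, 4, 3, 4, 4]
layperson = [1, 2, 2, 1, 2]
f_stat = one_way_anova_f([professional, commercial, layperson])
```

    A large F relative to the F distribution's critical value at the chosen P threshold indicates that at least one uploader group differs; the post hoc Tukey test then identifies which pairs differ.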

  14. Electrical Conductivity Imaging Using Controlled Source Electromagnetics for Subsurface Characterization

    NASA Astrophysics Data System (ADS)

    Miller, C. R.; Routh, P. S.; Donaldson, P. R.

    2004-05-01

    Controlled Source Audio-Frequency Magnetotellurics (CSAMT) is a frequency-domain electromagnetic (EM) sounding technique. CSAMT typically uses a grounded horizontal electric dipole, approximately one to two kilometers in length, as a source. Measurements of electric and magnetic field components are made at stations located ideally at least four skin depths away from the transmitter, so that the source approximates plane-wave behavior. Data are acquired in a broad frequency band sampled logarithmically from 0.1 Hz to 10 kHz. CSAMT soundings are used to detect and map resistivity contrasts in the top two to three kilometers of the subsurface. Practical applications of CSAMT soundings include mapping ground water resources; mineral and precious-metals exploration; geothermal reservoir mapping and monitoring; petroleum exploration; and geotechnical investigations. Higher-frequency data can be used to image shallow features, while lower-frequency data are sensitive to deeper structures. We have a 3D CSAMT data set consisting of phase and amplitude measurements of the Ex and Hy components of the electric and magnetic fields, respectively. The survey area is approximately 3 × 5 km. Receiver stations are situated 50 meters apart along a total of 13 lines, with 8 lines bearing approximately N60E and the remainder oriented orthogonal to them. We use an unconstrained Gauss-Newton method with positivity to invert the data. Inversion results will consist of conductivity-versus-depth profiles beneath each receiver station. These 1D profiles will be combined into a 3D subsurface conductivity image. We will include our interpretation of the subsurface conductivity structure and quantify the uncertainties associated with this interpretation.
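    The "four skin depths" criterion follows from the standard plane-wave skin depth formula, δ ≈ 503·√(ρ/f) meters for resistivity ρ in ohm-meters and frequency f in hertz. A small sketch of how this rule of thumb constrains survey design (illustrative values, not this survey's parameters):

```python
import math

def skin_depth_m(resistivity_ohm_m, freq_hz):
    """Plane-wave skin depth in meters: delta ≈ 503 * sqrt(rho / f)."""
    return 503.0 * math.sqrt(resistivity_ohm_m / freq_hz)

def min_offset_m(resistivity_ohm_m, freq_hz, n_skin_depths=4):
    """Far-field rule of thumb for minimum transmitter-receiver separation."""
    return n_skin_depths * skin_depth_m(resistivity_ohm_m, freq_hz)
```

    For example, over 100 ohm-m ground the skin depth is roughly 50 m at 10 kHz but about 5 km at 1 Hz, which is why low frequencies probe deeper structure and why the far-field offset requirement is hardest to satisfy at the low end of the band.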

  15. Effect of In-Vehicle Audio Warning System on Driver’s Speed Control Performance in Transition Zones from Rural Areas to Urban Areas

    PubMed Central

    Yan, Xuedong; Wang, Jiali; Wu, Jiawei

    2016-01-01

    Speeding is a major contributing factor to traffic crashes and frequently happens in areas where there is an abrupt change in speed limits, such as the transition zones that connect rural areas to urban areas. The purpose of this study is to investigate the effects of an in-vehicle audio warning system and a lit speed limit sign on preventing drivers' speeding behavior in transition zones. A high-fidelity driving simulator was used to establish a roadway network with the transition zone. A total of 41 participants were recruited for this experiment, and driving speed performance data were collected from the simulator. The experimental results show that the implementation of the audio warning system significantly reduced drivers' operating speed before they entered the urban area, while the lit speed limit sign had a minimal effect on improving drivers' speed control performance. Without consideration of different types of speed limit signs, it was found that male drivers generally had higher operating speeds both upstream of and within the transition zones, and larger maximum decelerations for speed reduction, than female drivers. Moreover, drivers with medium-level driving experience had higher operating speeds and were more likely to speed in the transition zones than those with low-level or high-level driving experience. PMID:27347990

  16. Immediate early gene expression following exposure to acoustic and visual components of courtship in zebra finches.

    PubMed

    Avey, Marc T; Phillmore, Leslie S; MacDougall-Shackleton, Scott A

    2005-12-07

    Sensory-driven immediate early gene (IEG) expression has been a key tool for exploring auditory perceptual areas in the avian brain. Most work on IEG expression in songbirds such as zebra finches has focused on playback of acoustic stimuli and its effect on auditory processing areas such as the caudal medial mesopallium (CMM) and caudal medial nidopallium (NCM). However, in a natural setting, the courtship displays of songbirds (including zebra finches) include visual as well as acoustic components. To determine whether the visual stimulus of a courting male modifies song-induced expression of the IEG ZENK in the auditory forebrain, we exposed male and female zebra finches to acoustic (song) and visual (dancing) components of courtship. Birds were played digital movies with either combined audio and video, audio only, video only, or neither audio nor video (control). We found significantly increased levels of ZENK response in the auditory region CMM in the two treatment groups exposed to acoustic stimuli compared to the control group. The video-only group had an intermediate response, suggesting a potential effect of visual input on activity in these auditory brain regions. Finally, we unexpectedly found a lateralization of ZENK response that was independent of sex, brain region, or treatment condition, such that ZENK immunoreactivity was consistently higher in the left hemisphere than in the right, and the majority of individual birds were left-hemisphere dominant.

  17. Responding Effectively to Composition Students: Comparing Student Perceptions of Written and Audio Feedback

    ERIC Educational Resources Information Center

    Bilbro, J.; Iluzada, C.; Clark, D. E.

    2013-01-01

    The authors compared student perceptions of audio and written feedback in order to assess what types of students may benefit from receiving audio feedback on their essays rather than written feedback. Many instructors previously have reported the advantages they see in audio feedback, but little quantitative research has been done on how the…

  18. Design and Usability Testing of an Audio Platform Game for Players with Visual Impairments

    ERIC Educational Resources Information Center

    Oren, Michael; Harding, Chris; Bonebright, Terri L.

    2008-01-01

    This article reports on the evaluation of a novel audio platform game that creates a spatial, interactive experience via audio cues. A pilot study with players with visual impairments, and usability testing comparing the visual and audio game versions using both sighted players and players with visual impairments, revealed that all the…

  19. 78 FR 57673 - Eighth Meeting: RTCA Special Committee 226, Audio Systems and Equipment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-19

    ... Committee 226, Audio Systems and Equipment AGENCY: Federal Aviation Administration (FAA), U.S. Department of Transportation (DOT). ACTION: Meeting Notice of RTCA Special Committee 226, Audio Systems and Equipment. SUMMARY... Committee 226, Audio Systems and Equipment. DATES: The meeting will be held October 8-10, 2012 from 9:00 a.m...

  20. 77 FR 37732 - Fourteenth Meeting: RTCA Special Committee 224, Audio Systems and Equipment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-22

    ... Committee 224, Audio Systems and Equipment AGENCY: Federal Aviation Administration (FAA), U.S. Department of Transportation (DOT). ACTION: Meeting Notice of RTCA Special Committee 224, Audio Systems and Equipment. SUMMARY... Committee 224, Audio Systems and Equipment. DATES: The meeting will be held July 11, 2012, from 10 a.m.-4 p...

  1. 76 FR 57923 - Establishment of Rules and Policies for the Satellite Digital Audio Radio Service in the 2310...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-19

    ... Rules and Policies for the Satellite Digital Audio Radio Service in the 2310-2360 MHz Frequency Band... Digital Audio Radio Service (SDARS) Second Report and Order. The information collection requirements were... of these rule sections. See Satellite Digital Audio Radio Service (SDARS) Second Report and Order...

  2. The Use of Asynchronous Audio Feedback with Online RN-BSN Students

    ERIC Educational Resources Information Center

    London, Julie E.

    2013-01-01

    The use of audio technology by online nursing educators is a recent phenomenon. Research has been conducted in the area of audio technology in different domains and populations, but very few researchers have focused on nursing. Preliminary results have indicated that using audio in place of text can increase student cognition and socialization.…

  3. Computerized Audio-Visual Instructional Sequences (CAVIS): A Versatile System for Listening Comprehension in Foreign Language Teaching.

    ERIC Educational Resources Information Center

    Aleman-Centeno, Josefina R.

    1983-01-01

    Discusses the development and evaluation of CAVIS, which consists of an Apple microcomputer used with audiovisual dialogs. Includes research on the effects of three conditions: (1) computer with audio and visual, (2) computer with audio alone and (3) audio alone in short-term and long-term recall. (EKN)

  4. Low-delay predictive audio coding for the HIVITS HDTV codec

    NASA Astrophysics Data System (ADS)

    McParland, A. K.; Gilchrist, N. H. C.

    1995-01-01

    The status of work relating to predictive audio coding, as part of the European project on High Quality Video Telephone and HD(TV) Systems (HIVITS), is reported. The predictive coding algorithm is developed, along with six-channel audio coding and decoding hardware. Demonstrations of the audio codec operating in conjunction with the video codec, are given.

  5. Open-Source Multi-Language Audio Database for Spoken Language Processing Applications

    DTIC Science & Technology

    2012-12-01

    ... Mandarin, and Russian. Approximately 30 hours of speech were collected for each language. Each passage has been carefully transcribed at the ... manual and automatic methods. The Russian passages have not yet been marked at the phonetic level. Another phase of the work was to explore ... YouTube. 300 passages were collected in each of three languages: English, Mandarin, and Russian.

  6. Information acquisition from audio-video-data sources: an experimental study on remote diagnosis. The LOTAS Group.

    PubMed

    Xiao, Y; MacKenzie, C; Orasanu, J; Spencer, R; Rahman, A; Gunawardane, V

    1999-01-01

    To determine what information sources are used during a remote diagnosis task. Experienced trauma care providers viewed segments of videotaped initial trauma patient resuscitation and airway management. Experiment 1 collected responses from anesthesiologists to probing questions during and after the presentation of recorded video materials. Experiment 2 collected the responses from three types of care providers (anesthesiologists, nurses, and surgeons). Written and verbal responses were scored according to detection of critical events in video materials and categorized according to their content. Experiment 3 collected visual scanning data using an eyetracker during the viewing of recorded video materials from the three types of care providers. Eye-gaze data were analyzed in terms of focus on various parts of the videotaped materials. Care providers were found to be unable to detect several critical events. The three groups of subjects studied (anesthesiologists, nurses, and surgeons) focused on different aspects of videotaped materials. When the remote events and activities are multidisciplinary and rapidly changing, experts linked with audio-video-data connections may encounter difficulties in comprehending remote activities, and their information usage may be biased. Special training is needed for the remote decision-maker to appreciate tasks outside his or her speciality and beyond the boundaries of traditional divisions of labor.

  7. Inductive Interference in Rapid Transit Signaling Systems. Volume 1. Theory and Background.

    DOT National Transportation Integrated Search

    1986-05-01

    This report describes the mechanism of inductive interference to audio frequency (AF) signaling systems used in rail transit operations, caused by rail transit vehicles with chopper propulsion control. Choppers are switching circuits composed of high...

  8. 14 CFR 382.69 - What requirements must carriers meet concerning the accessibility of videos, DVDs, and other...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... carriers must use an equivalent non-video alternative for transmitting the briefing to passengers with... audio-visual displays played on aircraft for informational purposes that were created under your control...

  9. 14 CFR 382.69 - What requirements must carriers meet concerning the accessibility of videos, DVDs, and other...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... carriers must use an equivalent non-video alternative for transmitting the briefing to passengers with... audio-visual displays played on aircraft for informational purposes that were created under your control...

  10. 14 CFR 382.69 - What requirements must carriers meet concerning the accessibility of videos, DVDs, and other...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... carriers must use an equivalent non-video alternative for transmitting the briefing to passengers with... audio-visual displays played on aircraft for informational purposes that were created under your control...

  11. Learning diagnostic models using speech and language measures.

    PubMed

    Peintner, Bart; Jarrold, William; Vergyriy, Dimitra; Richey, Colleen; Tempini, Maria Luisa Gorno; Ogar, Jennifer

    2008-01-01

    We describe results that show the effectiveness of machine learning in the automatic diagnosis of certain neurodegenerative diseases, several of which alter speech and language production. We analyzed audio from 9 control subjects and 30 patients diagnosed with one of three subtypes of Frontotemporal Lobar Degeneration. From this data, we extracted features of the audio signal and the words the patient used, which were obtained using our automated transcription technologies. We then automatically learned models that predict the diagnosis of the patient using these features. Our results show that learned models over these features predict diagnosis with accuracy significantly better than random. Future studies using higher quality recordings will likely improve these results.
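    The learning step described above can be caricatured with a nearest-centroid classifier over toy two-dimensional feature vectors. The numbers below (standing in for quantities such as speech rate and mean pause length) and the classifier choice are illustrative only; the study's actual features and learned models are richer.

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def train(labelled):
    """labelled: dict mapping diagnosis label -> list of feature vectors."""
    return {label: centroid(vecs) for label, vecs in labelled.items()}

def predict(model, vector):
    """Assign the label of the nearest class centroid (Euclidean distance)."""
    return min(model, key=lambda label: math.dist(model[label], vector))

# Hypothetical training data: [speech-rate-like, pause-length-like] features.
model = train({
    "control": [[4.2, 0.3], [4.0, 0.4], [4.4, 0.2]],
    "patient": [[2.1, 1.2], [1.8, 1.5], [2.3, 1.1]],
})
```

    Accuracy "significantly better than random" would then be established by cross-validating such a model against the known diagnoses.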

  12. Orchestrating the Move to Student-Driven Learning

    ERIC Educational Resources Information Center

    Kallick, Bena; Zmuda, Allison

    2017-01-01

    "We might view the movement from teacher-directed to student-driven learning as a set of controls, much like the controls on an audio sound board," write Bena Kallick and Allison Zmuda. For each element of personalization, the teacher can turn the volume up or down, amplifying or reducing the amount of student agency depending on the…

  13. The Application of Acoustic Measurements and Audio Recordings for Diagnosis of In-Flight Hardware Anomalies

    NASA Technical Reports Server (NTRS)

    Welsh, David; Denham, Samuel; Allen, Christopher

    2011-01-01

    In many cases, an initial symptom of hardware malfunction is unusual or unexpected acoustic noise. Many industries such as automotive, heating and air conditioning, and petro-chemical processing use noise and vibration data along with rotating machinery analysis techniques to identify noise sources and correct hardware defects. The NASA/Johnson Space Center Acoustics Office monitors the acoustic environment of the International Space Station (ISS) through periodic sound level measurement surveys. Trending of the sound level measurement survey results can identify in-flight hardware anomalies. The crew of the ISS also serves as a "detection tool" in identifying unusual hardware noises; in these cases the spectral analysis of audio recordings made on orbit can be used to identify hardware defects that are related to rotating components such as fans, pumps, and compressors. In this paper, three examples of the use of sound level measurements and audio recordings for the diagnosis of in-flight hardware anomalies are discussed: identification of blocked inter-module ventilation (IMV) ducts, diagnosis of abnormal ISS Crew Quarters rack exhaust fan noise, and the identification and replacement of a defective flywheel assembly in the Treadmill with Vibration Isolation System (TVIS) hardware. In each of these examples, crew time was saved by identifying the off-nominal component or condition that existed and in directing in-flight maintenance activities to address and correct each of these problems.
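    The spectral-analysis step reduces to locating the strongest peaks in a recording's spectrum and matching them against the rotational rates of candidate fans, pumps, or compressors. A minimal sketch, assuming a mono signal array (the function name and windowing choice are illustrative, not the Acoustics Office's tooling):

```python
import numpy as np

def dominant_frequency(signal, sample_rate):
    """Return the frequency (Hz) of the strongest spectral peak, ignoring DC."""
    windowed = signal * np.hanning(len(signal))      # taper to reduce leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[0] = 0.0                                # ignore the DC component
    return freqs[int(np.argmax(spectrum))]
```

    Comparing the returned peak against a fan's nominal shaft rate (and its blade-pass harmonics) is what lets an abnormal recording point to a specific rotating component.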

  14. Dual-Layer Video Encryption using RSA Algorithm

    NASA Astrophysics Data System (ADS)

    Chadha, Aman; Mallik, Sushmit; Chadha, Ankit; Johar, Ravdeep; Mani Roja, M.

    2015-04-01

This paper proposes a video encryption algorithm using RSA and a Pseudo Noise (PN) sequence, aimed at applications requiring sensitive video information transfers. The system is primarily designed to work with files encoded using the Audio Video Interleaved (AVI) codec, although it can be easily ported for use with Moving Picture Experts Group (MPEG) encoded files. The audio and video components of the source separately undergo two layers of encryption to ensure a reasonable level of security. Encryption of the video component involves applying the RSA algorithm followed by the PN-based encryption. Similarly, the audio component is first encrypted using PN and further subjected to encryption using the Discrete Cosine Transform. By combining these techniques, an efficient system has been put forth that resists security breaches and attacks while achieving favorable values of parameters such as encryption/decryption speed, encryption/decryption ratio, and visual degradation. For applications requiring encryption of sensitive data wherein stringent security requirements are of prime concern, the system is found to yield negligible similarities in visual perception between the original and the encrypted video sequence. For applications wherein visual similarity is not of major concern, we limit the encryption task to a single level of encryption, accomplished using RSA, thereby quickening the encryption process. Although some similarity between the original and encrypted video is observed in this case, it is not sufficient to make the content of the video comprehensible.
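
    The PN layer of such a scheme is typically a keystream XORed with the data. The abstract does not specify the PN generator, so the sketch below uses a generic 16-bit linear-feedback shift register (LFSR) as an illustrative stand-in; the seed and tap positions are our own choices, not the paper's.

```python
def pn_keystream(seed, taps, nbytes):
    """Pseudo-noise bytes from a 16-bit Fibonacci LFSR (illustrative generator)."""
    state = seed & 0xFFFF
    out = bytearray()
    for _ in range(nbytes):
        byte = 0
        for _ in range(8):
            bit = 0
            for tap in taps:                 # feedback bit = XOR of tapped bits
                bit ^= (state >> tap) & 1
            byte = (byte << 1) | bit
            state = ((state << 1) | bit) & 0xFFFF
        out.append(byte)
    return bytes(out)

def pn_crypt(data, seed=0xACE1, taps=(15, 13, 12, 10)):
    """XOR data with the PN keystream; applying it twice restores the input."""
    keystream = pn_keystream(seed, taps, len(data))
    return bytes(a ^ b for a, b in zip(data, keystream))

frame = b"example video frame bytes"
encrypted = pn_crypt(frame)          # PN layer applied to a frame's bytes
restored = pn_crypt(encrypted)       # same operation inverts it
```

    In the paper's scheme this XOR layer sits alongside RSA, which supplies the asymmetric-key layer.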

  15. Audio-tactile integration and the influence of musical training.

    PubMed

    Kuchenbuch, Anja; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Pantev, Christo

    2014-01-01

Perception of our environment is a multisensory experience; information from different sensory systems, such as the auditory, visual, and tactile systems, is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration place strong demands on the underlying networks, but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and whether musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex, and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training.

  16. 47 CFR 73.402 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Digital Audio Broadcasting § 73.402 Definitions. (a) DAB. Digital audio broadcast stations are those radio... into multiple channels for additional audio programming uses. (g) Datacasting. Subdividing the digital...

  17. 47 CFR 73.402 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Digital Audio Broadcasting § 73.402 Definitions. (a) DAB. Digital audio broadcast stations are those radio... into multiple channels for additional audio programming uses. (g) Datacasting. Subdividing the digital...

  18. 47 CFR 73.402 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Digital Audio Broadcasting § 73.402 Definitions. (a) DAB. Digital audio broadcast stations are those radio... into multiple channels for additional audio programming uses. (g) Datacasting. Subdividing the digital...

  19. 47 CFR 73.402 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Digital Audio Broadcasting § 73.402 Definitions. (a) DAB. Digital audio broadcast stations are those radio... into multiple channels for additional audio programming uses. (g) Datacasting. Subdividing the digital...

  20. Open-Loop Audio-Visual Stimulation (AVS): A Useful Tool for Management of Insomnia?

    PubMed

    Tang, Hsin-Yi Jean; Riegel, Barbara; McCurry, Susan M; Vitiello, Michael V

    2016-03-01

Audio Visual Stimulation (AVS), a form of neurofeedback, is a non-pharmacological intervention that has been used for both performance enhancement and symptom management. We review the history of AVS, its two sub-types (closed- and open-loop), and discuss its clinical implications. We also describe a promising new application of AVS to improve sleep, and potentially decrease pain. AVS research can be traced back to the late 1800s. AVS's efficacy has been demonstrated for both performance enhancement and symptom management. Although AVS is commonly used in clinical settings, there is limited literature evaluating clinical outcomes and mechanisms of action. One of the challenges to AVS research is the lack of standardized terms, which makes systematic review and literature consolidation difficult. Future studies using AVS as an intervention should: (1) use operational definitions that are consistent with the existing literature, such as AVS, Audio-visual Entrainment, or Light and Sound Stimulation; (2) provide a clear rationale for the chosen training frequency modality; (3) use a randomized controlled design; and (4) follow the Consolidated Standards of Reporting Trials and/or related guidelines when disseminating results.

  1. Unsupervised Decoding of Long-Term, Naturalistic Human Neural Recordings with Automated Video and Audio Annotations

    PubMed Central

    Wang, Nancy X. R.; Olson, Jared D.; Ojemann, Jeffrey G.; Rao, Rajesh P. N.; Brunton, Bingni W.

    2016-01-01

    Fully automated decoding of human activities and intentions from direct neural recordings is a tantalizing challenge in brain-computer interfacing. Implementing Brain Computer Interfaces (BCIs) outside carefully controlled experiments in laboratory settings requires adaptive and scalable strategies with minimal supervision. Here we describe an unsupervised approach to decoding neural states from naturalistic human brain recordings. We analyzed continuous, long-term electrocorticography (ECoG) data recorded over many days from the brain of subjects in a hospital room, with simultaneous audio and video recordings. We discovered coherent clusters in high-dimensional ECoG recordings using hierarchical clustering and automatically annotated them using speech and movement labels extracted from audio and video. To our knowledge, this represents the first time techniques from computer vision and speech processing have been used for natural ECoG decoding. Interpretable behaviors were decoded from ECoG data, including moving, speaking and resting; the results were assessed by comparison with manual annotation. Discovered clusters were projected back onto the brain revealing features consistent with known functional areas, opening the door to automated functional brain mapping in natural settings. PMID:27148018
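
    The pipeline described, clustering high-dimensional neural features and then naming the clusters with labels extracted from audio and video, can be sketched on toy data. The following is our own minimal illustration (naive single-linkage agglomerative clustering plus majority-label annotation), not the authors' code, and the two synthetic "behavioural states" stand in for real ECoG features.

```python
import numpy as np

def single_linkage(points, n_clusters):
    """Naive single-linkage agglomerative clustering (O(n^3), for illustration)."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(np.linalg.norm(points[i] - points[j])
                        for i in clusters[a] for j in clusters[b])
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a] += clusters.pop(b)       # merge the closest pair
    return clusters

def annotate(clusters, labels):
    """Name each cluster by the majority behavioural label of its members."""
    names = []
    for members in clusters:
        member_labels = [labels[i] for i in members]
        names.append(max(set(member_labels), key=member_labels.count))
    return names

# Toy stand-in for ECoG features: two well-separated behavioural states.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0.0, 0.1, (10, 3)),     # "rest" segments
                   rng.normal(5.0, 0.1, (10, 3))])    # "speak" segments
labels = ["rest"] * 10 + ["speak"] * 10               # from audio/video annotation
clusters = single_linkage(feats, 2)
cluster_names = annotate(clusters, labels)
```

    The key point of the unsupervised approach is that clustering uses only the neural features; the audio/video labels enter afterwards, purely to name what was found.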

  2. Audio-visual biofeedback for respiratory-gated radiotherapy: Impact of audio instruction and audio-visual biofeedback on respiratory-gated radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George, Rohini; Department of Biomedical Engineering, Virginia Commonwealth University, Richmond, VA; Chung, Theodore D.

    2006-07-01

Purpose: Respiratory gating is a commercially available technology for reducing the deleterious effects of motion during imaging and treatment. The efficacy of gating is dependent on the reproducibility within and between respiratory cycles during imaging and treatment. The aim of this study was to determine whether audio-visual biofeedback can improve respiratory reproducibility by decreasing residual motion and therefore increasing the accuracy of gated radiotherapy. Methods and Materials: A total of 331 respiratory traces were collected from 24 lung cancer patients. The protocol consisted of five breathing training sessions spaced about a week apart. Within each session the patients initially breathed without any instruction (free breathing), with audio instructions and with audio-visual biofeedback. Residual motion was quantified by the standard deviation of the respiratory signal within the gating window. Results: Audio-visual biofeedback significantly reduced residual motion compared with free breathing and audio instruction. Displacement-based gating has lower residual motion than phase-based gating. Little reduction in residual motion was found for duty cycles less than 30%; for duty cycles above 50% there was a sharp increase in residual motion. Conclusions: The efficiency and reproducibility of gating can be improved by: incorporating audio-visual biofeedback, using a 30-50% duty cycle, gating during exhalation, and using displacement-based gating.
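
    Residual motion, as defined above, is the standard deviation of the respiratory signal inside the gating window. The sketch below is a hypothetical illustration of displacement-based gating, not the study's protocol: the window is anchored at end-exhale and widened until it admits the requested duty cycle, and the synthetic trace and its drift term are our own assumptions.

```python
import numpy as np

def residual_motion(trace, duty_cycle):
    """Std-dev of the respiratory signal inside a displacement-based gating window.

    The window is anchored at end-exhale (the trace minimum) and widened until
    the requested fraction of samples (the duty cycle) falls inside it.
    """
    threshold = np.quantile(trace, duty_cycle)
    gated = trace[trace <= threshold]
    return float(gated.std())

# Synthetic trace: 4 s breathing cycles with a slow baseline drift, the kind of
# irreproducibility that biofeedback aims to reduce.
t = np.linspace(0.0, 20.0, 2000)
trace = 1.0 - np.cos(2 * np.pi * t / 4.0) + 0.05 * t
```

    Widening the duty cycle admits samples farther from end-exhale, so residual motion grows with duty cycle, consistent with the sharp increase the study reports above 50%.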

  3. Comparing the Effects of Classroom Audio-Recording and Video-Recording on Preservice Teachers' Reflection of Practice

    ERIC Educational Resources Information Center

    Bergman, Daniel

    2015-01-01

    This study examined the effects of audio and video self-recording on preservice teachers' written reflections. Participants (n = 201) came from a secondary teaching methods course and its school-based (clinical) fieldwork. The audio group (n[subscript A] = 106) used audio recorders to monitor their teaching in fieldwork placements; the video group…

  4. Transana Qualitative Video and Audio Analysis Software as a Tool for Teaching Intellectual Assessment Skills to Graduate Psychology Students

    ERIC Educational Resources Information Center

    Rush, S. Craig

    2014-01-01

    This article draws on the author's experience using qualitative video and audio analysis, most notably through use of the Transana qualitative video and audio analysis software program, as an alternative method for teaching IQ administration skills to students in a graduate psychology program. Qualitative video and audio analysis may be useful for…

  5. Development and Assessment of Web Courses That Use Streaming Audio and Video Technologies.

    ERIC Educational Resources Information Center

    Ingebritsen, Thomas S.; Flickinger, Kathleen

    Iowa State University, through a program called Project BIO (Biology Instructional Outreach), has been using RealAudio technology for about 2 years in college biology courses that are offered entirely via the World Wide Web. RealAudio is a type of streaming media technology that can be used to deliver audio content and a variety of other media…

  6. Audio distribution and Monitoring Circuit

    NASA Technical Reports Server (NTRS)

    Kirkland, J. M.

    1983-01-01

A versatile circuit accepts and distributes TV audio signals. The three-meter audio distribution and monitoring circuit provides flexibility in monitoring, mixing, and distributing audio inputs and outputs at various signal and impedance levels. Program material can be monitored simultaneously on three channels, or a single-channel version can be built to monitor transmitted or received signal levels, drive speakers, interface to building communications, and drive long-line circuits.

  7. Hearing You Loud and Clear: Student Perspectives of Audio Feedback in Higher Education

    ERIC Educational Resources Information Center

    Gould, Jill; Day, Pat

    2013-01-01

    The use of audio feedback for students in a full-time community nursing degree course is appraised. The aim of this mixed methods study was to examine student views on audio feedback for written assignments. Questionnaires and a focus group were used to capture student opinion of this pilot project. The majority of students valued audio feedback…

  8. How we give personalised audio feedback after summative OSCEs.

    PubMed

    Harrison, Christopher J; Molyneux, Adrian J; Blackwell, Sara; Wass, Valerie J

    2015-04-01

    Students often receive little feedback after summative objective structured clinical examinations (OSCEs) to enable them to improve their performance. Electronic audio feedback has shown promise in other educational areas. We investigated the feasibility of electronic audio feedback in OSCEs. An electronic OSCE system was designed, comprising (1) an application for iPads allowing examiners to mark in the key consultation skill domains, provide "tick-box" feedback identifying strengths and difficulties, and record voice feedback; (2) a feedback website giving students the opportunity to view/listen in multiple ways to the feedback. Acceptability of the audio feedback was investigated, using focus groups with students and questionnaires with both examiners and students. 87 (95%) students accessed the examiners' audio comments; 83 (90%) found the comments useful and 63 (68%) reported changing the way they perform a skill as a result of the audio feedback. They valued its highly personalised, relevant nature and found it much more useful than written feedback. Eighty-nine per cent of examiners gave audio feedback to all students on their stations. Although many found the method easy, lack of time was a factor. Electronic audio feedback provides timely, personalised feedback to students after a summative OSCE provided enough time is allocated to the process.

  9. Source monitoring and false memories in children: relation to certainty and executive functioning.

    PubMed

    Ruffman, T; Rustin, C; Garnham, W; Parkin, A J

    2001-10-01

    We presented children aged 6, 8, and 10 years with a video and then an audio tape about a dog named Mick. Some information was repeated in the two sources and some was unique to one source. We examined: (a) children's hit rate for remembering whether events occurred and their tendency to make false alarms, (b) their memory for the context in which events occurred (source monitoring), (c) their certainty about hits, false alarms, and source, and (d) whether working memory and inhibition were related to hits, false alarms, and source monitoring. The certainty ratings revealed deficits in children's understanding of when they had erred on source questions and of when they had made false alarms. In addition, inhibitory ability accounted for unique variance in the ability to avoid false alarms and in some kinds of source monitoring but not hits. In contrast, working memory tended to correlate with all forms of memory including hits. Copyright 2001 Academic Press.

  10. Audio Steganography with Embedded Text

    NASA Astrophysics Data System (ADS)

    Teck Jian, Chua; Chai Wen, Chuah; Rahman, Nurul Hidayah Binti Ab.; Hamid, Isredza Rahmi Binti A.

    2017-08-01

Audio steganography is about hiding a secret message inside audio. It is a technique used to secure the transmission of secret information or to hide its existence, and it can also provide confidentiality if the message is encrypted. To date, most steganography software, such as Mp3Stego and DeepSound, uses a block cipher such as the Advanced Encryption Standard or the Data Encryption Standard to encrypt the secret message. This is good security practice. However, the encrypted message may become too long to embed in the audio and may distort the cover audio if the secret message is long. Hence, there is a need to encrypt the message with a stream cipher before embedding it in the audio: a stream cipher encrypts bit by bit, whereas a block cipher encrypts fixed-length blocks, which results in longer output than a stream cipher. Accordingly, an audio steganography system that embeds text encrypted with the Rivest Cipher 4 (RC4) stream cipher is designed, developed, and tested in this project.
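
    The two stages named in the abstract, RC4 encryption of the text followed by embedding in the audio, can be sketched directly. RC4 itself is standard; the least-significant-bit (LSB) embedding below is one common steganographic choice and is our assumption, since the abstract does not state the embedding method. The cover samples are synthetic stand-ins for PCM audio.

```python
def rc4(key, data):
    """RC4 stream cipher: XOR data with the keystream (the same call decrypts)."""
    S = list(range(256))
    j = 0
    for i in range(256):                          # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                             # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def embed_lsb(samples, payload):
    """Hide payload bits (LSB-first per byte) in the samples' least-significant bits."""
    bits = [(byte >> k) & 1 for byte in payload for k in range(8)]
    stego = list(samples)
    for n, bit in enumerate(bits):
        stego[n] = (stego[n] & ~1) | bit
    return stego

def extract_lsb(samples, nbytes):
    """Recover nbytes of payload from the samples' least-significant bits."""
    bits = [s & 1 for s in samples[:nbytes * 8]]
    return bytes(sum(bits[i * 8 + k] << k for k in range(8)) for i in range(nbytes))

key, message = b"secret-key", b"meet at dawn"
cover = list(range(0, 4096, 2))                   # stand-in for 16-bit PCM samples
stego = embed_lsb(cover, rc4(key, message))
recovered = rc4(key, extract_lsb(stego, len(message)))
```

    Note that LSB embedding changes each affected sample by at most one quantisation step, which is why the cover audio remains perceptually intact.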

  11. Audio-visual speech experience with age influences perceived audio-visual asynchrony in speech.

    PubMed

    Alm, Magnus; Behne, Dawn

    2013-10-01

    Previous research indicates that perception of audio-visual (AV) synchrony changes in adulthood. Possible explanations for these age differences include a decline in hearing acuity, a decline in cognitive processing speed, and increased experience with AV binding. The current study aims to isolate the effect of AV experience by comparing synchrony judgments from 20 young adults (20 to 30 yrs) and 20 normal-hearing middle-aged adults (50 to 60 yrs), an age range for which a decline of cognitive processing speed is expected to be minimal. When presented with AV stop consonant syllables with asynchronies ranging from 440 ms audio-lead to 440 ms visual-lead, middle-aged adults showed significantly less tolerance for audio-lead than young adults. Middle-aged adults also showed a greater shift in their point of subjective simultaneity than young adults. Natural audio-lead asynchronies are arguably more predictable than natural visual-lead asynchronies, and this predictability may render audio-lead thresholds more prone to experience-related fine-tuning.

  12. Ultrasonic speech translator and communications system

    DOEpatents

    Akerman, M.A.; Ayers, C.W.; Haynes, H.D.

    1996-07-23

    A wireless communication system undetectable by radio frequency methods for converting audio signals, including human voice, to electronic signals in the ultrasonic frequency range, transmitting the ultrasonic signal by way of acoustical pressure waves across a carrier medium, including gases, liquids, or solids, and reconverting the ultrasonic acoustical pressure waves back to the original audio signal. The ultrasonic speech translator and communication system includes an ultrasonic transmitting device and an ultrasonic receiving device. The ultrasonic transmitting device accepts as input an audio signal such as human voice input from a microphone or tape deck. The ultrasonic transmitting device frequency modulates an ultrasonic carrier signal with the audio signal producing a frequency modulated ultrasonic carrier signal, which is transmitted via acoustical pressure waves across a carrier medium such as gases, liquids or solids. The ultrasonic receiving device converts the frequency modulated ultrasonic acoustical pressure waves to a frequency modulated electronic signal, demodulates the audio signal from the ultrasonic carrier signal, and conditions the demodulated audio signal to reproduce the original audio signal at its output. 7 figs.
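
    The patent's signal chain, frequency-modulating an ultrasonic carrier with audio and recovering the audio by FM demodulation, can be sketched numerically. The sketch below is our own illustration (a 40 kHz carrier, a 440 Hz test tone in place of voice, and a simple quadrature demodulator); the patent describes analog circuitry, and the sample rate, carrier, and deviation values here are assumptions chosen for the example.

```python
import numpy as np

fs, fc, dev = 192_000, 40_000, 5_000   # sample rate, ultrasonic carrier, peak deviation (Hz)
t = np.arange(fs) / fs                 # one second of signal
audio = np.sin(2 * np.pi * 440 * t)    # stand-in for a voice input

# Transmitter: frequency-modulate the ultrasonic carrier with the audio.
phase = 2 * np.pi * (fc * t + dev * np.cumsum(audio) / fs)
tx = np.sin(phase)                     # what travels as acoustic pressure waves

# Receiver: mix down to baseband with a quadrature local oscillator, low-pass
# filter, then differentiate the unwrapped phase to get the instantaneous
# frequency deviation, which is the original audio.
lo = np.exp(-2j * np.pi * fc * t)
baseband = tx * lo
for _ in range(2):                     # two moving-average passes as the low-pass
    baseband = np.convolve(baseband, np.ones(16) / 16, mode="same")
recovered = np.diff(np.unwrap(np.angle(baseband))) * fs / (2 * np.pi) / dev
```

    The recovered signal tracks the modulating audio closely away from the edges of the buffer, which is the essence of the patent's "reconverting ... back to the original audio signal".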

  13. Investigating Perceptual Biases, Data Reliability, and Data Discovery in a Methodology for Collecting Speech Errors From Audio Recordings.

    PubMed

    Alderete, John; Davies, Monica

    2018-04-01

This work describes a methodology of collecting speech errors from audio recordings and investigates how some of its assumptions affect data quality and composition. Speech errors of all types (sound, lexical, syntactic, etc.) were collected by eight data collectors from audio recordings of unscripted English speech. Analysis of these errors showed that: (i) different listeners find different errors in the same audio recordings, but (ii) the frequencies of error patterns are similar across listeners; (iii) errors collected "online" using on-the-spot observational techniques are more likely to be affected by perceptual biases than "offline" errors collected from audio recordings; and (iv) datasets built from audio recordings can be explored and extended in a number of ways that traditional corpus studies cannot be.

  14. Health marketing information: an assessment of past and future utilization patterns.

    PubMed

    McSurely, H B; Fullerton, S

    1995-01-01

    A sample of 108 members of the Academy of Health Services Marketing provided bibliographic citations of 629 sources of information which have been important to them in their jobs. The results indicate that the propensity to rely upon a source is dependent upon the topic of the information sought. The sources under scrutiny were consultants, books, journals, magazines, seminars, conferences, video tapes, and audio tapes. The topics considered included the variables of the marketing mix as well as market planning and marketing research. The discussion provides insight about where seekers of health care marketing knowledge go for specific kinds of information. It also suggests types of media that information-providers should consider for dissemination of their material.

  15. ASTP video tape recorder ground support equipment (audio/CTE splitter/interleaver). Operations manual

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A descriptive handbook for the audio/CTE splitter/interleaver (RCA part No. 8673734-502) was presented. This unit is designed to perform two major functions: extract audio and time data from an interleaved video/audio signal (splitter section), and provide a test interleaved video/audio/CTE signal for the system (interleaver section). It is a rack mounting unit 7 inches high, 19 inches wide, 20 inches deep, mounted on slides for retracting from the rack, and weighs approximately 40 pounds. The following information is provided: installation, operation, principles of operation, maintenance, schematics and parts lists.

  16. Paper-Based Textbooks with Audio Support for Print-Disabled Students.

    PubMed

    Fujiyoshi, Akio; Ohsawa, Akiko; Takaira, Takuya; Tani, Yoshiaki; Fujiyoshi, Mamoru; Ota, Yuko

    2015-01-01

    Utilizing invisible 2-dimensional codes and digital audio players with a 2-dimensional code scanner, we developed paper-based textbooks with audio support for students with print disabilities, called "multimodal textbooks." Multimodal textbooks can be read with the combination of the two modes: "reading printed text" and "listening to the speech of the text from a digital audio player with a 2-dimensional code scanner." Since multimodal textbooks look the same as regular textbooks and the price of a digital audio player is reasonable (about 30 euro), we think multimodal textbooks are suitable for students with print disabilities in ordinary classrooms.

  17. Listeners' expectation of room acoustical parameters based on visual cues

    NASA Astrophysics Data System (ADS)

    Valente, Daniel L.

    Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audio-visual study, in which participants are instructed to make spatial congruency and quantity judgments in dynamic cross-modal environments. The results of these psychophysical tests suggest the importance of consilient audio-visual presentation to the legibility of an auditory scene. Several studies have looked into audio-visual interaction in room perception in recent years, but these studies rely on static images, speech signals, or photographs alone to represent the visual scene. Building on these studies, the aim is to propose a testing method that uses monochromatic compositing (blue-screen technique) to position a studio recording of a musical performance in a number of virtual acoustical environments and ask subjects to assess these environments. In the first experiment of the study, video footage was taken from five rooms varying in physical size from a small studio to a small performance hall. Participants were asked to perceptually align two distinct acoustical parameters---early-to-late reverberant energy ratio and reverberation time---of two solo musical performances in five contrasting visual environments according to their expectations of how the room should sound given its visual appearance. In the second experiment in the study, video footage shot from four different listening positions within a general-purpose space was coupled with sounds derived from measured binaural impulse responses (IRs). The relationship between the presented image, sound, and virtual receiver position was examined. It was found that many visual cues caused different perceived events of the acoustic environment. 
This included the visual attributes of the space in which the performance was located as well as the visual attributes of the performer. The addressed visual makeup of the performer included: (1) an actual video of the performance, (2) a surrogate image of the performance, for example a loudspeaker's image reproducing the performance, (3) no visual image of the performance (empty room), or (4) a multi-source visual stimulus (actual video of the performance coupled with two images of loudspeakers positioned to the left and right of the performer). For this experiment, perceived auditory events of sound were measured in terms of two subjective spatial metrics: Listener Envelopment (LEV) and Apparent Source Width (ASW). These metrics were hypothesized to be dependent on the visual imagery of the presented performance. Data were also collected by having participants match direct and reverberant sound levels for the presented audio-visual scenes. In the final experiment, participants judged spatial expectations of an ensemble of musicians presented in the five physical spaces from Experiment 1. Supporting data were accumulated in two stages. First, participants were given an audio-visual matching test, in which they were instructed to align the auditory width of a performing ensemble to a varying set of audio and visual cues. In the second stage, a conjoint analysis design paradigm was explored to extrapolate the relative magnitude of the explored audio-visual factors in affecting three assessed response criteria: Congruency (the perceived match-up of the auditory and visual cues in the assessed performance), ASW, and LEV. Results show that both auditory and visual factors affect the collected responses, and that the two sensory modalities interact in distinct ways.
This study reveals participant resiliency in the presence of forced auditory-visual mismatch: Participants are able to adjust the acoustic component of the cross-modal environment in a statistically similar way despite randomized starting values for the monitored parameters. Subjective results of the experiments are presented along with objective measurements for verification.

  18. Musical examination to bridge audio data and sheet music

    NASA Astrophysics Data System (ADS)

    Pan, Xunyu; Cross, Timothy J.; Xiao, Liangliang; Hei, Xiali

    2015-03-01

The digitalization of audio is commonly implemented for the purpose of convenient storage and transmission of music and songs in today's digital age. Analyzing digital audio for an insightful look at a specific musical characteristic, however, can be quite challenging for various types of applications. Many existing musical analysis techniques can examine a particular piece of audio data. For example, the frequency of digital sound can be easily read and identified at a specific section in an audio file. Based on this information, we could determine the musical note being played at that instant, but what if we want a list of all the notes played in a song? While most existing methods help to provide information about a single piece of the audio data at a time, few of them can analyze the available audio file on a larger scale. The research conducted in this work considers how to further utilize the examination of audio data by storing more information from the original audio file. In practice, we develop a novel musical analysis system, Musicians Aid, to support the representation and examination of audio data. Musicians Aid solves the previous problem by storing and analyzing the audio information as it reads it rather than tossing it aside. The system can provide professional musicians with an insightful look at the music they created and advance their understanding of their work. Amateur musicians could also benefit from using it solely for the purpose of obtaining feedback about a song they were attempting to play. By comparing our system's interpretation of traditional sheet music with their own playing, a musician could ensure that what they played was correct. More specifically, the system could show them exactly where they went wrong and how to correct their mistakes. In addition, the application could be extended over the Internet to allow users to play music with one another and then review the audio data they produced.
This would be particularly useful for teaching music lessons on the web. The developed system is evaluated with songs played with guitar, keyboard, violin, and other popular musical instruments (primarily electronic or stringed instruments). The Musicians Aid system is successful at both representing and analyzing audio data and it is also powerful in assisting individuals interested in learning and understanding music.
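
    One concrete step in bridging audio data and sheet music is mapping a detected frequency to its note name, as mentioned above for a single instant of audio. Below is a minimal sketch of that mapping, using the equal-tempered scale and the MIDI note-number convention with A4 = 440 Hz; the tuning reference and the example melody are our assumptions, not part of the Musicians Aid system.

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_name(freq_hz, a4=440.0):
    """Map a detected frequency to the nearest equal-tempered note name.

    Uses the MIDI convention: A4 = 440 Hz = note number 69, 12 notes per octave.
    """
    midi = round(69 + 12 * math.log2(freq_hz / a4))
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

# A short run of detected frequencies becomes a list of notes, the kind of
# larger-scale summary the system aims to provide.
melody_hz = [261.63, 293.66, 329.63, 349.23, 392.00]
notes = [note_name(f) for f in melody_hz]
```

    Applied frame by frame across a whole recording, this mapping turns a sequence of detected frequencies into the list of notes played, which can then be compared against sheet music.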

  19. Divergence correction schemes in finite difference method for 3D tensor CSAMT in axial anisotropic media

    NASA Astrophysics Data System (ADS)

    Wang, Kunpeng; Tan, Handong; Zhang, Zhiyong; Li, Zhiqiang; Cao, Meng

    2017-05-01

Resistivity anisotropy and full-tensor controlled-source audio-frequency magnetotellurics (CSAMT) have gradually become hot research topics. However, much of the current anisotropy research for tensor CSAMT focuses only on the one-dimensional (1D) solution. As the subsurface is rarely 1D, it is necessary to study the three-dimensional (3D) model response. The staggered-grid finite difference method is an effective simulation method for 3D electromagnetic forward modelling. Previous studies have suggested using the divergence correction to constrain the iterative process when using a staggered-grid finite difference model so as to accelerate the 3D forward speed and enhance the computational accuracy. However, the traditional divergence correction method was developed assuming an isotropic medium. This paper improves the traditional isotropic divergence correction method and derivation process to meet the tensor CSAMT requirements for anisotropy, using the volume integral of the divergence equation. This method is more intuitive, enabling a simple derivation of a discrete equation and then calculation of the coefficients of the anisotropic divergence correction equation. We validate our 3D computational results by comparing them with results computed using an anisotropic controlled-source 2.5D program. The 3D resistivity anisotropy model allows us to evaluate the consequences of using the divergence correction at different frequencies and for two orthogonal finite-length sources. Our results show that the divergence correction plays an important role in 3D tensor CSAMT resistivity anisotropy research and offers a solid foundation for inversion of CSAMT data collected over an anisotropic body.
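
    The condition the divergence correction enforces can be stated compactly. In a source-free region of an axially anisotropic medium the current density must be divergence-free, and a standard form of the correction (written here in generic continuous notation as an illustration; the paper derives its discrete version from a volume integral, which differs in detail) removes the divergence of the interim electric field through a scalar potential:

```latex
\nabla \cdot \left( \hat{\sigma} \mathbf{E} \right) = 0, \qquad
\hat{\sigma} = \operatorname{diag}(\sigma_x, \sigma_y, \sigma_z),
```

```latex
\nabla \cdot \left( \hat{\sigma} \nabla \phi \right)
  = \nabla \cdot \left( \hat{\sigma} \mathbf{E}^{(k)} \right), \qquad
\mathbf{E}^{(k+1)} = \mathbf{E}^{(k)} - \nabla \phi .
```

    The anisotropic case differs from the isotropic one in that the conductivity tensor \(\hat{\sigma}\) cannot be factored out of the divergence, which is what forces the re-derivation of the correction coefficients.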

  20. An evaluation of the applicability of the telluric-electric and audio-magnetotelluric methods to mineral assessment on the Arabian Shield, Kingdom of Saudi Arabia

    USGS Publications Warehouse

    Flanigan, Vincent J.; Zablocki, Charles J.

    1984-01-01

Feasibility studies of two electromagnetic methods were made in selected areas of the Jabal Hibshi (1:250,000) quadrangle, 26F, in the Kingdom of Saudi Arabia in March of 1983. The methods tested were the natural source-field telluric-electric and audio-magnetotelluric methods developed and extensively used in recent years by the U.S. Geological Survey in some of its domestic programs related to geothermal and mineral resource assessment. Results from limited studies in the Meshaheed district, the Jabal as Silsilah ring complex, and across a portion of the Raha fault zone clearly demonstrate the appropriateness of these sub-regional scale, reconnaissance-type studies to mineral resource assessment. The favorable results obtained are largely attributed to distinctive and large contrasts in the electrical resistivity of the major rock types encountered. It appears that the predominant controlling factor governing the rock resistivities is the amount of contained clay minerals. Accordingly, unaltered (specifically, non-argillic) igneous and metamorphic rocks have very high resistivities; metasedimentary rocks of the Murdama group that contain several percent clay minerals have intermediate values of resistivity; and highly altered rocks, containing abundant clay minerals, have very low values of resistivity. Water-filled fracture porosity may be a secondary, but important, factor in some settings. However, influences from variations in interstitial or intercrystalline, water-filled porosity are probably small because these types of porosity are generally low. It is reasonable to expect similar results in other areas within the Arabian Shield.

  1. Horatio Audio-Describes Shakespeare's "Hamlet": Blind and Low-Vision Theatre-Goers Evaluate an Unconventional Audio Description Strategy

    ERIC Educational Resources Information Center

    Udo, J. P.; Acevedo, B.; Fels, D. I.

    2010-01-01

    Audio description (AD) has been introduced as one solution for providing people who are blind or have low vision with access to live theatre, film and television content. However, there is little research to inform the process, user preferences and presentation style. We present a study of a single live audio-described performance of Hart House…

  2. Speech entrainment enables patients with Broca’s aphasia to produce fluent speech

    PubMed Central

    Hubbard, H. Isabel; Hudspeth, Sarah Grace; Holland, Audrey L.; Bonilha, Leonardo; Fromm, Davida; Rorden, Chris

    2012-01-01

    A distinguishing feature of Broca’s aphasia is non-fluent halting speech typically involving one to three words per utterance. Yet, despite such profound impairments, some patients can mimic audio-visual speech stimuli enabling them to produce fluent speech in real time. We call this effect ‘speech entrainment’ and reveal its neural mechanism as well as explore its usefulness as a treatment for speech production in Broca’s aphasia. In Experiment 1, 13 patients with Broca’s aphasia were tested in three conditions: (i) speech entrainment with audio-visual feedback where they attempted to mimic a speaker whose mouth was seen on an iPod screen; (ii) speech entrainment with audio-only feedback where patients mimicked heard speech; and (iii) spontaneous speech where patients spoke freely about assigned topics. The patients produced a greater variety of words using audio-visual feedback compared with audio-only feedback and spontaneous speech. No difference was found between audio-only feedback and spontaneous speech. In Experiment 2, 10 of the 13 patients included in Experiment 1 and 20 control subjects underwent functional magnetic resonance imaging to determine the neural mechanism that supports speech entrainment. Group results with patients and controls revealed greater bilateral cortical activation for speech produced during speech entrainment compared with spontaneous speech at the junction of the anterior insula and Brodmann area 47, in Brodmann area 37, and unilaterally in the left middle temporal gyrus and the dorsal portion of Broca’s area. Probabilistic white matter tracts constructed for these regions in the normal subjects revealed a structural network connected via the corpus callosum and ventral fibres through the extreme capsule. Unilateral areas were connected via the arcuate fasciculus. In Experiment 3, all patients included in Experiment 1 participated in a 6-week treatment phase using speech entrainment to improve speech production. 
Behavioural and functional magnetic resonance imaging data were collected before and after the treatment phase. Patients were able to produce a greater variety of words with and without speech entrainment at 1 and 6 weeks after training. Treatment-related decrease in cortical activation associated with speech entrainment was found in areas of the left posterior-inferior parietal lobe. We conclude that speech entrainment allows patients with Broca’s aphasia to double their speech output compared with spontaneous speech. Neuroimaging results suggest that speech entrainment allows patients to produce fluent speech by providing an external gating mechanism that yokes a ventral language network that encodes conceptual aspects of speech. Preliminary results suggest that training with speech entrainment improves speech production in Broca’s aphasia providing a potential therapeutic method for a disorder that has been shown to be particularly resistant to treatment. PMID:23250889

  3. Self-directed study using MP3 players to improve auscultation proficiency of physicians: a randomized, controlled trial.

    PubMed

    Donato, Anthony A; Kaliyadan, Antony G; Wasser, Thomas

    2014-01-01

    Studies of physicians at all levels of training demonstrate significant deficiencies in cardiac auscultation skills. The best instructional methods to augment these skills are not known. This study was a randomized, controlled trial of 83 noncardiologist volunteers exposed to a 12-week lower cognitive load self-study group using MP3 players containing heart sound audio files compared to a group receiving a 1-time 1-hour higher cognitive load multimedia lecture using the same audio files. The primary outcome measure was change in 15-question posttest score at 4 and 12 weeks as compared to pretest on recognition of identical audio files introduced during training. In the self-study group, the association of total exposure and deliberate practice effort (estimated by standard deviation of files played/mean) to improvement in test score was measured as a secondary end point. Self-study group participants improved as compared to pretest by 4.42 ± 3.41 answers correct at 12 weeks (5.09-9.51 correct, p < .001), while those exposed to the multimedia lecture improved by an average of 1.13 ± 3.2 answers correct (4.48-5.61 correct, p = .03). In the self-study arm, improvement in the posttest was positively associated with both total exposure (β = 0.55, p < .001) and deliberate practice score (β = 0.31, p = .02). A lower cognitive load self-study of audio files improved recognition of cardiac sounds, as compared to multimedia lecture, and deliberate practice strategies improved study efficiency. More investigation is needed to assess transfer of learning to a wider range of cardiac sounds in both simulated and clinical environments. © 2014 The Alliance for Continuing Education in the Health Professions, the Society for Academic Continuing Medical Education, and the Council on Continuing Medical Education, Association for Hospital Medical Education.
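
    The deliberate-practice metric described, the standard deviation of per-file play counts divided by their mean, is the coefficient of variation. A minimal sketch (whether the paper used the sample or population standard deviation is an assumption here; the sample form is shown):

```python
import statistics

def deliberate_practice_score(plays_per_file):
    """Coefficient of variation of play counts across audio files:
    stdev(plays) / mean(plays).  Higher values indicate uneven,
    targeted practice rather than uniform listening."""
    return statistics.stdev(plays_per_file) / statistics.mean(plays_per_file)

uniform = [10, 10, 10, 10]     # every file played equally -> score 0.0
targeted = [2, 25, 3, 30]      # effort concentrated on difficult files
```

    A listener who replays the sounds they find hard scores high on this metric, which is the behaviour the study found associated with greater posttest improvement.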

  4. Audio-visual speech processing in age-related hearing loss: Stronger integration and increased frontal lobe recruitment.

    PubMed

    Rosemann, Stephanie; Thiel, Christiane M

    2018-07-15

    Hearing loss is associated with difficulties in understanding speech, especially under adverse listening conditions. In these situations, seeing the speaker improves speech intelligibility in hearing-impaired participants. On the neuronal level, previous research has shown cross-modal plastic reorganization in the auditory cortex following hearing loss leading to altered processing of auditory, visual and audio-visual information. However, how reduced auditory input affects audio-visual speech perception in hearing-impaired subjects is largely unknown. We here investigated the impact of mild to moderate age-related hearing loss on processing audio-visual speech using functional magnetic resonance imaging. Normal-hearing and hearing-impaired participants performed two audio-visual speech integration tasks: a sentence detection task inside the scanner and the McGurk illusion outside the scanner. Both tasks consisted of congruent and incongruent audio-visual conditions, as well as auditory-only and visual-only conditions. We found a significantly stronger McGurk illusion in the hearing-impaired participants, which indicates stronger audio-visual integration. Neurally, hearing loss was associated with an increased recruitment of frontal brain areas when processing incongruent audio-visual, auditory and also visual speech stimuli, which may reflect the increased effort to perform the task. Hearing loss modulated both the audio-visual integration strength measured with the McGurk illusion and brain activation in frontal areas in the sentence task, showing stronger integration and higher brain activation with increasing hearing loss. Incongruent compared to congruent audio-visual speech revealed an opposite brain activation pattern in left ventral postcentral gyrus in both groups, with higher activation in hearing-impaired participants in the incongruent condition. 
Our results indicate that even mild to moderate hearing loss affects audio-visual speech processing, accompanied by changes in brain activation particularly involving frontal areas. These changes are modulated by the extent of hearing loss. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. Unified physical mechanism of frequency-domain controlled-source electromagnetic exploration on land and in ocean

    NASA Astrophysics Data System (ADS)

    Liu, Changsheng; Lin, Jun; Zhou, Fengdao; Hu, Ruihua; Sun, Caitang

    2013-12-01

    The frequency-domain controlled-source electromagnetic method (FDCSEM) has played an important role in terrestrial and oceanic exploration. However, the measurement configurations and detection capabilities in the two environments differ considerably. This paper analyses the electromagnetic theory of FDCSEM exploration on land and in the ocean, simulates the electromagnetic responses in the two cases based on a unified physical and mathematical model, and studies the physical mechanisms behind these differences. The relationship between propagation paths and detection capability is elucidated, and a way to improve the detection capability of FDCSEM is proposed. In terrestrial exploration, FDCSEM widely adopts the measurement configuration of the controlled-source audio-frequency magnetotelluric method (CSAMT), which records the electromagnetic fields in the far zone in the broadside direction of an electric dipole source. This configuration utilizes the airwave (i.e., the Earth-surface wave) and treats the stratum wave as interference. It is sensitive to conductive targets but insensitive to resistive ones. In oceanic exploration, FDCSEM usually adopts the measurement configuration of the marine controlled-source electromagnetic method (MCSEM), which records the electromagnetic fields, commonly the horizontal electric fields, in the in-line direction of the electric dipole source. This configuration utilizes the stratum wave (i.e., the seafloor wave and the guided wave in resistive targets) and treats the airwave as interference. It is sensitive to resistive targets but relatively insensitive to conductive ones. The numerical simulation shows that both the airwave and the stratum wave contribute to FDCSEM exploration; using them together enhances target anomalies and combines the advantages of the CSAMT and MCSEM approaches. 
The contributions of the airwave and the stratum wave to the electromagnetic anomaly vary with azimuth and offset. Observation at moderate offset in the in-line direction is the best choice for exploring resistive targets, whether on land or in shallow sea, and also for exploring conductive targets in a terrestrial environment. For conductive targets in shallow sea, observation at moderate offset in the broadside direction is better. Judicious combined use of the airwave and the stratum wave will optimize FDCSEM performance.

  6. Digital Audio Application to Short Wave Broadcasting

    NASA Technical Reports Server (NTRS)

    Chen, Edward Y.

    1997-01-01

    Digital audio is becoming prevalent not only in consumer electronics but also in various broadcasting media. Terrestrial analog audio broadcasting in the AM and FM bands will eventually be replaced by digital systems.

  7. Steganalysis of recorded speech

    NASA Astrophysics Data System (ADS)

    Johnson, Micah K.; Lyu, Siwei; Farid, Hany

    2005-03-01

    Digital audio provides a suitable cover for high-throughput steganography. At 16 bits per sample and sampled at a rate of 44,100 Hz, digital audio has the bit-rate to support large messages. In addition, audio is often transient and unpredictable, facilitating the hiding of messages. Using an approach similar to our universal image steganalysis, we show that hidden messages alter the underlying statistics of audio signals. Our statistical model begins by building a linear basis that captures certain statistical properties of audio signals. A low-dimensional statistical feature vector is extracted from this basis representation and used by a non-linear support vector machine for classification. We show the efficacy of this approach on LSB embedding and Hide4PGP. While no explicit assumptions about the content of the audio are made, our technique has been developed and tested on high-quality recorded speech.
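
    LSB embedding, one of the schemes the detector is tested against, can be sketched generically. This is textbook LSB replacement in 16-bit samples, not Hide4PGP's exact format; the random cover signal stands in for recorded speech.

```python
import numpy as np

def embed_lsb(samples, bits):
    """Overwrite the least-significant bit of the first len(bits)
    16-bit samples with the message bits.  Distortion is at most
    one quantization step per sample."""
    out = samples.copy()
    n = len(bits)
    out[:n] = (out[:n] & np.int16(-2)) | np.asarray(bits, dtype=np.int16)
    return out

def extract_lsb(samples, n_bits):
    """Read the hidden bits back from the sample LSBs."""
    return (samples[:n_bits] & np.int16(1)).astype(np.uint8)

rng = np.random.default_rng(0)
cover = rng.integers(-2**15, 2**15, size=1000, dtype=np.int16)
bits = rng.integers(0, 2, size=64, dtype=np.uint8)
stego = embed_lsb(cover, bits)
```

    The perceptual change is negligible, but as the paper shows, the flattened LSB statistics are exactly what a trained classifier can pick up.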

  8. Blind estimation of reverberation time

    NASA Astrophysics Data System (ADS)

    Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; O'Brien, William D.; Lansing, Charissa R.; Feng, Albert S.

    2003-11-01

    The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. Many state-of-the-art audio signal processing algorithms, for example in hearing-aids and telephony, are expected to have the ability to characterize the listening environment, and turn on an appropriate processing strategy accordingly. Thus, a method for characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, a method for estimating RT without prior knowledge of sound sources or room geometry is presented. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time-constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.
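
    A much-simplified version of blind decay estimation can be sketched on synthetic data: model the reverberant tail as exponentially damped white noise and regress the log short-time energy, rather than running the paper's maximum-likelihood estimator with order-statistics filtering. All signal parameters below are illustrative.

```python
import numpy as np

fs = 8000
tau = 0.05                       # amplitude decay constant (s)
rng = np.random.default_rng(1)
t = np.arange(int(0.4 * fs)) / fs
x = rng.standard_normal(t.size) * np.exp(-t / tau)   # damped white noise

# Log short-time energy envelope.
frame = 80
n_frames = x.size // frame
energy = (x[:n_frames * frame].reshape(n_frames, frame) ** 2).mean(axis=1)
tc = (np.arange(n_frames) + 0.5) * frame / fs

# For an exp(-t/tau) amplitude envelope, log-energy has slope -2/tau.
slope, _ = np.polyfit(tc, np.log(energy), 1)
tau_hat = -2.0 / slope
rt60 = 3.0 * np.log(10.0) * tau_hat   # time for a 60 dB energy decay
```

    The RT60 conversion follows from solving exp(-2t/tau) = 10^-6, giving t ≈ 6.91·tau.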

  9. Coastal Surveillance

    DTIC Science & Technology

    1980-04-01

    testing of sonobuoys, an analysis of expected important factors can be made. Using the passive-sonar equation: SL − TL − (NL − DI) = DT … Source…circuitry have been removed or disabled, 2) the input stage has been modified to adapt a different hydrophone/preamplifier to the system, 3) in two…ground and high lead which carries both audio up and power down to the preamplifier. The drain load resistor for the F.E.T. preamplifier is on the…

  10. Calibration of Speed Enforcement Down-The-Road Radars

    PubMed Central

    Jendzurski, John; Paulter, Nicholas G.

    2009-01-01

    We examine the measurement uncertainty associated with different methods of calibrating the ubiquitous down-the-road (DTR) radar used in speed enforcement. These calibration methods include the use of audio frequency sources, tuning forks, a fifth wheel attached to the rear of the vehicle with the radar unit, and the speedometer of the vehicle. We also provide an analysis showing the effect of calibration uncertainty on DTR-radar speed measurement uncertainty. PMID:27504217
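
    The audio-frequency calibration rests on the Doppler relation f_d = 2·v·f_c / c: injecting a tone at f_d makes the radar read a known speed. A small sketch (the 24.150 GHz K-band carrier is an assumed example, not a value from the paper):

```python
C = 299_792_458.0  # speed of light (m/s)

def doppler_audio_freq(speed_mps, carrier_hz):
    """Doppler shift a down-the-road radar measures for a target at
    speed v: f_d = 2 * v * f_carrier / c.  An audio-frequency source
    injecting exactly this tone simulates that speed."""
    return 2.0 * speed_mps * carrier_hz / C

speed = 100 / 3.6                         # 100 km/h in m/s
f_d = doppler_audio_freq(speed, 24.150e9)  # ~4.5 kHz: an audio frequency
```

    That the shift lands in the audio band for traffic speeds is what makes audio-frequency sources (and tuning forks) practical calibration references.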

  11. Effects of aging on audio-visual speech integration.

    PubMed

    Huyse, Aurélie; Leybaert, Jacqueline; Berthommier, Frédéric

    2014-10-01

    This study investigated the impact of aging on audio-visual speech integration. A syllable identification task was presented in auditory-only, visual-only, and audio-visual congruent and incongruent conditions. Visual cues were either degraded or unmodified. Stimuli were embedded in stationary noise alternating with modulated noise. Fifteen young adults and 15 older adults participated in this study. Results showed that older adults had preserved lipreading abilities when the visual input was clear but not when it was degraded. The impact of aging on audio-visual integration also depended on the quality of the visual cues. In the visual clear condition, the audio-visual gain was similar in both groups and analyses in the framework of the fuzzy-logical model of perception confirmed that older adults did not differ from younger adults in their audio-visual integration abilities. In the visual reduction condition, the audio-visual gain was reduced in the older group, but only when the noise was stationary, suggesting that older participants could compensate for the loss of lipreading abilities by using the auditory information available in the valleys of the noise. The fuzzy-logical model of perception confirmed the significant impact of aging on audio-visual integration by showing an increased weight of audition in the older group.

  12. Design and Implementation of a Video-Zoom Driven Digital Audio-Zoom System for Portable Digital Imaging Devices

    NASA Astrophysics Data System (ADS)

    Park, Nam In; Kim, Seon Man; Kim, Hong Kook; Kim, Ji Woon; Kim, Myeong Bo; Yun, Su Won

    In this paper, we propose a video-zoom driven audio-zoom algorithm in order to provide audio zooming effects in accordance with the degree of video zoom. The proposed algorithm is designed based on a super-directive beamformer operating with a 4-channel microphone system, in conjunction with a soft masking process that considers the phase differences between microphones. The audio-zoom processed signal is thus obtained by multiplying an audio gain derived from the video-zoom level by the masked signal. Finally, a real-time audio-zoom system is implemented on an ARM Cortex-A8 with a clock speed of 600 MHz after several levels of optimization, including algorithmic, C-code, and memory optimizations. To evaluate the complexity of the proposed real-time audio-zoom system, 21.3 seconds of test data sampled at 48 kHz are used. The experiments show that the processing time for the proposed audio-zoom system occupies 14.6% or less of the ARM clock cycles. Experimental results obtained in a semi-anechoic chamber also show that the signal from the front direction can be amplified by approximately 10 dB relative to the other directions.
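
    The coupling of audio gain to zoom level can be sketched minimally. The linear-in-dB mapping and the 10 dB ceiling are assumptions loosely based on the reported front-direction amplification; the beamformer and soft mask are represented by a placeholder signal.

```python
import numpy as np

def zoom_gain(zoom_level, max_zoom, max_gain_db=10.0):
    """Map a video-zoom level to a linear audio gain.
    Assumed mapping: gain in dB grows linearly with zoom,
    reaching max_gain_db at full zoom."""
    gain_db = max_gain_db * (zoom_level / max_zoom)
    return 10.0 ** (gain_db / 20.0)

masked = np.ones(4)               # stand-in for the soft-masked beamformer output
zoomed = zoom_gain(5, 5) * masked  # full zoom -> +10 dB, i.e. gain ~3.16
```

    At zero zoom the gain is unity, so the audio only "closes in" on the subject as the picture does.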

  13. Laboratory and in-flight experiments to evaluate 3-D audio display technology

    NASA Technical Reports Server (NTRS)

    Ericson, Mark; Mckinley, Richard; Kibbe, Marion; Francis, Daniel

    1994-01-01

    Laboratory and in-flight experiments were conducted to evaluate 3-D audio display technology for cockpit applications. A 3-D audio display generator was developed which digitally encodes naturally occurring direction information onto any audio signal and presents the binaural sound over headphones. The acoustic image is stabilized for head movement by use of an electromagnetic head-tracking device. In the laboratory, a 3-D audio display generator was used to spatially separate competing speech messages to improve the intelligibility of each message. Up to a 25 percent improvement in intelligibility was measured for spatially separated speech at high ambient noise levels (115 dB SPL). During the in-flight experiments, pilots reported that spatial separation of speech communications provided a noticeable improvement in intelligibility. The use of 3-D audio for target acquisition was also investigated. In the laboratory, 3-D audio enabled the acquisition of visual targets in about two seconds average response time at 17 degrees accuracy. During the in-flight experiments, pilots correctly identified ground targets 50, 75, and 100 percent of the time at separation angles of 12, 20, and 35 degrees, respectively. In general, pilot performance in the field with the 3-D audio display generator was as expected, based on data from laboratory experiments.

  14. Impact of audio narrated animation on students' understanding and learning environment based on gender

    NASA Astrophysics Data System (ADS)

    Nasrudin, Ajeng Ratih; Setiawan, Wawan; Sanjaya, Yayan

    2017-05-01

    This study examines the impact of audio-narrated animation on students' understanding in learning the human respiratory system, based on gender. It was conducted in the eighth grade of a junior high school and investigates differences in students' understanding and learning environment between boys' and girls' classes when learning the human respiratory system using audio-narrated animation. The research method is a quasi-experiment with a matching pretest-posttest comparison-group design. The procedure comprised: (1) a preliminary study and habituation to learning with audio-narrated animation; (2) implementation of learning using audio-narrated animation and data collection; (3) analysis and discussion. The analysis shows a significant difference in students' understanding and learning environment between the boys' and girls' classes, both overall and in the achievement of specific learning indicators. The discussion relates these results to the impact of audio-narrated animation, gender characteristics, and a constructivist learning environment. It can be concluded that there is a significant difference in understanding between boys' and girls' classes when learning the human respiratory system with audio-narrated animation. Additionally, interpretation of the students' responses shows that the two classes differed in how much their agreement about the learning environment increased.

  15. Google Sky as an Interactive Content Delivery System

    NASA Astrophysics Data System (ADS)

    Parrish, Michael

    2009-05-01

    In support of the International Year of Astronomy New Media Task Group's mission to create online astronomy content, several existing technologies are being leveraged. With this undertaking in mind, Google Sky provides an immersive contextual environment for both exploration and content presentation. As such, it affords opportunities for new methods of interactive media delivery. Traditional astronomy news sources and blogs are able to literally set a story at the location of their topic. Furthermore, audio-based material can be complemented by a series of locations in the form of a guided tour. To provide automated generation and management of this content, an open-source software suite has been developed.

  16. Speech-Message Extraction from Interference Introduced by External Distributed Sources

    NASA Astrophysics Data System (ADS)

    Kanakov, V. A.; Mironov, N. A.

    2017-08-01

    This study addresses the extraction of a speech signal originating from a specific spatial point, and the calculation of the intelligibility of the extracted voice message. The problem is solved by reducing the influence of interfering speech-message sources on the extracted signal. The method is based on introducing time delays, which depend on the spatial coordinates, into the recording channels. Audio recordings of the voices of eight different people were used as test material. It is shown that increasing the number of microphones improves the intelligibility of the speech message extracted from the interference.
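
    The coordinate-dependent delay idea is essentially delay-and-sum beamforming, sketched here with integer-sample delays (a simplification of whatever fractional-delay interpolation the authors used; the array geometry and focus point are illustrative).

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(channels, fs, mic_positions, focus_point):
    """Steer a microphone array to a spatial point: delay each channel
    by its relative propagation time from the focus point, then average.
    Sound from the focus adds coherently; interferers elsewhere do not."""
    dists = np.linalg.norm(mic_positions - focus_point, axis=1)
    shifts = np.round((dists - dists.min()) / SPEED_OF_SOUND * fs).astype(int)
    n = channels.shape[1] - shifts.max()
    aligned = np.stack([ch[s:s + n] for ch, s in zip(channels, shifts)])
    return aligned.mean(axis=0)

# Demo: simulate a source at the focus point reaching a 4-mic line array.
fs = 16_000
mics = np.array([[0.0, 0, 0], [0.1, 0, 0], [0.2, 0, 0], [0.3, 0, 0]])
focus = np.array([2.0, 1.0, 0.0])
rng = np.random.default_rng(2)
sig = rng.standard_normal(2000)
d = np.linalg.norm(mics - focus, axis=1)
lags = np.round((d - d.min()) / SPEED_OF_SOUND * fs).astype(int)
channels = np.stack([np.concatenate([np.zeros(s), sig])[:2000] for s in lags])
out = delay_and_sum(channels, fs, mics, focus)   # re-aligned copy of sig
```

    An interferer at a different point arrives with mismatched delays, so its copies average incoherently, which is why adding microphones improves intelligibility.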

  17. Use of sonification in the detection of anomalous events

    NASA Astrophysics Data System (ADS)

    Ballora, Mark; Cole, Robert J.; Kruesi, Heidi; Greene, Herbert; Monahan, Ganesh; Hall, David L.

    2012-06-01

    In this paper, we describe the construction of a soundtrack that fuses stock market data with information taken from tweets. This soundtrack, or auditory display, presents the numerical and text data in such a way that anomalous events may be readily detected, even by untrained listeners. The soundtrack generation is flexible, allowing an individual listener to create a unique audio mix from the available information sources. Properly constructed, the display exploits the auditory system's sensitivities to periodicities, to dynamic changes, and to patterns. This type of display could be valuable in environments that demand high levels of situational awareness based on multiple sources of incoming information.
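
    A minimal parameter-mapping sonification, values to pitch, can be sketched as follows. The frequency range, note duration, and log-spaced mapping are arbitrary illustrative choices, not those of the authors' soundtrack.

```python
import numpy as np

def sonify(values, fs=8000, note_dur=0.1, f_lo=220.0, f_hi=880.0):
    """Render each data point as a short tone whose pitch scales with
    the value.  An anomalous point becomes a sudden pitch jump that
    even untrained listeners tend to notice."""
    v = np.asarray(values, dtype=float)
    norm = (v - v.min()) / (v.max() - v.min() + 1e-12)
    freqs = f_lo * (f_hi / f_lo) ** norm       # log-spaced pitch mapping
    t = np.arange(int(fs * note_dur)) / fs
    return np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

prices = [100, 101, 100, 102, 140, 101]        # one anomalous spike
audio = sonify(prices)                         # spike -> audible jump
```

    Multiple sources (here, prices and tweet-derived scores) can each drive a different sound parameter and be mixed per the listener's preference.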

  18. Real Time Implementation of an LPC Algorithm. Speech Signal Processing Research at CHI

    DTIC Science & Technology

    1975-05-01

    SIGNAL PROCESSING HARDWARE 2-1; 2.1 INTRODUCTION 2-1; 2.2 TWO-CHANNEL AUDIO SIGNAL SYSTEM 2-2; 2.3 MULTI-CHANNEL AUDIO SIGNAL SYSTEM 2-5; 2.3.1…Channel Audio Signal System 2-30…Messages 1-55; 1-13. Lost or Out of Order Message 1-56; 2-1. Block Diagram of Two-Channel Audio Signal System 2-3; 2-2. Block Diagram of Audio

  19. 2D approaches to 3D watermarking: state-of-the-art and perspectives

    NASA Astrophysics Data System (ADS)

    Mitrea, M.; Duţă, S.; Prêteux, F.

    2006-02-01

    With the advent of the Information Society, video, audio, speech, and 3D media represent the source of huge economic benefits. Consequently, there is a continuously increasing demand for protecting their related intellectual property rights. The solution can be provided by robust watermarking, a research field which exploded in the last 7 years. However, the largest part of the scientific effort was devoted to video and audio protection, the 3D objects being quite neglected. In the absence of any standardisation attempt, the paper starts by summarising the approaches developed in this respect and by further identifying the main challenges to be addressed in the next years. Then, it describes an original oblivious watermarking method devoted to the protection of the 3D objects represented by NURBS (Non uniform Rational B Spline) surfaces. Applied to both free form objects and CAD models, the method exhibited very good transparency (no visible differences between the marked and the unmarked model) and robustness (with respect to both traditional attacks and to NURBS processing).

  20. “I Can Never Be Too Comfortable”: Race, Gender, and Emotion at the Hospital Bedside

    PubMed Central

    Cottingham, Marci D.; Johnson, Austin H.; Erickson, Rebecca J.

    2017-01-01

    In this article, we examine how race and gender shape nurses’ emotion practice. Based on audio diaries collected from 48 nurses within two Midwestern hospital systems in the United States, we illustrate the disproportionate emotional labor that emerges among women nurses of color in the white institutional space of American health care. In this environment, women of color experience an emotional double shift as a result of negotiating patient, coworker, and supervisor interactions. In confronting racist encounters, nurses of color in our sample experience additional job-related stress, must perform disproportionate amounts of emotional labor, and experience depleted emotional resources that negatively influence patient care. Methodologically, the study extends prior research by using audio diaries collected from a racially diverse sample to capture emotion as a situationally emergent and complex feature of nursing practice. We also extend research on nursing by tracing both the sources and consequences of unequal emotion practices for nurse well-being and patient care. PMID:29094641

  1. Geophysical exploration with audio frequency magnetic fields

    NASA Astrophysics Data System (ADS)

    Labson, V. F.

    1985-12-01

    Experience with the Audio Frequency Magnetic (AFMAG) method has demonstrated that an electromagnetic exploration system using the Earth's natural audio-frequency magnetic fields as an energy source is capable of mapping subsurface electrical structure in the upper kilometer of the Earth's crust. The limitations are resolved by adapting the tensor analysis and remote-reference noise-bias removal techniques from the geomagnetic induction and magnetotelluric methods to the computation of the tippers. After a thorough spectral study of the natural magnetic fields, lightweight magnetic field sensors capable of measuring the magnetic field throughout the year were designed. A digital acquisition and processing system, with the ability to provide audio-frequency tipper results in the field, was then built to complete the apparatus. The new instrumentation was used in a study of the Mariposa, California site previously mapped with AFMAG. The usefulness of natural magnetic field data in mapping an electrically conductive body was again demonstrated. Several field examples demonstrate that the proposed procedure yields reasonable results.

  2. BOLDSync: a MATLAB-based toolbox for synchronized stimulus presentation in functional MRI.

    PubMed

    Joshi, Jitesh; Saharan, Sumiti; Mandal, Pravat K

    2014-02-15

    Precise and synchronized presentation of paradigm stimuli in functional magnetic resonance imaging (fMRI) is central to obtaining accurate information about brain regions involved in a specific task. In this manuscript, we present a new MATLAB-based toolbox, BOLDSync, for synchronized stimulus presentation in fMRI. BOLDSync provides a user friendly platform for design and presentation of visual, audio, as well as multimodal audio-visual (AV) stimuli in functional imaging experiments. We present simulation experiments that demonstrate the millisecond synchronization accuracy of BOLDSync, and also illustrate the functionalities of BOLDSync through application to an AV fMRI study. BOLDSync gains an advantage over other available proprietary and open-source toolboxes by offering a user friendly and accessible interface that affords both precision in stimulus presentation and versatility across various types of stimulus designs and system setups. BOLDSync is a reliable, efficient, and versatile solution for synchronized stimulus presentation in fMRI study. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Recording and reading of information on optical disks

    NASA Astrophysics Data System (ADS)

    Bouwhuis, G.; Braat, J. J. M.

    In storing video-program information in a spiral track on a disk, difficulties arise because the bandwidth required for video is much greater than for audio signals. An attractive solution was found in optical storage. The optical non-contact method is free of wear and allows fast random access. Initial problems in finding a suitable light source were overcome with the aid of appropriate laser devices. The basic concepts of optical storage on disks are treated insofar as they are relevant to the optical arrangement. A general description is provided of a video, a digital audio, and a data storage system. Scanning-spot microscopy for recording and reading of optical disks is discussed, with attention to recording of the signal, the readout of optical disks, the readout of digitally encoded signals, and crosstalk. Tracking systems are also considered, taking into account the generation of error signals for radial tracking and the generation of focus-error signals.

  4. Review of Audio Interfacing Literature for Computer-Assisted Music Instruction.

    ERIC Educational Resources Information Center

    Watanabe, Nan

    1980-01-01

    Presents a review of the literature dealing with audio devices used in computer assisted music instruction and discusses the need for research and development of reliable, cost-effective, random access audio hardware. (Author)

  5. Effects of a Stimulant Drug on Extraversion Level in Hyperactive Children.

    ERIC Educational Resources Information Center

    Mc Manis, Donald L.; And Others

    1978-01-01

    Seven hyperactive children in a pilot study, and 15 hyperactive and 15 nonhyperactive control children in a later study, were assessed for salivation to lemon juice stimulation, reactive inhibition on an audio-vigilance task, and visual-motor maze errors. (Author)

  6. Comparison between audio-only and audiovisual biofeedback for regulating patients' respiration during four-dimensional radiotherapy.

    PubMed

    Yu, Jesang; Choi, Ji Hoon; Ma, Sun Young; Jeung, Tae Sig; Lim, Sangwook

    2015-09-01

    To compare audio-only biofeedback with conventional audiovisual biofeedback for regulating patients' respiration during four-dimensional radiotherapy, thereby limiting the damage to healthy surrounding tissues caused by organ movement. Six healthy volunteers were assisted by audiovisual or audio-only biofeedback systems to regulate their respiration. Volunteers breathed through a mask developed for this study by following computer-generated guiding curves displayed on a screen, combined with instructional sounds. They then performed breathing following instructional sounds only. The guiding signals and the volunteers' respiratory signals were logged at 20 samples per second. The standard deviations between the guiding and respiratory curves for the audiovisual and audio-only biofeedback systems were 21.55% and 23.19%, respectively; the average correlation coefficients were 0.9778 and 0.9756, respectively. A paired t-test showed no statistically significant difference in respiratory regularity between the audiovisual and audio-only systems for the six volunteers. Audio-only biofeedback has many advantages, as patients do not require a mask and can quickly adapt to this method in the clinic.
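
    The two agreement measures reported above (the standard deviation between guiding and respiratory curves, and their correlation coefficient) can be sketched numerically. A minimal sketch, assuming the percent figure normalizes the deviation by the guide signal's range; the signal values and that normalization are illustrative assumptions, not details from the study.

```python
# Agreement between a guiding curve and a respiratory curve, both sampled
# at the study's 20 Hz rate: percent standard deviation of their difference
# (normalized to the guide's range, an assumption) and the Pearson r.
import math

def agreement_metrics(guide, resp):
    """Return (percent SD of guide-resp difference, Pearson correlation)."""
    n = len(guide)
    diff = [g - r for g, r in zip(guide, resp)]
    mean_d = sum(diff) / n
    sd = math.sqrt(sum((d - mean_d) ** 2 for d in diff) / (n - 1))
    span = max(guide) - min(guide)          # normalize SD to the guide's range
    mg, mr = sum(guide) / n, sum(resp) / n
    cov = sum((g - mg) * (r - mr) for g, r in zip(guide, resp))
    var_g = sum((g - mg) ** 2 for g in guide)
    var_r = sum((r - mr) ** 2 for r in resp)
    corr = cov / math.sqrt(var_g * var_r)
    return 100.0 * sd / span, corr

# Example: a respiratory trace lagging 0.15 s behind a sinusoidal guide
# breathing at 0.25 Hz, logged at 20 samples per second for 10 s.
fs = 20
guide = [math.sin(2 * math.pi * 0.25 * t / fs) for t in range(200)]
resp = [math.sin(2 * math.pi * 0.25 * (t / fs - 0.15)) for t in range(200)]
pct_sd, corr = agreement_metrics(guide, resp)
```

    A small lag produces a small percent deviation and a correlation close to 1, mirroring the high correlations (about 0.98) reported in the abstract.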

  7. Ultrasonic speech translator and communications system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akerman, M.A.; Ayers, C.W.; Haynes, H.D.

    1996-07-23

    A wireless communication system undetectable by radio frequency methods for converting audio signals, including human voice, to electronic signals in the ultrasonic frequency range, transmitting the ultrasonic signal by way of acoustical pressure waves across a carrier medium, including gases, liquids, or solids, and reconverting the ultrasonic acoustical pressure waves back to the original audio signal. The ultrasonic speech translator and communication system includes an ultrasonic transmitting device and an ultrasonic receiving device. The ultrasonic transmitting device accepts as input an audio signal such as human voice input from a microphone or tape deck. The ultrasonic transmitting device frequency modulates an ultrasonic carrier signal with the audio signal producing a frequency modulated ultrasonic carrier signal, which is transmitted via acoustical pressure waves across a carrier medium such as gases, liquids or solids. The ultrasonic receiving device converts the frequency modulated ultrasonic acoustical pressure waves to a frequency modulated electronic signal, demodulates the audio signal from the ultrasonic carrier signal, and conditions the demodulated audio signal to reproduce the original audio signal at its output. 7 figs.
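
    The frequency-modulation step described above can be illustrated with a short numeric sketch: the audio signal shifts the instantaneous frequency of an ultrasonic carrier. The carrier frequency, peak deviation, and sample rate below are illustrative assumptions, not values from the patent.

```python
# FM of an audio signal onto an ultrasonic carrier: the instantaneous
# frequency is fc + dev * x(t), and the phase is its running integral.
import math

FS = 192_000        # sample rate high enough to represent the carrier (assumed)
F_CARRIER = 40_000  # ultrasonic carrier frequency in Hz (assumed)
F_DEV = 3_000       # peak frequency deviation in Hz (assumed)

def fm_modulate(audio, fs=FS, fc=F_CARRIER, dev=F_DEV):
    """FM-modulate a normalized (-1..1) audio sequence onto the carrier."""
    out, phase = [], 0.0
    for x in audio:
        phase += 2 * math.pi * (fc + dev * x) / fs  # integrate instantaneous freq
        out.append(math.sin(phase))
    return out

# A 1 kHz voice-band tone as the modulating signal (5 ms of samples).
audio = [math.sin(2 * math.pi * 1000 * n / FS) for n in range(960)]
tx = fm_modulate(audio)
```

    The receiver would reverse the process: recover the instantaneous frequency of the received pressure wave and subtract the carrier to reconstruct the audio.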

  8. Ultrasonic speech translator and communications system

    DOEpatents

    Akerman, M. Alfred; Ayers, Curtis W.; Haynes, Howard D.

    1996-01-01

    A wireless communication system undetectable by radio frequency methods for converting audio signals, including human voice, to electronic signals in the ultrasonic frequency range, transmitting the ultrasonic signal by way of acoustical pressure waves across a carrier medium, including gases, liquids, or solids, and reconverting the ultrasonic acoustical pressure waves back to the original audio signal. The ultrasonic speech translator and communication system (20) includes an ultrasonic transmitting device (100) and an ultrasonic receiving device (200). The ultrasonic transmitting device (100) accepts as input (115) an audio signal such as human voice input from a microphone (114) or tape deck. The ultrasonic transmitting device (100) frequency modulates an ultrasonic carrier signal with the audio signal producing a frequency modulated ultrasonic carrier signal, which is transmitted via acoustical pressure waves across a carrier medium such as gases, liquids or solids. The ultrasonic receiving device (200) converts the frequency modulated ultrasonic acoustical pressure waves to a frequency modulated electronic signal, demodulates the audio signal from the ultrasonic carrier signal, and conditions the demodulated audio signal to reproduce the original audio signal at its output (250).

  9. Nevasic audio program for the prevention of chemotherapy induced nausea and vomiting: A feasibility study using a randomized controlled trial design.

    PubMed

    Moradian, Saeed; Walshe, Catherine; Shahidsales, Soodabeh; Ghavam Nasiri, Mohammad Reza; Pilling, Mark; Molassiotis, Alexander

    2015-06-01

    Pharmacological therapy is only partially effective in preventing or treating chemotherapy induced nausea and vomiting (CINV). Therefore, exploring the complementary role of non-pharmacological approaches used in addition to pharmacological agents is important. Nevasic uses specially constructed audio signals hypothesized to generate an antiemetic reaction. The aim of this study was to examine the feasibility of conducting a randomized controlled trial (RCT) to evaluate the effectiveness of Nevasic to control CINV. A mixed methods design incorporating an RCT and focus group interviews. For the RCT, female breast cancer patients were randomized to receive either Nevasic plus usual care, music plus usual care, or usual care only. Data were analysed using descriptive statistics and linear mixed-effects models. Five focus group interviews were conducted to obtain participants' views regarding the acceptability of the interventions in the trial. 99 participants were recruited to the RCT and 15 participated in focus group interviews. Recruitment targets were achieved. Issues of Nevasic acceptability were highlighted as weaknesses of the program. This study did not detect any evidence for the effectiveness of Nevasic; however, the results showed significantly lower use of anti-emetics (p = 0.003) and a borderline, non-significant improvement in quality of life (p = 0.06). Conducting a non-pharmacological intervention using such an audio program is feasible, although difficulties and limitations exist with its use. Further studies are required to investigate the effectiveness of Nevasic from perspectives such as anti-emetic use, as well as its overall effect on the levels of nausea and vomiting. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Innovative web-based multimedia curriculum improves cardiac examination competency of residents.

    PubMed

    Criley, Jasminka M; Keiner, Jennifer; Boker, John R; Criley, Stuart R; Warde, Carole M

    2008-03-01

    Proper diagnosis of cardiac disorders is a core competency of internists. Yet numerous studies have documented that the cardiac examination (CE) skills of physicians have declined compared with those of previous generations of physicians, attributed variously to inadequate exposure to cardiac patients and lack of skilled bedside teaching. With growing concerns about ensuring patient safety and quality of care, public and professional organizations are calling for a renewed emphasis on the teaching and evaluation of clinical skills in residency training. The objective of the study was to determine whether Web training improves CE competency, whether residents retain what they learn, and whether a Web-based curriculum plus clinical training is better than clinical training alone. Journal of Hospital Medicine 2008;3:124-133. (c) 2008 Society of Hospital Medicine. This was a controlled intervention study. The intervention group (34 internal and family medicine interns) participated in self-directed use of a Web-based tutorial and three 1-hour teaching sessions taught by a hospitalist. Twenty-five interns from the prior year served as controls. We assessed overall CE competency and 4 subcategories of CE competency: knowledge, audio skills, visual skills, and audio-visual integration. The overall mean score of the intervention group significantly improved, from 54 to 66 (P = .002). This improvement was retained (63.5, P = .05). When compared with end-of-year controls, the intervention group had significantly higher end-of-year CE scores (57 vs. 63.5, P = .05), knowledge (P = .04), and audio skills (P = .01). At the end of the academic year, all improvements were retained (P

  11. Mining knowledge in noisy audio data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czyzewski, A.

    1996-12-31

    This paper demonstrates a KDD method applied to audio data analysis; in particular, it presents the possibilities that result from replacing traditional methods of analysis and acoustic signal processing with KDD algorithms when restoring audio recordings affected by strong noise.

  12. Research into Teleconferencing

    DTIC Science & Technology

    1981-02-01

    Wichman (1970) found more cooperation under conditions of audio-visual communication than conditions of audio communication alone. Laplante (1971) found...was found for audio teleconferences. These results, taken with the results concerning group performance, seem to indicate that visual communication gives

  13. An Audio-Visual Resource Notebook for Adult Consumer Education. An Annotated Bibliography of Selected Audio-Visual Aids for Adult Consumer Education, with Special Emphasis on Materials for Elderly, Low-Income and Handicapped Consumers.

    ERIC Educational Resources Information Center

    Virginia State Dept. of Agriculture and Consumer Services, Richmond, VA.

    This document is an annotated bibliography of audio-visual aids in the field of consumer education, intended especially for use among low-income, elderly, and handicapped consumers. It was developed to aid consumer education program planners in finding audio-visual resources to enhance their presentations. Materials listed include 293 resources…

  14. Comparison of three orientation and mobility aids for individuals with blindness: Verbal description, audio-tactile map and audio-haptic map.

    PubMed

    Papadopoulos, Konstantinos; Koustriava, Eleni; Koukourikos, Panagiotis; Kartasidou, Lefkothea; Barouti, Marialena; Varveris, Asimis; Misiou, Marina; Zacharogeorga, Timoclia; Anastasiadis, Theocharis

    2017-01-01

    Disorientation and failures of wayfinding occur frequently when individuals with visual impairments travel through novel environments. Orientation and mobility aids can therefore be important tools in preparing for safer, cognitively mapped travel. The aim of the present study was to examine whether the spatial knowledge that an individual with blindness structures after studying the map of an urban area, delivered through a verbal description, an audio-tactile map or an audio-haptic map, can be used to detect specific points of interest in that area. The effectiveness of the three aids relative to each other was also examined. The results of the present study highlight the effectiveness of the audio-tactile and the audio-haptic maps as orientation and mobility aids, especially when they are compared to verbal descriptions.

  15. Detecting double compression of audio signal

    NASA Astrophysics Data System (ADS)

    Yang, Rui; Shi, Yun Q.; Huang, Jiwu

    2010-01-01

    MP3 is the most popular audio format in everyday use; for example, music downloaded from the Internet and files saved in digital recorders are often in MP3 format. However, low-bitrate MP3 files are often transcoded to a higher bitrate, since high-bitrate files have greater commercial value. Audio recordings made on digital recorders can also be doctored easily with pervasive audio editing software. This paper presents two methods for the detection of double MP3 compression, which are essential for identifying fake-quality MP3 files and for audio forensics. The proposed methods use support vector machine classifiers with feature vectors formed by the distributions of the first digits of the quantized MDCT (modified discrete cosine transform) coefficients. Extensive experiments demonstrate the effectiveness of the proposed methods. To the best of our knowledge, this work is the first to detect double compression of audio signals.
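
    The first-digit feature extraction described above can be sketched as follows: a 9-bin normalized histogram of the leading digits of the nonzero quantized coefficients, which would then feed the SVM classifier. The toy coefficient values stand in for real quantized MDCT coefficients, and the MDCT and SVM stages are omitted.

```python
# Benford's-law-style feature vector: distribution of first digits (1-9)
# of nonzero quantized coefficients.
def first_digit(n):
    """Leading decimal digit of a nonzero integer's magnitude."""
    n = abs(int(n))
    while n >= 10:
        n //= 10
    return n

def first_digit_distribution(coeffs):
    """Normalized histogram of first digits 1-9 over nonzero coefficients."""
    counts = [0] * 9
    nonzero = [c for c in coeffs if int(c) != 0]
    for c in nonzero:
        counts[first_digit(c) - 1] += 1
    total = len(nonzero)
    return [k / total for k in counts]

# Toy stand-in for one block of quantized MDCT coefficients.
coeffs = [3, -17, 254, 0, 9, -1, 42, 7, 0, 128, -6, 19]
features = first_digit_distribution(coeffs)
```

    Double compression perturbs this distribution away from the pattern seen in singly compressed audio, which is what the classifier exploits.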

  16. A high efficiency PWM CMOS class-D audio power amplifier

    NASA Astrophysics Data System (ADS)

    Zhangming, Zhu; Lianxi, Liu; Yintang, Yang; Han, Lei

    2009-02-01

    Based on a differential closed-loop feedback technique and a differential pre-amplifier, a high efficiency PWM CMOS class-D audio power amplifier is proposed. A rail-to-rail PWM comparator with a window function has been embedded in the class-D audio power amplifier. Design results based on the CSMC 0.5 μm CMOS process show that the maximum efficiency is 90%, the PSRR is -75 dB, the power supply voltage range is 2.5-5.5 V, the THD+N at a 1 kHz input frequency is less than 0.20%, the quiescent current with no load is 2.8 mA, and the shutdown current is 0.5 μA. The active area of the class-D audio power amplifier is about 1.47 × 1.52 mm². With this performance, the class-D audio power amplifier can be applied to a range of audio power systems.

  17. Virtual Ultrasound Guidance for Inexperienced Operators

    NASA Technical Reports Server (NTRS)

    Caine, Timothy; Martin, David

    2012-01-01

    Medical ultrasound or echocardiographic studies are highly operator-dependent and generally require lengthy training and internship to perfect. To obtain quality echocardiographic images in remote environments, such as on-orbit, remote guidance of studies has been employed. This technique involves minimal training for the user, coupled with remote guidance from an expert. When real-time communication or expert guidance is not available, a more autonomous system of guiding an inexperienced operator through an ultrasound study is needed. One example would be missions beyond low Earth orbit in which the time delay inherent with communication will make remote guidance impractical. The Virtual Ultrasound Guidance system is a combination of hardware and software. The hardware portion includes, but is not limited to, video glasses that allow hands-free, full-screen viewing. The glasses also allow the operator a substantial field of view below the glasses to view and operate the ultrasound system. The software is a comprehensive video program designed to guide an inexperienced operator through a detailed ultrasound or echocardiographic study without extensive training or guidance from the ground. The program contains a detailed description using video and audio to demonstrate equipment controls, ergonomics of scanning, study protocol, and scanning guidance, including recovery from sub-optimal images. The components used in the initial validation of the system include an Apple iPod Classic third-generation as the video source, and Myvue video glasses. Initially, the program prompts the operator to power-up the ultrasound and position the patient. The operator would put on the video glasses and attach them to the video source. After turning on both devices and the ultrasound system, the audio-video guidance would then instruct on patient positioning and scanning techniques. 
A detailed scanning protocol follows with descriptions and reference video of each view along with advice on technique. The program also instructs the operator regarding the types of images to store and how to overcome pitfalls in scanning. Images can be forwarded to the ground or other site when convenient. Following study completion, the video glasses, video source, and ultrasound system are powered down and stored. Virtually any equipment that can play back video can be used to play back the program. This includes a DVD player, personal computer, and some MP3 players.

  18. Music and speech listening enhance the recovery of early sensory processing after stroke.

    PubMed

    Särkämö, Teppo; Pihko, Elina; Laitinen, Sari; Forsblom, Anita; Soinila, Seppo; Mikkonen, Mikko; Autti, Taina; Silvennoinen, Heli M; Erkkilä, Jaakko; Laine, Matti; Peretz, Isabelle; Hietanen, Marja; Tervaniemi, Mari

    2010-12-01

    Our surrounding auditory environment has a dramatic influence on the development of basic auditory and cognitive skills, but little is known about how it influences the recovery of these skills after neural damage. Here, we studied the long-term effects of daily music and speech listening on auditory sensory memory after middle cerebral artery (MCA) stroke. In the acute recovery phase, 60 patients who had middle cerebral artery stroke were randomly assigned to a music listening group, an audio book listening group, or a control group. Auditory sensory memory, as indexed by the magnetic MMN (MMNm) response to changes in sound frequency and duration, was measured 1 week (baseline), 3 months, and 6 months after the stroke with whole-head magnetoencephalography recordings. Fifty-four patients completed the study. Results showed that the amplitude of the frequency MMNm increased significantly more in both music and audio book groups than in the control group during the 6-month poststroke period. In contrast, the duration MMNm amplitude increased more in the audio book group than in the other groups. Moreover, changes in the frequency MMNm amplitude correlated significantly with the behavioral improvement of verbal memory and focused attention induced by music listening. These findings demonstrate that merely listening to music and speech after neural damage can induce long-term plastic changes in early sensory processing, which, in turn, may facilitate the recovery of higher cognitive functions. The neural mechanisms potentially underlying this effect are discussed.

  19. Musical stairs: the impact of audio feedback during stair-climbing physical therapies for children.

    PubMed

    Khan, Ajmal; Biddiss, Elaine

    2015-05-01

    Enhanced biofeedback during rehabilitation therapies has the potential to provide a therapeutic environment optimally designed for neuroplasticity. This study investigates the impact of audio feedback on the achievement of a targeted therapeutic goal, namely, use of reciprocal steps. Stair-climbing therapy sessions conducted with and without audio feedback were compared in a randomized AB/BA cross-over study design. Seventeen children, aged 4-7 years, with various diagnoses participated. Reports from the participants, therapists, and a blinded observer were collected to evaluate achievement of the therapeutic goal, motivation and enjoyment during the therapy sessions. Audio feedback resulted in a 5.7% increase (p = 0.007) in reciprocal steps. Levels of participant enjoyment increased significantly (p = 0.031) and motivation was reported by child participants and therapists to be greater when audio feedback was provided. These positive results indicate that audio feedback may influence the achievement of therapeutic goals and promote enjoyment and motivation in young patients engaged in rehabilitation therapies. This study lays the groundwork for future research to determine the long term effects of audio feedback on functional outcomes of therapy. Stair-climbing is an important mobility skill for promoting independence and activities of daily life and is a key component of rehabilitation therapies for physically disabled children. Provision of audio feedback during stair-climbing therapies for young children may increase their achievement of a targeted therapeutic goal (i.e., use of reciprocal steps). Children's motivation and enjoyment of the stair-climbing therapy was enhanced when audio feedback was provided.

  20. Ultraino: An Open Phased-Array System for Narrowband Airborne Ultrasound Transmission.

    PubMed

    Marzo, Asier; Corkett, Tom; Drinkwater, Bruce W

    2018-01-01

    Modern ultrasonic phased-array controllers are electronic systems capable of delaying the transmitted or received signals of multiple transducers. Configurable transmit-receive array systems, capable of electronic steering and shaping of the beam in near real-time, are available commercially, for example, for medical imaging. However, emerging applications, such as ultrasonic haptics, parametric audio, or ultrasonic levitation, require only a small subset of the capabilities provided by the existing controllers. To meet this need, we present Ultraino, a modular, inexpensive, and open platform that provides hardware, software, and example applications specifically aimed at controlling the transmission of narrowband airborne ultrasound. Our system is composed of software, driver boards, and arrays that enable users to quickly and efficiently perform research in various emerging applications. The software can be used to define array geometries, simulate the acoustic field in real time, and control the connected driver boards. The driver board design is based on an Arduino Mega and can control 64 channels with a square wave of up to 17 Vpp and π/5 phase resolution. Multiple boards can be chained together to increase the number of channels. The 40-kHz arrays with flat and spherical geometries are demonstrated for parametric audio generation, acoustic levitation, and haptic feedback.
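
    The core per-element computation such a driver performs, choosing each transducer's phase so that all emissions arrive in phase at a focal point, and quantizing to a π/5 phase step, might be sketched as follows. The array geometry, element pitch, and focal point are illustrative assumptions, not the paper's hardware description.

```python
# Drive phases for focusing a 40 kHz airborne ultrasound array at a point:
# each element's phase compensates its propagation delay to the focus.
import math

SPEED_OF_SOUND = 343.0        # m/s in air
FREQ = 40_000.0               # carrier frequency in Hz
WAVELENGTH = SPEED_OF_SOUND / FREQ
PHASE_STEP = math.pi / 5      # assumed per-channel phase resolution

def focus_phases(elements, focus):
    """Quantized drive phase (radians in [0, 2*pi)) for each element."""
    fx, fy, fz = focus
    phases = []
    for x, y, z in elements:
        d = math.sqrt((x - fx) ** 2 + (y - fy) ** 2 + (z - fz) ** 2)
        p = (-2 * math.pi * d / WAVELENGTH) % (2 * math.pi)  # compensate path delay
        p = (round(p / PHASE_STEP) * PHASE_STEP) % (2 * math.pi)  # quantize
        phases.append(p)
    return phases

# An 8x8 grid at 10.5 mm pitch (a typical 40 kHz transducer diameter,
# assumed), focused 10 cm above the array centre.
pitch = 0.0105
elements = [((i - 3.5) * pitch, (j - 3.5) * pitch, 0.0)
            for i in range(8) for j in range(8)]
phases = focus_phases(elements, (0.0, 0.0, 0.10))
```

    By symmetry, elements equidistant from the focus (such as opposite corners) receive identical phases; steering or levitation traps build on the same per-element delay idea.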

  1. When Pictures Waste a Thousand Words: Analysis of the 2009 H1N1 Pandemic on Television News

    PubMed Central

    Luth, Westerly; Jardine, Cindy; Bubela, Tania

    2013-01-01

    Objectives Effective communication by public health agencies during a pandemic promotes the adoption of recommended health behaviours. However, more information is not always the solution. Rather, attention must be paid to how information is communicated. Our study examines television news, which combines video and audio content. We analyse (1) the content of television news about the H1N1 pandemic and vaccination campaign in Alberta, Canada; (2) the extent to which television news content conveyed key public health agency messages; (3) the extent of discrepancies in audio versus visual content. Methods We searched for “swine flu” and “H1N1” in local English news broadcasts from the CTV online video archive. We coded the audio and visual content of 47 news clips during the peak period of coverage from April to November 2009 and identified discrepancies between audio and visual content. Results The dominant themes on CTV news were the vaccination rollout, vaccine shortages, long line-ups (queues) at vaccination clinics and defensive responses by public health officials. There were discrepancies in the priority groups identified by the provincial health agency (Alberta Health and Wellness) and television news coverage as well as discrepancies between audio and visual content of news clips. Public health officials were presented in official settings rather than as public health practitioners. Conclusion The news footage did not match the main public health messages about risk levels and priority groups. Public health agencies lost control of their message as the media focused on failures in the rollout of the vaccination campaign. Spokespeople can enhance their local credibility by emphasizing their role as public health practitioners. Public health agencies need to learn from the H1N1 pandemic so that future television communications do not add to public confusion, demonstrate bureaucratic ineffectiveness, or contribute to low vaccination rates. PMID:23691150

  2. 7 CFR 1.167 - Conference.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... that conducting the conference by audio-visual telecommunication: (i) Is necessary to prevent prejudice.... If the Judge determines that a conference conducted by audio-visual telecommunication would... correspondence, the conference shall be conducted by audio-visual telecommunication unless the Judge determines...

  3. 7 CFR 1.167 - Conference.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... that conducting the conference by audio-visual telecommunication: (i) Is necessary to prevent prejudice.... If the Judge determines that a conference conducted by audio-visual telecommunication would... correspondence, the conference shall be conducted by audio-visual telecommunication unless the Judge determines...

  4. 7 CFR 1.167 - Conference.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... that conducting the conference by audio-visual telecommunication: (i) Is necessary to prevent prejudice.... If the Judge determines that a conference conducted by audio-visual telecommunication would... correspondence, the conference shall be conducted by audio-visual telecommunication unless the Judge determines...

  5. 7 CFR 47.14 - Prehearing conferences.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... determines that conducting the conference by audio-visual telecommunication: (i) Is necessary to prevent.... If the examiner determines that a conference conducted by audio-visual telecommunication would... correspondence, the conference shall be conducted by audio-visual telecommunication unless the examiner...

  6. 7 CFR 1.167 - Conference.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... that conducting the conference by audio-visual telecommunication: (i) Is necessary to prevent prejudice.... If the Judge determines that a conference conducted by audio-visual telecommunication would... correspondence, the conference shall be conducted by audio-visual telecommunication unless the Judge determines...

  7. 7 CFR 47.16 - Depositions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... which the deposition is to be conducted (telephone, audio-visual telecommunication, or by personal...) The place of the deposition; (iii) The manner of the deposition (telephone, audio-visual... shall be conducted in the manner (telephone, audio-visual telecommunication, or personal attendance of...

  8. 7 CFR 1.167 - Conference.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... that conducting the conference by audio-visual telecommunication: (i) Is necessary to prevent prejudice.... If the Judge determines that a conference conducted by audio-visual telecommunication would... correspondence, the conference shall be conducted by audio-visual telecommunication unless the Judge determines...

  9. Instrumental Landing Using Audio Indication

    NASA Astrophysics Data System (ADS)

    Burlak, E. A.; Nabatchikov, A. M.; Korsun, O. N.

    2018-02-01

    The paper proposes an audio indication method for presenting to a pilot the information regarding the relative position of an aircraft in precision piloting tasks. The implementation of the method is presented, and the use of audio signal parameters such as loudness, frequency and modulation is discussed. To confirm the operability of the audio indication channel, experiments were carried out using a modern aircraft simulation facility. The subjects performed instrument landings using the proposed audio method to indicate the aircraft's deviations from the glide path. The results proved comparable with simulated instrument landings using the traditional glideslope pointers. This encourages further development of the method for other precision piloting tasks.
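
    One way such an indication could map a glide-path deviation onto the audio parameters the abstract mentions (frequency and loudness) is sketched below. The mapping function and its constants are illustrative assumptions, not the parameters used in the paper.

```python
# Map a glide-path deviation (degrees; positive = above path) to an
# indication tone: pitch encodes direction and magnitude, loudness grows
# with the size of the deviation.
def indication_tone(deviation, base_freq=500.0, freq_gain=200.0, min_gain=0.2):
    """Return (frequency in Hz, amplitude in 0..1) for a given deviation."""
    freq = base_freq + freq_gain * deviation      # above path -> higher pitch
    amp = min(1.0, min_gain + abs(deviation) / 2.0)  # louder when far off path
    return freq, amp

on_path = indication_tone(0.0)   # quiet reference tone when on the glide path
high = indication_tone(1.5)      # well above path: high, loud tone
low = indication_tone(-1.0)      # below path: low tone
```

    A synthesizer would then render this (frequency, amplitude) pair continuously as the deviation signal updates, with modulation available as a third cue dimension.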

  10. Realization of guitar audio effects using methods of digital signal processing

    NASA Astrophysics Data System (ADS)

    Buś, Szymon; Jedrzejewski, Konrad

    2015-09-01

    The paper is devoted to studies of the possibilities of realizing guitar audio effects by means of digital signal processing methods. As a result of this research, selected audio effects corresponding to the specifics of guitar sound were realized as a real-time system called the Digital Guitar Multi-effect. Before implementation in the system, the selected effects were investigated using a dedicated application with a graphical user interface created in the Matlab environment. In the second stage, a real-time system based on a microcontroller and an audio codec was designed and realized. The system is designed to perform audio effects on the output signal of an electric guitar.
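
    As an illustration of the kind of effect such a system implements, here is a minimal feedback delay (echo) line of the sort commonly used for guitar. The buffer length and gain values are illustrative assumptions; a real-time implementation would process codec samples continuously rather than a Python list.

```python
# Feedback delay line: the output is the dry input plus a decaying echo
# read from a circular buffer `delay` samples long.
def delay_effect(samples, delay=4, feedback=0.5, mix=0.5):
    """Apply a feedback delay: out[n] = x[n] + mix * echo[n]."""
    buf = [0.0] * delay              # circular delay buffer
    out, idx = [], 0
    for x in samples:
        echo = buf[idx]              # sample written `delay` steps ago
        out.append(x + mix * echo)
        buf[idx] = x + feedback * echo   # feed a decayed echo back in
        idx = (idx + 1) % delay
    return out

# An impulse exposes the echo train: taps at n = 0, 4, 8, ...,
# each pass decaying by the feedback factor.
impulse = [1.0] + [0.0] * 11
wet = delay_effect(impulse)
```

    Distortion, chorus, and the other guitar effects the paper targets follow the same pattern: a short per-sample transformation fast enough to run inside the codec's sample clock.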

  11. Power saver circuit for audio/visual signal unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Right, R. W.

    1985-02-12

    A combined audio and visual signal unit with the audio and visual components actuated alternately and powered over a single cable pair in such a manner that only one of the audio and visual components is drawing power from the power supply at any given instant. Thus, the power supply is never called upon to provide more energy than that drawn by the one of the components having the greater power requirement. This is particularly advantageous when several combined audio and visual signal units are coupled in parallel on one cable pair. Typically, the signal unit may comprise a horn and a strobe light for a fire alarm signalling system.

  12. A centralized audio presentation manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papp, A.L. III; Blattner, M.M.

    1994-05-16

    The centralized audio presentation manager addresses the problems which occur when multiple programs running simultaneously attempt to use the audio output of a computer system. Time dependence of sound means that certain auditory messages must be scheduled simultaneously, which can lead to perceptual problems due to psychoacoustic phenomena. Furthermore, the combination of speech and nonspeech audio is examined; each presents its own problems of perceptibility in an acoustic environment composed of multiple auditory streams. The centralized audio presentation manager receives abstract parameterized message requests from the currently running programs, and attempts to create and present a sonic representation in the most perceptible manner through the use of a theoretically and empirically designed rule set.

  13. Metal Sounds Stiffer than Drums for Ears, but Not Always for Hands: Low-Level Auditory Features Affect Multisensory Stiffness Perception More than High-Level Categorical Information

    PubMed Central

    Liu, Juan; Ando, Hiroshi

    2016-01-01

    Most real-world events stimulate multiple sensory modalities simultaneously. Usually, the stiffness of an object is perceived haptically. However, auditory signals also contain stiffness-related information, and people can form impressions of stiffness from the different impact sounds of metal, wood, or glass. To understand whether there is any interaction between auditory and haptic stiffness perception, and if so, whether the inferred material category is the most relevant auditory information, we conducted experiments using a force-feedback device and the modal synthesis method to present haptic stimuli and impact sound in accordance with participants’ actions, and to modulate low-level acoustic parameters, i.e., frequency and damping, without changing the inferred material categories of sound sources. We found that metal sounds consistently induced an impression of stiffer surfaces than did drum sounds in the audio-only condition, but participants haptically perceived surfaces with modulated metal sounds as significantly softer than the same surfaces with modulated drum sounds, which directly opposes the impression induced by these sounds alone. This result indicates that, although the inferred material category is strongly associated with audio-only stiffness perception, low-level acoustic parameters, especially damping, are more tightly integrated with haptic signals than the material category is. Frequency played an important role in both audio-only and audio-haptic conditions. Our study provides evidence that auditory information influences stiffness perception differently in unisensory and multisensory tasks. Furthermore, the data demonstrated that sounds with higher frequency and/or shorter decay time tended to be judged as stiffer, and contact sounds of stiff objects had no effect on the haptic perception of soft surfaces. 
We argue that the intrinsic physical relationship between object stiffness and acoustic parameters may be applied as prior knowledge to achieve robust estimation of stiffness in multisensory perception. PMID:27902718
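The modal synthesis method used to generate the impact sounds can be sketched compactly: an impact sound is modeled as a sum of exponentially decaying sinusoidal modes, which is what lets frequency and damping be manipulated independently of the inferred material category. The following is a minimal illustration only; the mode frequencies, dampings, and amplitudes are invented for demonstration and are not the stimulus parameters used in the study.

```python
import numpy as np

def modal_impact_sound(freqs_hz, dampings, amps, dur_s=1.0, sr=44100):
    """Synthesize an impact sound as a sum of exponentially
    decaying sinusoidal modes (basic modal synthesis)."""
    t = np.arange(int(dur_s * sr)) / sr
    sound = np.zeros_like(t)
    for f, d, a in zip(freqs_hz, dampings, amps):
        # each mode: amplitude * exp(-damping * t) * sin(2*pi*f*t)
        sound += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    peak = np.max(np.abs(sound))
    return sound / peak if peak > 0 else sound

# "stiffer"-sounding: higher mode frequencies, slower decay (metal-like)
metal = modal_impact_sound([800.0, 1600.0, 2700.0], [8.0, 12.0, 16.0], [1.0, 0.6, 0.4])
# "softer"-sounding: lower mode frequencies, heavier damping (drum-like)
drum = modal_impact_sound([150.0, 300.0, 450.0], [30.0, 40.0, 50.0], [1.0, 0.5, 0.3])
```

Raising the mode frequencies and/or shortening the decay time in this parameterization is exactly the manipulation the abstract describes: the sound changes along the stiffness-related dimensions without necessarily changing which material a listener infers.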

  14. A description of communication patterns during CPR in ICU.

    PubMed

    Taylor, Katherine L; Ferri, Susan; Yavorska, Tatyana; Everett, Tobias; Parshuram, Christopher

    2014-10-01

    Deficiencies in communication in health care are a common source of medical error. Preferred communication patterns are a component of resuscitation teaching. We audio-recorded resuscitations in a mixed paediatric medical and surgical ICU to describe communication. In the intensive care unit, resuscitation events were prospectively audio-recorded by two trained observers using handheld recorders. Recordings were transcribed and anonymised within 24 h. We grouped utterances concerning the same subject matter, from the initiating utterance onward (irrespective of response), as a communication epoch. For each epoch, we describe the initiator, the audience, and the content of the message. Teamwork behaviours were described using the Anaesthetists' Non-Technical Skills (ANTS) framework, a behavioural marker system for crisis-resource management. Consent rates were 139/140 (99%) for staff and 67/92 (73%) for parents. We analysed 36 min 57 s of audio dialogue from 4 cardiac arrest events identified in 363 h of prospective screening. There were 180 communication epochs (1 every 12 s): 100 (56%) from the team leader and 80 (44%) from non-team-leaders. Team-leader epochs gave or confirmed orders or asserted authority (61%), clarified patient history (14%), or provided clinical updates (25%). Non-team-leader epochs were more often directed to the team (65%) than to the team leader (35%). The audio recordings provided information for 80% of the ANTS component elements, with scores of 2-4. Communication epochs were frequent, and most came from the team leader. We identified an 'outer loop' of communication between team members that did not include the team leader, responsible for 44% of all communication events. We discuss the difficulties of this research methodology. Future work includes exploring the process of the 'outer loop' by resuscitation team members to evaluate the optimal balance between single-leader direction and team suggestions, the content of the outer-loop discussions, and in-event communication strategies to improve outcomes. 
Crown Copyright © 2014. Published by Elsevier Ireland Ltd. All rights reserved.

  15. Passing the Baton: An Experimental Study of Shift Handover

    NASA Technical Reports Server (NTRS)

    Parke, Bonny; Hobbs, Alan; Kanki, Barbara

    2010-01-01

    Shift handovers occur in many safety-critical environments, including aviation maintenance, medicine, air traffic control, and mission control for space shuttle and space station operations. Shift handovers are associated with increased risk of communication failures and human error. In dynamic industries, errors and accidents occur disproportionately after shift handover. Typical shift handovers involve transferring information from an outgoing shift to an incoming shift via written logs or, in some cases, face-to-face briefings. The current study explores the possibility of improving written communication with the support modalities of audio and video recordings, as well as face-to-face briefings. Fifty participants completed an experimental task that mimicked some of the critical challenges involved in transferring information between shifts in industrial settings. All three support modalities (face-to-face briefings, video recordings, and audio recordings) significantly reduced task errors relative to written communication alone. The support modality most preferred by participants was face-to-face communication; the least preferred was written communication alone.

  16. Photo-acoustic and video-acoustic methods for sensing distant sound sources

    NASA Astrophysics Data System (ADS)

    Slater, Dan; Kozacik, Stephen; Kelmelis, Eric

    2017-05-01

    Long-range telescopic video imagery of distant terrestrial scenes, aircraft, rockets, and other aerospace vehicles can be a powerful observational tool. But what about the associated acoustic activity? A new technology, Remote Acoustic Sensing (RAS), may provide a method to remotely listen to the acoustic activity near these distant objects. Local acoustic activity sometimes weakly modulates the ambient illumination in a way that can be remotely sensed. RAS is a new type of microphone that separates an acoustic transducer into two spatially separated components: 1) a naturally formed in situ acousto-optic modulator (AOM) located within the distant scene and 2) a remote sensing readout device that recovers the distant audio. These two elements are passively coupled over long distances at the speed of light by naturally occurring ambient light energy or other electromagnetic fields. Stereophonic, multichannel, and acoustic beamforming configurations are all possible using RAS techniques, and when combined with high-definition video imagery, RAS can help to provide a more cinema-like immersive viewing experience. A practical implementation of a remote acousto-optic readout device can be a challenging engineering problem. The acoustic influence on the optical signal is generally weak and often accompanied by a strong bias term. The optical signal is further degraded by atmospheric seeing turbulence. In this paper, we consider two fundamentally different optical readout approaches: 1) a low-pixel-count photodiode-based RAS photoreceiver and 2) audio extraction directly from a video stream. Most of our RAS experiments to date have used the first method for reasons of performance and simplicity. But there are potential advantages to extracting audio directly from a video stream. These advantages include the straightforward ability to work with multiple AOMs (useful for acoustic beamforming), simpler optical configurations, and a potential ability to use certain preexisting video recordings. 
However, doing so requires overcoming significant limitations, typically including much lower sample rates, reduced sensitivity and dynamic range, more expensive video hardware, and the need for sophisticated video processing. The ATCOM real-time image processing software environment provides many of the capabilities needed for researching video-acoustic signal extraction. ATCOM is currently a powerful tool for the visual enhancement of telescopic views distorted by atmospheric turbulence. To explore the potential of acoustic signal recovery from video imagery, we modified ATCOM to extract audio waveforms from the same telescopic video sources. In this paper, we demonstrate and compare both readout techniques for several aerospace test scenarios to show where each has advantages.
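The video-stream readout can be illustrated at its simplest: average each frame's brightness to obtain one audio sample per frame, then remove the strong bias term and slow illumination drift so only the acoustic-rate fluctuations remain. This is a generic sketch of the principle, not the ATCOM implementation; the function name and the synthetic 2000 fps demo below are invented for illustration.

```python
import numpy as np

def audio_from_frames(frames, frame_rate):
    """Recover a crude audio waveform from video of an acousto-optically
    modulated scene: one sample per frame (mean brightness), with the DC
    bias and slow drift suppressed. Audio bandwidth is limited to
    frame_rate / 2 (Nyquist), hence the need for high frame rates."""
    trace = np.array([f.mean() for f in frames], dtype=float)
    trace -= trace.mean()                       # remove the strong bias term
    # first-difference high-pass to suppress slow illumination drift
    audio = np.diff(trace, prepend=trace[0])
    peak = np.max(np.abs(audio))
    return (audio / peak if peak > 0 else audio), frame_rate

# synthetic demo: frames whose mean brightness oscillates at 50 Hz,
# "recorded" at 2000 frames per second
frames = [np.full((8, 8), 128.0 + 5.0 * np.sin(2 * np.pi * 50 * i / 2000))
          for i in range(2000)]
audio, sr = audio_from_frames(frames, 2000)
```

The Nyquist limit in the comment is the "much lower sample rates" limitation the abstract mentions: a conventional 30 or 60 fps camera can only recover audio below a few tens of hertz, so useful video-acoustic readout needs high-speed imagery.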

  17. Error-related negativity in the skilled brain of pianists reveals motor simulation.

    PubMed

    Proverbio, Alice Mado; Cozzi, Matteo; Orlandi, Andrea; Carminati, Manuel

    2017-03-27

    Evidence has been provided for a crucial role of multimodal audio-visuomotor processing in subserving musical ability. In this paper, we investigated whether musical audiovisual stimulation might trigger the activation of motor information in the brains of professional pianists, owing to the presence of permanent gesture/sound associations. To this aim, EEG was recorded in 24 pianists and naïve participants engaged in the detection of rare targets while watching hundreds of video clips showing a pair of hands playing, along with a compatible or incompatible piano soundtrack. Hand size and apparent distance allowed self-ownership and agency illusions, and therefore motor simulation. Event-related potentials (ERPs) and the associated source reconstruction showed an error-related negativity (ERN) to incongruent trials at anterior frontal scalp sites only in pianists, with no difference in naïve participants. The ERN was mostly explained by an anterior cingulate cortex (ACC) source. Other sources included "hands"-selective IT regions, the superior temporal gyrus (STG), involved in conjoined auditory and visuomotor processing, the SMA and cerebellum (representing and controlling motor subroutines), and regions involved in body-part representation (somatosensory cortex, uncus, cuneus, and precuneus). The findings demonstrate that instrument-specific audiovisual stimulation can trigger error-detection and correction neural responses via motor resonance and mirroring, a possible aid in learning and rehabilitation. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  18. 7 CFR 1.148 - Depositions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... (telephone, audio-visual telecommunication, or personal attendance of those who are to participate in the... that conducting the deposition by audio-visual telecommunication: (i) Is necessary to prevent prejudice... determines that a deposition conducted by audio-visual telecommunication would measurably increase the United...

  19. 47 CFR Figure 2 to Subpart N of... - Typical Audio Wave

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Typical Audio Wave 2 Figure 2 to Subpart N of Part 2 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL FREQUENCY ALLOCATIONS AND RADIO... Audio Wave EC03JN91.006 ...

  20. 9 CFR 202.112 - Rule 12: Oral hearing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... hearing shall be conducted by audio-visual telecommunication unless the presiding officer determines that... hearing by audio-visual telecommunication. If the presiding officer determines that a hearing conducted by audio-visual telecommunication would measurably increase the United States Department of Agriculture's...
