Audiovisual quality evaluation of low-bitrate video
NASA Astrophysics Data System (ADS)
Winkler, Stefan; Faller, Christof
2005-03-01
Audiovisual quality assessment is a relatively unexplored topic. We designed subjective experiments for audio, video, and audiovisual quality using content and encoding parameters representative of video for mobile applications. Our focus was on the MPEG-4 AVC (a.k.a. H.264) and AAC coding standards. Our goals in this study are two-fold: we want to understand the interactions between audio and video in terms of perceived audiovisual quality, and we use the subjective data to evaluate the prediction performance of our non-reference video and audio quality metrics.
New Integrated Video and Graphics Technology: Digital Video Interactive.
ERIC Educational Resources Information Center
Optical Information Systems, 1987
1987-01-01
Describes digital video interactive (DVI), a new technology which combines the interactivity of the graphics capabilities in personal computers with the realism of high-quality motion video and multitrack audio in an all-digital integrated system. (MES)
Converting laserdisc video to digital video: a demonstration project using brain animations.
Jao, C S; Hier, D B; Brint, S U
1995-01-01
Interactive laserdiscs are of limited value in large group learning situations due to the expense of establishing multiple workstations. The authors implemented an alternative to laserdisc video by using indexed digital video combined with an expert system. High-quality video was captured from a laserdisc player and combined with waveform audio into an audio-video-interleave (AVI) file format in the Microsoft Video-for-Windows environment (Microsoft Corp., Seattle, WA). With the use of an expert system, a knowledge-based computer program provided random access to these indexed AVI files. The program can be played on any multimedia computer without the need for laserdiscs. This system offers a high level of interactive video without the overhead and cost of a laserdisc player.
Henry, Stephen G; Penner, Louis A; Eggly, Susan
2017-06-01
To investigate associations between ratings of "thin slices" from recorded clinic visits and perceived patient-centeredness; to compare ratings from video recordings (sound and images) versus audio recordings (sound only). We analyzed 133 video-recorded primary care visits and patient perceptions of patient-centeredness. Observers rated thirty-second thin slices on variables assessing patient affect, physician affect, and patient-physician rapport. Video and audio ratings were collected independently. In multivariable analyses, ratings of physician positive affect (but not patient positive affect) were significantly positively associated with perceived patient-centeredness using both video and audio thin slices. Patient-physician rapport was significantly positively associated with perceived patient-centeredness using audio, but not video thin slices. Ratings from video and audio thin slices were highly correlated and had similar underlying factor structures. Physician (but not patient) positive affect is significantly associated with perceptions of patient-centeredness and can be measured reliably using either video or audio thin slices. Additional studies are needed to determine whether ratings of patient-physician rapport are associated with perceived patient-centeredness. Observer ratings of physician positive affect have a meaningful positive association with patients' perceptions of patient-centeredness. Patients appear to be highly attuned to physician positive affect during patient-physician interactions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Report on Distance Learning Technologies.
1995-09-01
26 cities. The CSX system includes full-motion video, animations, audio, and interactive examples and testing to teach the use of a new computer...video. The change to all-digital media now permits the use of full-motion video, animation, and audio on networks. It is possible to have independent...is possible to download entire multimedia presentations from the network. To date there is not a great deal known about teaching courses using the
ERIC Educational Resources Information Center
Desmarais, Norman
1991-01-01
Reviews current developments in multimedia computing for both the business and consumer markets, including interactive multimedia players; compact disc-interactive (CD-I), including levels of audio quality, various video specifications and visual effects, and software; digital video interactive (DVI); and multimedia personal computers. (LRW)
ERIC Educational Resources Information Center
Chen, Ching-chih
1991-01-01
Describes compact disc interactive (CD-I) as a multimedia home entertainment system that combines audio, visual, text, graphic, and interactive capabilities. Full-screen video and full-motion video (FMV) are explained, hardware for FMV decoding is described, software is briefly discussed, and CD-I titles planned for future production are listed.…
Digital Audio: A Sound Design Element.
ERIC Educational Resources Information Center
Barron, Ann; Varnadoe, Susan
1992-01-01
Discussion of incorporating audio into videodiscs for multimedia educational applications highlights a project developed for the Navy that used digital audio in an interactive video delivery system (IVDS) for training sonar operators. Storage constraints with videodiscs are explained, design requirements for the IVDS are described, and production…
ERIC Educational Resources Information Center
Grossman, Ruth B
2015-01-01
We form first impressions of many traits based on very short interactions. This study examines whether typical adults judge children with high-functioning autism to be more socially awkward than their typically developing peers based on very brief exposure to still images, audio-visual, video-only, or audio-only information. We used video and…
Video as a technology for interpersonal communications: a new perspective
NASA Astrophysics Data System (ADS)
Whittaker, Steve
1995-03-01
Some of the most challenging multimedia applications have involved real-time conferencing, using audio and video to support interpersonal communication. Here we re-examine assumptions about the role, importance and implementation of video information in such systems. Rather than focussing on novel technologies, we present evaluation data relevant to both the classes of real-time multimedia applications we should develop and their design and implementation. Evaluations of videoconferencing systems show that previous work has overestimated the importance of video at the expense of audio. This has strong implications for the implementation of bandwidth allocation and synchronization. Furthermore, our recent studies of workplace interaction show that prior work has neglected another potentially vital function of visual information: in assessing the communication availability of others. In this new class of application, rather than providing a supplement to audio information, visual information is used to promote the opportunistic communications that are prevalent in face-to-face settings. We discuss early experiments with such connection applications and identify outstanding design and implementation issues. Finally we examine a different class of application, 'video-as-data', where the video image is used to transmit information about the work objects themselves, rather than information about interactants.
Exclusively visual analysis of classroom group interactions
NASA Astrophysics Data System (ADS)
Tucker, Laura; Scherr, Rachel E.; Zickler, Todd; Mazur, Eric
2016-12-01
Large-scale audiovisual data that measure group learning are time consuming to collect and analyze. As an initial step towards scaling qualitative classroom observation, we qualitatively coded classroom video using an established coding scheme with and without its audio cues. We find that interrater reliability is as high when using visual data only—without audio—as when using both visual and audio data to code. Also, interrater reliability is high when comparing use of visual and audio data to visual-only data. We see a small bias to code interactions as group discussion when visual and audio data are used compared with video-only data. This work establishes that meaningful educational observation can be made through visual information alone. Further, it suggests that after initial work to create a coding scheme and validate it in each environment, computer-automated visual coding could drastically increase the breadth of qualitative studies and allow for meaningful educational analysis on a far greater scale.
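Interrater reliability for categorical coding schemes like the one above is usually quantified with a chance-corrected agreement statistic. As an illustration (not the study's own analysis), a minimal sketch of Cohen's kappa for two hypothetical raters coding the same clips as group discussion ("D") or other work ("W"):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters' codes were statistically independent.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes for 8 clips; "D" = group discussion, "W" = other work
video_only = ["D", "D", "W", "D", "W", "W", "D", "D"]
audio_video = ["D", "D", "W", "D", "W", "D", "D", "D"]
print(round(cohens_kappa(video_only, audio_video), 3))  # → 0.714
```

Kappa of 1.0 means perfect agreement; 0 means agreement no better than chance.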
Wavelet-based audio embedding and audio/video compression
NASA Astrophysics Data System (ADS)
Mendenhall, Michael J.; Claypoole, Roger L., Jr.
2001-12-01
Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.
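PSNR, the reconstruction-fidelity measure reported above (median near 33 dB), compares each reconstructed frame against the original. A minimal sketch for 8-bit samples, with made-up pixel values:

```python
import math

def psnr(original, reconstructed, peak=255):
    """Peak signal-to-noise ratio in dB between two equal-length 8-bit signals."""
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical signals: no distortion
    return 10 * math.log10(peak ** 2 / mse)

# Toy 4-pixel "frame" before and after lossy compression (hypothetical values)
orig = [52, 120, 200, 33]
recon = [50, 121, 198, 35]
print(round(psnr(orig, recon), 1))  # → 43.0
```

Higher is better; around 33 dB, as reported for the video here, is typical of visible but moderate compression loss.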
Comparing Audio and Video Data for Rating Communication
Williams, Kristine; Herman, Ruth; Bontempo, Daniel
2013-01-01
Video recording has become increasingly popular in nursing research, adding rich nonverbal, contextual, and behavioral information. However, the benefits of video over audio data have not been well established. We compared communication ratings of audio versus video data using the Emotional Tone Rating Scale. Twenty raters watched video clips of nursing care and rated staff communication on 12 descriptors that reflect dimensions of person-centered and controlling communication. Another group rated audio-only versions of the same clips. Interrater consistency was high within each group, with ICC (2,1) = .91 for audio and .94 for video. Interrater consistency for both groups combined was also high, with ICC (2,1) = .95 for audio and video. Communication ratings using audio and video data were highly correlated. Whether video offers value beyond audio-recorded data should be evaluated when designing studies of nursing care. PMID:23579475
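The ICC (2,1) used above is the Shrout-Fleiss two-way random-effects, absolute-agreement, single-rater coefficient. A minimal sketch of its computation from a ratings table (the data here are hypothetical, not the study's):

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is a list of rows (rated targets), each a list of scores, one per rater."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    # Mean squares from the two-way ANOVA decomposition
    ms_rows = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_cols = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    ss_err = sum((ratings[i][j] - row_means[i] - col_means[j] + grand) ** 2
                 for i in range(n) for j in range(k))
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Two raters in perfect agreement over four clips → ICC of exactly 1.0
clips = [[1, 1], [2, 2], [3, 3], [4, 4]]
print(icc_2_1(clips))  # → 1.0
```

A constant offset between raters (one always scoring a point higher) lowers this absolute-agreement form of the ICC even though the ratings are perfectly correlated.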
The Impact of Video Review on Supervisory Conferencing
ERIC Educational Resources Information Center
Baecher, Laura; McCormack, Bede
2015-01-01
This study investigated how video-based observation may alter the nature of post-observation talk between supervisors and teacher candidates. Audio-recorded post-observation conversations were coded using a conversation analysis framework and interpreted through the lens of interactional sociology. Findings suggest that video-based observations…
Instructional Design Issues for Current and Future Interactive Video Media.
ERIC Educational Resources Information Center
Hadley, James A.; Bentley, Joanne; Christiansen, Todd P.
2003-01-01
Addresses some of the issues that instructional designers will face in the near future and ways to deal with new instructional affordances and constraints, including: Menu and Audio, Video, Subpicture Interleaved, Streamlining Digital Media (MAVSI-SDM); three-dimensional flowcharting; designing multi-faceted storyboards and scripts; managing video,…
Diagnostic accuracy of sleep bruxism scoring in absence of audio-video recording: a pilot study.
Carra, Maria Clotilde; Huynh, Nelly; Lavigne, Gilles J
2015-03-01
Based on the most recent polysomnographic (PSG) research diagnostic criteria, sleep bruxism is diagnosed when more than two rhythmic masticatory muscle activity (RMMA) episodes per hour of sleep are scored on the masseter and/or temporalis muscles. These criteria have not yet been validated for portable PSG systems. This pilot study aimed to assess the diagnostic accuracy of scoring sleep bruxism in the absence of audio-video recordings. Ten subjects (mean age 24.7 ± 2.2) with a clinical diagnosis of sleep bruxism spent one night in the sleep laboratory. PSG was performed with a portable system (type 2) while audio-video was recorded. Sleep studies were scored by the same examiner three times: (1) without, (2) with, and (3) without audio-video, in order to test the intra-scoring and intra-examiner reliability of RMMA scoring. The RMMA event-by-event concordance rate between scoring without audio-video and with audio-video was 68.3%. Overall, the RMMA index was overestimated by 23.8% without audio-video. However, the intra-class correlation coefficient (ICC) between scorings with and without audio-video was good (ICC = 0.91; p < 0.001); the intra-examiner reliability was high (ICC = 0.97; p < 0.001). The clinical diagnosis of sleep bruxism was confirmed in 8/10 subjects based on scoring without audio-video and in 6/10 subjects with audio-video. Despite the absence of audio-video recording, the diagnostic accuracy of assessing RMMA with portable PSG systems appeared to remain good, supporting their use for both research and clinical purposes. However, the risk of moderate overestimation in the absence of audio-video must be taken into account.
Video content parsing based on combined audio and visual information
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1999-08-01
While previous research on audiovisual data segmentation and indexing has focused primarily on the pictorial part, significant clues contained in the accompanying audio flow are often ignored. A fully functional system for video content parsing can be achieved more successfully through a proper combination of audio and visual information. By investigating the data structure of different video types, we present tools for both audio and visual content analysis and a scheme for video segmentation and annotation in this research. In the proposed system, video data are segmented into audio scenes and visual shots by detecting abrupt changes in audio and visual features, respectively. Then, each audio scene is categorized and indexed as one of the basic audio types, while a visual shot is represented by keyframes and associated image features. An index table is then generated automatically for each video clip based on the integration of outputs from audio and visual analysis. It is shown that the proposed system provides satisfactory video indexing results.
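Abrupt-change detection for visual shots, as described here, is commonly implemented by thresholding the difference between consecutive frame color histograms. A minimal sketch with toy grayscale frames (the bin count and threshold are illustrative assumptions, not the paper's parameters):

```python
def histogram(frame, bins=4, levels=256):
    """Coarse intensity histogram of a frame given as a flat list of pixel values."""
    hist = [0] * bins
    for p in frame:
        hist[p * bins // levels] += 1
    return hist

def shot_boundaries(frames, threshold=0.5):
    """Frame indices where the normalized histogram difference exceeds the threshold."""
    cuts = []
    for i in range(1, len(frames)):
        h1, h2 = histogram(frames[i - 1]), histogram(frames[i])
        # L1 histogram distance, normalized to [0, 1]
        diff = sum(abs(a - b) for a, b in zip(h1, h2)) / (2 * len(frames[i]))
        if diff > threshold:
            cuts.append(i)
    return cuts

# Three dark frames, then an abrupt cut to two bright frames
frames = [[10, 20, 30, 15]] * 3 + [[200, 220, 240, 210]] * 2
print(shot_boundaries(frames))  # → [3]
```

Audio scene changes can be found the same way by thresholding differences of short-term audio features (e.g., energy or spectral statistics) instead of pixel histograms.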
Audio-visual interactions in environment assessment.
Preis, Anna; Kociński, Jędrzej; Hafke-Dys, Honorata; Wrzosek, Małgorzata
2015-08-01
The aim of the study was to examine how visual and audio information influences audio-visual environment assessment. Original audio-visual recordings were made at seven different places in the city of Poznań. Participants of the psychophysical experiments were asked to rate, on a numerical standardized scale, the degree of comfort they would feel if they were in such an environment. The assessments of audio-visual comfort were carried out in a laboratory in four different conditions: (a) audio samples only, (b) original audio-visual samples, (c) video samples only, and (d) mixed audio-visual samples. The general results of this experiment showed a significant difference between the investigated conditions, but not for all the investigated samples. There was a significant improvement in comfort assessment when visual information was added (in only three out of seven cases), when conditions (a) and (b) were compared. On the other hand, the results show that the comfort assessment of audio-visual samples could be changed by manipulating the audio rather than the video part of the audio-visual sample. Finally, it seems that people differentiate audio-visual representations of a given place in the environment based on the composition of sound sources rather than on the sound level. Object identification is responsible for both landscape and soundscape grouping. Copyright © 2015. Published by Elsevier B.V.
Storyboard Development for Interactive Multimedia Training.
ERIC Educational Resources Information Center
Orr, Kay L.; And Others
1994-01-01
Discusses procedures for storyboard development and provides guidelines for designing interactive multimedia courseware, including interactivity, learner control, feedback, visual elements, motion video, graphics/animation, text, audio, and programming. A topical bibliography that lists 98 items is included. (LRW)
ERIC Educational Resources Information Center
Rush, S. Craig
2014-01-01
This article draws on the author's experience using qualitative video and audio analysis, most notably through use of the Transana qualitative video and audio analysis software program, as an alternative method for teaching IQ administration skills to students in a graduate psychology program. Qualitative video and audio analysis may be useful for…
Use of recorded interactive seminars in orthodontic distance education.
Miller, Kenneth T; Hannum, Wallace M; Morley, Tarrl; Proffit, William R
2007-09-01
Our objective was to evaluate the effectiveness and acceptability of 3 methods of instructor interaction during distance learning with prerecorded seminars in orthodontic residencies and continuing education. After residents at 3 schools (Sydney, Australia; Winnipeg, Manitoba, Canada; and Manchester, United Kingdom) viewed a recorded interactive seminar, they discussed its content with the seminar leader at a distance via video conferencing, audio-only interaction by telephone, and Internet chat with NetMeeting software (Microsoft, Bellevue, Wash). The residents then completed evaluations containing both closed- and open-ended questions. In addition, attendees at the Iranian Orthodontic Congress also viewed a recorded seminar, had questions answered via an interpreter in a video conference, and completed summary evaluations. Video conferencing received the highest ratings and was never cited as the least favorite method of interaction. Telephone interaction was a close second in mean scores, and Internet chat was a distant third. All residents stated that they would like to be taught through distance education again. However, the Iranian orthodontists were less enthusiastic. Distance learning based on observation of recorded seminars and follow-up interaction is an acceptable method of instruction that can allow residents and practicing orthodontists access to various materials and experts, and perhaps help to ease the strains of current faculty shortages. More data are needed to determine whether video conferencing is worth the additional cost and complexity over audio-only interaction.
MedlinePlus FAQ: Is audio description available for videos on MedlinePlus?
... audiodescription.html Question: Is audio description available for videos on MedlinePlus? Answer: Audio description of videos helps make the content of videos accessible to ...
ERIC Educational Resources Information Center
Sayre, Scott Alan
The purpose of this study was to develop and validate a computer-based system that would allow interactive video developers to integrate and manage the design components prior to production. These components of an interactive video (IVD) program include visual information in a variety of formats, audio information, and instructional techniques,…
Interactive Distance Education: Improvisation Helps Bridge the Gap.
ERIC Educational Resources Information Center
Yucha, Carolyn B.
1996-01-01
Describes distance learning through the use of interactive duplex video and audio. Improvisation techniques force active participation by students. Addresses faculty concerns about the interrelationships between instructor and students and among students in distance education environments. (MKR)
Social Network Extraction and Analysis Based on Multimodal Dyadic Interaction
Escalera, Sergio; Baró, Xavier; Vitrià, Jordi; Radeva, Petia; Raducanu, Bogdan
2012-01-01
Social interactions are a very important component in people’s lives. Social network analysis has become a common technique used to model and quantify the properties of social interactions. In this paper, we propose an integrated framework to explore the characteristics of a social network extracted from multimodal dyadic interactions. For our study, we used a set of videos belonging to New York Times’ Blogging Heads opinion blog. The Social Network is represented as an oriented graph, whose directed links are determined by the Influence Model. The links’ weights are a measure of the “influence” a person has over the other. The states of the Influence Model encode automatically extracted audio/visual features from our videos using state-of-the art algorithms. Our results are reported in terms of accuracy of audio/visual data fusion for speaker segmentation and centrality measures used to characterize the extracted social network. PMID:22438733
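An influence-weighted directed graph like the one described can be held in a simple adjacency map, with each node's weighted in-degree serving as a basic centrality measure (the paper itself uses richer centrality measures). A sketch with hypothetical speakers and weights:

```python
def weighted_in_degree(edges):
    """Sum of incoming link weights per node in a directed graph.
    `edges` maps (source, target) pairs to influence weights."""
    centrality = {}
    for (src, dst), w in edges.items():
        centrality.setdefault(src, 0.0)  # ensure nodes with no inbound links appear
        centrality[dst] = centrality.get(dst, 0.0) + w
    return centrality

# Hypothetical dyadic influence weights between three speakers
edges = {("A", "B"): 0.75, ("B", "A"): 0.25, ("C", "A"): 0.5}
print(weighted_in_degree(edges))  # A is most influenced-upon: 0.75
```

Here the weight on (src, dst) is read as the influence src exerts on dst, so a high in-degree marks a speaker that others strongly influence; summing outgoing weights instead would rank influencers.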
Challenges in Transcribing Multimodal Data: A Case Study
ERIC Educational Resources Information Center
Helm, Francesca; Dooly, Melinda
2017-01-01
Computer-mediated communication (CMC) once meant principally text-based communication mediated by computers, but rapid technological advances in recent years have heralded an era of multimodal communication with a growing emphasis on audio and video synchronous interaction. As CMC, in all its variants (text chats, video chats, forums, blogs, SMS,…
17 CFR 232.304 - Graphic, image, audio and video material.
Code of Federal Regulations, 2011 CFR
2011-04-01
... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...
17 CFR 232.304 - Graphic, image, audio and video material.
Code of Federal Regulations, 2012 CFR
2012-04-01
... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...
17 CFR 232.304 - Graphic, image, audio and video material.
Code of Federal Regulations, 2013 CFR
2013-04-01
... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...
17 CFR 232.304 - Graphic, image, audio and video material.
Code of Federal Regulations, 2010 CFR
2010-04-01
... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...
17 CFR 232.304 - Graphic, image, audio and video material.
Code of Federal Regulations, 2014 CFR
2014-04-01
... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...
A digital audio/video interleaving system. [for Shuttle Orbiter
NASA Technical Reports Server (NTRS)
Richards, R. W.
1978-01-01
A method of interleaving an audio signal with its associated video signal for simultaneous transmission or recording, and the subsequent separation of the two signals, is described. Comparisons are made between the new audio signal interleaving system and the Skylab PAM audio/video interleaving system, pointing out improvements gained by using the digital audio/video interleaving system. It was found that the digital technique is the simplest, most effective, and most reliable method for interleaving audio and/or other types of data into the video signal for the Shuttle Orbiter application. Details are given of the design of a multiplexer capable of accommodating two basic data channels, each consisting of a single 31.5-kb/s digital bit stream. An adaptive slope delta modulation system is introduced to digitize audio signals, producing high immunity of word intelligibility to channel errors, primarily due to the robust nature of the delta-modulation algorithm.
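Adaptive slope delta modulation, as mentioned above, encodes audio at one bit per sample and adapts the step size when consecutive bits repeat (a sign of slope overload). A simplified sketch of the idea; the parameter values are illustrative assumptions, not the Orbiter design:

```python
def adaptive_dm_encode(samples, step=1.0, gain=1.5, min_step=0.5, max_step=16.0):
    """1-bit-per-sample adaptive delta modulation (simplified sketch).
    Emits 1 when the input is at or above the running estimate, else 0; the step
    grows when consecutive bits repeat and shrinks when they alternate."""
    bits, estimate, prev_bit = [], 0.0, None
    for s in samples:
        bit = 1 if s >= estimate else 0
        step = min(step * gain, max_step) if bit == prev_bit else max(step / gain, min_step)
        estimate += step if bit else -step
        bits.append(bit)
        prev_bit = bit
    return bits

def adaptive_dm_decode(bits, step=1.0, gain=1.5, min_step=0.5, max_step=16.0):
    """Mirror of the encoder: rebuilds the estimate from the bit stream alone,
    which is what makes the scheme robust to isolated channel bit errors."""
    out, estimate, prev_bit = [], 0.0, None
    for bit in bits:
        step = min(step * gain, max_step) if bit == prev_bit else max(step / gain, min_step)
        estimate += step if bit else -step
        out.append(estimate)
        prev_bit = bit
    return out

ramp = [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]  # a slowly rising toy waveform
print(adaptive_dm_encode(ramp))
```

Because the decoder adapts its step from the received bits alone, a flipped bit perturbs the estimate only locally rather than corrupting the rest of the stream.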
Real time simulation using position sensing
NASA Technical Reports Server (NTRS)
Isbell, William B. (Inventor); Taylor, Jason A. (Inventor); Studor, George F. (Inventor); Womack, Robert W. (Inventor); Hilferty, Michael F. (Inventor); Bacon, Bruce R. (Inventor)
2000-01-01
An interactive exercise system including exercise equipment having a resistance system, a speed sensor, a controller that varies the resistance setting of the exercise equipment, and a playback device for playing pre-recorded video and audio. The controller, operating in conjunction with speed information from the speed sensor and terrain information from media table files, dynamically varies the resistance setting of the exercise equipment in order to simulate varying degrees of difficulty while the playback device concurrently plays back the video and audio to create the simulation that the user is exercising in a natural setting such as a real-world exercise course.
Code of Federal Regulations, 2012 CFR
2012-01-01
... concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on-aircraft to... meet concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on... videos, DVDs, and other audio-visual displays played on aircraft for safety purposes, and all such new...
Code of Federal Regulations, 2013 CFR
2013-01-01
... concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on-aircraft to... meet concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on... videos, DVDs, and other audio-visual displays played on aircraft for safety purposes, and all such new...
Design of batch audio/video conversion platform based on JavaEE
NASA Astrophysics Data System (ADS)
Cui, Yansong; Jiang, Lianpin
2018-03-01
With the rapid development of the digital publishing industry, audio/video publishing is characterized by a diversity of coding standards for audio and video files, massive data volumes, and other significant features. Faced with massive and diverse data, converting it quickly and efficiently to a unified coding format poses great difficulties for digital publishing organizations. In view of this demand and present situation, this paper proposes a distributed online audio and video format conversion platform with a B/S structure, based on the Spring+SpringMVC+MyBatis development architecture combined with the open-source FFmpeg format conversion tool. Based on the Java language, the key technologies and strategies in the platform architecture design are analyzed, and an efficient audio and video format conversion system is designed and developed, composed of a front-end display system, a core scheduling server, and conversion servers. The test results show that, compared with an ordinary audio and video conversion scheme, the batch conversion platform effectively improves the conversion efficiency of audio and video files and reduces the complexity of the work. Practice has proved that the key technology discussed in this paper can be applied to large-batch file processing and has practical application value.
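The conversion servers described wrap FFmpeg. A sketch, in Python rather than the platform's Java, of how a per-file conversion command toward a unified target format might be assembled and dispatched; the paths, codecs, and function names are hypothetical, not taken from the paper:

```python
import subprocess
from pathlib import Path

def build_ffmpeg_cmd(src, dst_dir, vcodec="libx264", acodec="aac"):
    """Build an ffmpeg command converting one source file to a unified MP4 profile."""
    dst = Path(dst_dir) / (Path(src).stem + ".mp4")
    return ["ffmpeg", "-y", "-i", str(src),
            "-c:v", vcodec, "-c:a", acodec, str(dst)]

def convert_batch(sources, dst_dir):
    """Run conversions one by one; a real platform would dispatch these
    jobs to worker machines, as the paper's conversion servers do."""
    for src in sources:
        subprocess.run(build_ffmpeg_cmd(src, dst_dir), check=True)

print(build_ffmpeg_cmd("lecture.avi", "/tmp/out"))
```

Separating command construction from execution, as above, is what lets a scheduling server queue jobs centrally while conversion servers merely execute them.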
ERIC Educational Resources Information Center
Bergman, Daniel
2015-01-01
This study examined the effects of audio and video self-recording on preservice teachers' written reflections. Participants (n = 201) came from a secondary teaching methods course and its school-based (clinical) fieldwork. The audio group (n[subscript A] = 106) used audio recorders to monitor their teaching in fieldwork placements; the video group…
Web Audio/Video Streaming Tool
NASA Technical Reports Server (NTRS)
Guruvadoo, Eranna K.
2003-01-01
In order to promote a NASA-wide educational outreach program to educate and inform the public about space exploration, NASA, at Kennedy Space Center, is seeking efficient ways to add more content to the web by streaming audio/video files. This project proposes a high-level overview of a framework for the creation, management, and scheduling of audio/video assets over the web. To support short-term goals, the prototype of a web-based tool is designed and demonstrated to automate the process of streaming audio/video files. The tool provides web-enabled user interfaces to manage video assets, create publishable schedules of video assets for streaming, and schedule the streaming events. These operations are performed on user-defined and system-derived metadata of audio/video assets stored in a relational database, while the assets reside in a separate repository. The prototype tool is designed using ColdFusion 5.0.
A scheme for racquet sports video analysis with the combination of audio-visual information
NASA Astrophysics Data System (ADS)
Xing, Liyuan; Ye, Qixiang; Zhang, Weigang; Huang, Qingming; Yu, Hua
2005-07-01
As a very important category of sports video, racquet sports video, e.g., table tennis, tennis, and badminton, has received little attention in past years. Considering the characteristics of this kind of sports video, we propose a new scheme for structure indexing and highlight generation based on the combination of audio and visual information. First, a supervised classification method is employed to detect important audio symbols including impacts (ball hits), audience cheers, commentator speech, etc. Meanwhile, an unsupervised algorithm is proposed to group video shots into various clusters. Then, by taking advantage of the temporal relationship between audio and visual signals, we can label the scene clusters with semantic labels including rally scenes and break scenes. Third, a refinement procedure is developed to reduce false rally scenes by further audio analysis. Finally, an excitement model is proposed to rank the detected rally scenes, from which many exciting video clips such as game (match) points can be correctly retrieved. Experiments on two types of representative racquet sports video, table tennis video and tennis video, demonstrate encouraging results.
Attention to and Memory for Audio and Video Information in Television Scenes.
ERIC Educational Resources Information Center
Basil, Michael D.
A study investigated whether selective attention to a particular television modality resulted in different levels of attention to and memory for each modality. Two independent variables manipulated selective attention. These were the semantic channel (audio or video) and viewers' instructed focus (audio or video). These variables were fully…
ERIC Educational Resources Information Center
Ludlow, Barbara L.; Foshay, John B.; Duff, Michael C.
Video presentations of teaching episodes in home, school, and community settings and audio recordings of parents' and professionals' views can be important adjuncts to personnel preparation in special education. This paper describes instructional applications of digital media and outlines steps in producing audio and video segments. Digital audio…
An Investigation of Technological Innovation: Interactive Television.
ERIC Educational Resources Information Center
Robinson, Rhonda S.
A 5-year case study was implemented to evaluate the two-way Carroll Instructional Television Consortium, which utilizes a cable television network serving four school districts in Illinois. This network permits simultaneous video and audio interactive communication among four high schools. The naturalistic inquiry method employed included…
Interactive Educational Multimedia: Coping with the Need for Increasing Data Storage.
ERIC Educational Resources Information Center
Malhotra, Yogesh; Erickson, Ranel E.
1994-01-01
Discusses the storage requirements for data forms used in interactive multimedia education and presently available storage devices. Highlights include characteristics of educational multimedia; factors determining data storage requirements; storage devices for video and audio needs; laserdiscs and videodiscs; compact discs; magneto-optical drives;…
ENERGY STAR Certified Audio Video
Certified models meet all ENERGY STAR requirements as listed in the Version 3.0 ENERGY STAR Program Requirements for Audio Video Equipment, effective as of May 1, 2013. A detailed listing of key efficiency criteria is available at http://www.energystar.gov/index.cfm?c=audio_dvd.pr_crit_audio_dvd
News video story segmentation method using fusion of audio-visual features
NASA Astrophysics Data System (ADS)
Wen, Jun; Wu, Ling-da; Zeng, Pu; Luan, Xi-dao; Xie, Yu-xiang
2007-11-01
News story segmentation is an important aspect of news video analysis. This paper presents a method for news video story segmentation. Unlike prior works, which are based on visual feature transforms, the proposed technique uses audio features as a baseline and fuses visual features with them to refine the results. First, it selects silence clips as audio candidate points, and selects shot boundaries and anchor shots as two kinds of visual candidate points. It then uses the audio candidates as cues and develops a fusion method that effectively employs the diverse types of visual candidates to refine the audio candidates into story boundaries. Experimental results show that this method has high efficiency and adapts well to different kinds of news video.
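The fusion idea above (silence-based audio candidates confirmed and refined by visual candidates) can be sketched as follows. Names and the one-second tolerance are assumptions for illustration, not the paper's parameters:

```python
# Illustrative sketch of the fusion step: keep a silence-based audio candidate
# as a story boundary only when a shot boundary lies nearby, and snap it to
# that boundary.
def refine_boundaries(silence_points, shot_boundaries, tol=1.0):
    stories = []
    for t in silence_points:
        nearest = min(shot_boundaries, key=lambda b: abs(b - t))
        if abs(nearest - t) <= tol:
            stories.append(nearest)
    return sorted(set(stories))

print(refine_boundaries([5.2, 20.0], [5.0, 30.0]))  # [5.0]
```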
Use of Video and Audio Texts in EFL Listening Test
ERIC Educational Resources Information Center
Basal, Ahmet; Gülözer, Kaine; Demir, Ibrahim
2015-01-01
The study aims to discover whether audio or video modality in a listening test is more beneficial to test takers. In this study, the posttest-only control group design was utilized and quantitative data were collected in order to measure participant performances concerning two types of modality (audio or video) in a listening test. The…
NASA Technical Reports Server (NTRS)
1974-01-01
A descriptive handbook for the audio/CTE splitter/interleaver (RCA part No. 8673734-502) was presented. This unit is designed to perform two major functions: extract audio and time data from an interleaved video/audio signal (splitter section), and provide a test interleaved video/audio/CTE signal for the system (interleaver section). It is a rack mounting unit 7 inches high, 19 inches wide, 20 inches deep, mounted on slides for retracting from the rack, and weighs approximately 40 pounds. The following information is provided: installation, operation, principles of operation, maintenance, schematics and parts lists.
Influence of audio triggered emotional attention on video perception
NASA Astrophysics Data System (ADS)
Torres, Freddy; Kalva, Hari
2014-02-01
Perceptual video coding methods attempt to improve compression efficiency by discarding visual information not perceived by end users. Most current approaches to perceptual video coding use only visual features, ignoring the auditory component. Many psychophysical studies have demonstrated that auditory stimuli affect our visual perception. In this paper we present our study of audio-triggered emotional attention and its applicability to perceptual video coding. Experiments with movie clips show that the reaction time to detect video compression artifacts was longer when the video was presented with audio. The results reported are statistically significant with p=0.024.
ERIC Educational Resources Information Center
Reddy, Christopher
2014-01-01
Interactive television is a type of distance education that uses streaming audio and video technology for real-time student-teacher interaction. Here, I discuss the design and logistics for developing a high school laboratory-based science course taught to students at a distance using interactive technologies. The goal is to share a successful…
Effect of Audio vs. Video on Aural Discrimination of Vowels
ERIC Educational Resources Information Center
McCrocklin, Shannon
2012-01-01
Despite the growing use of media in the classroom, the effects of using audio versus video in pronunciation teaching have been largely ignored. To analyze the impact of audio or video training on aural discrimination of vowels, 61 participants (all students at a large American university) took a pre-test followed by two training…
Reasons to Rethink the Use of Audio and Video Lectures in Online Courses
ERIC Educational Resources Information Center
Stetz, Thomas A.; Bauman, Antonina A.
2013-01-01
Recent technological developments allow any instructor to create audio and video lectures for use in online classes. However, it is questionable whether they are worth the time and effort that faculty put into preparing them. This paper presents thirteen factors that should be considered before preparing and using audio and video lectures in…
Streaming Audio and Video: New Challenges and Opportunities for Museums.
ERIC Educational Resources Information Center
Spadaccini, Jim
Streaming audio and video present new challenges and opportunities for museums. Streaming media is easier to author and deliver to Internet audiences than ever before; digital video editing is commonplace now that the tools--computers, digital video cameras, and hard drives--are so affordable; the cost of serving video files across the Internet…
The effect of context and audio-visual modality on emotions elicited by a musical performance
Coutinho, Eduardo; Scherer, Klaus R.
2016-01-01
In this work, we compared emotions induced by the same performance of Schubert Lieder during a live concert and in a laboratory viewing/listening setting to determine the extent to which laboratory research on affective reactions to music approximates real listening conditions in dedicated performances. We measured emotions experienced by volunteer members of an audience that attended a Lieder recital in a church (Context 1) and emotional reactions to an audio-video-recording of the same performance in a university lecture hall (Context 2). Three groups of participants were exposed to three presentation versions in Context 2: (1) an audio-visual recording, (2) an audio-only recording, and (3) a video-only recording. Participants achieved statistically higher levels of emotional convergence in the live performance than in the laboratory context, and the experience of particular emotions was determined by complex interactions between auditory and visual cues in the performance. This study demonstrates the contribution of the performance setting and the performers’ appearance and nonverbal expression to emotion induction by music, encouraging further systematic research into the factors involved. PMID:28781419
Summarizing Audiovisual Contents of a Video Program
NASA Astrophysics Data System (ADS)
Gong, Yihong
2003-12-01
In this paper, we focus on video programs that are intended to disseminate information and knowledge, such as news, documentaries, and seminars, and present an audiovisual summarization system that summarizes the audio and visual contents of a given video separately and then integrates the two summaries with a partial alignment. The audio summary is created by selecting the spoken sentences that best present the main content of the speech, while the visual summary is created by eliminating duplicates/redundancies and preserving visually rich contents in the image stream. The alignment operation aims to synchronize each spoken sentence in the audio summary with its corresponding speaker's face and to preserve the rich content in the visual summary. A bipartite-graph-based audiovisual alignment algorithm is developed to efficiently find the best alignment solution that satisfies these requirements. With the proposed system, we strive to produce a video summary that: (1) provides a natural visual and audio content overview, and (2) maximizes the coverage of both audio and visual contents of the original video without sacrificing either of them.
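The alignment step can be viewed as a minimum-cost bipartite assignment between audio-summary sentences and visual segments. A brute-force toy version, purely illustrative and far less efficient than the paper's graph algorithm, might look like this:

```python
from itertools import permutations

# Illustrative stand-in for the bipartite-graph alignment: find the
# minimum-cost assignment of n audio-summary sentences to n visual segments.
def align(cost):
    """cost[i][j]: penalty of aligning sentence i with visual segment j."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best)

print(align([[1, 10], [10, 1]]))  # [0, 1]
```

In practice the cost would encode, for example, whether the speaker's face appears in the candidate segment.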
Low-delay predictive audio coding for the HIVITS HDTV codec
NASA Astrophysics Data System (ADS)
McParland, A. K.; Gilchrist, N. H. C.
1995-01-01
The status of work relating to predictive audio coding, as part of the European project on High Quality Video Telephone and HDTV Systems (HIVITS), is reported. The predictive coding algorithm is developed, along with six-channel audio coding and decoding hardware. Demonstrations of the audio codec operating in conjunction with the video codec are given.
Eye movements while viewing narrated, captioned, and silent videos
Ross, Nicholas M.; Kowler, Eileen
2013-01-01
Videos are often accompanied by narration delivered either by an audio stream or by captions, yet little is known about saccadic patterns while viewing narrated video displays. Eye movements were recorded while viewing video clips with (a) audio narration, (b) captions, (c) no narration, or (d) concurrent captions and audio. A surprisingly large proportion of time (>40%) was spent reading captions even in the presence of a redundant audio stream. Redundant audio did not affect the saccadic reading patterns but did lead to skipping of some portions of the captions and to delays of saccades made into the caption region. In the absence of captions, fixations were drawn to regions with a high density of information, such as the central region of the display, and to regions with high levels of temporal change (actions and events), regardless of the presence of narration. The strong attraction to captions, with or without redundant audio, raises the question of what determines how time is apportioned between captions and video regions so as to minimize information loss. The strategies of apportioning time may be based on several factors, including the inherent attraction of the line of sight to any available text, the moment by moment impressions of the relative importance of the information in the caption and the video, and the drive to integrate visual text accompanied by audio into a single narrative stream. PMID:23457357
Enhancing E-Learning with Media-Rich Content and Interactions
ERIC Educational Resources Information Center
Caladine, Richard
2008-01-01
Online learning is transcending from the text-rich educational experience of the past to a video- and audio-rich learning transformation. The greater levels of media-rich content and media-rich interaction that are currently prevalent in online leisure experiences will help to increase e-learning's future efficiency and effectiveness. "Enhancing…
Model-Driven Development of Interactive Multimedia Applications with MML
NASA Astrophysics Data System (ADS)
Pleuss, Andreas; Hussmann, Heinrich
There is an increasing demand for high-quality interactive applications which combine complex application logic with a sophisticated user interface, making use of individual media objects like graphics, animations, 3D graphics, audio or video. Their development is still challenging as it requires the integration of software design, user interface design, and media design.
Message Modality and Source Credibility Can Interact to Affect Argument Processing.
ERIC Educational Resources Information Center
Booth-Butterfield, Steve; Gutowski, Christine
1993-01-01
Extends previous modality and source cue studies by manipulating argument quality. Randomly assigned college students by class to an argument quality by source attribute by modality factorial experiment. Finds the print mode produces only argument main effects, and audio and video modes produce argument by cue interactions. Finds data inconsistent…
The Two-Way Language Bridge: Co-Constructing Bilingual Language Learning Opportunities
ERIC Educational Resources Information Center
Martin-Beltran, Melinda
2010-01-01
Using a sociocultural theoretical lens, this study examines the nature of student interactions in a dual immersion school to analyze affordances for bilingual language learning, language exchange, and co-construction of language expertise. This article focuses on data from audio- and video-recorded interactions of fifth-grade students engaged in…
Interactive Media Instruction: Webcasting College Radio and Television Programs.
ERIC Educational Resources Information Center
Reppert, James E.
Recent innovations involving audio and video on the Internet allow for more creativity and flexibility in the broadcast education classroom. Despite the fact that Southern Arkansas University (SAU) has a modest budget allocated for its broadcast journalism program, significant interactive changes have taken place. At the outset of the fall 1999…
Oral Computer-Mediated Interaction between L2 Learners: It's about Time!
ERIC Educational Resources Information Center
Yanguas, Inigo
2010-01-01
This study explores task-based, synchronous oral computer-mediated communication (CMC) among intermediate-level learners of Spanish. In particular, this paper examines (a) how learners in video and audio CMC groups negotiate for meaning during task-based interaction, (b) possible differences between both oral CMC modes and traditional face-to-face…
Using Music to Communicate Geoscience in Films, Videos and Interactive Games
NASA Astrophysics Data System (ADS)
Kerlow, I.
2017-12-01
Music is a powerful storytelling device and an essential component in today's movies and interactive games. Communicating Earth science can be enhanced and focused with the proper use of a musical score, particularly in the context of documentary films, television programs, interactive games, and museum installations. This presentation covers five simple professional techniques for integrating music, visuals, and voice-over narration into a single cohesive story that is emotionally engaging. It also offers five practical tips for improving the success of a musical collaboration. The concepts in question are illustrated with practical audio and video examples from real science projects.
Video conference quality assessment based on cooperative sensing of video and audio
NASA Astrophysics Data System (ADS)
Wang, Junxi; Chen, Jialin; Tian, Xin; Zhou, Cheng; Zhou, Zheng; Ye, Lu
2015-12-01
This paper presents a method for video conference quality assessment based on cooperative sensing of video and audio. In this method, a proposed video quality evaluation method is used to assess video frame quality. Each video frame is divided into a noise image and a filtered image by a bilateral filter. This is similar to a characteristic of human vision, which can also be seen as low-pass filtering. The audio frames are evaluated by the PEAQ algorithm. The two results are integrated to evaluate the overall video conference quality. A video conference database was built to test the performance of the proposed method. The objective results correlate well with MOS, from which we conclude that the proposed method is effective in assessing video conference quality.
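The frame decomposition idea can be sketched in one dimension as splitting a signal into a smoothed ("filtered") part and a residual ("noise") part. A moving average stands in here for the paper's bilateral filter, and all names are assumptions:

```python
# Illustrative sketch: decompose a 1-D signal into a smoothed component and
# a residual "noise" component (a stand-in for the bilateral-filter split).
def split_noise(signal, k=3):
    half = k // 2
    smooth = []
    for i in range(len(signal)):
        win = signal[max(0, i - half): i + half + 1]
        smooth.append(sum(win) / len(win))
    noise = [s - f for s, f in zip(signal, smooth)]
    return smooth, noise
```

A quality score could then be derived from the energy of the noise component relative to the filtered component.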
Audio-Visual Perception System for a Humanoid Robotic Head
Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M.; Bandera, Juan P.; Romero-Garces, Adrian; Reche-Lopez, Pedro
2014-01-01
One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to the people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, evaluations of the benefits of audio-visual attention mechanisms, compared to audio-only or visual-only approaches, in real scenarios are lacking. Most of the tests conducted have been within controlled environments, at short distances, and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared by considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework. PMID:24878593
Application discussion of source coding standard in voyage data recorder
NASA Astrophysics Data System (ADS)
Zong, Yonggang; Zhao, Xiandong
2018-04-01
This paper analyzes the disadvantages of the audio and video compression coding technology used by voyage data recorders, in light of improvements in the performance of audio and video acquisition equipment. An approach to improving the recorder's audio and video compression coding is proposed, and the feasibility of adopting the new compression coding technology is analyzed from both economic and technical perspectives.
Audio-based queries for video retrieval over Java enabled mobile devices
NASA Astrophysics Data System (ADS)
Ahmad, Iftikhar; Cheikh, Faouzi Alaya; Kiranyaz, Serkan; Gabbouj, Moncef
2006-02-01
In this paper we propose a generic framework for efficient retrieval of audiovisual media based on its audio content. The framework is implemented in a client-server architecture, where the client application is developed in Java to be platform independent, whereas the server application is implemented for the PC platform. The client application adapts to the characteristics of the mobile device on which it runs, such as screen size and commands. The entire framework is designed to take advantage of high-level segmentation and classification of audio content to improve the speed and accuracy of audio-based media retrieval. The primary objective of this framework is therefore to provide an adaptive basis for performing efficient video retrieval operations based on audio content and types (i.e. speech, music, fuzzy, and silence). Experimental results confirm that such an audio-based video retrieval scheme can be used from mobile devices to search and retrieve video clips efficiently over wireless networks.
Lin, Yu-You; Chiang, Wen-Chu; Hsieh, Ming-Ju; Sun, Jen-Tang; Chang, Yi-Chung; Ma, Matthew Huei-Ming
2018-02-01
This study aimed to conduct a systematic review and meta-analysis comparing the effect of video-assistance and audio-assistance on quality of dispatcher-instructed cardiopulmonary resuscitation (DI-CPR) for bystanders. Five databases were searched, including PubMed, Cochrane library, Embase, Scopus and NIH clinical trial, to find randomized control trials published before June 2017. Qualitative analysis and meta-analysis were undertaken to examine the difference between the quality of video-instructed and audio-instructed dispatcher-instructed bystander CPR. The database search yielded 929 records, resulting in the inclusion of 9 relevant articles in this study. Of these, 6 were included in the meta-analysis. Initiation of chest compressions was slower in the video-instructed group than in the audio-instructed group (median delay 31.5 s; 95% CI: 10.94-52.09). The difference in the number of chest compressions per minute between the groups was 19.9 (95% CI: 10.50-29.38) with significantly faster compressions in the video-instructed group than in the audio-instructed group (104.8 vs. 80.6). The odds ratio (OR) for correct hand positioning was 0.8 (95% CI: 0.53-1.30) when comparing the audio-instructed and video-instructed groups. The differences in chest compression depth (mm) and time to first ventilation (seconds) between the video-instructed group and audio-instructed group were 1.6 mm (95% CI: -8.75, 5.55) and 7.5 s (95% CI: -56.84, 71.80), respectively. Video-instructed DI-CPR significantly improved the chest compression rate compared to the audio-instructed method, and a trend for correctness of hand position was also observed. However, this method caused a delay in the commencement of bystander-initiated CPR in the simulation setting. Copyright © 2017 Elsevier B.V. All rights reserved.
Multi-modal gesture recognition using integrated model of motion, audio and video
NASA Astrophysics Data System (ADS)
Goutsu, Yusuke; Kobayashi, Takaki; Obara, Junya; Kusajima, Ikuo; Takeichi, Kazunari; Takano, Wataru; Nakamura, Yoshihiko
2015-07-01
Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation, and sign language. With increasing motion sensor development, multiple data sources have become available, leading to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. To solve this problem, a novel approach that integrates motion, audio, and video models is proposed, using a dataset captured with Kinect. The proposed system recognizes observed gestures using the three models: their recognition results are integrated by the proposed framework, and the combined output becomes the final result. The motion and audio models are learned using Hidden Markov Models, while a Random Forest classifier is used to learn the video model. In experiments testing the performance of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying the feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All experiments are conducted on the dataset provided by the organizer of MMGRC, a workshop for the Multi-Modal Gesture Recognition Challenge. The comparison shows that the multi-modal model composed of all three models achieves the highest recognition rate; this improvement indicates that the complementary relationship among the three models improves the accuracy of gesture recognition. The proposed system provides application technology for understanding human actions of daily life more precisely.
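The integration of the three models' outputs can be sketched as a simple weighted late fusion of per-class scores; the function name and uniform weights are assumptions for illustration, not the authors' framework:

```python
# Hypothetical late-fusion sketch: combine per-class scores from the motion,
# audio, and video models by a weighted sum and pick the winning class.
def fuse(motion_p, audio_p, video_p, weights=(1.0, 1.0, 1.0)):
    combined = [weights[0] * m + weights[1] * a + weights[2] * v
                for m, a, v in zip(motion_p, audio_p, video_p)]
    return combined.index(max(combined))  # index of the winning gesture class

print(fuse([0.6, 0.4], [0.2, 0.8], [0.3, 0.7]))  # 1
```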
Video-assisted segmentation of speech and audio track
NASA Astrophysics Data System (ADS)
Pandit, Medha; Yusoff, Yusseri; Kittler, Josef; Christmas, William J.; Chilton, E. H. S.
1999-08-01
Video database research is commonly concerned with the storage and retrieval of visual information, involving sequence segmentation, shot representation, and video clip retrieval. In multimedia applications, video sequences are usually accompanied by a sound track. The sound track contains potential cues to aid shot segmentation, such as different speakers, background music, singing, and distinctive sounds. These different acoustic categories can be modeled to allow for effective database retrieval. In this paper, we address the problem of automatic segmentation of the audio track of multimedia material. This audio-based segmentation can be combined with video scene shot detection in order to partition the multimedia material into semantically significant segments.
Defining Audio/Video Redundancy from a Limited Capacity Information Processing Perspective.
ERIC Educational Resources Information Center
Lang, Annie
1995-01-01
Investigates whether audio/video redundancy improves memory for television messages. Suggests a theoretical framework for classifying previous work and reinterpreting the results. Suggests general support for the notion that redundancy levels affect the capacity requirements of the message, which impact differentially on audio or visual…
ERIC Educational Resources Information Center
Barker, Bruce O.; Bannon, James
This paper describes the Hawaii Interactive Television System (HITS) program and provides an evaluation of the first year of broadcasts for the advanced placement (AP) calculus course. HITS allows two-way video-audio interaction among origination sites, but the configuration used by the Department of Education for its Teleschool program is the…
ERIC Educational Resources Information Center
Lawless-Reljic, Sabine Karine
2010-01-01
Growing interest of educational institutions in desktop 3D graphic virtual environments for hybrid and distance education prompts questions on the efficacy of such tools. Virtual worlds, such as Second Life[R], enable computer-mediated immersion and interactions encompassing multimodal communication channels including audio, video, and text…
NASA Astrophysics Data System (ADS)
Park, Nam In; Kim, Seon Man; Kim, Hong Kook; Kim, Ji Woon; Kim, Myeong Bo; Yun, Su Won
In this paper, we propose a video-zoom driven audio-zoom algorithm that provides audio zooming effects in accordance with the degree of video zoom. The proposed algorithm is based on a super-directive beamformer operating with a 4-channel microphone system, in conjunction with a soft masking process that considers the phase differences between microphones. The audio-zoom processed signal is obtained by multiplying the masked signal by an audio gain derived from the video-zoom level. Finally, a real-time audio-zoom system is implemented on an ARM Cortex-A8 with a clock speed of 600 MHz after several levels of optimization, including algorithmic, C-code, and memory optimizations. To evaluate the complexity of the proposed real-time audio-zoom system, 21.3 seconds of test data sampled at 48 kHz are used. The experiments show that the processing time for the proposed audio-zoom system occupies 14.6% or less of the ARM clock cycles. Experimental results obtained in a semi-anechoic chamber also show that the signal from the front direction can be amplified by approximately 10 dB relative to the other directions.
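The coupling of video-zoom level to audio gain can be sketched as a mapping from zoom factor to a linear amplitude gain. The 10 dB ceiling follows the abstract, but the linear-in-dB mapping and the `max_zoom` value are assumptions for illustration:

```python
# Illustrative mapping from video-zoom level (1.0 = no zoom) to linear audio
# gain, capped at 10 dB as reported in the abstract.
def audio_zoom_gain(zoom_level, max_zoom=4.0, max_gain_db=10.0):
    frac = min(max(zoom_level - 1.0, 0.0) / (max_zoom - 1.0), 1.0)
    gain_db = frac * max_gain_db
    return 10 ** (gain_db / 20.0)  # dB to linear amplitude

print(audio_zoom_gain(1.0), audio_zoom_gain(4.0))
```

The masked beamformer output would then be multiplied sample-by-sample by this gain.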
Hierarchical structure for audio-video based semantic classification of sports video sequences
NASA Astrophysics Data System (ADS)
Kolekar, M. H.; Sengupta, S.
2005-07-01
A hierarchical structure for sports event classification based on audio and video content analysis is proposed in this paper. Compared to the event classifications in other games, those of cricket are very challenging and yet unexplored. We have successfully solved cricket video classification problem using a six level hierarchical structure. The first level performs event detection based on audio energy and Zero Crossing Rate (ZCR) of short-time audio signal. In the subsequent levels, we classify the events based on video features using a Hidden Markov Model implemented through Dynamic Programming (HMM-DP) using color or motion as a likelihood function. For some of the game-specific decisions, a rule-based classification is also performed. Our proposed hierarchical structure can easily be applied to any other sports. Our results are very promising and we have moved a step forward towards addressing semantic classification problems in general.
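The first hierarchy level, event detection from audio energy and Zero Crossing Rate, can be sketched as follows; the threshold values are illustrative assumptions, not the paper's:

```python
# Sketch of first-level audio event detection: flag frames that are both
# loud (short-time energy) and noise-like (high ZCR), e.g. bat-ball impacts.
def zcr(frame):
    """Zero Crossing Rate of one short-time audio frame."""
    return sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (len(frame) - 1)

def energy(frame):
    """Mean short-time energy of one frame."""
    return sum(x * x for x in frame) / len(frame)

def is_event(frame, e_thr=0.1, z_thr=0.3):
    return energy(frame) > e_thr and zcr(frame) > z_thr
```

Frames flagged here would be passed down to the video-based HMM-DP levels for classification.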
ERIC Educational Resources Information Center
Abbott, George L.; And Others
1987-01-01
This special feature focuses on recent developments in optical disk technology. Nine articles discuss current trends, large scale image processing, data structures for optical disks, the use of computer simulators to create optical disks, videodisk use in training, interactive audio video systems, impacts on federal information policy, and…
WORKSHOP ON MINING IMPACTED NATIVE AMERICAN LANDS CD
Multimedia Technology is an exciting mix of cutting-edge Information Technologies that utilize a variety of interactive structures, digital video and audio technologies, 3-D animation, high-end graphics, and peer-reviewed content that are then combined in a variety of user-friend...
Applications of ENF criterion in forensic audio, video, computer and telecommunication analysis.
Grigoras, Catalin
2007-04-11
This article reports on the electric network frequency (ENF) criterion as a means of assessing the integrity of digital audio/video evidence in forensic IT and telecommunication analysis. A brief description is given of the different ENF types and the phenomena that determine ENF variations. In most situations, visual inspection of spectrograms and comparison with an ENF database are enough to reach a non-authenticity opinion. A more detailed investigation, in the time domain, requires short-time-window measurements and analyses. The stability of the ENF over geographical distances has been established by comparing synchronized recordings made at different locations on the same network. Real cases are presented in which the ENF criterion was used to investigate audio and video files created with covert surveillance systems, a digitized audio/video recording, and a TV-broadcast report. By applying the ENF criterion in forensic audio/video analysis, one can determine whether and where a digital recording has been edited, establish whether it was made at the time claimed, and identify the time and date of the recording operation.
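Dating a recording against an ENF database amounts to finding where the recording's extracted ENF trace best matches the reference trace. A toy matching scheme, not the forensic procedure from the article, might look like this:

```python
# Illustrative sketch: slide a recording's ENF trace (Hz per time step) over
# a reference database trace and return the offset with minimum mean squared
# error, i.e. the most likely recording time.
def best_match_offset(recording_enf, database_enf):
    n = len(recording_enf)
    best_off, best_err = 0, float("inf")
    for off in range(len(database_enf) - n + 1):
        err = sum((r - d) ** 2 for r, d in
                  zip(recording_enf, database_enf[off:off + n])) / n
        if err < best_err:
            best_off, best_err = off, err
    return best_off
```

A discontinuity in the residual error along the matched region could likewise hint at an edit point.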
Audio-video feature correlation: faces and speech
NASA Astrophysics Data System (ADS)
Durand, Gwenael; Montacie, Claude; Caraty, Marie-Jose; Faudemay, Pascal
1999-08-01
This paper presents a study of the correlation between features automatically extracted from the audio stream and the video stream of audiovisual documents. In particular, we were interested in finding out whether speech analysis tools could be combined with face detection methods, and to what extent they should be combined. A generic audio signal partitioning algorithm was first used to detect Silence/Noise/Music/Speech segments in a full-length movie. A generic object detection method was applied to the keyframes extracted from the movie in order to detect the presence or absence of faces. The correlation between the presence of a face in the keyframes and of the corresponding voice in the audio stream was studied. A third stream, the script of the movie, is warped onto the speech channel in order to automatically label faces appearing in the keyframes with the name of the corresponding character. We found that the extracted audio and video features were related in many cases, and that significant benefits can be obtained from the joint use of audio and video analysis methods.
32 CFR 9.6 - Conduct of the trial.
Code of Federal Regulations, 2012 CFR
2012-07-01
... maximum extent practicable. Photography, video, or audio broadcasting, or recording of or at Commission proceedings shall be prohibited, except photography, video, and audio recording by the Commission pursuant to...
32 CFR 9.6 - Conduct of the trial.
Code of Federal Regulations, 2011 CFR
2011-07-01
... maximum extent practicable. Photography, video, or audio broadcasting, or recording of or at Commission proceedings shall be prohibited, except photography, video, and audio recording by the Commission pursuant to...
Cultural Specific Effects on the Recognition of Basic Emotions: A Study on Italian Subjects
NASA Astrophysics Data System (ADS)
Esposito, Anna; Riviello, Maria Teresa; Bourbakis, Nikolaos
The present work reports the results of perceptual experiments aimed at investigating whether some of the basic emotions are perceptually privileged, and whether the cultural environment and the perceptual mode play a role in this preference. To this aim, Italian subjects were asked to assess emotional stimuli extracted from Italian and American English movies in the single (either video or audio alone) and the combined audio/video modes. Results showed that anger, fear, and sadness are better perceived than surprise and happiness in both cultural environments (irony, instead, strongly depends on the language), that emotional information is affected by the communication mode, and that language plays a role in assessing emotional information. Implications for the implementation of emotionally colored interactive systems are discussed.
Establishing a gold standard for manual cough counting: video versus digital audio recordings
Smith, Jaclyn A; Earis, John E; Woodcock, Ashley A
2006-01-01
Background Manual cough counting is time-consuming and laborious; however, it is the standard against which automated cough monitoring devices must be compared. We have compared manual cough counting from video recordings with manual cough counting from digital audio recordings. Methods We studied 8 patients with chronic cough, overnight in laboratory conditions (diagnoses were 5 asthma, 1 rhinitis, 1 gastro-oesophageal reflux disease and 1 idiopathic cough). Coughs were recorded simultaneously using a video camera with infrared lighting and digital sound recording. The number of coughs in each 8-hour recording was counted manually, by a trained observer, in real time from the video recordings and using audio-editing software from the digital sound recordings. Results The median cough frequency was 17.8 (IQR 5.9–28.7) cough sounds per hour in the video recordings and 17.7 (6.0–29.4) coughs per hour in the digital sound recordings. There was excellent agreement between the video and digital audio cough rates; mean difference of -0.3 coughs per hour (SD ± 0.6), 95% limits of agreement -1.5 to +0.9 coughs per hour. Video recordings had poorer sound quality even in controlled conditions and can only be analysed in real time (8 hours per recording). Digital sound recordings required 2–4 hours of analysis per recording. Conclusion Manual counting of cough sounds from digital audio recordings has excellent agreement with simultaneous video recordings in laboratory conditions. We suggest that ambulatory digital audio recording is therefore ideal for validating future cough monitoring devices, as this can be performed in the patient's own environment. PMID:16887019
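The agreement figures quoted above (a mean difference with 95% limits at mean ± 1.96 SD) follow the standard Bland-Altman construction, which is simple to sketch. The paired hourly rates below are invented for illustration, chosen so the mean difference matches the -0.3 reported in the abstract:

```python
import numpy as np

def limits_of_agreement(x, y):
    """Bland-Altman: mean difference and 95% limits (mean ± 1.96 SD)."""
    d = np.asarray(x, float) - np.asarray(y, float)
    mean_d, sd = d.mean(), d.std(ddof=1)
    return mean_d, mean_d - 1.96 * sd, mean_d + 1.96 * sd

# Hypothetical paired hourly cough rates (video count vs audio count)
video = [17.8, 5.9, 28.7, 12.0, 22.5]
audio = [17.7, 6.0, 29.4, 12.4, 22.9]
mean_d, lower, upper = limits_of_agreement(video, audio)
```

Narrow limits around a near-zero mean difference, as in the study, indicate the two counting methods can be used interchangeably.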
34 CFR 388.22 - What priorities does the Secretary consider in making an award?
Code of Federal Regulations, 2011 CFR
2011-07-01
... education methods, such as interactive audio, video, computer technologies, or existing telecommunications... training materials and practices. The proposed project demonstrates an effective plan to develop and... programs by other State vocational rehabilitation units. (2) Distance education. The proposed project...
34 CFR 388.22 - What priorities does the Secretary consider in making an award?
Code of Federal Regulations, 2010 CFR
2010-07-01
... education methods, such as interactive audio, video, computer technologies, or existing telecommunications... training materials and practices. The proposed project demonstrates an effective plan to develop and... programs by other State vocational rehabilitation units. (2) Distance education. The proposed project...
34 CFR 388.22 - What priorities does the Secretary consider in making an award?
Code of Federal Regulations, 2014 CFR
2014-07-01
... education methods, such as interactive audio, video, computer technologies, or existing telecommunications... training materials and practices. The proposed project demonstrates an effective plan to develop and... programs by other State vocational rehabilitation units. (2) Distance education. The proposed project...
34 CFR 388.22 - What priorities does the Secretary consider in making an award?
Code of Federal Regulations, 2013 CFR
2013-07-01
... education methods, such as interactive audio, video, computer technologies, or existing telecommunications... training materials and practices. The proposed project demonstrates an effective plan to develop and... programs by other State vocational rehabilitation units. (2) Distance education. The proposed project...
34 CFR 388.22 - What priorities does the Secretary consider in making an award?
Code of Federal Regulations, 2012 CFR
2012-07-01
... education methods, such as interactive audio, video, computer technologies, or existing telecommunications... training materials and practices. The proposed project demonstrates an effective plan to develop and... programs by other State vocational rehabilitation units. (2) Distance education. The proposed project...
ERIC Educational Resources Information Center
He, Aiguo
2009-01-01
A real-time interactive distance lecture is a joint work accomplished through the effort of the lecturer and the students at remote sites. It is important for the lecturer to obtain information about the students' understanding, which cannot be efficiently collected using only the video/audio channels between the lecturer and the students. This…
ERIC Educational Resources Information Center
da Silva, André Constantino; Freire, Fernanda Maria Pereira; de Arruda, Alan Victor Pereira; da Rocha, Heloísa Vieira
2013-01-01
e-Learning environments offer content such as text, audio, video, and animations, using the Web infrastructure, and they are designed for users interacting with a keyboard, a mouse, and a medium-sized screen. Mobile devices, such as smartphones and tablets, have enough computational power to render Web pages, allowing users to browse the Internet and access e-Learning…
NASA Astrophysics Data System (ADS)
Ndiaye, Maty; Quinquis, Catherine; Larabi, Mohamed Chaker; Le Lay, Gwenael; Saadane, Hakim; Perrine, Clency
2014-01-01
During the last decade, important advances in and the widespread availability of mobile technology (operating systems, GPUs, terminal resolution and so on) have encouraged fast development of voice and video services such as video-calling. While multimedia services have largely grown on mobile devices, the resulting increase in data consumption is leading to the saturation of mobile networks. In order to provide data at high bit-rates and maintain performance as close as possible to that of traditional networks, the 3GPP (3rd Generation Partnership Project) worked on a high-performance mobile standard called Long Term Evolution (LTE). In this paper, we aim at expressing recommendations for audio and video media profiles (selection of audio and video codecs, bit-rates, frame-rates, audio and video formats) for a typical video-calling service held over LTE/4G mobile networks. These profiles are defined according to the targeted devices (smartphones, tablets), so as to ensure the best possible quality of experience (QoE). The obtained results indicate that for the CIF format (352 x 288 pixels), which is usually used for smartphones, the VP8 codec provides better image quality than the H.264 codec at low bitrates (from 128 to 384 kbps). However, for sequences with high motion, H.264 in slow mode is preferred. Regarding audio, better results are globally achieved using wideband codecs, which offer good quality, except for the Opus codec at 12.2 kbps.
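Recommendations of this kind can be encoded as a simple profile selector. The helper below is hypothetical (not part of any 3GPP profile definition) and only mirrors the trends reported above: VP8 for low-bitrate CIF content, H.264 in slow mode for high-motion sequences:

```python
def recommend_video_codec(width, height, bitrate_kbps, high_motion=False):
    """Toy codec selector reflecting the reported trends (illustrative only)."""
    is_cif = (width, height) == (352, 288)
    if high_motion:
        return "H.264"       # slow-mode H.264 preferred for high-motion content
    if is_cif and 128 <= bitrate_kbps <= 384:
        return "VP8"         # better image quality at low CIF bitrates
    return "H.264"

choice = recommend_video_codec(352, 288, 256)            # "VP8"
choice_motion = recommend_video_codec(352, 288, 256, True)  # "H.264"
```

A production system would also fold in the audio codec, frame rate, and device class dimensions of the profiles.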
Validation of a digital audio recording method for the objective assessment of cough in the horse.
Duz, M; Whittaker, A G; Love, S; Parkin, T D H; Hughes, K J
2010-10-01
To validate the use of digital audio recording and analysis for quantification of coughing in horses. Part A: Nine simultaneous digital audio and video recordings were collected individually from seven stabled horses over a 1 h period using a digital audio recorder attached to the halter. Audio files were analysed using audio analysis software. Video and audio recordings were analysed for cough count and timing by two blinded operators on two occasions using a randomised study design for determination of intra-operator and inter-operator agreement. Part B: Seventy-eight hours of audio recordings obtained from nine horses were analysed once by two blinded operators to assess inter-operator repeatability on a larger sample. Part A: There was complete agreement between audio and video analyses and inter- and intra-operator analyses. Part B: There was >97% agreement between operators on number and timing of 727 coughs recorded over 78 h. The results of this study suggest that the cough monitor methodology used has excellent sensitivity and specificity for the objective assessment of cough in horses and intra- and inter-operator variability of recorded coughs is minimal. Crown Copyright 2010. Published by Elsevier India Pvt Ltd. All rights reserved.
Object detection in cinematographic video sequences for automatic indexing
NASA Astrophysics Data System (ADS)
Stauder, Jurgen; Chupeau, Bertrand; Oisel, Lionel
2003-06-01
This paper presents an object detection framework applied to cinematographic post-processing of video sequences. Post-processing is done after production and before editing. At the beginning of each shot of a video, a slate (also called a clapperboard) is shown. The slate notably contains an electronic audio timecode that is necessary for audio-visual synchronization. This paper presents an object detection framework to detect slates in video sequences for automatic indexing and post-processing. It is based on five steps. The first two steps aim to drastically reduce the video data to be analyzed. They ensure a high recall rate but have low precision. The first step detects images at the beginning of a shot that possibly show a slate, while the second step searches these images for candidate regions with a color distribution similar to slates. The objective is to not miss any slate while eliminating long parts of the video without slate appearances. The third and fourth steps are statistical classification and pattern matching to detect and precisely locate slates in candidate regions. These steps ensure a high recall rate and high precision. The objective is to detect slates with very few false alarms to minimize interactive corrections. In a last step, electronic timecodes are read from the slates to automate audio-visual synchronization. The presented slate detector has a recall rate of 89% and a precision of 97.5%. By temporal integration, much more than 89% of shots in dailies are detected. By timecode coherence analysis, the precision can be raised further. Issues for future work are to accelerate the system beyond real-time and to extend the framework to several slate types.
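The second step (finding candidate regions whose color distribution resembles a slate) can be sketched with normalized color histograms compared by histogram intersection. Everything below is synthetic and illustrative; the paper does not specify this exact similarity measure:

```python
import numpy as np

def color_hist(img, bins=8):
    """Normalized joint RGB histogram of an image patch, flattened."""
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                          range=[(0, 256)] * 3)
    return (h / h.sum()).ravel()

def hist_intersection(h1, h2):
    return float(np.minimum(h1, h2).sum())   # 1.0 means identical distributions

rng = np.random.default_rng(0)
slate_model   = rng.integers(0, 60, (32, 32, 3))     # dark, slate-like colors
dark_region   = rng.integers(0, 60, (32, 32, 3))     # plausible candidate
bright_region = rng.integers(200, 256, (32, 32, 3))  # clearly not a slate

sim_dark   = hist_intersection(color_hist(slate_model), color_hist(dark_region))
sim_bright = hist_intersection(color_hist(slate_model), color_hist(bright_region))
# Regions whose similarity exceeds a threshold pass to the classification step
```

Keeping the threshold low matches the paper's design goal for the first two steps: high recall at the cost of precision.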
Say What? The Role of Audio in Multimedia Video
NASA Astrophysics Data System (ADS)
Linder, C. A.; Holmes, R. M.
2011-12-01
Audio, including interviews, ambient sounds, and music, is a critical, yet often overlooked, part of an effective multimedia video. In February 2010, Linder joined scientists working on the Global Rivers Observatory Project for two weeks of intensive fieldwork in the Congo River watershed. The team's goal was to learn more about how climate change and deforestation are impacting the river system and coastal ocean. Using stills and video shot with a lightweight digital SLR outfit and audio recorded with a pocket-sized sound recorder, Linder documented the trials and triumphs of working in the heart of Africa. Using excerpts from the six-minute Congo multimedia video, this presentation will illustrate how to record and edit an engaging audio track. Topics include interview technique, collecting ambient sounds, choosing and using music, and editing it all together to educate and entertain the viewer.
Manson, Joseph H; Gervais, Matthew M; Bryant, Gregory A
2018-01-01
Little is known about people's ability to detect subclinical psychopathy from others' quotidian social behavior, or about the correlates of variation in this ability. This study sought to address these questions using a thin slice personality judgment paradigm. We presented 108 undergraduate judges (70.4% female) with 1.5 minute video thin slices of zero-acquaintance triadic conversations among other undergraduates (targets: n = 105, 57.1% female). Judges completed self-report measures of general trust, caution, and empathy. Target individuals had completed the Levenson Self-Report Psychopathy (LSRP) scale. Judges viewed the videos in one of three conditions: complete audio, silent, or audio from which semantic content had been removed using low-pass filtering. Using a novel other-rating version of the LSRP, judges' ratings of targets' primary psychopathy levels were significantly positively associated with targets' self-reports, but only in the complete audio condition. Judge general trust and target LSRP interacted, such that judges higher in general trust made less accurate judgments with respect to targets higher in primary and total psychopathy. Results are consistent with a scenario in which psychopathic traits are maintained in human populations by negative frequency dependent selection operating through the costs of detecting psychopathy in others.
Hierarchical vs non-hierarchical audio indexation and classification for video genres
NASA Astrophysics Data System (ADS)
Dammak, Nouha; BenAyed, Yassine
2018-04-01
In this paper, Support Vector Machines (SVMs) are used for segmenting and indexing video genres based only on audio features extracted at the block level, which has the advantage of capturing local temporal information. The main contribution of our study is to show the strong effect on classification accuracy of using a hierarchical categorization structure based on the Mel Frequency Cepstral Coefficients (MFCC) audio descriptor. The classification covers three common video genres: sports videos, music clips, and news scenes. The sub-classification may divide each genre into several multi-speaker and multi-dialect sub-genres. The validation of this approach was carried out on over 360 minutes of video, yielding a classification accuracy of over 99%.
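The two-level hierarchical structure can be sketched as a coarse genre decision followed by a sub-genre decision within one branch. For a dependency-free sketch, a nearest-centroid rule stands in for the SVMs, and random 13-dimensional vectors stand in for real MFCC block features; all names and values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_class(center, n=50, dim=13):
    """Synthetic 13-dimensional 'MFCC-like' block features around a center."""
    return center + 0.3 * rng.standard_normal((n, dim))

# Level 1: three genres (nearest centroid stands in for the SVM here)
train = {g: make_class(np.full(13, c))
         for g, c in {"sports": 0.0, "music": 2.0, "news": 4.0}.items()}
centroids = {g: x.mean(axis=0) for g, x in train.items()}

# Level 2: hypothetical sub-genres within "news" (e.g. dialects), same idea
sub_train = {"news_dialect_a": make_class(np.full(13, 3.8)),
             "news_dialect_b": make_class(np.full(13, 4.2))}
sub_centroids = {g: x.mean(axis=0) for g, x in sub_train.items()}

def nearest(cents, x):
    return min(cents, key=lambda g: np.linalg.norm(x - cents[g]))

def classify_hier(x):
    genre = nearest(centroids, x)
    return nearest(sub_centroids, x) if genre == "news" else genre
```

Replacing `nearest` with per-level SVM classifiers gives the structure the paper evaluates.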
ERIC Educational Resources Information Center
Raths, David
2013-01-01
Ten years ago, integrating videoconferencing into a college course required considerable effort on the part of the instructor and IT support staff. Today, video- and web-conferencing tools are more sophisticated. Distance education has morphed from audio- and videocassettes featuring talking heads to a more interactive experience with greater…
Tele-EnREDando.com: A Multimedia WEB-CALL Software for Mobile Phones.
ERIC Educational Resources Information Center
Garcia, Jose Carlos
2002-01-01
Presents one of the world's first prototypes of language learning software for smart-phones. Tele-EnREDando.com is an Internet based multimedia application designed for 3G mobile phones with audio, video, and interactive exercises for learning Spanish for business. (Author/VWL)
The Effects of Three Methods of Observation on Couples in Interactional Research.
ERIC Educational Resources Information Center
Carpenter, Linda J.; Merkel, William T.
1988-01-01
Assessed the effects of three different methods of observation of couples (one-way mirror, audio recording, and video recording) on 30 volunteer, nonclinical married couples. Results suggest that types of observation do not produce significantly different effects on nonclinical couples. (Author/ABL)
Integrated approach to multimodal media content analysis
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1999-12-01
In this work, we present a system for the automatic segmentation, indexing, and retrieval of audiovisual data based on the combination of audio, visual, and textual content analysis. The video stream is demultiplexed into audio, image, and caption components. Then, a semantic segmentation of the audio signal based on audio content analysis is conducted, and each segment is indexed as one of the basic audio types. The image sequence is segmented into shots based on visual information analysis, and keyframes are extracted from each shot. Meanwhile, keywords are detected from the closed captions. Index tables are designed for both linear and non-linear access to the video. Experiments show that the proposed methods for multimodal media content analysis are effective, and that the integrated framework achieves satisfactory results for video information filtering and retrieval.
Description of Audio-Visual Recording Equipment and Method of Installation for Pilot Training.
ERIC Educational Resources Information Center
Neese, James A.
The Audio-Video Recorder System was developed to evaluate the effectiveness of in-flight audio/video recording as a pilot training technique for the U.S. Air Force Pilot Training Program. It will be used to gather background and performance data for an experimental program. A detailed description of the system is presented and construction and…
ERIC Educational Resources Information Center
Inceçay, Volkan; Koçoglu, Zeynep
2017-01-01
The present study examined whether or not different input delivery modes have an effect on listening comprehension of Turkish students learning English at the university level. It investigated the effect of one single mode, which is audio-only, and three dual input delivery modes, which were audio-video, audio-video with target language subtitles…
Stochastic modeling of soundtrack for efficient segmentation and indexing of video
NASA Astrophysics Data System (ADS)
Naphade, Milind R.; Huang, Thomas S.
1999-12-01
Tools for efficient and intelligent management of digital content are essential for digital video data management. An extremely challenging research area in this context is multimedia analysis and understanding. The capabilities of audio analysis for video data management, in particular, are yet to be fully exploited. We present a novel scheme for indexing and segmentation of video by analyzing the audio track, and apply this analysis to the segmentation and indexing of movies. We build models for some interesting events in the motion picture soundtrack, including music, human speech, and silence. We propose the use of hidden Markov models to model the dynamics of the soundtrack and detect audio events, and use these models to segment and index the soundtrack. A practical problem with motion picture soundtracks is that the audio is of a composite nature: sounds from different sources are mixed, speech in the foreground over music in the background being a common example. The coexistence of multiple individual audio sources forces us to model such composite events explicitly. Experiments reveal that explicit modeling gives better results than modeling individual audio events separately.
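HMM-based soundtrack segmentation ultimately decodes a most-likely sequence of audio states from observed features. A minimal Viterbi decoder over a discrete toy HMM illustrates the decoding step; the states, transition and emission probabilities below are invented for illustration (the paper works with continuous acoustic features, not quantized symbols):

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely state path for a discrete-emission HMM (log domain)."""
    T, N = len(obs), len(log_pi)
    delta = np.empty((T, N))
    psi = np.empty((T, N), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # scores[i, j]: i -> j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# States: 0=speech, 1=music, 2=silence; observations: a quantized audio symbol
pi = np.array([0.4, 0.4, 0.2])
A = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])
B = np.array([[0.6, 0.3, 0.1],   # speech mostly emits symbol 0
              [0.2, 0.7, 0.1],   # music mostly emits symbol 1
              [0.1, 0.1, 0.8]])  # silence mostly emits symbol 2
obs = [0, 0, 1, 1, 1, 2, 2]
path = viterbi(obs, np.log(pi), np.log(A), np.log(B))
```

The sticky self-transitions encode the intuition that soundtrack events persist over many frames, which is what makes the decoded path a segmentation.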
Detection of goal events in soccer videos
NASA Astrophysics Data System (ADS)
Kim, Hyoung-Gook; Roeber, Steffen; Samour, Amjad; Sikora, Thomas
2005-01-01
In this paper, we present an automatic extraction of goal events in soccer videos using audio track features alone, without relying on expensive-to-compute video track features. The extracted goal events can be used for high-level indexing and selective browsing of soccer videos. The detection of soccer video highlights using audio content comprises three steps: 1) extraction of audio features from a video sequence, 2) detection of candidate highlight events based on the information provided by the feature extraction methods and a Hidden Markov Model (HMM), and 3) goal event selection to finally determine the video intervals to be included in the summary. For this purpose we compared the performance of the well-known Mel-scale Frequency Cepstral Coefficients (MFCC) feature extraction method with the MPEG-7 Audio Spectrum Projection (ASP) feature extraction method based on three different decomposition methods, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Non-Negative Matrix Factorization (NMF). To evaluate our system we collected five soccer game videos from various sources, in total seven hours of soccer games comprising eight gigabytes of data. One of the five soccer games is used as the training data (covering, e.g., announcers' excited speech, ambient audience noise, audience clapping, and environmental sounds). Our goal event detection results are encouraging.
Quantifying Engagement: Measuring Player Involvement in Human-Avatar Interactions
Norris, Anne E.; Weger, Harry; Bullinger, Cory; Bowers, Alyssa
2014-01-01
This research investigated the merits of using an established system for rating behavioral cues of involvement in human dyadic interactions (i.e., face-to-face conversation) to measure involvement in human-avatar interactions. Gameplay audio-video and self-report data from a Feasibility Trial and a Free Choice study of an effective peer resistance skill-building simulation game (DRAMA-RAMA™) were used to evaluate the reliability and validity of the rating system when applied to human-avatar interactions. The Free Choice study used a revised game prototype that was altered to be more engaging. Both studies involved girls enrolled in a public middle school in Central Florida that served a predominantly Hispanic (greater than 80%), low-income student population. Audio-video data were coded by two raters trained in the rating system. Self-report data were generated using measures of perceived realism, predictability and flow administered immediately after game play. Hypotheses for reliability and validity were supported: reliability values mirrored those found in the human dyadic interaction literature. Validity was supported by factor analysis, significantly higher levels of involvement in Free Choice as compared to Feasibility Trial players, and correlations between involvement dimension sub-scores and self-report measures. Results have implications for the science of both skill-training intervention research and game design. PMID:24748718
The role of laryngoscopy in the diagnosis of spasmodic dysphonia.
Daraei, Pedram; Villari, Craig R; Rubin, Adam D; Hillel, Alexander T; Hapner, Edie R; Klein, Adam M; Johns, Michael M
2014-03-01
Spasmodic dysphonia (SD) can be difficult to diagnose, and patients often see multiple physicians for many years before diagnosis. Improving the speed of diagnosis for individuals with SD may decrease the time to treatment and improve patient quality of life more quickly. To assess whether the diagnosis of SD can be accurately predicted through auditory cues alone without the assistance of visual cues offered by laryngoscopic examination. Single-masked, case-control study at a specialized referral center that included patients who underwent laryngoscopic examination as part of a multidisciplinary workup for dysphonia. Twenty-two patients were selected in total: 10 with SD, 5 with vocal tremor, and 7 controls without SD or vocal tremor. The laryngoscopic examination was recorded, deidentified, and edited to make 3 media clips for each patient: video alone, audio alone, and combined video and audio. These clips were randomized and presented to 3 fellowship-trained laryngologist raters (A.D.R., A.T.H., and A.M.K.), who established the most probable diagnosis for each clip. Intrarater and interrater reliability were evaluated using repeat clips incorporated in the presentations. We measured diagnostic accuracy for video-only, audio-only, and combined multimedia clips. These measures were established before data collection. Data analysis was accomplished with analysis of variance and Tukey honestly significant differences. Of patients with SD, diagnostic accuracy was 10%, 73%, and 73% for video-only, audio-only, and combined, respectively (P < .001, df = 2). Of patients with vocal tremor, diagnostic accuracy was 93%, 73%, and 100% for video-only, audio-only, and combined, respectively (P = .05, df = 2). Of the controls, diagnostic accuracy was 81%, 19%, and 62% for video-only, audio-only, and combined, respectively (P < .001, df = 2). The diagnosis of SD during examination is based primarily on auditory cues. 
Viewing combined audio and video clips afforded no change in diagnostic accuracy compared with audio alone. Laryngoscopy serves an important role in the diagnosis of SD by excluding other pathologic causes and identifying vocal tremor.
Standards of e-Learning Based Distance Education
ERIC Educational Resources Information Center
Saurabh, Kumar
2006-01-01
The term distance education is commonly used to describe courses in which nearly all the interaction between the teacher and student takes place electronically. Electronic communication may take the form of audio, video, e-mail, chat, teleconferencing, and, increasingly, the Internet. Distance education courses range from short term training…
Smithsonian Folkways: Resources for World and Folk Music Multimedia
ERIC Educational Resources Information Center
Beegle, Amy Christine
2012-01-01
This column describes multimedia resources available to teachers on the Smithsonian Folkways website. In addition to massive collections of audio and video recordings and advanced search tools already available through this website, the Smithsonian Global Sound educational initiative brought detailed lesson plans and interactive features to the…
The Effectiveness of Low-Cost Tele-Lecturing.
ERIC Educational Resources Information Center
Muta, Hiromitsu; Kikuta, Reiko; Hamano, Takashi; Maesako, Takanori
1997-01-01
Compares distance education using PictureTel, a compressed-digital-video system operating over telephone lines (audio and visual interactive communication), with traditional in-class education in terms of costs and effectiveness. The distance approach cost less than half as much as the traditional one, and the study suggested that distance education would be economical if used frequently.…
The Lived Experience of In-Service Teachers Using Synchronous Technology: A Phenomenological Study
ERIC Educational Resources Information Center
Vasquez, Sarah T.
2017-01-01
Unlike most online professional development opportunities, synchronous technology affords immediate communications for discussion and feedback while interacting with participants simultaneously through text, audio, video, and screen sharing. The purpose of this study is to find answers to meet the practical need to inform, design, and implement…
ERIC Educational Resources Information Center
Daher, Wajeeh; Baya'a, Nimer
2012-01-01
Learning in the cellular phone environment enables utilizing the multiple functions of the cellular phone, such as mobility, availability, interactivity, verbal and voice communication, taking pictures or recording audio and video, measuring time and transferring information. These functions together with mathematics-designated cellular phone…
Social Operational Information, Competence, and Participation in Online Collective Action
ERIC Educational Resources Information Center
Antin, Judd David
2010-01-01
Recent advances in interactive web technologies, combined with widespread broadband and mobile device adoption, have made online collective action commonplace. Millions of individuals work together to aggregate, annotate, and share digital text, audio, images, and video. Given the prevalence and importance of online collective action systems,…
Exclusively Visual Analysis of Classroom Group Interactions
ERIC Educational Resources Information Center
Tucker, Laura; Scherr, Rachel E.; Zickler, Todd; Mazur, Eric
2016-01-01
Large-scale audiovisual data that measure group learning are time consuming to collect and analyze. As an initial step towards scaling qualitative classroom observation, we qualitatively coded classroom video using an established coding scheme with and without its audio cues. We find that interrater reliability is as high when using visual data…
Introduction to Human Services, Chapter III. Video Script Package, Text, and Audio Script Package.
ERIC Educational Resources Information Center
Miami-Dade Community Coll., FL.
Video, textual, and audio components of the third module of a multi-media, introductory course on Human Services are presented. The module packages, developed at Miami-Dade Community College, deal with technology, social change, and problem dependencies. A video cassette script is first provided that explores the "traditional,""inner," and "other…
Effects of Exposure to Advertisements on Audience Impressions
NASA Astrophysics Data System (ADS)
Hasegawa, Hiroshi; Sato, Mie; Kasuga, Masao; Nagao, Yoshihide; Shono, Toru; Norose, Yuka; Oku, Ritsuya; Nogami, Akira; Miyazawa, Yoshitaka
This study investigated the effects of listening to and/or watching commercial messages (CMs) on audience impressions. We carried out experiments presenting TV advertisements under audio-only, video-only, and audio-video conditions. As a result, we confirmed the following two effects: an image-multiple effect, in which the audience brings to mind various images that are not directly expressed in the content, and a marking-up effect, in which the audience concentrates on some images that are directly expressed in the content. The image-multiple effect, in particular, appeared strongly under the audio-only condition. Next, we investigated changes in the following seven subjective responses after exposure to advertisements under the audio-only and audio-video conditions: usage image, experience, familiarity, exclusiveness, feeling at home, affection, and willingness to buy. As a result, we noted that the image-multiple effect became stronger as the evaluation scores of the responses increased.
NASA Astrophysics Data System (ADS)
Aishwariya, A.; Pallavi Sudhir, Gulavani; Garg, Nemesa; Karthikeyan, B.
2017-11-01
A body-worn camera is a small video camera worn on the body, typically used by police officers to record arrests and evidence from crime scenes. It helps prevent and resolve complaints brought by members of the public, and strengthens police transparency, performance, and accountability. The main parameters of this type of system are video format, resolution, frame rate, and audio quality. This system records video in .mp4 format at 1080p resolution and 30 frames per second. Another important aspect when designing this system is the amount of power it requires, as battery management becomes very critical. The main design challenges are the size of the video, audio capture for the video, combining audio and video and saving them in .mp4 format, a battery sized for 8 hours of continuous recording, and security. For prototyping, this system is implemented using a Raspberry Pi Model B.
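The battery-sizing challenge mentioned above is a simple power-budget calculation. The current draw below is an assumed, illustrative figure for a Raspberry Pi Model B plus camera while encoding, not a measured value from the paper:

```python
def battery_capacity_mah(current_ma, hours, derate=0.8):
    """Required battery capacity in mAh; the derate factor accounts for the
    usable fraction of rated capacity (conversion losses, aging)."""
    return current_ma * hours / derate

# Assumed average draw of ~500 mA for Pi + camera during continuous recording
needed = battery_capacity_mah(500, 8)    # 5000 mAh for an 8-hour shift
```

In practice one would measure the real draw under load, since encoding, storage writes, and any wireless link can change it substantially.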
Service Academy 2009 Gender Relations Focus Groups
2009-09-01
names. No audio or video recording was made of any focus group session. All focus group sessions were governed by a number of ground rules, most...Male) – “I was going to say a preoccupation with sex or with a certain person, whether it’s severe porn addiction, whether it is that they’re...to create a negative scenario. So I think the interactive videos , like the suicide prevention ones, they tell you what to do exactly, they give you
Embedded security system for multi-modal surveillance in a railway carriage
NASA Astrophysics Data System (ADS)
Zouaoui, Rhalem; Audigier, Romaric; Ambellouis, Sébastien; Capman, François; Benhadda, Hamid; Joudrier, Stéphanie; Sodoyer, David; Lamarque, Thierry
2015-10-01
Public transport security is one of the main priorities of the public authorities when fighting against crime and terrorism. In this context, there is a great demand for autonomous systems able to detect abnormal events such as violent acts aboard passenger cars and intrusions when the train is parked at the depot. To this end, we present an innovative approach which aims at providing efficient automatic event detection by fusing video and audio analytics and reducing the false alarm rate compared to classical stand-alone video detection. The multi-modal system is composed of two microphones and one camera and integrates onboard video and audio analytics and fusion capabilities. On the one hand, for detecting intrusion, the system relies on the fusion of "unusual" audio event detection with intrusion detections from video processing. The audio analysis consists of modeling the normal ambience and detecting deviations from the trained models during testing. This unsupervised approach is based on clustering of automatically extracted segments of acoustic features and statistical Gaussian Mixture Model (GMM) modeling of each cluster. The intrusion detection is based on the three-dimensional (3D) detection and tracking of individuals in the videos. On the other hand, for violent event detection, the system fuses unsupervised and supervised audio algorithms with video event detection. The supervised audio technique detects specific events such as shouts. A GMM is used to capture the formant structure of a shout signal. Video analytics use an original approach for detecting aggressive motion by focusing on erratic motion patterns specific to violent events. As data with violent events are not easily available, a normality model with structured motions from non-violent videos is learned for one-class classification. A fusion algorithm based on Dempster-Shafer theory analyses the asynchronous detection outputs and computes the degree of belief of each probable event.
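The unsupervised ambience-modeling stage can be illustrated with a minimal sketch. The paper clusters acoustic feature segments and fits a GMM per cluster; the sketch below is an assumption, not the authors' implementation, reducing the model to a single diagonal Gaussian over feature vectors and flagging frames whose log-likelihood falls below a threshold learned from training data.

```python
import numpy as np

def train_ambience_model(features):
    """Fit a single diagonal-covariance Gaussian to normal-ambience feature
    vectors (the paper uses a clustered GMM; one Gaussian is the k=1 case)."""
    mu = features.mean(axis=0)
    var = features.var(axis=0) + 1e-6   # variance floor avoids division by zero
    return mu, var

def log_likelihood(x, mu, var):
    """Log-density of one feature vector under the diagonal Gaussian."""
    return -0.5 * float(np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var))

def is_unusual(x, mu, var, threshold):
    """Flag a frame as deviating from the trained normal ambience."""
    return log_likelihood(x, mu, var) < threshold
```

In a deployment, the threshold could be set, for instance, at a low percentile of the training log-likelihoods, so that only strong deviations from the learned ambience raise an alert.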
Coupled auralization and virtual video for immersive multimedia displays
NASA Astrophysics Data System (ADS)
Henderson, Paul D.; Torres, Rendell R.; Shimizu, Yasushi; Radke, Richard; Lonsway, Brian
2003-04-01
The implementation of maximally-immersive interactive multimedia in exhibit spaces requires not only the presentation of realistic visual imagery but also the creation of a perceptually accurate aural experience. While conventional implementations treat audio and video problems as essentially independent, this research seeks to couple the visual sensory information with dynamic auralization in order to enhance perceptual accuracy. An implemented system has been developed for integrating accurate auralizations with virtual video techniques for both interactive presentation and multi-way communication. The current system utilizes a multi-channel loudspeaker array and real-time signal processing techniques for synthesizing the direct sound, early reflections, and reverberant field excited by a moving sound source whose path may be interactively defined in real-time or derived from coupled video tracking data. In this implementation, any virtual acoustic environment may be synthesized and presented in a perceptually-accurate fashion to many participants over a large listening and viewing area. Subject tests support the hypothesis that the cross-modal coupling of aural and visual displays significantly affects perceptual localization accuracy.
Automatic summarization of soccer highlights using audio-visual descriptors.
Raventós, A; Quijada, R; Torres, Luis; Tarrés, Francesc
2015-01-01
Automatic summarization of sports video content has been an object of great interest for many years. Although semantic description techniques have been proposed, many approaches still rely on low-level video descriptors that render quite limited results due to the complexity of the problem and the low capability of the descriptors to represent semantic content. In this paper, a new approach for automatic highlight summarization of soccer videos using audio-visual descriptors is presented. The approach is based on the segmentation of the video sequence into shots that are further analyzed to determine their relevance and interest. Of special interest in the approach is the use of audio information, which provides additional robustness to the overall performance of the summarization system. For every video shot, a set of low- and mid-level audio-visual descriptors is computed and later combined in order to obtain different relevance measures based on empirical knowledge rules. The final summary is generated by selecting the shots with the highest interest according to the specifications of the user and the results of the relevance measures. A variety of results are presented with real soccer video sequences that prove the validity of the approach.
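The final selection step lends itself to a compact sketch. The paper combines descriptors through empirical knowledge rules; the weighted sum below is a deliberately simplified stand-in, and the descriptor names and weights are illustrative assumptions:

```python
def summarize(shots, weights, k):
    """Rank shots by a weighted sum of their audio-visual descriptor scores
    and keep the k most relevant ones as the highlight summary."""
    def relevance(shot):
        return sum(weights.get(name, 0.0) * score
                   for name, score in shot["descriptors"].items())
    return sorted(shots, key=relevance, reverse=True)[:k]

shots = [
    {"id": 1, "descriptors": {"crowd_cheer": 0.9, "motion": 0.4}},
    {"id": 2, "descriptors": {"crowd_cheer": 0.1, "motion": 0.8}},
    {"id": 3, "descriptors": {"crowd_cheer": 0.7, "motion": 0.9}},
]
weights = {"crowd_cheer": 0.7, "motion": 0.3}  # audio cue weighted higher
best = summarize(shots, weights, k=2)
```

Weighting the audio descriptor more heavily mirrors the paper's observation that audio information adds robustness to the relevance measures.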
Early childhood numeracy in a multiage setting
NASA Astrophysics Data System (ADS)
Wood, Karen; Frid, Sandra
2005-10-01
This research is a case study examining numeracy teaching and learning practices in an early childhood multiage setting with Pre-Primary to Year 2 children. Data were collected via running records, researcher reflection notes, and video and audio recordings. Video and audio transcripts were analysed using a mathematical discourse and social interactions coding system designed by MacMillan (1998), while the running records and reflection notes contributed to descriptions of the children's interactions with each other and with the teachers. Teachers used an `assisted performance' approach to instruction that supported problem solving and inquiry processes in mathematics activities, and this, combined with a child-centred pedagogy and specific values about community learning, created a learning environment designed to stimulate and foster learning. The mathematics discourse analysis showed a use of explanatory language in mathematics discourse, and this language supported scaffolding among children for new mathematics concepts. These and other interactions related to peer sharing, tutoring and regulation also emerged as key aspects of students' learning practices. However, the findings indicated that multiage grouping alone did not support learning. Rather, effective learning was dependent upon the teacher's capacities to develop productive discussion among children, as well as implement developmentally appropriate curricula that addressed the needs of the different children.
A comparison of distance education instructional methods in occupational therapy.
Jedlicka, Janet S; Brown, Sarah W; Bunch, Ashley E; Jaffe, Lynn E
2002-01-01
The progression of technology is rapidly bringing new opportunities to students and academic institutions, resulting in a need for additional information to determine the most effective strategies for teaching distance learners. The purpose of this study was to compare the effectiveness of three instructional strategies (two-way interactive video and audio, chat rooms, and independent learning) and student preferences regarding instructional methods in a mental health programming distance learning course. Precourse and postcourse surveys were completed by 22 occupational therapy students enrolled in the course. Effectiveness of the teaching methods was determined based on the results of students' examinations. The findings indicated that there were no statistically significant differences in student performance on multiple-choice examinations using the three instructional methods. Of students, 77% indicated a preference for two-way interactive video and audio instruction. To provide effective education via distance learning methods, faculty members need to structure assignments that facilitate interaction and communication among learners. As distance education becomes more commonplace, it is important to identify the methods of instruction that are the most effective in delivering essential course content and the methods that provide the atmosphere most conducive to learning.
Highlight summarization in golf videos using audio signals
NASA Astrophysics Data System (ADS)
Kim, Hyoung-Gook; Kim, Jin Young
2008-01-01
In this paper, we present an automatic summarization of highlights in golf videos based on audio information alone, without video information. The proposed highlight summarization system operates through semantic audio segmentation and the detection of action units from audio signals. Studio speech, field speech, music, and applause are segmented by means of sound classification. Swings are detected by impulse onset detection. Sounds such as a swing followed by applause form a complete action unit, while studio speech and music parts are used to anchor the program structure. With the advantage of highly precise detection of applause, highlights are extracted effectively. Our experimental results show high classification precision on 18 golf games, demonstrating that the proposed system is effective and computationally efficient enough to be applied in embedded consumer electronic devices.
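The swing (impulse onset) detector can be illustrated with a minimal energy-ratio rule. This is a simplified sketch rather than the authors' algorithm, and the frame sizes and ratio threshold are illustrative assumptions:

```python
import numpy as np

def frame_energy(signal, frame_len=1024, hop=512):
    """Short-time energy of an audio signal, one value per frame."""
    n = 1 + max(0, len(signal) - frame_len) // hop
    return np.array([np.sum(signal[i * hop:i * hop + frame_len] ** 2)
                     for i in range(n)])

def detect_impulse_onsets(signal, ratio=5.0, frame_len=1024, hop=512):
    """Return frame indices where energy jumps sharply over the previous
    frame: a crude impulse-onset cue for sounds such as a club strike."""
    e = frame_energy(signal, frame_len, hop) + 1e-12  # avoid division by zero
    return [i for i in range(1, len(e)) if e[i] / e[i - 1] > ratio]
```

An onset followed shortly by a classified applause segment would then form the complete action unit described in the abstract.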
Overview of the Use of Media in Distance Education. I.E.T. Paper on Broadcasting No. 220.
ERIC Educational Resources Information Center
Bates, A. W.
This paper reviews the use of different audio-visual media in distance education, including terrestrial broadcasting, cable, satellite, videocassettes, audiocassettes, telephone teaching, viewdata, teletext, microcomputers, and interactive video. Trends in distance education are also summarized and related to trends in media technology development.…
Using Web-Conferencing with Primarily Interactive Television Courses.
ERIC Educational Resources Information Center
Collins, Mauri P.; Berge, Zane L.
Over the past seven years, Northern Arizona University (NAU) has implemented NAUnet, a professional-broadcast-quality, two-way audio, two-way video instructional television (IITV) system. The IITV system provides a face-to-face environment where students and faculty can see and hear each other and engage in discussion. Recently, several courses…
Being There: The Case for Telepresence
ERIC Educational Resources Information Center
Schaffhauser, Dian
2010-01-01
In this article, the author talks about telepresence, a combination of real-time video, audio, and interactive technologies that gives people in distributed locations a collaborative experience that's as close to being in the same room as current technology allows. In a culture that's still adjusting to iPhone-size screen displays and choppy cell…
Speed on the dance floor: Auditory and visual cues for musical tempo.
London, Justin; Burger, Birgitta; Thompson, Marc; Toiviainen, Petri
2016-02-01
Musical tempo is most strongly associated with the rate of the beat or "tactus," which may be defined as the most prominent rhythmic periodicity present in the music, typically in a range of 1.67-2 Hz. However, other factors such as rhythmic density, mean rhythmic inter-onset interval, metrical (accentual) structure, and rhythmic complexity can affect perceived tempo (Drake, Gros, & Penel, 1999; London, 2011). Visual information can also give rise to a perceived beat/tempo (Iversen et al., 2015), and auditory and visual temporal cues can interact and mutually influence each other (Soto-Faraco & Kingstone, 2004; Spence, 2015). A five-part experiment was performed to assess the integration of auditory and visual information in judgments of musical tempo. Participants rated the speed of six classic R&B songs on a seven-point scale while observing an animated figure dancing to them. Participants were presented with original and time-stretched (±5%) versions of each song in audio-only, audio+video (A+V), and video-only conditions. In some videos the animations were of spontaneous movements to the different time-stretched versions of each song, and in other videos the animations were of "vigorous" versus "relaxed" interpretations of the same auditory stimulus. Two main results were observed. First, in all conditions with audio, even though participants were able to correctly rank the original vs. time-stretched versions of each song, a song-specific tempo-anchoring effect was observed, such that sped-up versions of slower songs were judged to be faster than slowed-down versions of faster songs, even when their objective beat rates were the same. Second, when viewing a vigorous dancing figure in the A+V condition, participants gave faster tempo ratings than from the audio alone or when viewing the same audio with a relaxed dancing figure.
The implications of this illusory tempo percept for cross-modal sensory integration and working memory are discussed, and an "energistic" account of tempo perception is proposed. Copyright © 2015 Elsevier B.V. All rights reserved.
Interactive video audio system: communication server for INDECT portal
NASA Astrophysics Data System (ADS)
Mikulec, Martin; Voznak, Miroslav; Safarik, Jakub; Partila, Pavol; Rozhon, Jan; Mehic, Miralem
2014-05-01
The paper deals with the presentation of the IVAS system within the EU FP7 INDECT project. The INDECT project aims at developing tools for enhancing the security of citizens and protecting the confidentiality of recorded and stored information. It is part of the Seventh Framework Programme of the European Union. We participate in the development of the INDECT portal and the Interactive Video Audio System (IVAS). The IVAS system provides a communication gateway between police officers working in the dispatching centre and police officers in the field. The officers in the dispatching centre can obtain information about all online police officers in the field; they can command officers in the field via text messages, voice, or video calls; and they can manage multimedia files from CCTV cameras or other sources that may be of interest to officers in the field. The police officers in the field are equipped with smartphones or tablets. Besides common communication, they can view pictures or videos sent by the commander in the office and can respond to commands via text or multimedia messages taken with their devices. Our IVAS system is unique because we are developing it according to special requirements from the Police of the Czech Republic. The IVAS communication system is designed to use modern Voice over Internet Protocol (VoIP) services. The whole solution is based on open-source software, including the Linux and Android operating systems. The technical details of our solution are presented in the paper.
Novel Sessile Drop Software for Quantitative Estimation of Slag Foaming in Carbon/Slag Interactions
NASA Astrophysics Data System (ADS)
Khanna, Rita; Rahman, Mahfuzur; Leow, Richard; Sahajwalla, Veena
2007-08-01
Novel video-processing software has been developed for the sessile drop technique for a rapid and quantitative estimation of slag foaming. The data processing is carried out in two stages: the first stage involves the initial transformation of digital video/audio signals into a format compatible with the computing software, and the second stage involves the computation of slag droplet volume and area of contact in a chosen video frame. Experimental results are presented on slag foaming from a synthetic graphite/slag system at 1550 °C. This technique can be used for determining the extent and stability of foam as a function of time.
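The second processing stage (computing droplet volume and area from a chosen frame) can be sketched as follows, assuming the frame has already been binarized into a droplet mask and that the drop is axisymmetric; both are assumptions of this sketch, not details given in the abstract:

```python
import numpy as np

def droplet_metrics(mask, px_per_mm):
    """Estimate the projected area (mm^2) and axisymmetric volume (mm^3) of
    a droplet from a binary mask (True = droplet pixel). Volume is computed
    as a solid of revolution: one circular disc per image row."""
    area_mm2 = mask.sum() / px_per_mm ** 2
    widths = mask.sum(axis=1)               # droplet width per row, in pixels
    radii_mm = widths / (2.0 * px_per_mm)   # disc radius per row
    dz_mm = 1.0 / px_per_mm                 # height of one row
    volume_mm3 = float(np.sum(np.pi * radii_mm ** 2 * dz_mm))
    return float(area_mm2), volume_mm3
```

Tracking these two quantities frame by frame would yield the extent and stability of the foam as a function of time, as the abstract describes.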
MPEG-7 audio-visual indexing test-bed for video retrieval
NASA Astrophysics Data System (ADS)
Gagnon, Langis; Foucher, Samuel; Gouaillier, Valerie; Brun, Christelle; Brousseau, Julie; Boulianne, Gilles; Osterrath, Frederic; Chapdelaine, Claude; Dutrisac, Julie; St-Onge, Francis; Champagne, Benoit; Lu, Xiaojian
2003-12-01
This paper reports on the development status of a Multimedia Asset Management (MAM) test-bed for content-based indexing and retrieval of audio-visual documents within the MPEG-7 standard. The project, called "MPEG-7 Audio-Visual Document Indexing System" (MADIS), specifically targets the indexing and retrieval of video shots and key frames from documentary film archives, based on audio-visual content like face recognition, motion activity, speech recognition, and semantic clustering. The MPEG-7/XML encoding of the film database is done off-line. The description decomposition is based on a temporal decomposition into visual segments (shots), key frames, and audio/speech sub-segments. The visible outcome will be a web site that allows video retrieval using a proprietary XQuery-based search engine, accessible to members at the Canadian National Film Board (NFB) Cineroute site. For example, end-users will be able to ask for movie shots in the database that were produced in a specific year, that contain the face of a specific actor speaking a specific word, and in which there is no motion activity. Video streaming is performed over the high-bandwidth CA*net network deployed by CANARIE, a public Canadian Internet development organization.
Dissemination of radiological information using enhanced podcasts.
Thapa, Mahesh M; Richardson, Michael L
2010-03-01
Podcasts and vodcasts (video podcasts) have become popular means of sharing educational information via the Internet. In this article, we introduce another method, an enhanced podcast, which allows images to be displayed with the audio. Bookmarks and URLs may also be embedded within the presentation. This article describes a step-by-step tutorial for recording and distributing an enhanced podcast using the Macintosh operating system. Enhanced podcasts can also be created on the Windows platform using other software. An example of an enhanced podcast and a demonstration video of all the steps described in this article are available online at web.mac.com/mthapa. An enhanced podcast is an effective method of delivering radiological information via the Internet. Viewing images while simultaneously listening to audio content allows the user to have a richer experience than with a simple podcast. Incorporation of bookmarks and URLs within the presentation makes learning more efficient and interactive. The use of still images rather than video clips equates to a much smaller file size for an enhanced podcast compared to a vodcast, allowing quicker upload and download times.
Video mining using combinations of unsupervised and supervised learning techniques
NASA Astrophysics Data System (ADS)
Divakaran, Ajay; Miyahara, Koji; Peker, Kadir A.; Radhakrishnan, Regunathan; Xiong, Ziyou
2003-12-01
We discuss the meaning and significance of the video mining problem and present our work on some aspects of it. A simple definition of video mining is the unsupervised discovery of patterns in audio-visual content. Such purely unsupervised discovery is readily applicable to video surveillance as well as to consumer video browsing applications. We interpret video mining as content-adaptive or "blind" content processing, in which the first stage is content characterization and the second stage is event discovery based on the characterization obtained in stage 1. We discuss the target applications and find that purely unsupervised approaches are too computationally complex to be implemented on our product platform. We then describe various combinations of unsupervised and supervised learning techniques that help discover patterns useful to the end user of the application. We target consumer video browsing applications such as commercial message detection, sports highlights extraction, etc. We employ both audio and video features. We find that supervised audio classification combined with unsupervised unusual event discovery enables accurate detection of desired events. Our techniques are computationally simple and robust to common variations in production styles.
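As a sketch of combining the two stages, suppose a supervised audio classifier has already labeled each frame (stage 1, content characterization); the rarity rule below then discovers "unusual" windows without supervision (stage 2). The label names, window size, and thresholds are illustrative assumptions, not the authors' parameters:

```python
from collections import Counter

def discover_unusual_windows(labels, window=20, rarity=0.05):
    """Flag fixed-size windows dominated by labels that are globally rare
    in the program: a blind, content-adaptive notion of 'unusual'."""
    total = Counter(labels)
    n = len(labels)
    rare = {lab for lab, count in total.items() if count / n < rarity}
    hits = []
    for start in range(0, n - window + 1, window):
        chunk = labels[start:start + window]
        if sum(lab in rare for lab in chunk) / window > 0.5:
            hits.append(start)
    return hits
```

In a sports-highlights setting, for instance, a window dominated by a rare "cheer" label would surface as a candidate event without any event-specific training.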
The Audio Description as a Physics Teaching Tool
ERIC Educational Resources Information Center
Cozendey, Sabrina; Costa, Maria da Piedade
2016-01-01
This study analyses the use of audio description in teaching physics concepts, aiming to determine the variables that influence understanding of the concept. One educational resource was audio described. To make the audio description, the screen was frozen. The video, with and without audio description, was to be presented to students, so that…
Naigles, Letitia R; Tovar, Andrea T
2012-12-14
One of the defining characteristics of autism spectrum disorder (ASD) is difficulty with language and communication.(1) The onset of speaking in children with ASD is usually delayed, and many children with ASD consistently produce language less frequently and at lower lexical and grammatical complexity than their typically developing (TD) peers.(6,8,12,23) However, children with ASD also exhibit a significant social deficit, and researchers and clinicians continue to debate the extent to which the deficits in social interaction account for or contribute to the deficits in language production.(5,14,19,25) Standardized assessments of language in children with ASD usually do include a comprehension component; however, many such comprehension tasks assess just one aspect of language (e.g., vocabulary),(5) or include a significant motor component (e.g., pointing, act-out), and/or require children to deliberately choose among a number of alternatives. These last two behaviors are known to also be challenging for children with ASD.(7,12,13,16) We present a method that can assess the language comprehension of young typically developing children (9-36 months) and children with autism.(2,4,9,11,22) This method, Portable Intermodal Preferential Looking (P-IPL), projects side-by-side video images from a laptop onto a portable screen. The video images are paired first with a 'baseline' (nondirecting) audio, and then presented again paired with a 'test' linguistic audio that matches only one of the video images. Children's eye movements while watching the video are filmed and later coded. Children who understand the linguistic audio will look more quickly to, and longer at, the video that matches the linguistic audio.(2,4,11,18,22,26) This paradigm includes a number of components that have recently been miniaturized (projector, camcorder, digitizer) to enable portability and easy setup in children's homes.
This is a crucial point for assessing young children with ASD, who are frequently uncomfortable in new (e.g., laboratory) settings. Videos can be created to assess a wide range of specific components of linguistic knowledge, such as Subject-Verb-Object word order, wh-questions, and tense/aspect suffixes on verbs; videos can also assess principles of word learning such as a noun bias, a shape bias, and syntactic bootstrapping.(10,14,17,21,24) Videos include characters and speech that are visually and acoustically salient and well tolerated by children with ASD.
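Coded gaze data from this paradigm is typically reduced to a proportion-of-looking measure. A minimal sketch follows; the frame-by-frame coding scheme ('L', 'R', 'away') is an assumed simplification of the coding described above:

```python
def matching_preference(gaze_codes, match_side):
    """Proportion of on-screen looks directed at the matching video.
    gaze_codes: one code per coded video frame, 'L', 'R', or 'away';
    match_side: the side ('L' or 'R') showing the video that matches
    the test linguistic audio."""
    on_screen = [g for g in gaze_codes if g in ("L", "R")]
    if not on_screen:
        return 0.0  # child never looked at either screen
    return sum(g == match_side for g in on_screen) / len(on_screen)
```

A child who understands the test sentence is expected to score well above 0.5, i.e., to look longer at the matching video than at the non-matching one.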
Traumaculture and Telepathetic Cyber Fiction
NASA Astrophysics Data System (ADS)
Drinkall, Jacquelene
This paper explores the interactive CD-ROM No Other Symptoms: Time Travelling with Rosalind Brodsky, using telepathetic socio-psychological, psychoanalytic, and narrative theories. The CD-ROM exists as a contemporary artwork and published interactive hardcover book authored by painter and new-media visual artist Suzanne Treister. The artwork incorporates Treister's paintings, writing, Photoshop, animation, video, and audio work with narrative structures taken from world history, the history of psychoanalysis, futurist science and science fiction, family history, and biography.
A Novel Method for Real-Time Audio Recording With Intraoperative Video.
Sugamoto, Yuji; Hamamoto, Yasuyoshi; Kimura, Masayuki; Fukunaga, Toru; Tasaki, Kentaro; Asai, Yo; Takeshita, Nobuyoshi; Maruyama, Tetsuro; Hosokawa, Takashi; Tamachi, Tomohide; Aoyama, Hiromichi; Matsubara, Hisahiro
2015-01-01
Although laparoscopic surgery has become widespread, effective and efficient education in laparoscopic surgery is difficult. Instructive laparoscopy videos with appropriate annotations are ideal for initial training in laparoscopic surgery; however, the method we use at our institution for creating laparoscopy videos with audio is not generalized, and there have been no detailed explanations of any such method. Our objectives were to demonstrate the feasibility of low-cost, simple methods for recording surgical videos with audio and to perform a preliminary safety evaluation when obtaining these recordings during operations. We devised a method for the synchronous recording of surgical video with real-time audio in which we connected an amplifier and a wireless microphone to an existing endoscopy system and its equipped video-recording device. We tested this system in 209 cases of laparoscopic surgery in operating rooms between August 2010 and July 2011 and prospectively investigated the results of the audiovisual recording method and examined intraoperative problems. Setting: Numazu City Hospital in Numazu City, Japan. Participants: Surgeons, instrument nurses, and medical engineers. In all cases, the synchronous input of audio and video was possible. The recording system did not cause any inconvenience to the surgeon, assistants, instrument nurse, sterilized equipment, or electrical medical equipment. Statistically significant differences were not observed between the audiovisual group and the control group regarding the operating time, which had been divided into 2 slots: procedures performed by the instructors and those by trainees (p > 0.05). This recording method is feasible and considerably safe, posing minimal difficulty in terms of technology, time, and expense. We recommend this method both for surgical trainees who wish to acquire surgical skills effectively and for medical instructors who wish to teach surgical skills effectively. Copyright © 2015 Association of Program Directors in Surgery. 
Published by Elsevier Inc. All rights reserved.
Early Childhood Numeracy in a Multiage Setting
ERIC Educational Resources Information Center
Wood, Karen; Frid, Sandra
2005-01-01
This research is a case study examining numeracy teaching and learning practices in an early childhood multiage setting with Pre-Primary to Year 2 children. Data were collected via running records, researcher reflection notes, and video and audio recordings. Video and audio transcripts were analysed using a mathematical discourse and social…
Promoting Early Literacy for Diverse Learners Using Audio and Video Technology
ERIC Educational Resources Information Center
Skouge, James R.; Rao, Kavita; Boisvert, Precille C.
2007-01-01
Practical applications of multimedia technologies that support early literacy are described and evaluated, including several variations of recorded books and stories, utilizing mainstream audio and video recording appropriate for libraries and schools. Special emphasis is given to the needs of children with disabilities and children who are…
NASA Astrophysics Data System (ADS)
Barbieri, Ivano; Lambruschini, Paolo; Raggio, Marco; Stagnaro, Riccardo
2007-12-01
The increase in the availability of bandwidth for wireless links, network integration, and computational power on fixed and mobile platforms at affordable costs nowadays allows the handling of audio and video data at a quality suitable for medical applications. These information streams can support both continuous monitoring and emergency situations. According to this scenario, the authors have developed and implemented the mobile communication system described in this paper. The system is based on the ITU-T H.323 multimedia terminal recommendation, suitable for real-time data/video/audio and telemedical applications. The video and audio codecs, respectively H.264 and G.723.1, were implemented and optimized in order to obtain high performance on the system's target processors. Offline media streaming storage and retrieval functionalities were supported by integrating a relational database into the hospital central system. The system is based on low-cost consumer technologies such as general packet radio service (GPRS) and wireless local area network (WLAN or WiFi) for low-band data/video transmission. Implementation and testing were carried out for medical emergency and telemedicine applications. In this paper, the emergency case study is described.
Task-Based Oral Computer-Mediated Communication and L2 Vocabulary Acquisition
ERIC Educational Resources Information Center
Yanguas, Inigo
2012-01-01
The present study adds to the computer-mediated communication (CMC) literature by exploring oral learner-to-learner interaction using Skype, a free and widely used Internet software program. In particular, this task-based study has a two-fold goal. Firstly, it explores possible differences between two modes of oral CMC (audio and video) and…
Using Web 2.0 for Learning in the Community
ERIC Educational Resources Information Center
Mason, Robin; Rennie, Frank
2007-01-01
This paper describes the use of a range of Web 2.0 technologies to support the development of community for a newly formed Land Trust on the Isle of Lewis, in NW Scotland. The application of social networking tools in text, audio and video has several purposes: informal learning about the area to increase tourism, community interaction,…
Survey on Uses of Distance Learning in the U.S.
ERIC Educational Resources Information Center
Downing, Diane E.
A December 1983 survey queried the chief state school officers of the 50 states on the extent to which distance learning techniques are used in public education in their states. Respondents were asked to focus on interactive forms of distance learning, such as audio and video teleconferencing. A total of 28 states (56%) responded, with the…
Extending Talk on a Prescribed Discussion Topic in a Learner-Native Speaker eTandem Learning Task
ERIC Educational Resources Information Center
Black, Emily
2017-01-01
Opportunities for language learners to access authentic input and engage in consequential interactions with native speakers of their target language abound in this era of computer mediated communication. Synchronous audio/video calling software represents one opportunity to access such input and address the challenges of developing pragmatic and…
Action Research to Improve Methods of Delivery and Feedback in an Access Grid Room Environment
ERIC Educational Resources Information Center
McArthur, Lynne C.; Klass, Lara; Eberhard, Andrew; Stacey, Andrew
2011-01-01
This article describes a qualitative study which was undertaken to improve the delivery methods and feedback opportunity in honours mathematics lectures which are delivered through Access Grid Rooms. Access Grid Rooms are facilities that provide two-way video and audio interactivity across multiple sites, with the inclusion of smart boards. The…
How to Plug into Teleconferencing/Reach Out and Train Somebody.
ERIC Educational Resources Information Center
Jenkins, Thomas M.; Cushing, David
1983-01-01
Teleconferencing, as an interactive group communication through an electronic medium joining three or more people at two or more locations, can take one of three forms: audio, audiographic, or full-motion video. This multilocation technology is used in training and in conducting meetings and conferences; it works as a money- and time-saving tool.…
ERIC Educational Resources Information Center
Domingo, Myrrh
2012-01-01
In our contemporary society, digital texts circulate more readily and extend beyond page-bound formats to include interactive representations such as online newsprint with hyperlinks to audio and video files. This is to say that multimodality combined with digital technologies extends grammar to include voice, visual, and music, among other modes…
Articulating nurse practitioner practice using King's theory of goal attainment.
de Leon-Demare, Kathleen; MacDonald, Jane; Gregory, David M; Katz, Alan; Halas, Gayle
2015-11-01
To further understand the interactions between nurse practitioners (NPs) and patients, King's nursing theory of goal attainment was applied as the conceptual framework to describe the interactions between NPs and patients in the primary care setting. Six dyads of NPs and their patients were video- and audio-taped over three consecutive clinic visits. For the purposes of this arm of the study, the audio-taped interactions were transcribed and then coded using King's concepts in her theory of goal attainment. King's theory was applicable to describe NP practice. King's concepts and processes of nurse-patient interactions, such as disturbances, mutual goal setting, and transactions, were observed in NP-patient interactions. Disturbances during clinical encounters were essential in the progression toward goal attainment. Elements such as social exchange, symptom reporting, role explanation, and information around clinical processes facilitated relationship building. NPs as practitioners need to be reflective of their own practice, embrace disturbances in the clinical encounter, and attend to these as opportunities for mutual goal setting. ©2015 American Association of Nurse Practitioners.
Multidimensional QoE of Multiview Video and Selectable Audio IP Transmission
Nunome, Toshiro; Ishida, Takuya
2015-01-01
We evaluate the QoE of multiview video and selectable audio (MVV-SA), in which users can switch not only video but also audio in response to a viewpoint change request, transmitted over IP networks, by means of a subjective experiment. The evaluation is performed with the semantic differential (SD) method using 13 adjective pairs. In the subjective experiment, we ask assessors to evaluate 40 stimuli combining two kinds of UDP load traffic, two kinds of fixed additional delay, five kinds of playout buffering time, and selectable or unselectable audio (i.e., MVV-SA or the previous MVV-A). As a result, MVV-SA gives the user a higher sense of presence than MVV-A and thus enhances QoE. In addition, we employ factor analysis on the subjective assessment results to clarify the component factors of QoE, and find that three major factors affect QoE in MVV-SA. PMID:26106640
47 CFR 73.3617 - Information available on the Internet.
Code of Federal Regulations, 2010 CFR
2010-10-01
... include copies of public notices and texts of recent decisions. The Media Bureau's address is http://www.fcc.gov/mb/; the Audio Division's address is http://www.fcc.gov/mmb/audio; the Video Division's address is http://www.fcc.gov/mb/video; the Policy Division's address is http://www.fcc.gov/mb/policy; the...
Culturally Diverse Videos, Audios, and CD-ROMs for Children and Young Adults.
ERIC Educational Resources Information Center
Wood, Irene
The purpose of this book is to help librarians develop high quality video, audio, and CD-ROM collections for preschool through high school learning with titles that reflect the ethnic heritage and experience of the diverse North American population, primarily African Americans, Asian Americans, Hispanic Americans, and Native Americans. The more…
Audio and Video Reflections to Promote Social Justice
ERIC Educational Resources Information Center
Boske, Christa
2011-01-01
Purpose: The purpose of this paper is to examine how 15 graduate students enrolled in a US school leadership preparation program understand issues of social justice and equity through a reflective process utilizing audio and/or video software. Design/methodology/approach: The study is based on the tradition of grounded theory. The researcher…
Commercial Complexity and Local and Global Involvement in Programs: Effects on Viewer Responses.
ERIC Educational Resources Information Center
Oberman, Heiko; Thorson, Esther
A study investigated the effects of local (momentary) and global (whole program) involvement in program context and the effects of message complexity on the retention of television commercials. Sixteen commercials, categorized as simple video/simple audio through complex video/complex audio were edited into two globally high- and two globally…
Caffery, Liam J; Smith, Anthony C
2015-09-01
The use of fourth-generation (4G) mobile telecommunications to provide real-time video consultations was investigated in this study, with two aims: to determine whether 4G is a suitable telecommunications technology, and to identify whether variation in perceived audio and video quality was due to underlying network performance. Three patient end-points that used 4G Internet connections were evaluated. Consulting clinicians recorded their perception of audio and video quality using the International Telecommunication Union scales during clinics with these patient end-points. These scores were used to calculate a mean opinion score (MOS). Network performance metrics were obtained for each session, and the relationships between these metrics and the session's quality scores were tested. Clinicians scored the quality of 50 hours of video consultations, involving 36 clinic sessions. The MOS for audio was 4.1 ± 0.62 and the MOS for video was 4.4 ± 0.22. Image impairment and effort to listen were also rated favourably. There was no correlation between audio or video quality and the network metrics of packet loss or jitter. These findings suggest that 4G networks are an appropriate telecommunications technology for delivering real-time video consultations. Variations in quality scores observed during this study were not explained by packet loss or jitter in the underlying network. Before establishing a telemedicine service, the performance of the 4G network should be assessed at the location of the proposed service, because the performance of 4G networks is known to vary. © The Author(s) 2015.
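The analysis described in this abstract (a mean opinion score per modality, then a test for correlation with network metrics) can be sketched as follows. The ratings and packet-loss figures below are hypothetical, invented for illustration, not the study's data:

```python
# Sketch: MOS from ITU-style five-point ratings, plus a Pearson correlation
# against a network metric. All numbers are made up for illustration.
import statistics

def mos(ratings):
    """Mean opinion score and standard deviation from 1-5 ratings."""
    return statistics.mean(ratings), statistics.stdev(ratings)

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

audio_scores = [4, 5, 4, 4, 3, 5, 4, 4]                   # hypothetical per-session ratings
packet_loss = [0.1, 0.0, 0.2, 0.1, 0.3, 0.0, 0.1, 0.2]    # hypothetical loss (%)

m, sd = mos(audio_scores)
r = pearson(audio_scores, packet_loss)
```

In the study itself no such correlation was found; in this toy data the correlation is deliberately strong to show what a network-driven quality variation would look like.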
ERIC Educational Resources Information Center
Platten, Marvin R.; Barker, Bruce O.
The Texas Interactive Instructional Network (TI-IN), a private satellite system that provides one-way video and two-way audio communication, was used for a two-year pilot project which was conducted to determine if satellite instruction could be used successfully to share educational resources among institutions. Models of Teaching, a graduate…
An Internet-Based Real-Time Audiovisual Link for Dual MEG Recordings
Zhdanov, Andrey; Nurminen, Jussi; Baess, Pamela; Hirvenkari, Lotta; Jousmäki, Veikko; Mäkelä, Jyrki P.; Mandel, Anne; Meronen, Lassi; Hari, Riitta; Parkkonen, Lauri
2015-01-01
Hyperscanning: Most neuroimaging studies of human social cognition have focused on brain activity of single subjects. More recently, “two-person neuroimaging” has been introduced, with simultaneous recordings of brain signals from two subjects involved in social interaction. These simultaneous “hyperscanning” recordings have already been carried out with a spectrum of neuroimaging modalities, such as functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and functional near-infrared spectroscopy (fNIRS). Dual MEG setup: We have recently developed a setup for simultaneous magnetoencephalographic (MEG) recordings of two subjects who communicate in real time over an audio link between two geographically separated MEG laboratories. Here we present an extended version of the setup, in which we have added a video connection and replaced the telephone-landline-based link with an Internet connection. Our setup enabled transmission of video and audio streams between the sites with a one-way communication latency of about 130 ms. Our software that allows reproducing the setup is publicly available. Validation: We demonstrate that the audiovisual Internet-based link can mediate real-time interaction between two subjects who try to mirror each other's hand movements that they can see via the video link. All nine pairs were able to synchronize their behavior. In addition to the video, we captured the subjects' movements with accelerometers attached to their index fingers; from these signals we determined that the average synchronization accuracy was 215 ms. In one subject pair we demonstrate inter-subject coherence patterns of the MEG signals that peak over the sensorimotor areas contralateral to the hand used in the task. PMID:26098628
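One way the synchronization accuracy between the two subjects' accelerometer traces could be estimated is to find the lag maximizing their cross-correlation. The sketch below uses synthetic signals and a simple unnormalized correlation; it is an illustration of the idea, not the authors' analysis pipeline:

```python
# Sketch: estimate the lag between two movement traces via cross-correlation.
# Signals are synthetic; at e.g. a 100 Hz sampling rate, 2 samples = 20 ms.
def best_lag(x, y, max_lag):
    """Return the lag (in samples) of y relative to x with maximal correlation."""
    def corr_at(lag):
        pairs = [(x[i], y[i + lag]) for i in range(len(x))
                 if 0 <= i + lag < len(y)]
        return sum(a * b for a, b in pairs)
    return max(range(-max_lag, max_lag + 1), key=corr_at)

x = [0, 0, 1, 3, 1, 0, 0, 0, 0, 0]   # movement burst of subject 1
y = [0, 0, 0, 0, 1, 3, 1, 0, 0, 0]   # subject 2 mirrors it two samples later

lag = best_lag(x, y, max_lag=4)
```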
NASA Technical Reports Server (NTRS)
Smith, Michael A.; Kanade, Takeo
1997-01-01
Digital video is rapidly becoming important for education, entertainment, and a host of multimedia applications. With the size of video collections growing to thousands of hours, technology is needed to browse segments effectively in a short time without losing the content of the video. We propose a method to extract the significant audio and video information and create a "skim" video that represents a very short synopsis of the original. The goal of this work is to show the utility of integrating language and image understanding techniques for video skimming through the extraction of significant information, such as specific objects, audio keywords, and relevant video structure. The resulting skim video is much shorter, with compaction as high as 20:1, yet retains the essential content of the original segment.
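The selection step behind skimming can be sketched as a budgeted greedy choice: once segments have significance scores (in the paper these come from language and image understanding; here they are made up), keep the highest-scoring segments until the target compaction ratio is met. This is a simplification, not the authors' actual algorithm:

```python
# Sketch: greedy skim selection under a compaction budget.
# Segment scores are hypothetical stand-ins for keyword/object significance.
def make_skim(segments, compaction=20.0):
    """segments: list of (start_sec, duration_sec, significance_score).
    Returns the selected segments in temporal order."""
    total = sum(d for _, d, _ in segments)
    budget = total / compaction
    chosen, used = [], 0.0
    for seg in sorted(segments, key=lambda s: s[2], reverse=True):
        if used + seg[1] <= budget:
            chosen.append(seg)
            used += seg[1]
    return sorted(chosen)  # restore temporal order

clips = [(0, 30, 0.2), (30, 30, 0.9), (60, 30, 0.1), (90, 30, 0.8),
         (120, 30, 0.3), (150, 30, 0.95)] + \
        [(180 + 30 * i, 30, 0.05) for i in range(14)]   # 20 clips, 600 s total

skim = make_skim(clips)   # 20:1 budget leaves room for one 30 s clip
```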
Initial utilization of the CVIRB video production facility
NASA Technical Reports Server (NTRS)
Parrish, Russell V.; Busquets, Anthony M.; Hogge, Thomas W.
1987-01-01
Video disk technology is one of the central themes of a technology demonstrator workstation being assembled as a man/machine interface for the Space Station Data Management Test Bed at Johnson Space Center. Langley Research Center personnel involved in the conception and implementation of this workstation have assembled a video production facility to allow production of video disk material for this purpose. This paper documents the initial familiarization efforts in the field of video production for those personnel and that facility. Although the entire video disk production cycle was not operational for this initial effort, the production of a simulated disk on video tape did acquaint the personnel with the processes involved and with the operation of the hardware. Invaluable experience in storyboarding, script writing, audio and video recording, and audio and video editing was gained in the production process.
ERIC Educational Resources Information Center
Graf, Klaus-D.
We have established an environment for German-Japanese school education projects using real time interactive audio-visual distance learning between remote classrooms. In periods of 8-12 weeks, two classes are dealing with the same subject matter, exchanging materials and results via e-mail and Internet. At 3 or 4 occasions the classes met on…
ERIC Educational Resources Information Center
Bequette, James W.; Brennan, Colleen
2008-01-01
Since the mid-1980s, arts policymakers in Minnesota have positioned "media arts"--defined as the "study and practice of examining human communication through photography, film or video, audio, computer or digital arts, and interactive media"--within the realm of aesthetic education and considered it one of six arts areas. This…
Reach Out and Touch Someone: Utilizing Two-Way Interactive Audio Video for Distant Audiences.
ERIC Educational Resources Information Center
Cutshall, Rex
In fall 1995, Vincennes University, a two-year college in Indiana, began offering an "Introduction to Business" course to personnel at a manufacturing company located approximately 5 hours from the college. In spring 1996, the same course was also delivered to a high school located over 2 hours from the college. The course was delivered…
Multimodal Speaker Diarization.
Noulas, A; Englebienne, G; Krose, B J A
2012-01-01
We present a novel probabilistic framework that fuses information coming from the audio and video modality to perform speaker diarization. The proposed framework is a Dynamic Bayesian Network (DBN) that is an extension of a factorial Hidden Markov Model (fHMM) and models the people appearing in an audiovisual recording as multimodal entities that generate observations in the audio stream, the video stream, and the joint audiovisual space. The framework is very robust to different contexts, makes no assumptions about the location of the recording equipment, and does not require labeled training data as it acquires the model parameters using the Expectation Maximization (EM) algorithm. We apply the proposed model to two meeting videos and a news broadcast video, all of which come from publicly available data sets. The results acquired in speaker diarization are in favor of the proposed multimodal framework, which outperforms the single modality analysis results and improves over the state-of-the-art audio-based speaker diarization.
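At its simplest, the fusion idea underlying multimodal diarization can be illustrated with per-frame Bayesian combination: treat the audio and video observations as conditionally independent given the speaker, multiply their likelihoods with a prior, and take the maximum-posterior speaker. This toy is far simpler than the paper's DBN/fHMM (no temporal model, no EM), and all numbers are invented:

```python
# Toy audio-video fusion for speaker attribution (illustrative only):
# per frame, posterior(s) ∝ prior(s) * p(audio|s) * p(video|s).
def diarize(frames, prior):
    """frames: list of (audio_lik, video_lik) speaker->likelihood dicts.
    Returns the per-frame maximum-posterior speaker labels."""
    labels = []
    for audio_lik, video_lik in frames:
        post = {s: prior[s] * audio_lik[s] * video_lik[s] for s in prior}
        labels.append(max(post, key=post.get))
    return labels

prior = {"A": 0.5, "B": 0.5}
frames = [
    ({"A": 0.9, "B": 0.1}, {"A": 0.7, "B": 0.3}),  # both modalities favour A
    ({"A": 0.4, "B": 0.6}, {"A": 0.2, "B": 0.8}),  # both favour B
    ({"A": 0.6, "B": 0.4}, {"A": 0.1, "B": 0.9}),  # confident video outvotes audio
]
labels = diarize(frames, prior)
```

The third frame shows why fusion helps: weak audio evidence for A is overruled by strong video evidence for B.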
McGurk Effect in Gender Identification: Vision Trumps Audition in Voice Judgments.
Peynircioǧlu, Zehra F; Brent, William; Tatz, Joshua R; Wyatt, Jordan
2017-01-01
Demonstrations of non-speech McGurk effects are rare, mostly limited to emotion identification, and sometimes not considered true analogues. We presented videos of males and females singing a single syllable on the same pitch and asked participants to indicate the true range of the voice: soprano, alto, tenor, or bass. For one group of participants, the gender shown on the video matched the gender of the voice heard, and for the other group they were mismatched. Soprano or alto responses were interpreted as "female voice" decisions and tenor or bass responses as "male voice" decisions. Identification of the voice gender was 100% correct in the preceding audio-only condition. However, whereas performance was also 100% correct in the matched video/audio condition, it was only 31% correct in the mismatched video/audio condition. Thus, the visual gender information overrode the voice gender identification, showing a robust non-speech McGurk effect.
ERIC Educational Resources Information Center
Teng, Tian-Lih; Taveras, Marypat
2004-01-01
This article outlines the evolution of a unique distance education program that began as a hybrid--combining face-to-face instruction with asynchronous online teaching--and evolved to become an innovative combination of synchronous education using live streaming video, audio, and chat over the Internet, blended with asynchronous online discussions…
Code of Federal Regulations, 2010 CFR
2010-07-01
..., and sex. (ii) The substance of the offenses of which the individual is accused or suspected. (iii) The... courtroom, and close a session. Video and audio recording and taking of photographs, except for the purpose... discretion, permit contemporaneous closed-circuit video or audio transmission to permit viewing or hearing by...
A Comparison of Students' Performances Using Audio Only and Video Media Methods
ERIC Educational Resources Information Center
Sulaiman, Norazean; Muhammad, Ahmad Mazli; Ganapathy, Nurul Nadiah Dewi Faizul; Khairuddin, Zulaikha; Othman, Salwa
2017-01-01
Listening is a very crucial skill to be learnt in second language classroom because it is essential for the development of spoken language proficiency (Hamouda, 2013). The aim of this study is to investigate the significant differences in terms of students' performance when using traditional (audio-only) method and video media method. The data of…
Audio-Visual Aid in Teaching "Fatty Liver"
ERIC Educational Resources Information Center
Dash, Sambit; Kamath, Ullas; Rao, Guruprasad; Prakash, Jay; Mishra, Snigdha
2016-01-01
Use of audio visual tools to aid in medical education is ever on a rise. Our study intends to find the efficacy of a video prepared on "fatty liver," a topic that is often a challenge for pre-clinical teachers, in enhancing cognitive processing and ultimately learning. We prepared a video presentation of 11:36 min, incorporating various…
Constructing a Streaming Video-Based Learning Forum for Collaborative Learning
ERIC Educational Resources Information Center
Chang, Chih-Kai
2004-01-01
As web-based courses using videos have become popular in recent years, the issue of managing audio-visual aids has become pertinent. Generally, the contents of audio-visual aids may include a lecture, an interview, a report, or an experiment, which may be transformed into a streaming format capable of making the quality of Internet-based videos…
ERIC Educational Resources Information Center
Li, Chenxi; Wu, Ligao; Li, Chen; Tang, Jinlan
2017-01-01
This work-in-progress doctoral research project aims to identify meaning negotiation patterns in synchronous audio and video Computer-Mediated Communication (CMC) environments based on the model of CMC text chat proposed by Smith (2003). The study was conducted in the Institute of Online Education at Beijing Foreign Studies University. Four dyads…
Agency Video, Audio and Imagery Library
NASA Technical Reports Server (NTRS)
Grubbs, Rodney
2015-01-01
The purpose of this presentation was to inform the ISS International Partners of the new NASA Agency Video, Audio and Imagery Library (AVAIL) website. AVAIL is a new resource for the public to search for and download NASA-related imagery, and is not intended to replace the current process by which the International Partners receive their Space Station imagery products.
NASA Astrophysics Data System (ADS)
Zhao, Haiwu; Wang, Guozhong; Hou, Gang
2005-07-01
AVS is a new digital audio-video coding standard established by China. AVS will be used in digital TV broadcasting and next-generation optical disks. AVS adopts many digital audio-video coding techniques developed by Chinese companies and universities in recent years; it has very low complexity compared to H.264, and AVS will charge a very low royalty fee through a one-step license covering all AVS tools. AVS is therefore a strong, competitive candidate for Chinese DTV and next-generation optical disks. In addition, the Chinese government has published a plan for direct-to-home (DTH) satellite TV, and a telecommunications satellite named SINO 2 will be launched in 2006. AVS is also one of the most promising candidates for the audio-video coding standard for satellite signal transmission.
Teaching the blind to find their way by playing video games.
Merabet, Lotfi B; Connors, Erin C; Halko, Mark A; Sánchez, Jaime
2012-01-01
Computer based video games are receiving great interest as a means to learn and acquire new skills. As a novel approach to teaching navigation skills in the blind, we have developed Audio-based Environment Simulator (AbES); a virtual reality environment set within the context of a video game metaphor. Despite the fact that participants were naïve to the overall purpose of the software, we found that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building using audio based cues alone. This was confirmed by a series of behavioral performance tests designed to assess the transfer of acquired spatial information to a large-scale, real-world indoor navigation task. Furthermore, learning the spatial layout through a goal directed gaming strategy allowed for the mental manipulation of spatial information as evidenced by enhanced navigation performance when compared to an explicit route learning strategy. We conclude that the immersive and highly interactive nature of the software greatly engages the blind user to actively explore the virtual environment. This in turn generates an accurate sense of a large-scale three-dimensional space and facilitates the learning and transfer of navigation skills to the physical world.
Weinstein, Ronald S; López, Ana Mariá; Barker, Gail P; Krupinski, Elizabeth A; Beinar, Sandra J; Major, Janet; Skinner, Tracy; Holcomb, Michael J; McNeely, Richard A
2007-10-01
The Institute for Advanced Telemedicine and Telehealth (i.e., T-Health Institute), a division of the state-wide Arizona Telemedicine Program (ATP), specializes in the creation of innovative health care education programs. This paper describes a first-of-a-kind video amphitheater specifically designed to promote communication within heterogeneous student groups training in the various health care professions. The amphitheater has an audio-video system that facilitates the assembly of ad hoc "in-the-room" electronic interdisciplinary student groups. Off-site faculty members and students can be inserted into groups by video conferencing. When fully implemented, every student will have a personal video camera trained on them, a headphone/microphone, and a personal voice channel. A command and control system will manage the video inputs of the individual participants' head-and-shoulder video images. An audio mixer will manage the separate voice channels of the individual participants and mix them into individual group-specific voice channels for use by the groups' participants. The audio-video system facilitates the easy reconfiguration of the interprofessional electronic groups, viewed on the video wall, without the individual participants in the electronic groups leaving their seats. The amphitheater will serve as a classroom as well as a unique education research laboratory.
Effect of tape recording on perturbation measures.
Jiang, J; Lin, E; Hanson, D G
1998-10-01
Tape recorders have been shown to affect measures of voice perturbation. Few studies, however, have been conducted to quantitatively justify the use or exclusion of certain types of recorders in voice perturbation studies. This study used sinusoidal and triangular waves and synthesized vowels to compare perturbation measures extracted from directly digitized signals with those recorded and played back through various tape recorders, including 3 models of digital audio tape recorders, 2 models of analog audio cassette tape recorders, and 2 models of video tape recorders. Signal contamination for frequency perturbation values was found to be consistently minimal with digital recorders (percent jitter = 0.01%-0.02%), mildly increased with video recorders (0.05%-0.10%), moderately increased with a high-quality analog audio cassette tape recorder (0.15%), and most prominent with a low-quality analog audio cassette tape recorder (0.24%). Recorder effect on amplitude perturbation measures was lowest in digital recorders (percent shimmer = 0.09%-0.20%), mildly to moderately increased in video recorders and a high-quality analog audio cassette tape recorder (0.25%-0.45%), and most prominent in a low-quality analog audio cassette tape recorder (0.98%). The effect of cassette tape material, length of spooled tape, and duration of analysis were also tested and are discussed.
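The perturbation measures compared in this study have simple definitions: percent jitter is the mean absolute cycle-to-cycle period difference as a percentage of the mean period, and percent shimmer is the analogous quantity for peak amplitudes. A minimal sketch, using hypothetical per-cycle measurements (not the study's data):

```python
# Sketch: percent jitter and percent shimmer from per-cycle measurements,
# using the common mean-absolute-difference definition. Data are hypothetical.
def percent_perturbation(values):
    """Mean absolute difference of consecutive cycles,
    as a percentage of the mean value."""
    diffs = [abs(a - b) for a, b in zip(values, values[1:])]
    mean_diff = sum(diffs) / len(diffs)
    mean_val = sum(values) / len(values)
    return 100.0 * mean_diff / mean_val

periods_ms = [10.0, 10.002, 9.999, 10.001, 10.0]   # hypothetical glottal periods
amplitudes = [1.00, 0.998, 1.001, 0.999, 1.002]    # hypothetical peak amplitudes

jitter = percent_perturbation(periods_ms)
shimmer = percent_perturbation(amplitudes)
```

On this toy signal the jitter comes out near the 0.01%-0.02% floor the study reports for digital recorders, which shows how small the variations are that an analog tape path can swamp.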
Experienced quality factors: qualitative evaluation approach to audiovisual quality
NASA Astrophysics Data System (ADS)
Jumisko-Pyykkö, Satu; Häkkinen, Jukka; Nyman, Göte
2007-02-01
Subjective evaluation is used to identify impairment factors of multimedia quality. The final quality is often formulated via quantitative experiments, but this approach has its constraints, as subjects' quality interpretations, experiences, and quality evaluation criteria are disregarded. To identify these quality evaluation factors, this study examined qualitatively the criteria participants used to evaluate audiovisual video quality. A semi-structured interview was conducted with 60 participants after a subjective audiovisual quality evaluation experiment. The assessment compared several relatively low audio-video bitrate ratios with five different television contents on a mobile device. In the analysis, methodological triangulation (grounded theory, Bayesian networks, and correspondence analysis) was applied to approach the qualitative quality. The results showed that the most important evaluation criteria were factors of visual quality, content, factors of audio quality, usefulness-followability, and audiovisual interaction. Several relations between the quality factors and similarities between the contents were identified. As a methodological recommendation, content- and usage-related factors need to be examined further to improve quality evaluation experiments.
About subjective evaluation of adaptive video streaming
NASA Astrophysics Data System (ADS)
Tavakoli, Samira; Brunnström, Kjell; Garcia, Narciso
2015-03-01
The usage of HTTP Adaptive Streaming (HAS) technology by content providers is increasing rapidly. With the video content available in multiple qualities, HAS allows the quality of the downloaded video to adapt to current network conditions, providing smooth playback. However, the time-varying video quality itself introduces a new type of impairment. Quality adaptation can be done in different ways, and finding the adaptation strategy that maximizes users' perceptual quality requires investigating the subjective perception of adaptation-related impairments. However, the novelty of these impairments and their comparatively long duration make most standardized assessment methodologies ill-suited for studying HAS degradations. Furthermore, in traditional testing methodologies, the video quality of audiovisual services is often evaluated in isolation, without audio; the requirement of jointly evaluating audio and video within a subjective test remains a relatively under-explored research field. In this work, we address the research question of determining an appropriate assessment methodology for evaluating sequences with time-varying quality due to adaptation. We studied the influence of different adaptation-related parameters through two subjective experiments using a methodology developed for evaluating long test sequences. To study the impact of audio presence on quality assessment by the test subjects, one of the experiments was done in the presence of audio stimuli. The experimental results were subsequently compared with another experiment using the standardized single-stimulus Absolute Category Rating (ACR) methodology.
Robust audio-visual speech recognition under noisy audio-video conditions.
Stewart, Darryl; Seymour, Rowan; Pass, Adrian; Ming, Ji
2014-02-01
This paper presents the maximum weighted stream posterior (MWSP) model as a robust and efficient stream integration method for audio-visual speech recognition in environments where the audio or video streams may be subjected to unknown and time-varying corruption. A significant advantage of MWSP is that it does not require any specific measurements of the signal in either stream to calculate appropriate stream weights during recognition, and as such it is modality-independent. This also means that MWSP complements and can be used alongside many of the other approaches that have been proposed in the literature for this problem. For evaluation we used the large XM2VTS database for speaker-independent audio-visual speech recognition. The extensive tests include both clean and corrupted utterances with corruption added in either or both the video and audio streams using a variety of types (e.g., MPEG-4 video compression) and levels of noise. The experiments show that this approach gives excellent performance in comparison to another well-known dynamic stream weighting approach, and also compared to any fixed-weighted integration approach, in both clean conditions and when noise is added to either stream. Furthermore, our experiments show that the MWSP approach dynamically selects suitable integration weights on a frame-by-frame basis according to the level of noise in the streams and also according to the naturally fluctuating relative reliability of the modalities even in clean conditions. The MWSP approach is shown to maintain robust recognition performance in all tested conditions, while requiring no prior knowledge about the type or level of noise.
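The core idea (search per frame over stream weights and keep the weighting whose best class posterior is highest, so the more reliable modality dominates) can be sketched as follows. This is a simplified, hypothetical illustration of a log-linear stream combination, not the paper's exact formulation:

```python
# Simplified sketch of per-frame stream-weight selection: combine audio and
# video class posteriors log-linearly and keep the (class, weight) pair with
# the highest combined score. All posterior values are illustrative.
def mwsp_frame(audio_post, video_post, weights=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """audio_post/video_post: class->posterior dicts for one frame.
    Returns (best_class, best_audio_weight)."""
    best_class, best_weight, best_score = None, None, float("-inf")
    for w in weights:
        for c in audio_post:
            score = (audio_post[c] ** w) * (video_post[c] ** (1.0 - w))
            if score > best_score:
                best_class, best_weight, best_score = c, w, score
    return best_class, best_weight

# Audio badly corrupted (near-uniform posteriors); video confident about "b".
audio = {"a": 0.34, "b": 0.33, "c": 0.33}
video = {"a": 0.05, "b": 0.90, "c": 0.05}

cls, w = mwsp_frame(audio, video)   # weight shifts fully onto the video stream
```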
For Kids, by Kids: Our City Podcast
ERIC Educational Resources Information Center
Vincent, Tony; van't Hooft, Mark
2007-01-01
In this article, the authors discuss podcasting and provide ways on how to create podcasts. A podcast is an audio or video file that is posted on the web that can easily be cataloged and automatically downloaded to a computer or mobile device capable of playing back audio or video files. Podcasting is a powerful tool for educators to get students…
47 CFR 15.115 - TV interface devices, including cable system terminal devices.
Code of Federal Regulations, 2014 CFR
2014-10-01
... times the square root of (R) for the video signal and 155 times the square root of (R) for the audio... and 77.5 times the square root of (R) for the audio signal. (2) At any RF output terminal, the maximum... video cassette recorders continue to be subject to the provisions for general TV interface devices. (c...
How Much Videos Win over Audios in Listening Instruction for EFL Learners
ERIC Educational Resources Information Center
Yasin, Burhanuddin; Mustafa, Faisal; Permatasari, Rizki
2017-01-01
This study aims at comparing the benefits of using videos instead of audios for improving students' listening skills. This experimental study used a pre-test and post-test control group design. The sample, selected by cluster random sampling, consisted of 32 second-year high school students in each group. The instruments used were…
ERIC Educational Resources Information Center
Lockwood, Nicholas S.
2011-01-01
Geographically dispersed teams rely on information and communication technologies (ICTs) to communicate and collaborate. Three ICTs that have received attention are audio conferencing (AC), video conferencing (VC), and, recently, 3D virtual environments (3D VEs). These ICTs offer modes of communication that differ primarily in the number and type…
76 FR 4110 - Sunshine Act Meeting; FCC To Hold Open Commission Meeting Tuesday, January 25, 2011
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-24
... Tuesday, January 25, 2011 January 18, 2011. The Federal Communications Commission will hold an Open...) 418-0500; TTY 1-888-835-5322. Audio/Video coverage of the meeting will be broadcast live with open.../ type; digital disk; and audio and video tape. Best Copy and Printing, Inc. may be reached by e-mail at...
Formal Verification of a Power Controller Using the Real-Time Model Checker UPPAAL
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Larsen, Kim Guldstrand; Skou, Arne
1999-01-01
A real-time system for power-down control in audio/video components is modeled and verified using the real-time model checker UPPAAL. The system is supposed to reside in an audio/video component and control (read from and write to) links to neighboring audio/video components such as a TV, VCR and remote control. In particular, the system is responsible for powering the component up and down between the arrival of data, and in order to do so safely, without loss of data, it is essential that no link interrupts are lost. Hence, a component system is a multitasking system with hard real-time requirements, and we present techniques for modeling time consumption in such a multitasked, prioritized system. The work has been carried out in a collaboration between Aalborg University and the audio/video company B&O. By modeling the system, three design errors were identified and corrected, and the subsequent verification confirmed the validity of the design but also revealed the need for an upper limit on the interrupt frequency. The resulting design has been implemented and will be incorporated as part of a new product line.
Gooding, Lori F; Mori-Inoue, Satoko
2011-01-01
The purpose of this study was to examine the effect of video exposure on music therapy students' perceptions of clinical applications of popular music in the field of music therapy. Fifty-one participants were randomly divided into two groups and exposed to a popular song in either audio-only or music video format. Participants were asked to indicate clinical applications; specifically, participants chose: (a) possible population(s), (b) most appropriate population(s), (c) possible age range(s), (d) most appropriate age range(s), (e) possible goal area(s) and (f) most appropriate goal area. Data for each of these categories were compiled and analyzed, with no significant differences found in the choices made by the audio-only and video groups. Three items, (a) selection of the bereavement population, (b) selection of bereavement as the most appropriate population and (c) selection of the preteen/mature adult age ranges, were selected for further analysis due to their relationship to the video content. Analysis revealed a significant difference between the video and audio-only groups for these specific items, with the video group's selections more closely aligned with the video content. Results of this pilot study suggest that music video exposure to popular music can affect how students choose to implement popular songs in the field of music therapy.
Dual-Layer Video Encryption using RSA Algorithm
NASA Astrophysics Data System (ADS)
Chadha, Aman; Mallik, Sushmit; Chadha, Ankit; Johar, Ravdeep; Mani Roja, M.
2015-04-01
This paper proposes a video encryption algorithm using RSA and Pseudo-Noise (PN) sequences, aimed at applications requiring transfers of sensitive video information. The system is primarily designed to work with files encoded in the Audio Video Interleave (AVI) container format, although it can easily be ported for use with Moving Picture Experts Group (MPEG) encoded files. The audio and video components of the source separately undergo two layers of encryption to ensure a reasonable level of security. Encryption of the video component involves applying the RSA algorithm followed by PN-based encryption. Similarly, the audio component is first encrypted using PN and then subjected to further encryption using the Discrete Cosine Transform. Combining these techniques yields an efficient system, resistant to security breaches and attacks, with favorable encryption/decryption speed, encryption/decryption ratio and visual degradation. For applications requiring encryption of sensitive data, where stringent security requirements are the prime concern, the system is found to yield negligible similarity in visual perception between the original and the encrypted video sequence. For applications where visual similarity is not a major concern, we limit the encryption task to a single level accomplished using RSA alone, thereby speeding up the encryption process. Although some similarity between the original and encrypted video is observed in this case, it is not enough to make the content of the video comprehensible.
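The PN layer of such a two-layer scheme can be sketched in a few lines. The LFSR generator and tap positions below are generic illustrations (the abstract does not specify the PN generator used); frame bytes are XORed with the keystream, so running the same function twice restores the original:

```python
def lfsr_keystream(seed, taps, nbits, count):
    """Generate `count` pseudo-noise bytes from a simple Fibonacci LFSR.
    seed: nonzero initial register state; taps: feedback bit positions."""
    state = seed
    out = []
    for _ in range(count):
        byte = 0
        for _ in range(8):
            fb = 0
            for t in taps:
                fb ^= (state >> t) & 1
            state = ((state << 1) | fb) & ((1 << nbits) - 1)
            byte = (byte << 1) | (state & 1)
        out.append(byte)
    return bytes(out)

def pn_xor(data, seed=0xACE1, taps=(15, 13, 12, 10), nbits=16):
    """XOR a frame's bytes with the PN keystream; applying it twice decrypts."""
    ks = lfsr_keystream(seed, taps, nbits, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

frame = bytes(range(32))       # stand-in for one AVI frame's payload bytes
encrypted = pn_xor(frame)
decrypted = pn_xor(encrypted)  # XOR with the same keystream undoes the encryption
```

Because XOR with a fixed keystream is an involution, encryption and decryption are the same operation, which is one reason PN layers are cheap enough to pair with the slower RSA layer.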
Low-cost synchronization of high-speed audio and video recordings in bio-acoustic experiments.
Laurijssen, Dennis; Verreycken, Erik; Geipel, Inga; Daems, Walter; Peremans, Herbert; Steckel, Jan
2018-02-27
In this paper, we present a method for synchronizing high-speed audio and video recordings of bio-acoustic experiments. By embedding a random signal into the recorded video and audio data, robust synchronization of a diverse set of sensor streams can be performed without the need to keep detailed records. The synchronization can be performed using recording devices without dedicated synchronization inputs. We demonstrate the efficacy of the approach in two sets of experiments: behavioral experiments on different species of echolocating bats and the recordings of field crickets. We present the general operating principle of the synchronization method, discuss its synchronization strength and provide insights into how to construct such a device using off-the-shelf components. © 2018. Published by The Company of Biologists Ltd.
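The underlying idea, embedding a shared random marker signal in every stream and recovering each stream's offset by cross-correlation, can be sketched as follows; the signal shapes and noise levels here are illustrative, not taken from the paper:

```python
import random

def cross_correlate(sig, marker):
    """Return the lag at which `marker` best matches a window of `sig`."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(sig) - len(marker) + 1):
        score = sum(sig[lag + i] * marker[i] for i in range(len(marker)))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

rng = random.Random(42)
marker = [rng.choice((-1.0, 1.0)) for _ in range(64)]  # shared random sync signal

def make_stream(offset, n=300):
    """Simulate a recording: background noise plus the marker at `offset`."""
    s = [rng.gauss(0, 0.3) for _ in range(n)]
    for i, m in enumerate(marker):
        s[offset + i] += m
    return s

audio = make_stream(50)    # marker arrives at sample 50 in the audio recording
video = make_stream(120)   # and at sample 120 in the video recording
offset = cross_correlate(video, marker) - cross_correlate(audio, marker)
```

The recovered `offset` is the relative delay to shift one stream by; no shared clock or dedicated synchronization input is needed, matching the paper's premise.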
ERIC Educational Resources Information Center
Qi, Grace Yue; Wang, Yuping
2018-01-01
This study explores the process of Community of Practice (CoP) building for language teachers' professional development through the support of a WeChat group. WeChat is an instant messenger app that provides a multimodal platform for one-on-one and group interactions through text, audio and video. In order to support the implementation of flipped…
Bezanilla, F
1985-03-01
A modified digital audio processor, a video cassette recorder, and some simple added circuitry are assembled into a recording device of high capacity. The unit converts two analog channels into digital form at 44-kHz sampling rate and stores the information in digital form in a common video cassette. Bandwidth of each channel is from direct current to approximately 20 kHz and the dynamic range is close to 90 dB. The total storage capacity in a 3-h video cassette is 2 Gbytes. The information can be retrieved in analog or digital form.
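The quoted figures are mutually consistent, as a quick arithmetic check shows. The sketch below assumes 16-bit samples, a resolution the abstract does not state but which matches the quoted ~90 dB dynamic range (16 bits gives roughly 96 dB):

```python
# Storage needed for 2 analog channels digitized at 44 kHz over a 3-hour cassette
sample_rate = 44_000        # Hz, as quoted in the abstract
channels = 2
bytes_per_sample = 2        # 16-bit resolution (assumed; not stated above)
seconds = 3 * 3600

total_bytes = sample_rate * channels * bytes_per_sample * seconds
print(f"{total_bytes / 1e9:.2f} GB")   # about 1.90 GB, consistent with "2 Gbytes"
```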
NASA Astrophysics Data System (ADS)
Linder, C. A.; Wilbert, M.; Holmes, R. M.
2010-12-01
Multimedia video presentations, which integrate still photographs with video clips, audio interviews, ambient sounds, and music, are an effective and engaging way to tell science stories. In July 2009, Linder joined professors and undergraduates on an expedition to the Kolyma River in northeastern Siberia. This IPY science project, called The Polaris Project (http://www.thepolarisproject.org), is an undergraduate research experience where students and faculty work together to increase our understanding of climate change impacts, including thawing permafrost, in this remote corner of the world. During the summer field season, Linder conducted dozens of interviews, captured over 20,000 still photographs and hours of ambient audio and video clips. Following the 2009 expedition, Linder blended this massive archive of visual and audio information into a 10-minute overview video and five student vignettes. In 2010, Linder again traveled to Siberia as part of the Polaris Project, this time mentoring an environmental journalism student who will lead the production of a video about the 2010 field season. Using examples from the Polaris productions, we will present tips, tools, and techniques for creating compelling multimedia science stories.
Video streaming into the mainstream.
Garrison, W
2001-12-01
Changes in Internet technology are making possible the delivery of a richer mixture of media through data streaming. High-quality dynamic content, such as video and audio, can be incorporated into Websites simply, flexibly and interactively. Technologies such as 3G mobile communication, ADSL, cable and satellites enable new ways of delivering medical services, information and learning. Systems such as QuickTime, Windows Media and RealVideo provide reliable data streams as video-on-demand, and users can tailor the experience to their own interests. The Learning Development Centre at the University of Portsmouth has successfully used streaming technologies, together with e-learning tools such as dynamic HTML, Flash, 3D objects and online assessment, to deliver online course content in economics and earth science. The Lifesign project, which develops, catalogues and streams health sciences media for teaching, is described, and future medical applications are discussed.
Vroom: designing an augmented environment for remote collaboration in digital cinema production
NASA Astrophysics Data System (ADS)
Margolis, Todd; Cornish, Tracy
2013-03-01
As media technologies become increasingly affordable, compact and inherently networked, new generations of telecollaborative platforms continue to arise that integrate these new affordances. Virtual reality has been primarily concerned with creating simulations of environments that can transport participants to real or imagined spaces that replace the "real world", while augmented reality systems have evolved to interleave objects from virtual environments into the physical landscape. A new class of systems now reverses this precept, enhancing dynamic media landscapes and immersive physical display environments to enable intuitive data exploration through collaboration. Vroom (Virtual Room) is a next-generation reconfigurable tiled display environment in development at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Vroom enables freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering and the arts. Vroom transforms a physical space into an immersive media environment with large-format interactive display surfaces, video teleconferencing and spatialized audio built on a high-speed optical network backbone. Vroom enables local and remote participants to share knowledge and experiences in group collaboration. Possible applications include: remote learning, command and control, storyboarding, post-production editorial review, high-resolution video playback, 3D visualization, screencasting, and image, video and multimedia file sharing. To support these various scenarios, Vroom features support for multiple user interfaces (optical tracking, touch UI, gesture interface, etc.), directional and spatialized audio, giga-pixel image interactivity, 4K video streaming, 3D visualization and telematic production.
This paper explains the design process that has been utilized to make Vroom an accessible and intuitive immersive environment for remote collaboration specifically for digital cinema production.
77 FR 30290 - Sunshine Act Meeting; Open Commission Meeting; Thursday, May 24, 2012
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-22
... contained in section 1.1203 of the Commission's rule, 47 CFR 1.1203, until 5:00 p.m. on Friday, May 18, 2012... Relations, (202) 418-0500; TTY 1-888-835-5322. Audio/Video coverage of the meeting will be broadcast live.../ type; digital disk; and audio and video tape. Best Copy and Printing, Inc. may be reached by email at...
Design of a video teleconference facility for a synchronous satellite communications link
NASA Technical Reports Server (NTRS)
Richardson, M. D.
1979-01-01
The system requirements, design tradeoffs, and final design of a video teleconference facility are discussed, including proper lighting, graphics transmission, and picture aesthetics. Methods currently accepted in the television broadcast industry are used in the design. The unique problems associated with using an audio channel with a synchronous satellite communications link are discussed, and a final audio system design is presented.
Development and Assessment of Web Courses That Use Streaming Audio and Video Technologies.
ERIC Educational Resources Information Center
Ingebritsen, Thomas S.; Flickinger, Kathleen
Iowa State University, through a program called Project BIO (Biology Instructional Outreach), has been using RealAudio technology for about 2 years in college biology courses that are offered entirely via the World Wide Web. RealAudio is a type of streaming media technology that can be used to deliver audio content and a variety of other media…
Implementing Audio-CASI on Windows’ Platforms
Cooley, Philip C.; Turner, Charles F.
2011-01-01
Audio computer-assisted self-interviewing (Audio-CASI) technologies have recently been shown to provide important and sometimes dramatic improvements in the quality of survey measurements. This is particularly true for measurements requiring respondents to divulge highly sensitive information such as their sexual, drug use, or other sensitive behaviors. However, DOS-based Audio-CASI systems that were designed and adopted in the early 1990s have important limitations. Most salient is the poor control they provide for manipulating the video presentation of survey questions. This article reports our experiences adapting Audio-CASI to Microsoft Windows 3.1 and Windows 95 platforms. Overall, our Windows-based system provided the desired control over video presentation and afforded other advantages, including compatibility with a much wider array of audio devices than our DOS-based Audio-CASI technologies. These advantages came at the cost of increased system requirements, including the need for both more RAM and larger hard disks. While these costs will be an issue for organizations converting large inventories of PCs to Windows Audio-CASI today, they will not be a serious constraint for organizations and individuals with small inventories of machines to upgrade or those purchasing new machines today. PMID:22081743
Fiber-channel audio video standard for military and commercial aircraft product lines
NASA Astrophysics Data System (ADS)
Keller, Jack E.
2002-08-01
Fibre Channel is an emerging high-speed digital network technology that continues to make inroads into the avionics arena. The suitability of Fibre Channel for such applications is largely due to its flexibility in several key areas. Network topologies can be configured as point-to-point, arbitrated loop or switched fabric connections. The physical layer supports either copper or fiber-optic implementations with a bit error rate of less than 10^-12. Multiple classes of service are available, and multiple upper-level protocols are supported. Multiple high-speed data rates offer open-ended growth paths, with speed negotiation within a single network; current speeds supported by commercially available hardware are 1 and 2 Gbps, providing effective data rates of 100 and 200 MBps, respectively. Such networks lend themselves well to the transport of digital video and audio data. This paper summarizes an ANSI standard currently in the final approval cycle of the InterNational Committee for Information Technology Standards (INCITS). This standard defines a flexible mechanism whereby digital video, audio and ancillary data are systematically packaged for transport over a Fibre Channel network. The basic mechanism, called a container, houses audio and video content functionally grouped as elements of the container called objects. Featured in this paper is a specific container mapping called Simple Parametric Digital Video (SPDV), developed particularly to address digital video in avionics systems. SPDV provides pixel-based video with associated ancillary data, typically sourced by various sensors, to be processed and/or distributed in the cockpit for presentation via high-resolution displays. Also highlighted in this paper is a streamlined upper-level protocol (ULP) called Frame Header Control Procedure (FHCP), targeted at avionics systems where the functionality of a more complex ULP is not required.
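The container concept, a header that groups video and audio objects for transport, can be illustrated with a toy packing routine. The field layout below is entirely hypothetical and does not follow the actual FC-AV header format; it only shows the grouping idea:

```python
import struct

# Hypothetical layout: magic tag, frame id, then the lengths of the two objects.
# The real FC-AV container defines its own headers; this is an illustration only.
def pack_container(frame_id, video_obj, audio_obj):
    header = struct.pack(">4sIHH", b"CONT", frame_id, len(video_obj), len(audio_obj))
    return header + video_obj + audio_obj

def unpack_container(blob):
    magic, frame_id, vlen, alen = struct.unpack(">4sIHH", blob[:12])
    assert magic == b"CONT"
    video = blob[12:12 + vlen]
    audio = blob[12 + vlen:12 + vlen + alen]
    return frame_id, video, audio

blob = pack_container(7, b"\x10" * 16, b"\x20" * 8)
```

Grouping related media objects behind one fixed-size header is what lets a receiver demultiplex video, audio and ancillary data from a single network stream.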
European Union RACE program contributions to digital audiovisual communications and services
NASA Astrophysics Data System (ADS)
de Albuquerque, Augusto; van Noorden, Leon; Badique', Eric
1995-02-01
The European Union RACE (R&D in advanced communications technologies in Europe) and the future ACTS (advanced communications technologies and services) programs have been contributing and continue to contribute to world-wide developments in audio-visual services. The paper focuses on research progress in: (1) Image data compression. Several methods of image analysis leading to the use of encoders based on improved hybrid DCT-DPCM (MPEG or not), object oriented, hybrid region/waveform or knowledge-based coding methods are discussed. (2) Program production in the aspects of 3D imaging, data acquisition, virtual scene construction, pre-processing and sequence generation. (3) Interoperability and multimedia access systems. The diversity of material available and the introduction of interactive or near- interactive audio-visual services led to the development of prestandards for video-on-demand (VoD) and interworking of multimedia services storage systems and customer premises equipment.
Digital Audio Sampling for Film and Video.
ERIC Educational Resources Information Center
Stanton, Michael J.
Digital audio sampling is explained, and some of its implications in digital sound applications are discussed. Digital sound equipment is rapidly replacing analog recording devices as the state-of-the-art in audio technology. The philosophy of digital recording involves doing away with the continuously variable analog waveforms and turning the…
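The sampling step being described, replacing a continuously variable waveform with discrete numbered values, can be sketched as follows. The 44.1 kHz rate and 16-bit resolution are the usual CD-audio parameters, assumed here for illustration:

```python
import math

def sample_and_quantize(freq_hz, sample_rate, bits, duration_s):
    """Sample a sine wave and quantize each sample to a signed integer
    of `bits` resolution, as a digital recorder would."""
    levels = 2 ** (bits - 1) - 1
    n = int(sample_rate * duration_s)
    return [round(math.sin(2 * math.pi * freq_hz * t / sample_rate) * levels)
            for t in range(n)]

samples = sample_and_quantize(440, 44_100, 16, 0.01)  # 10 ms of an A4 tone
```

Each stored value is one of 2^16 discrete levels, which is exactly the departure from continuously variable analog waveforms that the abstract describes.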
Telearch - Integrated visual simulation environment for collaborative virtual archaeology.
NASA Astrophysics Data System (ADS)
Kurillo, Gregorij; Forte, Maurizio
Archaeologists collect vast amounts of digital data around the world; however, they lack tools for integration and collaborative interaction to support the reconstruction and interpretation process. The TeleArch software aims to integrate different data sources and provide real-time interaction tools for remote collaboration among geographically distributed scholars inside a shared virtual environment. The framework also includes audio, 2D and 3D video streaming technology to facilitate the remote presence of users. In this paper, we present several experimental case studies to demonstrate integration and interaction with 3D models and geographical information system (GIS) data in this collaborative environment.
Development and preliminary validation of an interactive remote physical therapy system.
Mishra, Anup K; Skubic, Marjorie; Abbott, Carmen
2015-01-01
In this paper, we present an interactive physical therapy system (IPTS) for remote quantitative assessment of clients in the home. The system consists of two different interactive interfaces connected through a network, for a real-time low latency video conference using audio, video, skeletal, and depth data streams from a Microsoft Kinect. To test the potential of IPTS, experiments were conducted with 5 independent living senior subjects in Kansas City, MO. Also, experiments were conducted in the lab to validate the real-time biomechanical measures calculated using the skeletal data from the Microsoft Xbox 360 Kinect and Microsoft Xbox One Kinect, with ground truth data from a Vicon motion capture system. Good agreements were found in the validation tests. The results show potential capabilities of the IPTS system to provide remote physical therapy to clients, especially older adults, who may find it difficult to visit the clinic.
Enhancing Battlemind: Preventing PTSD by Coping with Intrusive Thoughts
2015-09-01
Demographic characteristics of participant-soldiers (N = 1,524): 90.6% male, 9.4% female. ...consultants
• Workshops also included time for live practice, including audio and video taping of trainers' delivery of modules
• One-on-one in-person ... additional audio/video taping
• Culminated with a certification test in which the trainer was rated on multiple domains and content areas by PI, PC, other
Implementation of Video Teleconferencing for the Republic of China Navy
1990-03-01
Table of contents (excerpt): INTRODUCTION. A. Background: 1. Communications Environment, Needs, and Plans; 2. Republic of China History and Threat of Invasion. B. ... TELECONFERENCING: 1. Definition; 2. History of Video Teleconferencing. B. Categories of Teleconferencing: 1. Audio Conferencing; 2. Audio... RELATED TO TELECONFERENCING: A. Introduction of Human Factors: 1. Definition of Human Factors; 2. History of Human Factors.
Teaching the Blind to Find Their Way by Playing Video Games
Merabet, Lotfi B.; Connors, Erin C.; Halko, Mark A.; Sánchez, Jaime
2012-01-01
Computer based video games are receiving great interest as a means to learn and acquire new skills. As a novel approach to teaching navigation skills in the blind, we have developed Audio-based Environment Simulator (AbES); a virtual reality environment set within the context of a video game metaphor. Despite the fact that participants were naïve to the overall purpose of the software, we found that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building using audio based cues alone. This was confirmed by a series of behavioral performance tests designed to assess the transfer of acquired spatial information to a large-scale, real-world indoor navigation task. Furthermore, learning the spatial layout through a goal directed gaming strategy allowed for the mental manipulation of spatial information as evidenced by enhanced navigation performance when compared to an explicit route learning strategy. We conclude that the immersive and highly interactive nature of the software greatly engages the blind user to actively explore the virtual environment. This in turn generates an accurate sense of a large-scale three-dimensional space and facilitates the learning and transfer of navigation skills to the physical world. PMID:23028703
Development of a microportable imaging system for otoscopy and nasoendoscopy evaluations.
VanLue, Michael; Cox, Kenneth M; Wade, James M; Tapp, Kevin; Linville, Raymond; Cosmato, Charlie; Smith, Tom
2007-03-01
Imaging systems for patients with cleft palate typically are not portable, but are essential to obtain an audiovisual record of nasoendoscopy and otoscopy procedures. Practitioners who evaluate patients in rural, remote, or otherwise medically underserved areas are expected to obtain audiovisual recordings of these procedures as part of standard clinical practice. Therefore, patients must travel substantial distances to medical facilities that have standard recording equipment. This project describes the specific components, strengths, and weaknesses of an MPEG-4 digital recording system for otoscopy/nasoendoscopy evaluation of patients with cleft palate that is both portable and compatible with store-and-forward telemedicine applications. Three digital recording configurations (TabletPC, handheld digital video recorder, and an 8-mm digital camcorder) were used to record the audio/video signal from an analog video scope system. The handheld digital video recorder was most effective at capturing audio/video and displaying procedures in real time. The system described was particularly easy to use, because it required no postrecording file capture or compression for later review, transfer, and/or archiving. The handheld digital recording system was assembled from commercially available components. The portability and telemedicine compatibility of the handheld digital video recorder offer a viable solution for the documentation of nasoendoscopy and otoscopy procedures in remote, rural, or other locations where reduced medical access precludes the use of larger component audio/video systems.
Audio-guided audiovisual data segmentation, indexing, and retrieval
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1998-12-01
While current approaches for video segmentation and indexing are mostly focused on visual information, audio signals may actually play a primary role in video content parsing. In this paper, we present an approach for automatic segmentation, indexing, and retrieval of audiovisual data, based on audio content analysis. The accompanying audio signal of audiovisual data is first segmented and classified into basic types, i.e., speech, music, environmental sound, and silence. This coarse-level segmentation and indexing step is based upon morphological and statistical analysis of several short-term features of the audio signals. Then, environmental sounds are classified into finer classes, such as applause, explosions, bird sounds, etc. This fine-level classification and indexing step is based upon time- frequency analysis of audio signals and the use of the hidden Markov model as the classifier. On top of this archiving scheme, an audiovisual data retrieval system is proposed. Experimental results show that the proposed approach has an accuracy rate higher than 90 percent for the coarse-level classification, and higher than 85 percent for the fine-level classification. Examples of audiovisual data segmentation and retrieval are also provided.
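The coarse-level step can be illustrated with two classic short-term features, frame energy and zero-crossing rate; the threshold values below are toy numbers for illustration, not those of the paper, which uses morphological and statistical analysis of several such features:

```python
def short_term_features(signal, frame_len=256):
    """Per-frame energy and zero-crossing rate, the kind of short-term
    features used for coarse speech/music/silence segmentation."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        energy = sum(x * x for x in frame) / frame_len
        zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / frame_len
        feats.append((energy, zcr))
    return feats

def classify_frame(energy, zcr, e_silence=1e-4, z_speech=0.1):
    # Toy thresholds (assumed); real systems learn these from labeled data.
    if energy < e_silence:
        return "silence"
    return "speech" if zcr > z_speech else "music"

# A quiet stretch followed by a loud, rapidly sign-flipping (high-ZCR) stretch
sig = [0.0] * 512 + [((-1) ** i) * 0.5 for i in range(512)]
labels = [classify_frame(e, z) for e, z in short_term_features(sig)]
```

Finer classes (applause, explosions, bird sounds) would then be handled by the HMM stage the abstract describes, operating on time-frequency features rather than fixed thresholds.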
Witchel, Harry J.; Santos, Carlos P.; Ackah, James K.; Westling, Carina E. I.; Chockalingam, Nachiappan
2016-01-01
Background: Estimating engagement levels from postural micromovements has been summarized by some researchers as: increased proximity to the screen is a marker for engagement, while increased postural movement is a signal for disengagement or negative affect. However, these findings are inconclusive: the movement hypothesis challenges other findings of dyadic interaction in humans, and experiments on the positional hypothesis diverge from it. Hypotheses: (1) Under controlled conditions, adding a relevant visual stimulus to an auditory stimulus will preferentially result in Non-Instrumental Movement Inhibition (NIMI) of the head. (2) When instrumental movements are eliminated and computer-interaction rate is held constant, for two identically-structured stimuli, cognitive engagement (i.e., interest) will result in measurable NIMI of the body generally. Methods: Twenty-seven healthy participants were seated in front of a computer monitor and speakers. Discrete 3-min stimuli were presented with interactions mediated via a handheld trackball without any keyboard, to minimize instrumental movements of the participant's body. Music videos and audio-only music were used to test hypothesis (1). Time-sensitive, highly interactive stimuli were used to test hypothesis (2). Subjective responses were assessed via visual analog scales. The computer users' movements were quantified using video motion tracking from the lateral aspect. Repeated measures ANOVAs with Tukey post hoc comparisons were performed. Results: For two equivalently-engaging music videos, eliminating the visual content elicited significantly increased non-instrumental movements of the head (while also decreasing subjective engagement); a highly engaging user-selected piece of favorite music led to further increased non-instrumental movement. 
For two comparable reading tasks, the more engaging reading significantly inhibited (42%) movement of the head and thigh; however, when a highly engaging video game was compared to the boring reading, even though the reading task and the game had similar levels of interaction (trackball clicks), only thigh movement was significantly inhibited, not head movement. Conclusions: NIMI can be elicited by adding a relevant visual accompaniment to an audio-only stimulus or by making a stimulus cognitively engaging. However, these results presume that all other factors are held constant, because total movement rates can be affected by cognitive engagement, instrumental movements, visual requirements, and the time-sensitivity of the stimulus. PMID:26941666
Walker, H Jack; Feild, Hubert S; Giles, William F; Armenakis, Achilles A; Bernerth, Jeremy B
2009-09-01
This study investigated participants' reactions to employee testimonials presented on recruitment Web sites. The authors manipulated the presence of employee testimonials, richness of media communicating testimonials (video with audio vs. picture with text), and representation of racial minorities in employee testimonials. Participants were more attracted to organizations and perceived information as more credible when testimonials were included on recruitment Web sites. Testimonials delivered via video with audio had higher attractiveness and information credibility ratings than those given via picture with text. Results also showed that Blacks responded more favorably, whereas Whites responded more negatively, to the recruiting organization as the proportion of minorities shown giving testimonials on the recruitment Web site increased. However, post hoc analyses revealed that use of a richer medium (video with audio vs. picture with text) to communicate employee testimonials tended to attenuate these racial effects.
Savran, Arman; Cao, Houwei; Shah, Miraj; Nenkova, Ani; Verma, Ragini
2013-01-01
We present experiments on fusing facial video, audio and lexical indicators for affect estimation during dyadic conversations. We use temporal statistics of texture descriptors extracted from facial video, a combination of various acoustic features, and lexical features to create regression based affect estimators for each modality. The single modality regressors are then combined using particle filtering, by treating these independent regression outputs as measurements of the affect states in a Bayesian filtering framework, where previous observations provide prediction about the current state by means of learned affect dynamics. Tested on the Audio-visual Emotion Recognition Challenge dataset, our single modality estimators achieve substantially higher scores than the official baseline method for every dimension of affect. Our filtering-based multi-modality fusion achieves correlation performance of 0.344 (baseline: 0.136) and 0.280 (baseline: 0.096) for the fully continuous and word level sub challenges, respectively. PMID:25300451
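The fusion scheme described, treating each modality's regression output as a noisy measurement of a latent affect state with learned dynamics, can be sketched as a basic bootstrap particle filter. All parameter values, signal shapes, and function names below are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_fusion(measurements, a=0.9, q=0.05, r=0.2, n_particles=500):
    """Fuse per-modality affect estimates (one column per modality) by treating
    them as noisy measurements of a latent affect state following simple AR(1)
    dynamics x_t = a * x_{t-1} + noise. Parameters a, q, r are assumed values;
    in the paper the dynamics are learned from data."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in measurements:  # z: vector of modality outputs at time t
        # predict: propagate particles through the state dynamics
        particles = a * particles + rng.normal(0.0, np.sqrt(q), n_particles)
        # update: weight particles by the likelihood of all modality measurements
        log_w = np.zeros(n_particles)
        for zi in z:
            log_w += -0.5 * (zi - particles) ** 2 / r
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))
        # resample to avoid weight degeneracy
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)

# two synthetic "modality regressors" observing a slowly drifting affect signal
t = np.linspace(0, 4 * np.pi, 100)
truth = np.sin(t)
meas = np.stack([truth + rng.normal(0, 0.3, 100),
                 truth + rng.normal(0, 0.3, 100)], axis=1)
fused = particle_filter_fusion(meas)
```

The fused track follows the underlying signal more closely than either noisy input, which is the benefit the Bayesian filtering framework provides over per-frame fusion.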
Video Pedagogy as Political Activity.
ERIC Educational Resources Information Center
Higgins, John W.
1991-01-01
Asserts that the education of students in the technology of video and audio production is a political act. Discusses the structure and style of production, and the ideologies and values contained therein. Offers alternative approaches to critical video pedagogy. (PRA)
Multimedia Instruction Puts Teachers in the Director's Chair.
ERIC Educational Resources Information Center
Trotter, Andrew
1990-01-01
Teachers can produce and direct their own instructional videos using computer-driven multimedia. Outlines the basics in combining audio and video technologies to produce videotapes that mix animated and still graphics, sound, and full-motion video. (MLF)
(abstract) Synthesis of Speaker Facial Movements to Match Selected Speech Sequences
NASA Technical Reports Server (NTRS)
Scott, Kenneth C.
1994-01-01
We are developing a system for synthesizing image sequences that simulate the facial motion of a speaker. To perform this synthesis, we are pursuing two major areas of effort. We are developing the necessary computer graphics technology to synthesize a realistic image sequence of a person speaking selected speech sequences. Next, we are developing a model that expresses the relation between spoken phonemes and face/mouth shape. A subject is videotaped speaking an arbitrary text that covers the full list of desired database phonemes. The subject is videotaped from the front speaking normally, recording both audio and video detail simultaneously. Using the audio track, we identify the specific video frames on the tape relating to each spoken phoneme. From this range we digitize the video frame that represents the extreme of mouth motion/shape. Thus, we construct a database of images of face/mouth shape related to spoken phonemes. A selected audio speech sequence is recorded as the basis for synthesizing a matching video sequence; the speaker need not be the same as the one used for constructing the database. The audio sequence is analyzed to determine the spoken phoneme sequence and the relative timing of the enunciation of those phonemes. Synthesizing an image sequence corresponding to the spoken phoneme sequence is accomplished using a graphics technique known as morphing. Image-sequence keyframes necessary for this processing are based on the spoken phoneme sequence and timing. We have been successful in synthesizing the facial motion of a native English speaker for a small set of arbitrary speech segments. Our future work will focus on advancement of the face shape/phoneme model and independent control of facial features.
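The morphing step described above can be approximated, in its simplest form, as a pixel-wise cross-dissolve between two phoneme keyframes. A full morph also warps corresponding feature points (mouth corners, lips) before blending; this sketch, with hypothetical keyframe arrays, shows only the blending stage:

```python
import numpy as np

def crossfade_frames(key_a, key_b, n_frames):
    """Generate n_frames images linearly interpolating pixel values between
    two keyframes. A production morph warps feature meshes before blending;
    here only the cross-dissolve component is illustrated."""
    key_a = key_a.astype(float)
    key_b = key_b.astype(float)
    return [(1.0 - t) * key_a + t * key_b
            for t in np.linspace(0.0, 1.0, n_frames)]

# two hypothetical 4x4 grayscale "mouth shape" keyframes: closed and open
closed = np.zeros((4, 4))
open_mouth = np.full((4, 4), 255.0)
frames = crossfade_frames(closed, open_mouth, 5)
```

In the described system, the keyframe pair and the number of in-between frames would be chosen from the phoneme sequence and its enunciation timing.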
Subjective video quality evaluation of different content types under different impairments
NASA Astrophysics Data System (ADS)
Pozueco, Laura; Álvarez, Alberto; García, Xabiel; García, Roberto; Melendi, David; Díaz, Gabriel
2017-01-01
Nowadays, access to multimedia content is one of the most demanded services on the Internet. However, the transmission of audio and video over these networks is not free of problems that negatively affect user experience. Factors such as low image quality, cuts during playback or losses of audio or video, among others, can occur and there is no clear idea about the level of distortion introduced in the perceived quality. For that reason, different impairments should be evaluated based on user opinions, with the aim of analyzing the impact in the perceived quality. In this work, we carried out a subjective evaluation of different types of impairments with different types of contents, including news, cartoons, sports and action movies. A total of 100 individuals, between the ages of 20 and 68, participated in the subjective study. Results show that short-term rebuffering events negatively affect the quality of experience and that desynchronization between audio and video is the least annoying impairment. Moreover, we found that the content type determines the subjective results according to the impairment present during the playback.
Through the Looking Glass: The Multiple Layers of Multimedia.
ERIC Educational Resources Information Center
D'Ignazio, Fred
1990-01-01
Describes possible future uses of multimedia computers for instructional applications. Highlights include databases; publishing; telecommunications; computers and videocassette recorders (VCRs); audio and video digitizing; video overlay, or genlock; still-image video; videodiscs and CD-ROM; and hypermedia. (LRW)
ERIC Educational Resources Information Center
Song, Yaxiao
2010-01-01
Video surrogates can help people quickly make sense of the content of a video before downloading or seeking more detailed information. Visual and audio features of a video are primary information carriers and might become important components of video retrieval and video sense-making. In the past decades, most research and development efforts on…
Podcasting by Synchronising PowerPoint and Voice: What Are the Pedagogical Benefits?
ERIC Educational Resources Information Center
Griffin, Darren K.; Mitchell, David; Thompson, Simon J.
2009-01-01
The purpose of this study was to investigate the efficacy of audio-visual synchrony in podcasting and its possible pedagogical benefits. "Synchrony" in this study refers to the simultaneous playback of audio and video data streams, so that the transitions between presentation slides occur at "lecturer chosen" points in the audio commentary.…
16 CFR 308.3 - Advertising of pay-per-call services.
Code of Federal Regulations, 2011 CFR
2011-01-01
... same video as a commercially-prepared video directed primarily to individuals under 18, or preceding a... the same language as that principally used in the advertisement. (2) Television video and print... required disclosures shall be used in any advertisement in any medium; nor shall any audio, video or print...
16 CFR 308.3 - Advertising of pay-per-call services.
Code of Federal Regulations, 2013 CFR
2013-01-01
... same video as a commercially-prepared video directed primarily to individuals under 18, or preceding a... the same language as that principally used in the advertisement. (2) Television video and print... required disclosures shall be used in any advertisement in any medium; nor shall any audio, video or print...
16 CFR 308.3 - Advertising of pay-per-call services.
Code of Federal Regulations, 2012 CFR
2012-01-01
... same video as a commercially-prepared video directed primarily to individuals under 18, or preceding a... the same language as that principally used in the advertisement. (2) Television video and print... required disclosures shall be used in any advertisement in any medium; nor shall any audio, video or print...
16 CFR 308.3 - Advertising of pay-per-call services.
Code of Federal Regulations, 2010 CFR
2010-01-01
... same video as a commercially-prepared video directed primarily to individuals under 18, or preceding a... the same language as that principally used in the advertisement. (2) Television video and print... required disclosures shall be used in any advertisement in any medium; nor shall any audio, video or print...
16 CFR 308.3 - Advertising of pay-per-call services.
Code of Federal Regulations, 2014 CFR
2014-01-01
... same video as a commercially-prepared video directed primarily to individuals under 18, or preceding a... the same language as that principally used in the advertisement. (2) Television video and print... required disclosures shall be used in any advertisement in any medium; nor shall any audio, video or print...
Military Review: The Professional Journal of the U.S. Army. January-February 2002
2002-02-01
Internet.”9 He accuses bin Laden of hiding maps and photos of targets and of posting instructions on sports chat rooms, pornographic bulletin boards...anything unusual. Messages can be hidden in audio, video, or still image files, with information stored in the least significant bits of a digitized file...steganography, embedding secret messages in other messages to prevent observers from suspecting anything unusual. Messages can be hidden in audio, video, or
The implementation of Project-Based Learning in courses Audio Video to Improve Employability Skills
NASA Astrophysics Data System (ADS)
Sulistiyo, Edy; Kustono, Djoko; Purnomo; Sutaji, Eddy
2018-04-01
This paper presents project-based learning (PjBL) in the Audio Video courses of the Electrical Engineering Study Programme at Universitas Negeri Surabaya, consisting of two parts: the design of an audio-video prototype and project-based assessment activities aligned with 21st-century employability skills. The purpose of this learning innovation is to apply in laboratory work what is taught in theory classes. PjBL aims to motivate students by centering instruction on problems drawn from the world of work. The learning steps include: posing the driving question, designing the project, developing a schedule, monitoring the learners and their progress, testing the results, evaluating the experience, and assessing the project and product. The results show the following levels of mastery: task design (78.6%), technical planning (39.3%), creativity (42.9%), innovation (46.4%), problem-solving skills (57.1%), communication skills (75%), oral expression (75%), searching for and understanding information (64.3%), collaborative work skills (71.4%), and classroom conduct (78.6%). In conclusion, instructors should reflect on and improve the aspects with mastery levels below 60% in their application of project-based learning in audio-video courses.
Code of Federal Regulations, 2014 CFR
2014-07-01
... AND VIDEO RECORDINGS § 142.4 Procedures. (a) Permission or licenses from copyright owners shall be obtained for public performance of copyrighted sound and video recordings. (b) Component procedures... would be considered a public performance. (c) Government audio and video duplicating equipment and...
Tape recorder effects on jitter and shimmer extraction.
Doherty, E T; Shipp, T
1988-09-01
To test for possible contamination of acoustic analyses by record/reproduce systems, five sine waves of fixed frequency and amplitude were sampled directly by a computer and simultaneously recorded on four different tape formats (audio and FM reel-to-reel, audio cassette, and video cassette using pulse code modulation). Recordings were digitized on playback and, along with the direct samples, analyzed for fundamental frequency, amplitude, jitter, and shimmer using a zero-crossing interpolation scheme. Distortion introduced by any of the data acquisition systems is negligible when extracting average fundamental frequency or average amplitude. For jitter and shimmer estimation, direct sampling or the use of a video cassette recorder with pulse code modulation is clearly superior. FM recorders, although not quite as accurate, provide a satisfactory alternative to those methods. Audio reel-to-reel recordings are marginally adequate for jitter analysis, whereas audio cassette recorders can introduce jitter and shimmer values greater than some reported values for normal talkers.
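The jitter and shimmer measures at issue can be computed from extracted cycle periods and peak amplitudes. A minimal sketch of the standard "local" definitions, assuming the period and amplitude extraction has already been done (e.g., by the zero-crossing interpolation scheme the study uses):

```python
import numpy as np

def jitter_local(periods):
    """Local jitter (%): mean absolute difference between consecutive
    cycle periods, relative to the mean period."""
    periods = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def shimmer_local(amplitudes):
    """Local shimmer (%): the same measure applied to peak amplitudes."""
    amps = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(amps))) / np.mean(amps)

# a perfectly periodic signal has zero jitter; small period perturbations
# (as an analog tape transport can introduce) raise the measured value
steady = [0.010] * 20  # 10 ms periods -> 100 Hz
wobbly = 0.010 + np.random.default_rng(1).normal(0, 1e-4, 20)
print(jitter_local(steady))  # 0.0
```

This makes the study's concern concrete: any period variation added by the recorder inflates the measured jitter, even though the average fundamental frequency is unchanged.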
NFL Films audio, video, and film production facilities
NASA Astrophysics Data System (ADS)
Berger, Russ; Schrag, Richard C.; Ridings, Jason J.
2003-04-01
The new NFL Films 200,000 sq. ft. headquarters is home for the critically acclaimed film production that preserves the NFL's visual legacy week-to-week during the football season, and is also the technical plant that processes and archives football footage from the earliest recorded media to the current network broadcasts. No other company in the country shoots more film than NFL Films, and the inclusion of cutting-edge video and audio formats demands that their technical spaces continually integrate the latest in the ever-changing world of technology. This facility houses a staggering array of acoustically sensitive spaces where music and sound are equal partners with the visual medium. Over 90,000 sq. ft. of sound-critical technical space comprises an array of sound stages, music scoring stages, audio control rooms, music writing rooms, recording studios, mixing theaters, video production control rooms, editing suites, and a screening theater. Every production control space in the building is designed to monitor and produce multichannel surround-sound audio. An overview of the architectural and acoustical design challenges encountered for each sophisticated listening, recording, viewing, editing, and sound-critical environment will be discussed.
Transmission of live laparoscopic surgery over the Internet2.
Damore, L J; Johnson, J A; Dixon, R S; Iverson, M A; Ellison, E C; Melvin, W S
1999-11-01
Video broadcasting of surgical procedures is an important tool for education, training, and consultation. Current video conferencing systems are expensive and time-consuming and require preplanning. Real-time Internet video is known for its poor quality and relies on the equipment and the speed of the connection. The Internet2, a new high-speed (up to 2,048 Mbps), large-bandwidth data network, presently connects more than 100 universities and corporations. We have successfully used the Internet2 to broadcast the first real-time, high-quality audio/video program from a live laparoscopic operation to distant points. Video output of the laparoscopic camera and audio from a wireless microphone were broadcast to distant sites using a proprietary, PC-based implementation of H.320 video conferencing over a TCP/IP network connected to the Internet2. The receiving sites participated in two-way, real-time video and audio communications and graded the quality of the signal they received. On August 25, 1998, a laparoscopic Nissen fundoplication was transmitted to Internet2 stations in Colorado, Pennsylvania, and to an Internet station in New York. On September 28 and 29, 1998, we broadcast laparoscopic operations throughout both days to the Internet2 Fall Conference in San Francisco, California. Most recently, on February 24, 1999, we transmitted a laparoscopic Heller myotomy to the Abilene Network Launch Event in Washington, DC. The Internet2 is currently able to provide the bandwidth needed for a turn-key video conferencing system with high-resolution, real-time transmission. The system could be used for a variety of teaching and educational programs for experienced surgeons, residents, and medical students.
Digital video technology, today and tomorrow
NASA Astrophysics Data System (ADS)
Liberman, J.
1994-10-01
Digital video is probably computing's fastest moving technology today. Just three years ago, the zenith of digital video technology on the PC was the successful marriage of digital text and graphics with analog audio and video by means of expensive analog laser disc players and video overlay boards. The state of the art involves two different approaches to fully digital video on computers: hardware-assisted and software-only solutions.
47 CFR 76.62 - Manner of carriage.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND... broadcast television station carried pursuant to § 76.56 shall include in its entirety the primary video... entirety the primary video, accompanying audio, and closed captioning data contained in line 21 of the...
47 CFR 76.62 - Manner of carriage.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND... broadcast television station carried pursuant to § 76.56 shall include in its entirety the primary video... entirety the primary video, accompanying audio, and closed captioning data contained in line 21 of the...
47 CFR 76.62 - Manner of carriage.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND... broadcast television station carried pursuant to § 76.56 shall include in its entirety the primary video... entirety the primary video, accompanying audio, and closed captioning data contained in line 21 of the...
47 CFR 76.62 - Manner of carriage.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND... broadcast television station carried pursuant to § 76.56 shall include in its entirety the primary video... entirety the primary video, accompanying audio, and closed captioning data contained in line 21 of the...
StreaMorph: A Case for Synthesizing Energy-Efficient Adaptive Programs Using High-Level Abstractions
2013-08-12
technique when switching from using eight cores to one core. 1. Introduction Real-time streaming of media data is growing in popularity. This includes...both capture and processing of real-time video and audio, and delivery of video and audio from servers; recent usage numbers show over 800 million...source of data, when that source is a real-time source, and it is generally not necessary to get ahead of the sink. Even with real-time sources and sinks
TV audio and video on the same channel
NASA Technical Reports Server (NTRS)
Hopkins, J. B.
1979-01-01
Transmitting technique adds audio to the video signal during the vertical blanking interval. SIVI (signal in the vertical interval) is used by TV networks and stations to transmit cuing and automatic-switching tone signals to augment automatic and manual operations. It can also be used to transmit one-way instructional information, such as bulletin alerts, program changes, and commercial-cutaway aural cues from the networks to affiliates. Additionally, it can be used as an extra sound channel for second-language transmission to bilingual stations.
ERIC Educational Resources Information Center
Allen, Keith D.; Burke, Raymond V.; Howard, Monica R.; Wallace, Dustin P.; Bowen, Scott L.
2012-01-01
We evaluated audio cuing to facilitate community employment of individuals with autism and intellectual disability. The job required promoting products in retail stores by wearing an air-inflated WalkAround[R] costume of a popular commercial character. Three adolescents, ages 16-18, were initially trained with video modeling. Audio cuing was then…
Huffman coding in advanced audio coding standard
NASA Astrophysics Data System (ADS)
Brzuchalski, Grzegorz
2012-05-01
This article presents several hardware architectures for the Advanced Audio Coding (AAC) Huffman noiseless encoder, their optimisations, and a working implementation. Much attention has been paid to minimizing the demand on hardware resources, especially memory size. The aim of the design was to produce as short a binary stream as possible within this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.
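The noiseless coding stage the article optimizes assigns shorter codewords to more frequent quantized values. A minimal software sketch of the underlying Huffman principle (note: real AAC encoders select among fixed, pre-defined spectral codebooks rather than building a tree from the data, as done here for illustration):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code table from observed symbol frequencies.
    Each heap entry is [weight, tiebreak, {symbol: code}]; merging the two
    lightest entries prepends a bit to every code in each subtree."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)  # unique tiebreaker so dicts are never compared
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], tick, merged])
        tick += 1
    return heap[0][2]

data = "aaaabbbcc d"
table = huffman_code(data)
bits = "".join(table[s] for s in data)
# frequent symbols get shorter codes, so the stream beats fixed-length coding
```

Hardware implementations like those in the article trade this dynamic tree construction for ROM-based codebook lookup, which is where the memory-size optimisations the author emphasizes come in.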
Burbank uses video camera during installation and routing of HRCS Video Cables
2012-02-01
ISS030-E-060104 (1 Feb. 2012) --- NASA astronaut Dan Burbank, Expedition 30 commander, uses a video camera in the Destiny laboratory of the International Space Station during installation and routing of video cable for the High Rate Communication System (HRCS). HRCS will allow for two additional space-to-ground audio channels and two additional downlink video channels.
47 CFR 76.66 - Satellite broadcast signal carriage.
Code of Federal Regulations, 2012 CFR
2012-10-01
... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Carriage of Television Broadcast Signals § 76.66 Satellite... satellite carrier that offers multichannel video programming distribution service in the United States to... entirety the primary video, accompanying audio, and closed captioning data contained in line 21 of the...
47 CFR 76.66 - Satellite broadcast signal carriage.
Code of Federal Regulations, 2013 CFR
2013-10-01
... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Carriage of Television Broadcast Signals § 76.66 Satellite... satellite carrier that offers multichannel video programming distribution service in the United States to... entirety the primary video, accompanying audio, and closed captioning data contained in line 21 of the...
47 CFR 76.66 - Satellite broadcast signal carriage.
Code of Federal Regulations, 2011 CFR
2011-10-01
... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Carriage of Television Broadcast Signals § 76.66 Satellite... satellite carrier that offers multichannel video programming distribution service in the United States to... entirety the primary video, accompanying audio, and closed captioning data contained in line 21 of the...
47 CFR 76.66 - Satellite broadcast signal carriage.
Code of Federal Regulations, 2014 CFR
2014-10-01
... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Carriage of Television Broadcast Signals § 76.66 Satellite... satellite carrier that offers multichannel video programming distribution service in the United States to... entirety the primary video, accompanying audio, and closed captioning data contained in line 21 of the...
Evaluation of architectures for an ASP MPEG-4 decoder using a system-level design methodology
NASA Astrophysics Data System (ADS)
Garcia, Luz; Reyes, Victor; Barreto, Dacil; Marrero, Gustavo; Bautista, Tomas; Nunez, Antonio
2005-06-01
Trends in multimedia consumer electronics, digital video and audio, aim to reach users through low-cost mobile devices connected to data broadcasting networks with limited bandwidth. An emergent broadcasting network is the digital audio broadcasting (DAB) network, which provides CD-quality audio transmission together with robustness and efficiency techniques to allow good-quality reception while in motion. This paper focuses on the system-level evaluation of different architectural options to allow low-bandwidth digital video reception over DAB, based on video compression techniques. Profiling and design space exploration techniques are applied over the ASP MPEG-4 decoder in order to find the best HW/SW partition given the application and platform constraints. An innovative SystemC-based system-level design tool, called CASSE, is being used for modelling, exploration and evaluation of different ASP MPEG-4 decoder HW/SW partitions. System-level trade-offs and quantitative data derived from this analysis are also presented in this work.
Orfanos, Stavros; Akther, Syeda Ferhana; Abdul-Basit, Muhammad; McCabe, Rosemarie; Priebe, Stefan
2017-02-10
Research has shown that interactions in group therapies for people with schizophrenia are associated with a reduction in negative symptoms. However, it is unclear which specific interactions in groups are linked with these improvements. The aims of this exploratory study were to i) develop and test the reliability of using video-annotation software to measure interactions in group therapies in schizophrenia and ii) explore the relationship between interactions in group therapies for schizophrenia with clinically relevant changes in negative symptoms. Video-annotation software was used to annotate interactions from participants selected across nine video-recorded out-patient therapy groups (N = 81). Using the Individual Group Member Interpersonal Process Scale, interactions were coded from participants who demonstrated either a clinically significant improvement (N = 9) or no change (N = 8) in negative symptoms at the end of therapy. Interactions were measured from the first and last sessions of attendance (>25 h of therapy). Inter-rater reliability between two independent raters was measured. Binary logistic regression analysis was used to explore the association between the frequency of interactive behaviors and changes in negative symptoms, assessed using the Positive and Negative Syndrome Scale. Of the 1275 statements that were annotated using ELAN, 1191 (93%) had sufficient audio and visual quality to be coded using the Individual Group Member Interpersonal Process Scale. Rater-agreement was high across all interaction categories (>95% average agreement). A higher frequency of self-initiated statements measured in the first session was associated with improvements in negative symptoms. The frequency of questions and giving advice measured in the first session of attendance was associated with improvements in negative symptoms; although this was only a trend. 
Video-annotation software can be used to reliably identify interactive behaviors in groups for schizophrenia. The results suggest that proactive communicative gestures, as assessed by the video-analysis, predict outcomes. Future research should use this novel method in larger and clinically different samples to explore which aspects of therapy facilitate such proactive communication early on in therapy.
Connors, Erin C; Chrastil, Elizabeth R; Sánchez, Jaime; Merabet, Lotfi B
2014-01-01
For profoundly blind individuals, navigating in an unfamiliar building can represent a significant challenge. We investigated the use of an audio-based, virtual environment called Audio-based Environment Simulator (AbES) that can be explored for the purposes of learning the layout of an unfamiliar, complex indoor environment. Furthermore, we compared two modes of interaction with AbES. In one group, blind participants implicitly learned the layout of a target environment while playing an exploratory, goal-directed video game. By comparison, a second group was explicitly taught the same layout following a standard route and instructions provided by a sighted facilitator. As a control, a third group interacted with AbES while playing an exploratory, goal-directed video game; however, the explored environment did not correspond to the target layout. Following interaction with AbES, a series of route navigation tasks were carried out in the virtual and physical building represented in the training environment to assess the transfer of acquired spatial information. We found that participants from both modes of interaction were able to transfer the spatial knowledge gained, as indexed by their successful route navigation performance. This transfer was not apparent in the control participants. Most notably, the game-based learning strategy was also associated with enhanced performance when participants were required to find alternate routes and shortcuts within the target building, suggesting that a ludic-based training approach may provide a more flexible mental representation of the environment. Furthermore, outcome comparisons between early and late blind individuals suggested that greater prior visual experience did not have a significant effect on overall navigation performance following training. Finally, performance did not appear to be associated with other factors of interest such as age, gender, and verbal memory recall. 
We conclude that the highly interactive and immersive exploration of the virtual environment greatly engages a blind user to develop skills akin to positive near transfer of learning. Learning through a game play strategy appears to confer certain behavioral advantages with respect to how spatial information is acquired and ultimately manipulated for navigation.
Listeners' expectation of room acoustical parameters based on visual cues
NASA Astrophysics Data System (ADS)
Valente, Daniel L.
Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audio-visual study, in which participants are instructed to make spatial congruency and quantity judgments in dynamic cross-modal environments. The results of these psychophysical tests suggest the importance of consilient audio-visual presentation to the legibility of an auditory scene. Several studies have looked into audio-visual interaction in room perception in recent years, but these studies rely on static images, speech signals, or photographs alone to represent the visual scene. Building on these studies, the aim is to propose a testing method that uses monochromatic compositing (blue-screen technique) to position a studio recording of a musical performance in a number of virtual acoustical environments and ask subjects to assess these environments. In the first experiment of the study, video footage was taken from five rooms varying in physical size from a small studio to a small performance hall. Participants were asked to perceptually align two distinct acoustical parameters---early-to-late reverberant energy ratio and reverberation time---of two solo musical performances in five contrasting visual environments according to their expectations of how the room should sound given its visual appearance. In the second experiment in the study, video footage shot from four different listening positions within a general-purpose space was coupled with sounds derived from measured binaural impulse responses (IRs). The relationship between the presented image, sound, and virtual receiver position was examined. It was found that many visual cues caused different perceived events of the acoustic environment. This included the visual attributes of the space in which the performance was located as well as the visual attributes of the performer. 
The addressed visual makeup of the performer included: (1) an actual video of the performance, (2) a surrogate image of the performance, for example a loudspeaker's image reproducing the performance, (3) no visual image of the performance (empty room), or (4) a multi-source visual stimulus (actual video of the performance coupled with two images of loudspeakers positioned to the left and right of the performer). For this experiment, perceived auditory events were measured in terms of two subjective spatial metrics: Listener Envelopment (LEV) and Apparent Source Width (ASW). These metrics were hypothesized to be dependent on the visual imagery of the presented performance. Data were also collected as participants matched direct and reverberant sound levels for the presented audio-visual scenes. In the final experiment, participants judged spatial expectations of an ensemble of musicians presented in the five physical spaces from Experiment 1. Supporting data were accumulated in two stages. First, participants were given an audio-visual matching test, in which they were instructed to align the auditory width of a performing ensemble to a varying set of audio and visual cues. In the second stage, a conjoint analysis design paradigm was explored to extrapolate the relative magnitude of the explored audio-visual factors in affecting three assessed response criteria: Congruency (the perceived match-up of the auditory and visual cues in the assessed performance), ASW, and LEV. Results show that both auditory and visual factors affect the collected responses, and that the two sensory modalities interact in distinct ways. This study reveals participant resiliency in the presence of forced auditory-visual mismatch: participants are able to adjust the acoustic component of the cross-modal environment in a statistically similar way despite randomized starting values for the monitored parameters.
Subjective results of the experiments are presented along with objective measurements for verification.
Secure video communications system
Smith, Robert L.
1991-01-01
A secure video communications system having at least one command network formed by a combination of subsystems: a video subsystem, an audio subsystem, a communications subsystem, and a control subsystem. The video communications system is window driven and mouse operated, and allows for secure point-to-point real-time teleconferencing.
Going Pro: Schools Embrace Video Production and Videoconferencing
ERIC Educational Resources Information Center
Stearns, Jared
2006-01-01
K-12 schools are broadening their curriculum offerings to include audio, video, and other multimodal styles of communication. A combination of savvy digital natives, affordable software, and online tutoring has created a perfect opportunity to integrate professional level video and videoconferencing into curricula. Educators are also finding…
Shenai, Mahesh B; Tubbs, R Shane; Guthrie, Barton L; Cohen-Gadol, Aaron A
2014-08-01
The shortage of surgeons compels the development of novel technologies that geographically extend the capabilities of individual surgeons and enhance surgical skills. The authors have developed "Virtual Interactive Presence" (VIP), a platform that allows remote participants to simultaneously view each other's visual field, creating a shared field of view for real-time surgical telecollaboration. The authors demonstrate the capability of VIP to facilitate long-distance telecollaboration during cadaveric dissection. Virtual Interactive Presence consists of local and remote workstations with integrated video capture devices and video displays. Each workstation mutually connects via commercial teleconferencing devices, allowing worldwide point-to-point communication. Software composites the local and remote video feeds, displaying a hybrid perspective to each participant. For demonstration, local and remote VIP stations were situated in Indianapolis, Indiana, and Birmingham, Alabama, respectively. A suboccipital craniotomy and microsurgical dissection of the pineal region was performed in a cadaveric specimen using VIP. Task and system performance were subjectively evaluated, while additional video analysis was used for objective assessment of delay and resolution. Participants at both stations were able to visually and verbally interact while identifying anatomical structures, guiding surgical maneuvers, and discussing overall surgical strategy. Video analysis of 3 separate video clips yielded a mean compositing delay of 760 ± 606 msec (when compared with the audio signal). Image resolution was adequate to visualize complex intracranial anatomy and provide interactive guidance. Virtual Interactive Presence is a feasible paradigm for real-time, long-distance surgical telecollaboration. Delay, resolution, scaling, and registration are parameters that require further optimization, but are within the realm of current technology. 
The paradigm potentially enables remotely located experts to mentor less experienced personnel located at the surgical site, with applications in surgical training programs, remote proctoring for proficiency, and expert support for rural settings and across different countries.
Meet David, Our Teacher's Helper.
ERIC Educational Resources Information Center
Newell, William; And Others
1984-01-01
DAVID, Dynamic Audio Video Instructional Device, is composed of a conventional videotape recorder, a microcomputer, and a video controller, and has been successfully used for speech reading and sign language instruction with deaf students. (CL)
Method of assessing parent-child grocery store purchasing interactions using a micro-camcorder.
Calloway, Eric E; Roberts-Gray, Cindy; Ranjit, Nalini; Sweitzer, Sara J; McInnis, Katie A; Romo-Palafox, Maria J; Briley, Margaret E
2014-12-01
The purpose of this study was to assess the validity of using participant worn micro-camcorders (PWMC) to collect data on parent-child food and beverage purchasing interactions in the grocery store. Parent-child dyads (n = 32) were met at their usual grocery store and shopping time. Parents were mostly Caucasian (n = 27, 84.4%), mothers (n = 30, 93.8%). Children were 2-6 years old with 15 girls and 17 boys. A micro-camcorder was affixed to a baseball style hat worn by the child. The dyad proceeded to shop while being shadowed by an in-person observer. Video/audio data were coded for behavioral and environmental variables. The PWMC method was compared to in-person observation to assess sensitivity and relative validity for measuring parent-child interactions, and compared to receipt data to assess criterion validity for evaluating purchasing decisions. Inter-rater reliability for coding video/audio data collected using the PWMC method was also assessed. The PWMC method proved to be more sensitive than in-person observation revealing on average 1.4 (p < 0.01) more parent-child food and beverage purchasing interactions per shopping trip. Inter-rater reliability for coding PWMC data showed moderate to almost perfect agreement (Cohen's kappa = 0.461-0.937). The PWMC method was significantly correlated with in-person observation for measuring occurrences of parent-child food purchasing interactions (rho = 0.911, p < 0.01) and characteristics of those interactions (rho = 0.345-0.850, p < 0.01). Additionally, there was substantial agreement between the PWMC method and receipt data for measuring purchasing decisions (Cohen's kappa = 0.787). The PWMC method proved to be well suited to assess parent-child food and beverage purchasing interactions in the grocery store. Copyright © 2014 Elsevier Ltd. All rights reserved.
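The reliability figures above are Cohen's kappa values; as a quick illustration of how such chance-corrected agreement is computed, here is a minimal sketch in Python with hypothetical coder labels (not data from the study):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected agreement if the two raters coded independently
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical codes for ten parent-child purchasing interactions
a = ["request", "refusal", "request", "purchase", "purchase",
     "request", "refusal", "purchase", "request", "purchase"]
b = ["request", "refusal", "request", "purchase", "refusal",
     "request", "refusal", "purchase", "request", "purchase"]
print(round(cohens_kappa(a, b), 3))  # → 0.848
```

A value of 1.0 indicates perfect agreement; the study's range of 0.461-0.937 spans moderate to almost-perfect agreement on the usual benchmarks.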
The challenges of archiving networked-based multimedia performances (Performance cryogenics)
NASA Astrophysics Data System (ADS)
Cohen, Elizabeth; Cooperstock, Jeremy; Kyriakakis, Chris
2002-11-01
Music archives and libraries have cultural preservation at the core of their charters. New forms of art often race ahead of the preservation infrastructure. The ability to stream multiple synchronized, ultra-low-latency streams of audio and video across a continent for a distributed interactive performance, such as music and dance with high-definition video and multichannel audio, raises a series of challenges for the architects of digital libraries and those responsible for cultural preservation. The archiving of such performances presents numerous challenges that go beyond simply recording each stream. Case studies of storage and subsequent retrieval issues for Internet2 collaborative performances are discussed. The development of shared reality and immersive environments raises questions such as: What constitutes an archived performance that occurs across a network (in multiple spaces over time)? What are the families of metadata necessary to reconstruct this virtual world in another venue or era? For example, if the network exhibited changes in latency, the performers most likely adapted; in a future recreation, the latency will most likely be completely different. We discuss the parameters of immersive environment acquisition and rendering, network architectures, software architecture, musical/choreographic scores, and environmental acoustics that must be considered to address this problem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
- PNNL, Harold Trease
2012-10-10
ASSA is a software application that processes binary data into summarized index tables that can be used to organize features contained within the data. ASSA's index tables can also be used to search for user-specified features. ASSA is designed to organize and search for patterns in unstructured binary data streams or archives, such as video, images, audio, and network traffic. In essence, ASSA is a general-purpose search engine for locating arbitrary patterns in binary data streams. It has uses in video analytics, image analysis, audio analysis, searching hard drives, monitoring network traffic, etc.
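The idea of summarizing a binary stream into index tables that support later feature search can be sketched in a few lines. This toy version (fixed-width byte windows, an assumption not taken from the record above) indexes the offset of every window and uses the index to seed exact pattern matches:

```python
from collections import defaultdict

WINDOW = 4  # bytes per indexed feature (illustrative choice)

def build_index(data: bytes, window: int = WINDOW):
    """Summarize a binary stream as an index table: window -> offsets."""
    index = defaultdict(list)
    for i in range(len(data) - window + 1):
        index[data[i:i + window]].append(i)
    return index

def search(index, pattern: bytes, data: bytes, window: int = WINDOW):
    """Find all offsets where `pattern` occurs, seeded by the index."""
    seeds = index.get(pattern[:window], [])
    return [i for i in seeds if data[i:i + len(pattern)] == pattern]

stream = b"...header...SYNCframe1...SYNCframe2..."
idx = build_index(stream)
print(search(idx, b"SYNCframe", stream))  # → [12, 25]
```

The same index serves both purposes the record describes: it organizes the features present in the data, and it answers searches without rescanning the whole stream.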
A microcomputer interface for a digital audio processor-based data recording system.
Croxton, T L; Stump, S J; Armstrong, W M
1987-10-01
An inexpensive interface is described that performs direct transfer of digitized data from the digital audio processor and video cassette recorder based data acquisition system designed by Bezanilla (1985, Biophys. J., 47:437-441) to an IBM PC/XT microcomputer. The FORTRAN callable software that drives this interface is capable of controlling the video cassette recorder and starting data collection immediately after recognition of a segment of previously collected data. This permits piecewise analysis of long intervals of data that would otherwise exceed the memory capability of the microcomputer.
Kushniruk, Andre W; Borycki, Elizabeth M
2015-01-01
Usability has been identified as a key issue in health informatics. Worldwide numerous projects have been carried out in an attempt to increase and optimize health system usability. Usability testing, involving observing end users interacting with systems, has been widely applied and numerous publications have appeared describing such studies. However, to date, fewer works have been published describing methodological approaches to analyzing the rich data stream that results from usability testing. This includes analysis of video, audio and screen recordings. In this paper we describe our work in the development and application of a coding scheme for analyzing the usability of health information systems. The phases involved in such analyses are described.
Supervisory Control of Unmanned Vehicles
2010-04-01
than-ideal video quality (Chen et al., 2007; Chen and Thropp, 2007). Simpson et al. (2004) proposed using a spatial audio display to augment UAV operators' SA and discussed its utility for each of the three SA levels. They recommended that both visual and spatial audio information should be presented concurrently. They also suggested that presenting the audio information spatially may enhance UAV operators' sense of presence (i.e
NASA Astrophysics Data System (ADS)
Karam, Walid; Mokbel, Chafic; Greige, Hanna; Chollet, Gerard
2006-05-01
A GMM based audio visual speaker verification system is described, and an Active Appearance Model with a linear speaker transformation system is used to evaluate the robustness of the verification. An Active Appearance Model (AAM) is used to automatically locate and track a speaker's face in a video recording. A Gaussian Mixture Model (GMM) based classifier (BECARS) is used for face verification. GMM training and testing is accomplished on DCT based extracted features of the detected faces. On the audio side, speech features are extracted and used for speaker verification with the GMM based classifier. Fusion of both audio and video modalities for audio visual speaker verification is compared with face verification and speaker verification systems. To improve the robustness of the multimodal biometric identity verification system, an audio visual imposture system is envisioned. It consists of an automatic voice transformation technique that an impostor may use to assume the identity of an authorized client. Features of the transformed voice are then combined with the corresponding appearance features and fed into the GMM based system BECARS for training. An attempt is made to increase the acceptance rate of the impostor and to analyze the robustness of the verification system. Experiments are being conducted on the BANCA database, with a prospect of experimenting on the PDAtabase newly developed within the scope of the SecurePhone project.
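As a rough illustration of GMM-based verification scoring (a generic sketch, not the BECARS implementation): a client model is compared against a background "world" model, and the identity claim is accepted when the average log-likelihood ratio of the test features exceeds a threshold. All models and feature data below are synthetic.

```python
import numpy as np

def gmm_logpdf(x, weights, means, variances):
    """Log-density of points x under a diagonal-covariance GMM."""
    x = np.atleast_2d(x)                      # (n, d)
    comp = []
    for w, m, v in zip(weights, means, variances):
        log_norm = -0.5 * np.sum(np.log(2 * np.pi * v))
        sq = -0.5 * np.sum((x - m) ** 2 / v, axis=1)
        comp.append(np.log(w) + log_norm + sq)
    return np.logaddexp.reduce(comp, axis=0)  # mixture via log-sum-exp

def verify(features, client, world, threshold=0.0):
    """Accept when the mean log-likelihood ratio client/world beats threshold."""
    llr = np.mean(gmm_logpdf(features, *client) - gmm_logpdf(features, *world))
    return llr > threshold

rng = np.random.default_rng(0)
# Toy 2-component client model and 1-component world model (2-d features)
client = ([0.5, 0.5], [np.zeros(2), np.ones(2)], [np.ones(2), np.ones(2)])
world = ([1.0], [5 * np.ones(2)], [np.ones(2)])
genuine = rng.normal(0.5, 1.0, size=(50, 2))   # features near the client model
print(verify(genuine, client, world))           # → True
```

A real system would train the mixtures from enrollment data (e.g. via EM) and calibrate the threshold on held-out trials; the scoring rule itself is as above.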
Photo-acoustic and video-acoustic methods for sensing distant sound sources
NASA Astrophysics Data System (ADS)
Slater, Dan; Kozacik, Stephen; Kelmelis, Eric
2017-05-01
Long range telescopic video imagery of distant terrestrial scenes, aircraft, rockets and other aerospace vehicles can be a powerful observational tool. But what about the associated acoustic activity? A new technology, Remote Acoustic Sensing (RAS), may provide a method to remotely listen to the acoustic activity near these distant objects. Local acoustic activity sometimes weakly modulates the ambient illumination in a way that can be remotely sensed. RAS is a new type of microphone that separates an acoustic transducer into two spatially separated components: 1) a naturally formed in situ acousto-optic modulator (AOM) located within the distant scene and 2) a remote sensing readout device that recovers the distant audio. These two elements are passively coupled over long distances at the speed of light by naturally occurring ambient light energy or other electromagnetic fields. Stereophonic, multichannel and acoustic beamforming are all possible using RAS techniques, and when combined with high-definition video imagery it can help to provide a more cinema-like immersive viewing experience. A practical implementation of a remote acousto-optic readout device can be a challenging engineering problem. The acoustic influence on the optical signal is generally weak and often has a strong bias term. The optical signal is further degraded by atmospheric seeing turbulence. In this paper, we consider two fundamentally different optical readout approaches: 1) a low pixel count photodiode based RAS photoreceiver and 2) audio extraction directly from a video stream. Most of our RAS experiments to date have used the first method for reasons of performance and simplicity. But there are potential advantages to extracting audio directly from a video stream. These advantages include the straightforward ability to work with multiple AOMs (useful for acoustic beamforming), simpler optical configurations, and a potential ability to use certain preexisting video recordings.
However, doing so requires overcoming significant limitations typically including much lower sample rates, reduced sensitivity and dynamic range, more expensive video hardware, and the need for sophisticated video processing. The ATCOM real time image processing software environment provides many of the needed capabilities for researching video-acoustic signal extraction. ATCOM currently is a powerful tool for the visual enhancement of atmospheric turbulence distorted telescopic views. In order to explore the potential of acoustic signal recovery from video imagery we modified ATCOM to extract audio waveforms from the same telescopic video sources. In this paper, we demonstrate and compare both readout techniques for several aerospace test scenarios to better show where each has advantages.
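The second readout approach, extracting audio directly from a video stream, can be illustrated with a toy simulation: treat each frame's mean brightness as one audio sample, remove the strong static bias the abstract mentions, and look for the acoustic modulation in the spectrum. The frame rate, tone frequency, and noise level below are invented for the demonstration.

```python
import numpy as np

FPS = 2400          # high-speed camera rate; it sets the audio sample rate
TONE_HZ = 240       # acoustic tone weakly modulating scene brightness

# Synthesize per-frame mean brightness: a strong static bias, a weak
# acoustic modulation, and sensor noise.
t = np.arange(FPS) / FPS                                 # one second of frames
frames = 100.0 + 0.5 * np.sin(2 * np.pi * TONE_HZ * t)
frames += np.random.default_rng(1).normal(0, 0.05, FPS)  # sensor noise

# "Readout": subtract the bias, then locate the dominant frequency.
signal = frames - frames.mean()
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(FPS, d=1 / FPS)
print(freqs[np.argmax(spectrum)])                        # → 240.0
```

This also makes the stated limitation concrete: the recoverable audio bandwidth is capped at half the frame rate, which is why video-based readout typically has much lower sample rates than a photodiode receiver.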
Losing the Red Pen: Video Grading Feedback in Distance and Blended Learning Writing Courses
ERIC Educational Resources Information Center
Jones, Lisa Ann
2014-01-01
This paper will give a step-by-step demonstration on how to create MP4 files to video-grade undergraduate writing assignments. The process of using prepared rubrics to guide video and audio feedback will be presented and examples shown. This assessment method provides students with personalized video-feedback as a re-usable learning object. The…
NASA Astrophysics Data System (ADS)
Hasan, Taufiq; Bořil, Hynek; Sangwan, Abhijeet; Hansen, John H. L.
2013-12-01
The ability to detect and organize `hot spots' representing areas of excitement within video streams is a challenging research problem when techniques rely exclusively on video content. A generic method for sports video highlight selection is presented in this study which leverages both video/image structure as well as audio/speech properties. Processing begins by partitioning the video into small segments and extracting several multi-modal features from each segment. Excitability is computed based on the likelihood of the segmental features residing in certain regions of their joint probability density function space which are considered both exciting and rare. The proposed measure is used to rank order the partitioned segments to compress the overall video sequence and produce a contiguous set of highlights. Experiments are performed on baseball videos based on signal processing advancements for excitement assessment in the commentators' speech, audio energy, slow motion replay, scene cut density, and motion activity as features. Detailed analysis of the correlation between user excitability and various speech production parameters is conducted, and an effective scheme is designed to estimate the excitement level of the commentator's speech from the sports videos. Subjective evaluation of excitability and ranking of video segments demonstrate a higher correlation with the proposed measure compared to well-established techniques, indicating the effectiveness of the overall approach.
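A simplified stand-in for the excitability measure (not the paper's exact joint-pdf formulation): fit a Gaussian to the segmental features, and score a segment highly when it is both rare (large Mahalanobis distance) and elevated in every feature dimension. The feature data here are synthetic.

```python
import numpy as np

def excitability(features):
    """Score segments as exciting when their multi-modal features are both
    rare (far from the fitted Gaussian's mean) and elevated (above average),
    a simplified stand-in for the paper's joint-pdf criterion."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    inv = np.linalg.inv(cov)
    d = features - mu
    rarity = np.einsum('ij,jk,ik->i', d, inv, d)    # squared Mahalanobis
    elevated = (features > mu).all(axis=1)           # high-energy side only
    return np.where(elevated, rarity, 0.0)

rng = np.random.default_rng(2)
# columns: commentator pitch, audio energy, motion activity (synthetic)
segments = rng.normal(0, 1, size=(200, 3))
segments[17] = [4.0, 4.5, 3.8]                       # an injected 'hot spot'
scores = excitability(segments)
print(int(np.argmax(scores)))                        # → 17
```

Ranking segments by this score and keeping the top fraction mirrors the compression step described above: the highlight reel is the contiguous rendering of the highest-scoring segments.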
A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos
Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian
2016-01-01
Objective Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today’s keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users’ information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. Materials and Methods The authors collected and processed around 25 000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for our experiment. For each video, its semantic clues are first extracted automatically through computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively. Results The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of a modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from and content exploration among massive biomedical OER videos. 
Conclusion Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable information, as well as intuitively and conveniently preview essential content of a single or a collection of videos. PMID:26335986
Topper, Nicholas C.; Burke, S.N.; Maurer, A.P.
2014-01-01
BACKGROUND Current methods for aligning neurophysiology and video data are either prepackaged, requiring the additional purchase of a software suite, or use a blinking LED with a stationary pulse-width and frequency. These methods lack significant user interface for adaptation, are expensive, or risk a misalignment of the two data streams. NEW METHOD A cost-effective means to obtain high-precision alignment of behavioral and neurophysiological data is obtained by generating an audio-pulse embedded with two domains of information, a low-frequency binary-counting signal and a high, randomly changing frequency. This enabled the derivation of temporal information while maintaining enough entropy in the system for algorithmic alignment. RESULTS The sample-to-frame index constructed using the audio input correlation method described in this paper enables video and data acquisition to be aligned at a sub-frame level of precision. COMPARISON WITH EXISTING METHODS Traditionally, a synchrony pulse is recorded on-screen via a flashing diode. The higher sampling rate of the audio input of the camcorder enables the timing of an event to be detected with greater precision. CONCLUSIONS While on-line analysis and synchronization using specialized equipment may be the ideal situation in some cases, the method presented in the current paper is a viable, low-cost alternative and gives the flexibility to interface with custom off-line analysis tools. Moreover, the ease of constructing and implementing this set-up makes it applicable to a wide variety of applications that require video recording. PMID:25256648
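The alignment idea, an audio pulse with enough entropy (here, randomly changing frequencies) that cross-correlation recovers the offset between the two recordings unambiguously, can be sketched as follows; the sample rate, chunk count, and delay are invented for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sync signal: randomly changing frequencies give the entropy needed
# for an unambiguous correlation peak, as the record above describes.
RATE = 8000
tones = rng.uniform(1000, 3000, size=20)        # 20 random-frequency chunks
chunk = RATE // 20
sync = np.concatenate([np.sin(2 * np.pi * f * np.arange(chunk) / RATE)
                       for f in tones])

# The camcorder's audio track records the same pulse, delayed and noisy.
DELAY = 1234                                    # samples, unknown to the aligner
track = np.concatenate([np.zeros(DELAY), sync, np.zeros(500)])
track += rng.normal(0, 0.1, track.size)

# Recover the offset by cross-correlation against the known sync signal.
corr = np.correlate(track, sync, mode='valid')
print(int(np.argmax(corr)))                     # → 1234
```

Because the camcorder's audio channel samples far faster than its frame rate, the recovered offset is resolved to a fraction of a frame, which is the sub-frame precision the record reports.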
The Development, Test, and Evaluation of Three Pilot Performance Reference Scales.
ERIC Educational Resources Information Center
Horner, Walter R.; And Others
A set of pilot performance reference scales was developed based upon airborne Audio-Video Recording (AVR) of student performance in T-37 undergraduate Pilot Training. After selection of the training maneuvers to be studied, video tape recordings of the maneuvers were selected from video tape recordings already available from a previous research…
CPFP Video | Cancer Prevention Fellowship Program
The Cancer Prevention Fellowship Program (CPFP) trains future leaders in the field of cancer prevention and control. This video will highlight unique features of the CPFP through testimonials from current fellows and alumni, remarks from the director, and reflections from the Director of the Division of Cancer Prevention, NCI. Audio described version of the CPFP video
Learning Sociolinguistically Appropriate Language through the Video Drama "Connect with English"
ERIC Educational Resources Information Center
Hwang, Caroline C.
2005-01-01
Video provides (1) simultaneous audio/visual input, and (2) complete and contextualized conversations, and thus proves to be a rich vehicle in foreign language instruction. The video drama "Connect with English" (a.k.a. "Rebecca's Dream"), created to promote English language learning, is particularly outstanding in that it contains a captivating…
Video Streaming in Online Learning
ERIC Educational Resources Information Center
Hartsell, Taralynn; Yuen, Steve Chi-Yin
2006-01-01
The use of video in teaching and learning is a common practice in education today. As learning online becomes more of a common practice in education, streaming video and audio will play a bigger role in delivering course materials to online learners. This form of technology brings courses alive by allowing online learners to use their visual and…
BDVC (Bimodal Database of Violent Content): A database of violent audio and video
NASA Astrophysics Data System (ADS)
Rivera Martínez, Jose Luis; Mijes Cruz, Mario Humberto; Rodríguez Vázqu, Manuel Antonio; Rodríguez Espejo, Luis; Montoya Obeso, Abraham; García Vázquez, Mireya Saraí; Ramírez Acosta, Alejandro Álvaro
2017-09-01
Nowadays there is a trend towards the use of unimodal databases for multimedia content description, organization, and retrieval applications of a single type of content such as text, voice, or images; bimodal databases, by contrast, allow two different types of content, such as audio-video or image-text, to be associated semantically. The generation of a bimodal audio-video database implies the creation of a connection between the multimedia content through the semantic relation that associates the actions of both types of information. This paper describes in detail the characteristics and methodology used for the creation of the bimodal database of violent content; the semantic relationship is established by the proposed concepts that describe the audiovisual information. The use of bimodal databases in applications related to audiovisual content processing allows an increase in semantic performance if and only if these applications process both types of content. This bimodal database contains 580 annotated audiovisual segments, with a duration of 28 minutes, divided into 41 classes. Bimodal databases are a tool in the generation of applications for the semantic web.
Wang, Nancy X. R.; Olson, Jared D.; Ojemann, Jeffrey G.; Rao, Rajesh P. N.; Brunton, Bingni W.
2016-01-01
Fully automated decoding of human activities and intentions from direct neural recordings is a tantalizing challenge in brain-computer interfacing. Implementing Brain Computer Interfaces (BCIs) outside carefully controlled experiments in laboratory settings requires adaptive and scalable strategies with minimal supervision. Here we describe an unsupervised approach to decoding neural states from naturalistic human brain recordings. We analyzed continuous, long-term electrocorticography (ECoG) data recorded over many days from the brain of subjects in a hospital room, with simultaneous audio and video recordings. We discovered coherent clusters in high-dimensional ECoG recordings using hierarchical clustering and automatically annotated them using speech and movement labels extracted from audio and video. To our knowledge, this represents the first time techniques from computer vision and speech processing have been used for natural ECoG decoding. Interpretable behaviors were decoded from ECoG data, including moving, speaking and resting; the results were assessed by comparison with manual annotation. Discovered clusters were projected back onto the brain revealing features consistent with known functional areas, opening the door to automated functional brain mapping in natural settings. PMID:27148018
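The discover-then-annotate pattern, clustering the neural recordings without supervision and then labeling each cluster with the majority audio/video annotation, can be sketched on synthetic data. The paper uses hierarchical clustering; a tiny k-means stands in for it here, and the three behavioral states and feature values are invented:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Tiny k-means; a stand-in for the paper's hierarchical clustering,
    since the discover-then-annotate pattern is the same."""
    centers = X[::len(X) // k][:k]           # spread deterministic init
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels

rng = np.random.default_rng(4)
# Synthetic high-dimensional 'ECoG features' from three behavioral states.
states = {'rest': 0.0, 'speak': 5.0, 'move': 10.0}
X = np.vstack([rng.normal(c, 0.5, size=(40, 8)) for c in states.values()])
audio_video_labels = np.repeat(list(states), 40)   # from the A/V recordings

clusters = kmeans(X, k=3)
# Annotate each discovered cluster with its majority audio/video label.
for j in range(3):
    names, counts = np.unique(audio_video_labels[clusters == j],
                              return_counts=True)
    print(j, names[np.argmax(counts)])
```

The clustering step never sees the labels; they are used only afterward, which is what lets the approach scale to naturalistic recordings with minimal supervision.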
Intelligent keyframe extraction for video printing
NASA Astrophysics Data System (ADS)
Zhang, Tong
2004-10-01
Nowadays most digital cameras have the functionality of taking short video clips, with the length of video ranging from several seconds to a couple of minutes. The purpose of this research is to develop an algorithm which extracts an optimal set of keyframes from each short video clip so that the user could obtain proper video frames to print out. In current video printing systems, keyframes are normally obtained by evenly sampling the video clip over time. Such an approach, however, may not reflect highlights or regions of interest in the video. Keyframes derived in this way may also be improper for video printing in terms of either content or image quality. In this paper, we present an intelligent keyframe extraction approach to derive an improved keyframe set by performing semantic analysis of the video content. For a video clip, a number of video and audio features are analyzed to first generate a candidate keyframe set. These features include accumulative color histogram and color layout differences, camera motion estimation, moving object tracking, face detection and audio event detection. Then, the candidate keyframes are clustered and evaluated to obtain a final keyframe set. The objective is to automatically generate a limited number of keyframes to show different views of the scene; to show different people and their actions in the scene; and to tell the story in the video shot. Moreover, frame extraction for video printing, which is a rather subjective problem, is considered in this work for the first time, and a semi-automatic approach is proposed.
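One ingredient mentioned above, selecting candidate keyframes from color-histogram differences between consecutive frames, can be sketched as follows. The frames, bin count, and threshold are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def histogram(frame, bins=8):
    # Normalized intensity histogram of a frame with values in [0, 1].
    h, _ = np.histogram(frame, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def candidate_keyframes(frames, threshold=0.2):
    """Indices where the L1 histogram difference to the previous frame is large."""
    hists = [histogram(f) for f in frames]
    return [i for i in range(1, len(frames))
            if np.abs(hists[i] - hists[i - 1]).sum() > threshold]

rng = np.random.default_rng(1)
# Three synthetic "shots" at different brightness levels, with slight noise.
frames = [np.clip(np.full((16, 16), lvl) + rng.normal(0, 0.02, (16, 16)), 0.0, 1.0)
          for lvl in [0.2] * 10 + [0.6] * 10 + [0.9] * 10]
cands = candidate_keyframes(frames)
```

The candidates land exactly on the two shot boundaries; a real system would then cluster and score these candidates using the motion, face, and audio cues the paper describes.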
Hamdan, Jihad M; Al-Hawamdeh, Rose Fowler
2018-04-10
This empirical study examines the extent to which 'face', i.e. audiovisual dialogue, affects the listening comprehension of advanced Jordanian EFL learners in a TOEFL-like test, as opposed to its absence (i.e. a purely audio test), which is the current norm in many English language proficiency tests, including but not limited to TOEFL iBT, TOEIC and academic IELTS. Through an online experiment, 60 Jordanian postgraduate linguistics and English literature students (advanced EFL learners) at the University of Jordan sat for two listening tests (simulating English proficiency tests): one purely audio [i.e. without any face, including any visuals such as motion as well as still pictures], and one audiovisual/video. The results clearly show that the inclusion of visuals enhances subjects' performance in listening tests. It is concluded that since the aim of English proficiency tests such as TOEFL iBT is to qualify or disqualify subjects to work and study in western English-speaking countries, the exclusion of visuals is unfounded. In actuality, most natural interaction includes visibility of the interlocutors involved, and hence test takers who sit purely audio proficiency tests in English or any other language are placed at a disadvantage.
The Audio-Visual Marketing Handbook for Independent Schools.
ERIC Educational Resources Information Center
Griffith, Tom
This how-to booklet offers specific advice on producing video or slide/tape programs for marketing independent schools. Five chapters present guidelines for various stages in the process: (1) Audio-Visual Marketing in Context (aesthetics and economics of audiovisual marketing); (2) A Question of Identity (identifying the audience and deciding on…
Audio Visual Technology and the Teaching of Foreign Languages.
ERIC Educational Resources Information Center
Halbig, Michael C.
Skills in comprehending the spoken language source are becoming increasingly important due to the audio-visual orientation of our culture. It would seem natural, therefore, to adjust the learning goals and environment accordingly. The video-cassette machine is an ideal means for creating this learning environment and developing the listening…
NASA Astrophysics Data System (ADS)
Pallone, Arthur
Necessity often leads to inspiration. Such was the case when a traditional amplifier quit working during the collection of an alpha particle spectrum. I had a $15 battery-powered audio amplifier in my box of project electronics, so I connected it between the preamplifier and the multichannel analyzer. The alpha particle spectrum that appeared on the computer screen matched expectations even without correcting for impedance mismatches. Encouraged by this outcome, I have begun to systematically replace each of the parts in a traditional charged particle spectrometer with audio and video components available through consumer electronics stores, with the goal of producing an inexpensive charged particle spectrometer for use in education and research. Hopefully my successes, setbacks, and results to date described in this presentation will inform and inspire others.
Code of Federal Regulations, 2010 CFR
2010-01-01
... carriers must use an equivalent non-video alternative for transmitting the briefing to passengers with... audio-visual displays played on aircraft for informational purposes that were created under your control...
Code of Federal Regulations, 2011 CFR
2011-01-01
... carriers must use an equivalent non-video alternative for transmitting the briefing to passengers with... audio-visual displays played on aircraft for informational purposes that were created under your control...
Code of Federal Regulations, 2014 CFR
2014-01-01
... carriers must use an equivalent non-video alternative for transmitting the briefing to passengers with... audio-visual displays played on aircraft for informational purposes that were created under your control...
Data streaming in telepresence environments.
Lamboray, Edouard; Würmlin, Stephan; Gross, Markus
2005-01-01
In this paper, we discuss data transmission in telepresence environments for collaborative virtual reality applications. We analyze data streams in the context of networked virtual environments and classify them according to their traffic characteristics. Special emphasis is put on geometry-enhanced (3D) video. We review architectures for real-time 3D video pipelines and derive theoretical bounds on the minimal system latency as a function of the transmission and processing delays. Furthermore, we discuss bandwidth issues of differential update coding for 3D video. In our telepresence system-the blue-c-we use a point-based 3D video technology which allows for differentially encoded 3D representations of human users. While we discuss the considerations which lead to the design of our three-stage 3D video pipeline, we also elucidate some critical implementation details regarding decoupling of acquisition, processing and rendering frame rates, and audio/video synchronization. Finally, we demonstrate the communication and networking features of the blue-c system in its full deployment. We show how the system can possibly be controlled to face processing or networking bottlenecks by adapting the multiple system components like audio, application data, and 3D video.
Automatic violence detection in digital movies
NASA Astrophysics Data System (ADS)
Fischer, Stephan
1996-11-01
Research on computer-based recognition of violence is scant. We are working on the automatic recognition of violence in digital movies, a first step towards the goal of a computer-assisted system capable of protecting children against TV programs containing a great deal of violence. In the video domain, collision detection and model mapping are run to locate human figures, while in the audio domain, fingerprints are created and compared to find certain events. This article centers on the recognition of fist-fights in the video domain and on the recognition of shots, explosions and cries in the audio domain.
pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis.
Giannakopoulos, Theodoros
2015-01-01
Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automation and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures, including feature extraction, classification of audio signals, supervised and unsupervised segmentation, and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has already been used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation, and health applications (e.g. monitoring eating habits). The feedback provided from all these particular audio applications has led to practical enhancement of the library. PMID:26656189
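To give a flavor of the short-term analysis such a library performs, here is a generic framing-plus-features sketch in plain NumPy. This is not pyAudioAnalysis's own API; the window and step sizes, and the choice of energy and zero-crossing rate as features, are illustrative assumptions:

```python
import numpy as np

def short_term_features(signal, fs, win=0.050, step=0.025):
    """Slide a window over the signal; per frame, compute energy and
    zero-crossing rate (two classic short-term audio features)."""
    w, s = int(win * fs), int(step * fs)
    feats = []
    for start in range(0, len(signal) - w + 1, s):
        frame = signal[start:start + w]
        energy = float(np.mean(frame ** 2))
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
        feats.append((energy, zcr))
    return np.array(feats)  # shape: (num_frames, 2)

fs = 16000
t = np.arange(fs) / fs                    # 1 second of audio
tone = 0.5 * np.sin(2 * np.pi * 440 * t)  # pure tone: steady energy, low ZCR
feats = short_term_features(tone, fs)
```

Classifiers and segmenters of the kind the library offers are then trained on sequences of such per-frame feature vectors.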
Video Tutorial of Continental Food
NASA Astrophysics Data System (ADS)
Nurani, A. S.; Juwaedah, A.; Mahmudatussa'adah, A.
2018-02-01
This research is motivated by the belief in the importance of media in the learning process. Media serve as intermediaries that focus learners' attention. Selecting appropriate learning media strongly influences how successfully information is delivered, in cognitive, affective and skill terms. Continental Food is a course that studies the cuisine of Europe and is very complex. To reduce verbalism and provide more authentic learning, tutorial media are needed. Audiovisual tutorial media can provide a more concrete learning experience. The purpose of this research is to develop tutorial media in the form of video. The method used is the development method, with the stages of analyzing the learning objectives, creating a storyboard, validating the storyboard, revising the storyboard, and producing the video tutorial. The results show that storyboards should be made very thoroughly and in detail, in accordance with the learning objectives, to reduce errors in video capture and thereby save time, cost and effort. In video capture, lighting, shooting angles and soundproofing contribute greatly to the quality of the tutorial video produced. Shooting should focus on the tools, materials and processing steps. Video tutorials should be interactive and two-way.
Content-based intermedia synchronization
NASA Astrophysics Data System (ADS)
Oh, Dong-Young; Sampath-Kumar, Srihari; Rangan, P. Venkat
1995-03-01
Inter-media synchronization methods developed until now have been based on syntactic timestamping of video frames and audio samples. These methods are not fully appropriate for the synchronization of multimedia objects which may have to be accessed individually by their contents, e.g. content-based data retrieval. We propose a content-based multimedia synchronization scheme in which a media stream is viewed as a hierarchical composition of smaller objects which are logically structured based on their contents, and synchronization is achieved by deriving temporal relations among the logical units of media objects. Content-based synchronization offers several advantages, such as elimination of the need for timestamping, freedom from the limitations of jitter, synchronization of independently captured media objects in video editing, and compensation for inherent asynchronies in the capture times of video and audio.
Evaluating the Use of Problem-Based Video Podcasts to Teach Mathematics in Higher Education
ERIC Educational Resources Information Center
Kay, Robin; Kletskin, Ilona
2012-01-01
Problem-based video podcasts provide short, web-based, audio-visual explanations of how to solve specific procedural problems in subject areas such as mathematics or science. A series of 59 problem-based video podcasts covering five key areas (operations with functions, solving equations, linear functions, exponential and logarithmic functions,…
Using Videos and Multimodal Discourse Analysis to Study How Students Learn a Trade
ERIC Educational Resources Information Center
Chan, Selena
2013-01-01
The use of video to assist with ethnographical-based research is not a new phenomenon. Recent advances in technology have reduced the costs and technical expertise required to use videos for gathering research data. Audio-visual records of learning activities as they take place, allow for many non-vocal and inter-personal communication…
Kuipers installs and routes RCS Video Cables in the U.S. Laboratory
2012-02-01
ISS030-E-060117 (1 Feb. 2012) --- In the International Space Station's Destiny laboratory, European Space Agency astronaut Andre Kuipers, Expedition 30 flight engineer, routes video cable for the High Rate Communication System (HRCS). HRCS will allow for two additional space-to-ground audio channels and two additional downlink video channels.
Code of Federal Regulations, 2014 CFR
2014-10-01
... documents for the bandwidths of the commonly used television systems Number of lines=525; Nominal video bandwidth: 4.2 MHz, Sound carrier relative to video carrier=4.5 MHz 5M75C3F Total vision bandwidth: 5.75 MHz... 6. Composite Emissions Double-sideband, television relay Bn=2C+2M+2D Video limited to 5 MHz, audio...
Code of Federal Regulations, 2010 CFR
2010-10-01
... documents for the bandwidths of the commonly used television systems Number of lines=525; Nominal video bandwidth: 4.2 MHz, Sound carrier relative to video carrier=4.5 MHz 5M75C3F Total vision bandwidth: 5.75 MHz... 6. Composite Emissions Double-sideband, television relay Bn=2C+2M+2D Video limited to 5 MHz, audio...
Code of Federal Regulations, 2012 CFR
2012-10-01
... documents for the bandwidths of the commonly used television systems Number of lines=525; Nominal video bandwidth: 4.2 MHz, Sound carrier relative to video carrier=4.5 MHz 5M75C3F Total vision bandwidth: 5.75 MHz... 6. Composite Emissions Double-sideband, television relay Bn=2C+2M+2D Video limited to 5 MHz, audio...
Code of Federal Regulations, 2011 CFR
2011-10-01
... documents for the bandwidths of the commonly used television systems Number of lines=525; Nominal video bandwidth: 4.2 MHz, Sound carrier relative to video carrier=4.5 MHz 5M75C3F Total vision bandwidth: 5.75 MHz... 6. Composite Emissions Double-sideband, television relay Bn=2C+2M+2D Video limited to 5 MHz, audio...
Code of Federal Regulations, 2013 CFR
2013-10-01
... documents for the bandwidths of the commonly used television systems Number of lines=525; Nominal video bandwidth: 4.2 MHz, Sound carrier relative to video carrier=4.5 MHz 5M75C3F Total vision bandwidth: 5.75 MHz... 6. Composite Emissions Double-sideband, television relay Bn=2C+2M+2D Video limited to 5 MHz, audio...
VideoBeam portable laser communicator
NASA Astrophysics Data System (ADS)
Mecherle, G. Stephen; Holcomb, Terry L.
1999-01-01
A VideoBeam™ portable laser communicator has been developed which provides full-duplex communication links consisting of high-quality analog video and stereo audio. The 3.2-pound unit resembles a binocular-type form factor and has an operational range of over two miles (clear air) with excellent jam-resistance and low-probability-of-interception characteristics. The VideoBeam™ unit is ideally suited for numerous military scenarios, surveillance/espionage, industrial precious mineral exploration, and campus video teleconferencing applications.
Video2vec Embeddings Recognize Events When Examples Are Scarce.
Habibian, Amirhossein; Mensink, Thomas; Snoek, Cees G M
2017-10-01
This paper aims for event recognition when video examples are scarce or even completely absent. The key in such a challenging setting is a semantic video representation. Rather than building the representation from individual attribute detectors and their annotations, we propose to learn the entire representation from freely available web videos and their descriptions using an embedding between video features and term vectors. In our proposed embedding, which we call Video2vec, the correlations between the words are utilized to learn a more effective representation by optimizing a joint objective balancing descriptiveness and predictability. We show how learning the Video2vec embedding using a multimodal predictability loss, including appearance, motion and audio features, results in a better predictable representation. We also propose an event specific variant of Video2vec to learn a more accurate representation for the words, which are indicative of the event, by introducing a term sensitive descriptiveness loss. Our experiments on three challenging collections of web videos from the NIST TRECVID Multimedia Event Detection and Columbia Consumer Videos datasets demonstrate: i) the advantages of Video2vec over representations using attributes or alternative embeddings, ii) the benefit of fusing video modalities by an embedding over common strategies, iii) the complementarity of term sensitive descriptiveness and multimodal predictability for event recognition. By its ability to improve predictability of present day audio-visual video features, while at the same time maximizing their semantic descriptiveness, Video2vec leads to state-of-the-art accuracy for both few- and zero-example recognition of events in video.
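The idea of an embedding optimized so that term vectors are predictable from video features can be illustrated with a toy bilinear least-squares version. The data, dimensions, and plain reconstruction loss below are assumptions; the paper's actual objective additionally balances descriptiveness against predictability and handles multiple modalities:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_vid, d_txt, d_emb = 200, 32, 16, 8
X = rng.normal(size=(n, d_vid))                  # video features per clip
T = (X @ rng.normal(size=(d_vid, d_txt))) * 0.1  # toy term vectors tied to X

A = rng.normal(scale=0.1, size=(d_vid, d_emb))   # video -> shared embedding
B = rng.normal(scale=0.1, size=(d_emb, d_txt))   # embedding -> term space
lr = 0.01

def loss():
    return float(np.mean((X @ A @ B - T) ** 2))

init_loss = loss()
for _ in range(500):
    Z = X @ A                # embedded videos
    err = Z @ B - T          # how badly term vectors are reconstructed
    A -= lr * (X.T @ (err @ B.T)) / n
    B -= lr * (Z.T @ err) / n
final_loss = loss()
```

After training, a new clip can be scored against an event's term vector entirely in the shared space, which is what enables the few- and zero-example recognition evaluated in the paper.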
Cartreine, James Albert; Locke, Steven E; Buckey, Jay C; Sandoval, Luis; Hegel, Mark T
2012-09-25
Computer-automated depression interventions rely heavily on users reading text to receive the intervention. However, text-delivered interventions place a burden on persons with depression and convey only verbal content. The primary aim of this project was to develop a computer-automated treatment for depression that is delivered via interactive media technology. By using branching video and audio, the program simulates the experience of being in therapy with a master clinician who provides six sessions of problem-solving therapy. A secondary objective was to conduct a pilot study of the program's usability, acceptability, and credibility, and to obtain an initial estimate of its efficacy. The program was produced in a professional multimedia production facility and incorporates video, audio, graphics, animation, and text. Failure analyses of patient data are conducted across sessions and across problems to identify ways to help the user improve his or her problem solving. A pilot study was conducted with persons who had minor depression. An experimental group (n = 7) used the program while a waitlist control group (n = 7) was provided with no treatment for 6 weeks. All of the experimental group participants completed the trial, whereas 1 from the control was lost to follow-up. Experimental group participants rated the program high on usability, acceptability, and credibility. The study was not powered to detect clinical improvement, although these pilot data are encouraging. Although the study was not powered to detect treatment effects, participants did find the program highly usable, acceptable, and credible. This suggests that the highly interactive and immersive nature of the program is beneficial. Further clinical trials are warranted. ClinicalTrials.gov NCT00906581; http://clinicaltrials.gov/ct2/show/NCT00906581 (Archived by WebCite at http://www.webcitation.org/6A5Ni5HUp).
NASA Astrophysics Data System (ADS)
Ayala, Vivian Luz
In today's schools there are by far more students identified with learning disabilities (LD) than with any other disability. The U.S. Department of Education in the year 1997-98 reported that 38.13% of students in our nation's schools have LD (Smith, Polloway, Patton, & Dowdy, 2001; U.S. Department of Education, 1999). Of those, 1,198,200 are considered ELLs with LD (Baca & Cervantes, 1998). These figures, which represent an increase, evidence the need to provide these students with educational experiences geared to address both their academic and language needs (Ortiz, 1997; Ortiz & Garcia, 1995). English language learners with LD must be provided with experiences in the least restrictive environment (LRE) and must be able to share the same kind of social and academic experiences as students from the general population (Etscheidt & Bartlett, 1999; Lloyd, Kameenui, & Chard, 1997). The purpose of this research was to conduct a detailed qualitative study on classroom interactions to enhance the understanding of the science curriculum, in order to foster the understanding of content and facilitate the acquisition of English as a second language (Cummins, 2000; Echevarria, Vogt, & Short, 2000). This study was grounded in the theories of socioconstructivism, second language acquisition, comprehensible input, and classroom interactions. The participants of the study were fourth and fifth grade ELLs with LD in a science elementary school bilingual inclusive setting. Data were collected through observations, semi-structured interviews (students and teacher), video and audio taping, field notes, document analysis, and the Classroom Observation Schedule (COS). The transcriptions of the video and audio tapes were coded to highlight emergent patterns in the type of interactions and language used by the participants.
The findings of the study, intended to inform teachers of ELLs with LD about the implications of using classroom interactions, point to: students being more actively engaged, an increase in the acquisition of L2, development of science content vocabulary, and a willingness of students to take risks.
The Changing Role of the Educational Video in Higher Distance Education
ERIC Educational Resources Information Center
Laaser, Wolfram; Toloza, Eduardo A.
2017-01-01
The article argues that the ongoing usage of audio visual media is falling behind in terms of educational quality compared to prior achievements in the history of distance education. After reviewing some important steps and experiences of audio visual digital media development, we analyse predominant presentation formats on the Web. Special focus…
Home telecare system using cable television plants--an experimental field trial.
Lee, R G; Chen, H S; Lin, C C; Chang, K C; Chen, J H
2000-03-01
To solve the inconvenience of routine transportation of chronically ill and handicapped patients, this paper proposes a platform based on a hybrid fiber coaxial (HFC) network in Taiwan designed to make a home telecare system feasible. The aim of this home telecare system is to combine biomedical data, including three-channel electrocardiogram (ECG) and blood pressure (BP), video, and audio into a National Television Standard Committee (NTSC) channel for communication between the patient and healthcare provider. Digitized biomedical data and output from medical devices can be further modulated to a second audio program (SAP) subchannel which can be used for second-language audio in NTSC television signals. For long-distance transmission, we translate the digital biomedical data into the frequency domain using frequency shift key (FSK) technology and insert this signal into an SAP band. The whole system has been implemented and tested. The results obtained using this system clearly demonstrated that real-time video, audio, and biomedical data transmission are very clear with a carrier-to-noise ratio up to 43 dB.
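The FSK transmission step can be sketched in a few lines: each bit is sent as one of two tones, and the receiver decides each bit by correlating against both tones. The frequencies, sample rate, and baud rate below are illustrative choices, not the system's actual parameters:

```python
import numpy as np

fs, baud = 8000, 100
f0, f1 = 1200.0, 2200.0        # tone for bit 0, tone for bit 1
spb = fs // baud               # samples per bit (80 here)

def fsk_modulate(bits):
    t = np.arange(spb) / fs
    return np.concatenate(
        [np.sin(2 * np.pi * (f1 if b else f0) * t) for b in bits])

def fsk_demodulate(signal):
    t = np.arange(spb) / fs
    bits = []
    for i in range(0, len(signal), spb):
        chunk = signal[i:i + spb]
        # Correlate against each tone; the stronger match wins.
        e0 = abs(np.dot(chunk, np.sin(2 * np.pi * f0 * t)))
        e1 = abs(np.dot(chunk, np.sin(2 * np.pi * f1 * t)))
        bits.append(1 if e1 > e0 else 0)
    return bits

payload = [1, 0, 1, 1, 0, 0, 1, 0]
received = fsk_demodulate(fsk_modulate(payload))
```

With an integer number of tone cycles per bit the two tones are orthogonal over a bit period, so the noiseless round trip is exact; the real system additionally has to fit within the SAP subchannel's bandwidth and tolerate cable-plant noise.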
Multimedia Analysis plus Visual Analytics = Multimedia Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chinchor, Nancy; Thomas, James J.; Wong, Pak C.
2010-10-01
Multimedia analysis has focused on images, video, and to some extent audio, and has made progress in single channels, excluding text. Visual analytics has focused on user interaction with data during the analytic process, plus the fundamental mathematics, and has continued to treat text as did its precursor, information visualization. The general problem we address in this tutorial is combining multimedia analysis and visual analytics to deal with multimedia information gathered from different sources, with different goals or objectives, and containing all media types and combinations in common usage.
World Key Information Service System Designed For EPCOT Center
NASA Astrophysics Data System (ADS)
Kelsey, J. A.
1984-03-01
An advanced Bell Laboratories and Western Electric designed electronic information retrieval system utilizing the latest Information Age technologies, and a fiber optic transmission system is featured at the Walt Disney World Resort's newest theme park - The Experimental Prototype Community of Tomorrow (EPCOT Center). The project is an interactive audio, video and text information system that is deployed at key locations within the park. The touch sensitive terminals utilizing the ARIEL (Automatic Retrieval of Information Electronically) System is interconnected by a Western Electric designed and manufactured lightwave transmission system.
From watermarking to in-band enrichment: future trends
NASA Astrophysics Data System (ADS)
Mitrea, M.; Prêteux, F.
2009-02-01
With the emergence of the Knowledge Society, enriched video is nowadays a hot research topic, from both academic and industrial perspectives. The principle consists in associating with the video stream some metadata of various types (textual, audio, video, executable code, ...). This new content can be further exploited in a large variety of applications, such as interactive DTV, games, e-learning, and data mining. This paper brings into evidence the potential of watermarking techniques for such applications. By inserting the enrichment data into the very video to be enriched, three main advantages are ensured. First, no additional complexity is required from the terminal and representation format point of view. Secondly, no backward compatibility issue is encountered, thus allowing a unique system to accommodate services from several generations. Finally, the network adaptation constraints are alleviated. The discussion covers both theoretical aspects (the accurate evaluation of watermarking capacity in several real-life scenarios) and applications developed under the framework of the R&D contracts conducted at the ARTEMIS Department.
Computationally Efficient Clustering of Audio-Visual Meeting Data
NASA Astrophysics Data System (ADS)
Hung, Hayley; Friedland, Gerald; Yeo, Chuohao
This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG4 video by taking advantage of the processing that was already done for video compression. Then, we present a method of associating the audio-visual data together so that the content of each participant can be managed individually. The methods presented in this article can be used as a principal component that enables many higher-level semantic analysis tasks needed in search, retrieval, and navigation.
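The audio-visual association step can be caricatured as follows: given speaker-diarization segments and a per-participant visual-activity trace, attribute each speech segment to the participant who moved most while it was spoken. The segment format and the move-most-while-speaking rule are assumptions for illustration, not the chapter's method:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 600                                   # time steps in the meeting
activity = rng.random((3, T)) * 0.2       # visual activity of 3 participants
# Diarization output as (start, end, true_speaker); speakers tend to move
# (gesture, head motion) while speaking, so boost their activity there.
segments = [(0, 200, 0), (200, 400, 1), (400, 600, 2)]
for s, e, p in segments:
    activity[p, s:e] += 1.0

def associate(segments, activity):
    """Assign each speech segment to the participant with the highest
    mean visual activity during that segment."""
    return [int(np.argmax(activity[:, s:e].mean(axis=1)))
            for s, e, _ in segments]

assigned = associate(segments, activity)
```

On this toy data the assignment recovers the true speakers; the chapter's efficient pipeline obtains the activity traces cheaply from MPEG-4 motion information rather than from decoded pixels.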
NASA Technical Reports Server (NTRS)
1998-01-01
Crystal River Engineering was originally featured in Spinoff 1992 with the Convolvotron, a high-speed digital audio processing system that delivers three-dimensional sound over headphones. The Convolvotron was developed for Ames' research on virtual acoustic displays. Crystal River is now a subsidiary of Aureal Semiconductor, Inc., and together they develop and market the technology, a three-dimensional (3-D) audio technology known commercially today as Aureal 3D (A-3D). The technology has been incorporated into video games, surround sound systems, and sound cards.
Content-based analysis of news video
NASA Astrophysics Data System (ADS)
Yu, Junqing; Zhou, Dongru; Liu, Huayong; Cai, Bo
2001-09-01
In this paper, we present a schema for content-based analysis of broadcast news video. First, we separate commercials from news using audiovisual features. Then, we automatically organize news programs into a content hierarchy at various levels of abstraction via effective integration of the video, audio, and text data available from the news programs. Based on these news video structure and content analysis technologies, a TV news video library is generated, from which users can retrieve specific news stories according to their demands.
Ranking Highlights in Personal Videos by Analyzing Edited Videos.
Sun, Min; Farhadi, Ali; Chen, Tseng-Hung; Seitz, Steve
2016-11-01
We present a fully automatic system for ranking domain-specific highlights in unconstrained personal videos by analyzing online edited videos. A novel latent linear ranking model is proposed to handle noisy training data harvested online. Specifically, given a targeted domain such as "surfing," our system mines the YouTube database to find pairs of raw and their corresponding edited videos. Leveraging the assumption that an edited video is more likely to contain highlights than the trimmed parts of the raw video, we obtain pair-wise ranking constraints to train our model. The learning task is challenging due to the amount of noise and variation in the mined data. Hence, a latent loss function is incorporated to mitigate the issues caused by the noise. We efficiently learn the latent model on a large number of videos (about 870 min in total) using a novel EM-like procedure. Our latent ranking model outperforms its classification counterpart and is fairly competitive compared with a fully supervised ranking system that requires labels from Amazon Mechanical Turk. We further show that a state-of-the-art audio feature, mel-frequency cepstral coefficients, is inferior to a state-of-the-art visual feature. By combining both audio-visual features, we obtain the best performance in dog activity, surfing, skating, and viral video domains. Finally, we show that impressive highlights can be detected without additional human supervision for seven domains (i.e., skating, surfing, skiing, gymnastics, parkour, dog activity, and viral video) in unconstrained personal videos.
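The pair-wise ranking constraint can be illustrated with a plain (non-latent) hinge-loss version on synthetic pairs: a kept segment should outscore the trimmed segment it is paired with. The features and dimensions are made up, and the paper's latent loss for noisy pairs is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
w_true = rng.normal(size=d)                  # hidden "highlightness" direction
# Synthetic feature pairs: segments kept in the edit vs. segments trimmed out.
kept = rng.normal(size=(100, d)) + 0.5 * w_true
trimmed = rng.normal(size=(100, d)) - 0.5 * w_true

w = np.zeros(d)
lr = 0.05
for _ in range(200):
    margins = (kept - trimmed) @ w
    active = margins < 1.0                   # hinge: update only violated pairs
    w += lr * (kept[active] - trimmed[active]).sum(axis=0) / len(kept)

# Fraction of pairs still ranked the wrong way after training.
violations = float((((kept - trimmed) @ w) <= 0).mean())
```

The learned scorer aligns with the hidden highlight direction and satisfies most pair-wise constraints; the paper's latent variant additionally down-weights mislabeled pairs harvested from the web.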
ERIC Educational Resources Information Center
Batty, Aaron Olaf
2015-01-01
The rise in the affordability of quality video production equipment has resulted in increased interest in video-mediated tests of foreign language listening comprehension. Although research on such tests has continued fairly steadily since the early 1980s, studies have relied on analyses of raw scores, despite the growing prevalence of item…
Audiovisual focus of attention and its application to Ultra High Definition video compression
NASA Astrophysics Data System (ADS)
Rerabek, Martin; Nemoto, Hiromi; Lee, Jong-Seok; Ebrahimi, Touradj
2014-02-01
Using Focus of Attention (FoA) as a perceptual process in image and video compression is a well-known approach to increasing coding efficiency. It has been shown that foveated coding, in which compression quality varies across the image according to regions of interest, is more efficient than conventional coding, in which all regions are compressed in a similar way. However, widespread use of such foveated compression has been prevented by two main conflicting factors, namely the complexity and the efficiency of algorithms for FoA detection. One way around this is to use as much information as possible from the scene. Since most video sequences have associated audio, and in many cases there is a correlation between the audio and the visual content, audiovisual FoA can improve the efficiency of the detection algorithm while remaining low in complexity. This paper discusses a simple yet efficient audiovisual FoA algorithm based on the correlation of dynamics between audio and video signal components. The results of the audiovisual FoA detection algorithm are subsequently taken into account for foveated coding and compression. This approach is implemented in an H.265/HEVC encoder, producing a bitstream that is fully compliant with any H.265/HEVC decoder. The influence of audiovisual FoA on the perceived quality of high- and ultra-high-definition audiovisual sequences is explored, and the gain in compression efficiency is analyzed.
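As a rough illustration of the correlation-of-dynamics idea (not the paper's actual algorithm), one can correlate the audio energy envelope with per-region visual activity over time and take the best-matching region as the audiovisual focus. The region layout and features below are assumptions for the sketch.

```python
# Minimal audiovisual FoA sketch: pick the image region whose motion
# energy best tracks the audio energy envelope. Illustrative only.

def correlation(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def audiovisual_foa(audio_energy, region_motion):
    """Return the index of the region whose motion best tracks the audio."""
    return max(range(len(region_motion)),
               key=lambda r: correlation(audio_energy, region_motion[r]))

audio = [0.1, 0.9, 0.2, 0.8, 0.1]            # audio energy per frame
regions = [[0.5, 0.5, 0.5, 0.5, 0.5],        # static background
           [0.2, 1.0, 0.3, 0.9, 0.2]]        # talking head tracks the audio
assert audiovisual_foa(audio, regions) == 1
```

A foveated encoder would then allocate more bits to the winning region, e.g. by lowering its quantization parameter.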
Avey, Marc T; Phillmore, Leslie S; MacDougall-Shackleton, Scott A
2005-12-07
Sensory-driven immediate early gene (IEG) expression has been a key tool to explore auditory perceptual areas in the avian brain. Most work on IEG expression in songbirds such as zebra finches has focused on playback of acoustic stimuli and its effect on auditory processing areas such as the caudal medial mesopallium (CMM) and caudal medial nidopallium (NCM). However, in a natural setting, the courtship displays of songbirds (including zebra finches) include visual as well as acoustic components. To determine whether the visual stimulus of a courting male modifies song-induced expression of the IEG ZENK in the auditory forebrain, we exposed male and female zebra finches to the acoustic (song) and visual (dancing) components of courtship. Birds were played digital movies with either combined audio and video, audio only, video only, or neither audio nor video (control). We found significantly increased levels of ZENK response in the auditory region CMM in the two treatment groups exposed to acoustic stimuli compared to the control group. The video-only group had an intermediate response, suggesting a potential effect of visual input on activity in these auditory brain regions. Finally, we unexpectedly found a lateralization of ZENK response that was independent of sex, brain region, and treatment condition, such that ZENK immunoreactivity was consistently higher in the left hemisphere than in the right, and the majority of individual birds were left-hemisphere dominant.
NASA Astrophysics Data System (ADS)
Shin, Sanghyun
The National Transportation Safety Board (NTSB) has recently emphasized the importance of analyzing flight data as one of the most effective methods to improve the efficiency and safety of helicopter operations. By analyzing flight data with Flight Data Monitoring (FDM) programs, the safety and performance of helicopter operations can be evaluated and improved. In spite of the NTSB's efforts, the safety of helicopter operations has not improved at the same rate as the safety of worldwide airlines, and the accident rate of helicopters continues to be much higher than that of fixed-wing aircraft. One of the main reasons is that the participation rates of the rotorcraft industry in FDM programs are low, due to the high cost of the Flight Data Recorder (FDR), the need for a special readout device to decode the FDR, fear of punitive action, etc. Since a video camera is easily installed, accessible, and inexpensively maintained, cockpit video data could complement the FDR where one is present, or possibly replace its role where one is absent. Cockpit video data is composed of image and audio data: image data contains outside views through the cockpit windows and activities on the flight instrument panels, whereas audio data contains the sounds of warning alarms within the cockpit. The goal of this research is to develop, test, and demonstrate a cockpit video data analysis algorithm, based on data mining and signal processing techniques, that can help better understand situations in the cockpit and the state of a helicopter by efficiently and accurately inferring useful flight information from cockpit video data. Image processing algorithms based on data mining techniques are proposed to estimate a helicopter's attitude (such as the bank and pitch angles), identify indicators on a flight instrument panel, and read the gauges and the numbers in the analogue gauge indicators and digital displays from cockpit image data.
In addition, an audio processing algorithm based on signal processing and abrupt change detection techniques is proposed to identify types of warning alarms and to detect the occurrence times of individual alarms from cockpit audio data. Those proposed algorithms are then successfully applied to simulated and real helicopter cockpit video data to demonstrate and validate their performance.
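The abrupt-change idea for alarm onsets can be sketched as a simple energy-jump detector on the audio signal. Frame sizes and the jump threshold below are illustrative assumptions, not the thesis's actual method, which also classifies alarm types.

```python
# Sketch of abrupt-change detection on an audio energy envelope, in the
# spirit of alarm-onset detection from cockpit audio. Illustrative only.

def frame_energy(samples, frame_len):
    """Mean squared amplitude per non-overlapping frame."""
    return [sum(s * s for s in samples[i:i + frame_len]) / frame_len
            for i in range(0, len(samples) - frame_len + 1, frame_len)]

def detect_onsets(energy, ratio=4.0):
    """Report frame indices where energy jumps by more than `ratio` times."""
    return [i for i in range(1, len(energy))
            if energy[i] > ratio * max(energy[i - 1], 1e-12)]

# Quiet cockpit, then a loud alarm starting at sample 8 (frame index 2).
samples = [0.01] * 8 + [1.0] * 8
energy = frame_energy(samples, 4)
assert detect_onsets(energy) == [2]
```

Mapping a detected onset frame back to a timestamp is then just `frame_index * frame_len / sample_rate`.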
NASA Astrophysics Data System (ADS)
Tene, Yair; Tene, Noam; Tene, G.
1993-08-01
An interactive data-fusion methodology of video, audio, and nonlinear structural dynamic analysis for potential application in forensic engineering is presented. The methodology was developed and successfully demonstrated in the analysis of a heavy transportable bridge collapse during preparation for testing. Multiple bridge element failures were identified after the collapse, including fracture, cracks, and rupture of high-performance structural materials. A videotape recording from a hand-held camcorder was the only source of information about the collapse sequence. The interactive data-fusion methodology extracted the relevant information from the videotape and from dynamic nonlinear structural analysis, leading to a full account of the sequence of events during the bridge collapse.
NASA Astrophysics Data System (ADS)
Guidang, Excel Philip B.; Llanda, Christopher John R.; Palaoag, Thelma D.
2018-03-01
A face detection technique as a strategy for controlling a multimedia instructional material was implemented in this study. Specifically, the study achieved the following objectives: 1) developed a face detection application in Python that controls an embedded mother-tongue-based instructional material through face-recognition configuration; 2) determined the perceptions of the students using Mutt Susan's student app review rubric. The study concludes that the face detection technique is effective in controlling an electronic instructional material and can be used to change the method of interaction between the student and the material. 90% of the students perceived the application to be a great app and 10% rated the application as good.
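The control idea can be sketched as a small state machine driven by a face detector's output: play the material only while a face is present. The class and method names are hypothetical, and the detector itself (OpenCV-style, in the authors' Python app) is stubbed out as a boolean flag.

```python
# Illustrative sketch: playback of an instructional material is gated by
# whether a face is currently detected in the camera frame.

class MaterialController:
    """Pauses/resumes playback from a stream of face-detected flags."""

    def __init__(self):
        self.state = "paused"

    def update(self, face_detected):
        """Feed one detection result; return the resulting playback state."""
        self.state = "playing" if face_detected else "paused"
        return self.state

ctrl = MaterialController()
states = [ctrl.update(f) for f in [True, True, False, True]]
assert states == ["playing", "playing", "paused", "playing"]
```

In a real deployment the boolean would come from a per-frame detector call rather than a prepared list.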
Delivery Systems for Distance Education. ERIC Digest.
ERIC Educational Resources Information Center
Schamber, Linda
This ERIC digest provides a brief overview of the video, audio, and computer technologies that are currently used to deliver instruction for distance education programs. The video systems described include videoconferencing, low-power television (LPTV), closed-circuit television (CCTV), instructional fixed television service (ITFS), and cable…
Preplanning and Evaluating Video Documentaries and Features.
ERIC Educational Resources Information Center
Maynard, Riley
1997-01-01
This article presents a ten-part pre-production outline and post-production evaluation that helps communications students more effectively improve video skills. Examines camera movement and motion, camera angle and perspective, lighting, audio, graphics, backgrounds and color, special effects, editing, transitions, and music. Provides a glossary…
Benchmarking multimedia performance
NASA Astrophysics Data System (ADS)
Zandi, Ahmad; Sudharsanan, Subramania I.
1998-03-01
With the introduction of faster processors and special instruction sets tailored to multimedia, a number of exciting applications are now feasible on the desktop. Among these is DVD playback, consisting, among other things, of MPEG-2 video and Dolby Digital audio or MPEG-2 audio. Other multimedia applications such as video conferencing and speech recognition are also becoming popular on computer systems. In view of this tremendous interest in multimedia, a group of major computer companies has formed the Multimedia Benchmarks Committee as part of the Standard Performance Evaluation Corp. (SPEC) to address the performance issues of multimedia applications. The approach is multi-tiered, with three tiers of fidelity from minimal to fully compliant. In each case the fidelity of the bitstream reconstruction as well as the quality of the video or audio output are measured, and the system is classified accordingly. At the next step the performance of the system is measured. Many multimedia applications, such as DVD playback, need to run at a specific rate; in this case the measurement of the excess processing power makes all the difference. All of this makes a system-level, application-based multimedia benchmark very challenging. Several ideas and methodologies for each aspect of these problems are presented and analyzed.
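The "excess processing power" notion can be sketched as a headroom ratio between the achieved and required decode rates. The function name and the example numbers are assumptions for illustration, not SPEC's actual methodology.

```python
# Illustrative headroom metric for a real-time multimedia task: a decoder
# that must sustain a fixed frame rate is characterized by how much faster
# than real time it actually runs.

def realtime_headroom(achieved_fps, required_fps):
    """How many times faster than real time the system decodes.

    1.0 means the system barely keeps up; below 1.0 it drops frames;
    the excess above 1.0 is capacity left over for other work.
    """
    return achieved_fps / required_fps

# A decoder measured at 72 fps against a ~30 fps playback requirement
# runs at 2.4x real time.
assert realtime_headroom(72.0, 30.0) == 2.4
```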
NFL Films music scoring stage and control room space
NASA Astrophysics Data System (ADS)
Berger, Russ; Schrag, Richard C.; Ridings, Jason J.
2003-04-01
NFL Films' new 200,000 sq. ft. corporate headquarters is home to an orchestral scoring stage used to record custom music scores to support and enhance their video productions. Part of the 90,000 sq. ft. of sound-critical technical space, the music scoring stage and its associated control room are at the heart of the audio facilities. Driving the design were the owner's mandate for natural light, wood textures, and an acoustical environment that would support small rhythm sections, soloists, and a full orchestra. As an industry leader in cutting-edge video and audio formats, NFL Films required that the technical spaces allow the latest technology to be continually integrated into the infrastructure. Never was it more important for a project to hold true to the adage of "designing from the inside out." Each audio and video space within the facility had to stand on its own with regard to user functionality, acoustical accuracy, sound isolation, noise control, and monitor presentation. A detailed look at the architectural and acoustical design challenges encountered, and the solutions developed for the performance studio and the associated control room space, will be discussed.
ERIC Educational Resources Information Center
Robinson, David E.
1997-01-01
One solution to poor quality sound in student video projects is a four-track audio cassette recorder. This article discusses the advantages of four-track over single-track recorders and compares two student productions, one using a single-track and the other a four-track recorder. (PEN)
21 CFR 1140.32 - Format and content requirements for labeling and advertising.
Code of Federal Regulations, 2012 CFR
2012-04-01
...: (i) Whose readers younger than 18 years of age constitute 15 percent or less of the total readership... persons younger than 18 years of age as measured by competent and reliable survey evidence. (b) Labeling and advertising in an audio or video format shall be limited as follows: (1) Audio format shall be...
21 CFR 1140.32 - Format and content requirements for labeling and advertising.
Code of Federal Regulations, 2014 CFR
2014-04-01
...: (i) Whose readers younger than 18 years of age constitute 15 percent or less of the total readership... persons younger than 18 years of age as measured by competent and reliable survey evidence. (b) Labeling and advertising in an audio or video format shall be limited as follows: (1) Audio format shall be...
21 CFR 1140.32 - Format and content requirements for labeling and advertising.
Code of Federal Regulations, 2011 CFR
2011-04-01
...: (i) Whose readers younger than 18 years of age constitute 15 percent or less of the total readership... persons younger than 18 years of age as measured by competent and reliable survey evidence. (b) Labeling and advertising in an audio or video format shall be limited as follows: (1) Audio format shall be...
21 CFR 1140.32 - Format and content requirements for labeling and advertising.
Code of Federal Regulations, 2013 CFR
2013-04-01
...: (i) Whose readers younger than 18 years of age constitute 15 percent or less of the total readership... persons younger than 18 years of age as measured by competent and reliable survey evidence. (b) Labeling and advertising in an audio or video format shall be limited as follows: (1) Audio format shall be...
21 CFR 1140.32 - Format and content requirements for labeling and advertising.
Code of Federal Regulations, 2010 CFR
2010-04-01
...: (i) Whose readers younger than 18 years of age constitute 15 percent or less of the total readership... persons younger than 18 years of age as measured by competent and reliable survey evidence. (b) Labeling and advertising in an audio or video format shall be limited as follows: (1) Audio format shall be...
ERIC Educational Resources Information Center
Ko, Chao-Jung
2012-01-01
This study investigated the possibility that initial-level learners may acquire oral skills through synchronous computer-mediated communication (SCMC). Twelve Taiwanese French as a foreign language (FFL) students, divided into three groups, were required to conduct a variety of tasks in one of the three learning environments (video/audio, audio,…
A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos.
Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian
2016-04-01
Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today's keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users' information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. The authors collected and processed around 25,000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for the experiment. For each video, its semantic clues are first extracted automatically by computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively. Results: The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of a modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from and content exploration among massive biomedical OER videos.
Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable information, as well as intuitively and conveniently preview essential content of a single or a collection of videos.
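The "metadata database indexed by a text search engine" stage can be sketched with a toy inverted index over per-video text clues. The real system uses a high-performance search engine; the structure and AND-query semantics below are illustrative assumptions.

```python
# Toy inverted index over per-video text clues (titles, transcript words):
# each token maps to the set of video ids whose clues contain it.
from collections import defaultdict

def build_index(videos):
    """Map each token to the set of video ids whose clues contain it."""
    index = defaultdict(set)
    for vid, clues in videos.items():
        for token in clues.lower().split():
            index[token].add(vid)
    return index

def search(index, query):
    """Return ids of videos matching every query token (AND semantics)."""
    sets = [index.get(tok, set()) for tok in query.lower().split()]
    return set.intersection(*sets) if sets else set()

videos = {
    "v1": "cardiac anatomy lecture heart valves",
    "v2": "neuroscience lecture cortex",
}
idx = build_index(videos)
assert search(idx, "lecture heart") == {"v1"}
assert search(idx, "lecture") == {"v1", "v2"}
```

Production engines add ranking, stemming, and phrase queries on top of this basic token-to-postings mapping.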
Applicability of Visual Analytics to Defence and Security Operations
2011-06-01
It shows the events' importance in the news over time. Topics are extracted from fused video, audio, and closed captions. Since viewing video...Detection of Anomalous Maritime Behavior, In Banissi, E. et al. (Eds.) Proceedings of the 12th IEEE International Conference on Information Visualisation
Connors, Erin C; Yazzolino, Lindsay A; Sánchez, Jaime; Merabet, Lotfi B
2013-03-27
Audio-based Environment Simulator (AbES) is virtual environment software designed to improve real-world navigation skills in the blind. Using only audio-based cues, and set within the context of a video game metaphor, users gather relevant spatial information regarding a building's layout. This allows the user to develop an accurate spatial cognitive map of a large-scale three-dimensional space that can be manipulated for the purposes of a real indoor navigation task. After game play, participants are then assessed on their ability to navigate within the target physical building represented in the game. Preliminary results suggest that early-blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building, as indexed by their performance on a series of navigation tasks. These tasks included path finding through the virtual and physical building, as well as a series of drop-off tasks. We find that the immersive and highly interactive nature of the AbES software appears to greatly engage the blind user to actively explore the virtual environment. Applications of this approach may extend to larger populations of visually impaired individuals.
Vision-mediated interaction with the Nottingham caves
NASA Astrophysics Data System (ADS)
Ghali, Ahmed; Bayomi, Sahar; Green, Jonathan; Pridmore, Tony; Benford, Steve
2003-05-01
The English city of Nottingham is widely known for its rich history and compelling folklore. A key attraction is the extensive system of caves to be found beneath Nottingham Castle. Regular guided tours are made of the Nottingham caves, during which castle staff tell stories and explain historical events to small groups of visitors while pointing out relevant cave locations and features. The work reported here is part of a project aimed at enhancing the experience of cave visitors, and providing flexible storytelling tools to their guides, by developing machine vision systems capable of identifying specific actions of guides and/or visitors and triggering audio and/or video presentations as a result. Attention is currently focused on triggering audio material by directing the beam of a standard domestic flashlight towards features of interest on the cave wall. Cameras attached to the walls or roof provide image sequences within which torch light and cave features are detected and their relative positions estimated. When a target feature is illuminated the corresponding audio response is generated. We describe the architecture of the system, its implementation within the caves and the results of initial evaluations carried out with castle guides and members of the public.
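The trigger logic described above can be sketched as finding the brightest spot in a grayscale frame and firing the audio clip for any cave feature near it. The threshold, radius, and feature coordinates are illustrative assumptions, not the project's actual vision pipeline.

```python
# Illustrative torch-beam trigger: locate the brightest pixel in a
# grayscale frame and match it against known cave-feature positions.

def brightest_spot(frame, threshold=200):
    """Return (row, col) of the brightest pixel above threshold, else None."""
    best, pos = threshold, None
    for r, row in enumerate(frame):
        for c, v in enumerate(row):
            if v > best:
                best, pos = v, (r, c)
    return pos

def triggered_feature(frame, features, radius=1):
    """Name of the feature illuminated by the torch beam, if any."""
    spot = brightest_spot(frame)
    if spot is None:
        return None
    for name, (r, c) in features.items():
        if abs(spot[0] - r) <= radius and abs(spot[1] - c) <= radius:
            return name
    return None

frame = [[10] * 5 for _ in range(5)]
frame[1][1] = 255                            # torch beam lights up (1, 1)
features = {"carving": (1, 1), "doorway": (4, 4)}
assert triggered_feature(frame, features) == "carving"
```

A real deployment would first subtract the ambient cave lighting and track the spot across frames to reject transient glints.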
How Deep Neural Networks Can Improve Emotion Recognition on Video Data
2016-09-25
Pooya Khorrami, Tom Le Paine, Kevin Brady, Charlie Dagli, Thomas S...this work, we present a system that performs emotion recognition on video data using both convolutional neural networks (CNNs) and recurrent neural networks (RNNs). We present our findings on videos from the Audio/Visual+Emotion Challenge (AV+EC2015). In our experiments, we analyze the effects
78 FR 76861 - Body-Worn Cameras for Criminal Justice Applications
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-19
..., Various). 3. Maximum Video Resolution of the BWC (e.g., 640x480, 1080p). 4. Recording Speed of the BWC (e... Photos. 7. Whether the BWC embeds a Time/Date Stamp in the recorded video. 8. The Field of View of the...-person video viewing. 12. The Audio Format of the BWC (e.g., MP2, AAC). 13. Whether the BWC contains...
The interactional significance of formulas in autistic language.
Dobbinson, Sushie; Perkins, Mick; Boucher, Jill
2003-01-01
The phenomenon of echolalia in autistic language is well documented. Whilst much early research dismissed echolalia as merely an indicator of cognitive limitation, later work identified particular discourse functions of echolalic utterances. The work reported here extends the study of the interactional significance of echolalia to formulaic utterances. Audio and video recordings of conversations between the first author and two research participants were transcribed and analysed according to a Conversation Analysis framework and a multi-layered linguistic framework. Formulaic language was found to have predictable interactional significance within the language of an individual with autism, and the generic phenomenon of formulaicity in company with predictable discourse function was seen to hold across the research participants, regardless of cognitive ability. The implications of formulaicity in autistic language for acquisition and processing mechanisms are discussed.
[Intermodal timing cues for audio-visual speech recognition].
Hashimoto, Masahiro; Kumashiro, Masaharu
2004-06-01
The purpose of this study was to investigate the limits of the lip-reading advantage for young Japanese adults by desynchronizing the visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio alone, and audio-visual with 0, 60, 120, 240, or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker producing long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delay in sixteen untrained young subjects. Speech intelligibility under audio-delay conditions of less than 120 ms was significantly better than under the audio-alone condition. Notably, the 120 ms delay corresponded to the mean mora duration measured for the audio stimuli. The results imply that audio delays of up to 120 ms do not disrupt the lip-reading advantage, because the visual and auditory information in speech appears to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which a worker must extract relevant speech from competing noise.
On Basic Needs and Modest Media.
ERIC Educational Resources Information Center
Gunter, Jock
1978-01-01
The need for grass-roots participation and local control in whatever technology is used to meet basic educational needs is stressed. Successful uses of the audio cassette recorder and the portable half-inch video recorder are described; the 8-mm sound camera and video player are also suggested as viable "modest" technologies. (JEG)
Factors Affecting Use of Telepresence Technology in a Global Technology Company
ERIC Educational Resources Information Center
Agnor, Robert Joseph
2013-01-01
Telepresence uses the latest video conferencing technology, with high definition video, surround sound audio, and specially constructed studios, to create a near face-to-face meeting experience. A Fortune 500 company which markets information technology has organizations distributed around the globe, and has extensive collaboration needs among…
ERIC Educational Resources Information Center
Arnn, Barbara
2007-01-01
This article discusses how schools across the US are using the latest videoconference and audio/video streaming technologies creatively to move to the next level of their very specific needs. At the Georgia Institute of Technology in Atlanta, the technology that is the backbone of the school's extensive distance learning program has to be…
20 CFR 901.11 - Enrollment procedures.
Code of Federal Regulations, 2010 CFR
2010-04-01
... enrollment cycle. Of the 36 hours, at least 18 must be comprised of core subject matter; the remainder may be... enrollment cycle. (ii) Correspondence or individual study programs (including audio and/or video taped... video tapes, etc. (A) Continuing education credit will be awarded for the creation of materials for...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-24
...] Accessible Emergency Information, and Apparatus Requirements for Emergency Information and Video Description... should be the obligation of the apparatus manufacturer, under section 203, to ensure that the devices are... secondary audio stream on all equipment, including older equipment. In the absence of an industry solution...
Distance Learning as a Training and Education Tool.
ERIC Educational Resources Information Center
Hosley, David L.; Randolph, Sherry L.
Lockheed Space Operations Company's Technical Training Department provides certification classes to personnel at other National Aeronautics and Space Administration (NASA) Centers. Courses are delivered over the Kennedy Space Center's Video Teleconferencing System (ViTS). The ViTS system uses two-way compressed video and two-way audio between…
Home Telehealth Video Conferencing: Perceptions and Performance
Morris, Greg; Pech, Joanne; Rechter, Stuart; Carati, Colin; Kidd, Michael R
2015-01-01
Background: The Flinders Telehealth in the Home trial (FTH trial), conducted in South Australia, was an action research initiative to test and evaluate the inclusion of telehealth services and broadband access technologies for palliative care patients living in the community and home-based rehabilitation services for the elderly at home. Telehealth services at home were supported by video conferencing between a therapist, nurse, or doctor and a patient using an iPad tablet. Objective: The aims of this study are to identify which technical factors influence the quality of video conferencing in the home setting and to assess the impact of these factors on the clinical perceptions and acceptance of video conferencing for health care delivery into the home. Finally, we aim to identify any relationships between technical factors and clinical acceptance of this technology. Methods: An action research process developed several quantitative and qualitative procedures during the FTH trial to investigate technology performance and users' perceptions of the technology, including measurements of signal power, data transmission throughput, objective assessment of user perceptions of video conferencing quality, and questionnaires administered to clinical users. Results: The effectiveness of telehealth was judged by clinicians as equivalent to or better than a home visit on 192 of 268 occasions (71.6%), and clinicians rated the experience of conducting a telehealth session compared with a home visit as equivalent or better in 90.3% (489/540) of the sessions. The quality of video conferencing when using a third-generation (3G) mobile data service rather than broadband fiber-based services was a concern, as 23.5% (220/936) of the calls failed during the telehealth sessions. The experimental field tests indicated that video conferencing audio and video quality was worse when using mobile data services than when using fiber-to-the-home services. In addition, statistically significant associations were found between audio/video quality and patient comfort with the technology, as well as the clinician ratings of the effectiveness of telehealth. Conclusions: These results showed that the quality of video conferencing when using 3G-based mobile data services instead of broadband fiber-based services was worse, due to failed calls, audio/video jitter, and video pixelation during the telehealth sessions. Nevertheless, clinicians felt able to deliver effective services to patients at home using 3G-based mobile data services. PMID:26381104
Clay-Williams, Robyn; Baysari, Melissa; Taylor, Natalie; Zalitis, Dianne; Georgiou, Andrew; Robinson, Maureen; Braithwaite, Jeffrey; Westbrook, Johanna
2017-08-14
Telephone consultation and triage services are increasingly being used to deliver health advice. Availability of high speed internet services in remote areas allows healthcare providers to move from telephone to video telehealth services. Current approaches for assessing video services have limitations. This study aimed to identify the challenges for service providers associated with transitioning from audio to video technology. Using a mixed-method, qualitative approach, we observed training of service providers who were required to switch from telephone to video, and conducted pre- and post-training interviews with 15 service providers and their trainers on the challenges associated with transitioning to video. Two full days of simulation training were observed. Data were transcribed and analysed using an inductive approach; a modified constant comparative method was employed to identify common themes. We found three broad categories of issues likely to affect implementation of the video service: social, professional, and technical. Within these categories, eight sub-themes were identified; they were: enhanced delivery of the health service, improved health advice for people living in remote areas, safety concerns, professional risks, poor uptake of video service, system design issues, use of simulation for system testing, and use of simulation for system training. This study identified a number of unexpected potential barriers to successful transition from telephone to the video system. Most prominent were technical and training issues, and personal safety concerns about transitioning from telephone to video media. Addressing identified issues prior to implementation of a new video telehealth system is likely to improve effectiveness and uptake.
Intense Collaboration: Human and Technical Requirements for Agile C2
2009-06-01
following tools: email, instant messaging, video and audio conferencing, and screen sharing. Three questions were posed to begin the focused discussion on...records keywords and concepts that have been discussed and links them back to the audio of the meeting that referred to these things • Need to support...audio, textual forms of communication with a variety of display capabilities (white boards, knowledge walls, etc.) Innovation Team members with
Using a new, free spectrograph program to critically investigate acoustics
NASA Astrophysics Data System (ADS)
Ball, Edward; Ruiz, Michael J.
2016-11-01
We have developed an online spectrograph program with a bank of over 30 audio clips to visualise a variety of sounds. Our audio library includes everyday sounds such as speech, singing, musical instruments, birds, a baby, a cat, a dog, sirens, a jet, thunder, and screaming. We provide a link to a video of the sound sources superimposed with their respective spectrograms in real time. Readers can use our spectrograph program to view our library, open their own desktop audio files, and use the program in real time with a computer microphone.
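The core of any spectrograph can be sketched in a few lines: slice the signal into frames and take a DFT magnitude per frame. This is not the authors' program; the frame size is an illustrative assumption, and real spectrographs add windowing and frame overlap.

```python
# Minimal spectrogram sketch: per-frame DFT magnitudes, time along one
# axis and frequency bin along the other.
import cmath
import math

def dft_magnitudes(frame):
    """Magnitude of each DFT bin (up to Nyquist) for one frame of samples."""
    n = len(frame)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(frame)))
            for k in range(n // 2)]

def spectrogram(signal, frame_len=64):
    """List of per-frame magnitude spectra for non-overlapping frames."""
    return [dft_magnitudes(signal[i:i + frame_len])
            for i in range(0, len(signal) - frame_len + 1, frame_len)]

# A pure tone with exactly 4 cycles per 64-sample frame shows up as a
# single bright row (bin 4) in every column of the spectrogram.
tone = [math.sin(2 * math.pi * 4 * i / 64) for i in range(256)]
spec = spectrogram(tone, frame_len=64)
peak_bins = [mags.index(max(mags)) for mags in spec]
assert peak_bins == [4, 4, 4, 4]
```

For real-time microphone input one would replace the naive O(n²) DFT with an FFT and apply a Hann window to each frame.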
Low Latency Audio Video: Potentials for Collaborative Music Making through Distance Learning
ERIC Educational Resources Information Center
Riley, Holly; MacLeod, Rebecca B.; Libera, Matthew
2016-01-01
The primary purpose of this study was to examine the potential of LOw LAtency (LOLA), a low-latency audio-visual technology designed to allow simultaneous music performance, as a distance learning tool for musical styles in which synchronous playing is an integral aspect of the learning process (e.g., jazz, folk styles). The secondary purpose was…
2002-01-01
…speeds that are sufficient to download and play the audio/video content in near real-time. Most users at home are connected via analog modems, which are… significantly slower (28.8K, 56K). Audio files can take several minutes to load, and the user may experience pauses and buffering. While not ideal…
Live Ultra-High Definition from the International Space Station
NASA Technical Reports Server (NTRS)
Grubbs, Rodney; George, Sandy
2017-01-01
The first ever live downlink of Ultra-High Definition (UHD) video from the International Space Station (ISS) was the highlight of a 'Super Session' at the National Association of Broadcasters (NAB) in April 2017. The Ultra-High Definition video downlink from the ISS all the way to the Las Vegas Convention Center required considerable planning, pushed the limits of conventional video distribution from a spacecraft, and was the first use of High Efficiency Video Coding (HEVC) from a spacecraft. The live event at NAB will serve as a pathfinder for more routine downlinks of UHD as well as use of HEVC for conventional HD downlinks to save bandwidth. HEVC may also enable live Virtual Reality video downlinks from the ISS. This paper will describe the overall workflow and routing of the UHD video, how audio was synchronized even though the video and audio were received many seconds apart from each other, and how the demonstration paves the way for not only more efficient video distribution from the ISS, but also serves as a pathfinder for more complex video distribution from deep space. The paper will also describe how a 'live' event was staged when the UHD coming from the ISS had a latency of 10+ seconds. Finally, the paper will discuss how NASA is leveraging commercial technologies for use on-orbit vs. creating technology as was required during the Apollo Moon Program and early space age.
Night Vision Goggle Training; Development and Production of Six Video Programs
1992-11-01
SUBJECT TERMS: Multimedia, Video production, Aerial photography, Night vision, Videodisc, Image intensification, Night vision goggles. NUMBER OF PAGES: 18. …reference tool on the squadron or wing level. The programs run approximately ten… demonstrates NVG field of view, field of regard, scan techniques, image… training device modalities. These modalities include didactic and video… The production of a videodisc that will serve as an NVG audio-visual database…
Achieving perceptually-accurate aural telepresence
NASA Astrophysics Data System (ADS)
Henderson, Paul D.
Immersive multimedia requires not only realistic visual imagery but also a perceptually-accurate aural experience. A sound field may be presented simultaneously to a listener via a loudspeaker rendering system using the direct sound from acoustic sources as well as a simulation or "auralization" of room acoustics. Beginning with classical Wave-Field Synthesis (WFS), improvements are made to correct for asymmetries in loudspeaker array geometry. Presented is a new Spatially-Equalized WFS (SE-WFS) technique to maintain the energy-time balance of a simulated room by equalizing the reproduced spectrum at the listener for a distribution of possible source angles. Each reproduced source or reflection is filtered according to its incidence angle to the listener. An SE-WFS loudspeaker array of arbitrary geometry reproduces the sound field of a room with correct spectral and temporal balance, compared with classically-processed WFS systems. Localization accuracy of human listeners in SE-WFS sound fields is quantified by psychoacoustical testing. At a loudspeaker spacing of 0.17 m (equivalent to an aliasing cutoff frequency of 1 kHz), SE-WFS exhibits a localization blur of 3 degrees, nearly equal to real point sources. Increasing the loudspeaker spacing to 0.68 m (for a cutoff frequency of 170 Hz) results in a blur of less than 5 degrees. In contrast, stereophonic reproduction is less accurate with a blur of 7 degrees. The ventriloquist effect is psychometrically investigated to determine the effect of an intentional directional incongruence between audio and video stimuli. Subjects were presented with prerecorded full-spectrum speech and motion video of a talker's head as well as broadband noise bursts with a static image. The video image was displaced from the audio stimulus in azimuth by varying amounts, and the perceived auditory location measured. 
A strong bias was detectable for small angular discrepancies between audio and video stimuli for separations of less than 8 degrees for speech and less than 4 degrees with a pink noise burst. The results allow for the density of WFS systems to be selected from the required localization accuracy. Also, by exploiting the ventriloquist effect, the angular resolution of an audio rendering may be reduced when combined with spatially-accurate video.
Rouhani, R; Cronenberger, H; Stein, L; Hannum, W; Reed, A M; Wilhelm, C; Hsiao, H
1995-01-01
This paper describes the design, authoring, and development of interactive, computerized, multimedia clinical simulations in pediatric rheumatology/immunology and related musculoskeletal diseases, the development and implementation of a high speed information management system for their centralized storage and distribution, and analytical methods for evaluating the total system's educational impact on medical students and pediatric residents. An FDDI fiber optic network with client/server/host architecture is the core. The server houses digitized audio, still-image video clips and text files. A host station houses the DB2/2 database containing case-associated labels and information. Cases can be accessed from any workstation via a customized interface in AVA/2 written specifically for this application. OS/2 Presentation Manager controls, written in C, are incorporated into the interface. This interface allows SQL searches and retrievals of cases and case materials. In addition to providing user-directed clinical experiences, this centralized information management system provides designated faculty with the ability to add audio notes and visual pointers to image files. Users may browse through case materials, mark selected ones and download them for utilization in lectures or for editing and converting into 35mm slides.
Emotional climate of a pre-service science teacher education class in Bhutan
NASA Astrophysics Data System (ADS)
Rinchen, Sonam; Ritchie, Stephen M.; Bellocchi, Alberto
2016-09-01
This study explored pre-service secondary science teachers' perceptions of classroom emotional climate in the context of the Bhutanese macro-social policy of Gross National Happiness. Drawing upon sociological perspectives of human emotions and using Interaction Ritual Theory this study investigated how pre-service science teachers may be supported in their professional development. It was a multi-method study involving video and audio recordings of teaching episodes supported by interviews and the researcher's diary. Students also registered their perceptions of the emotional climate of their classroom at 3-minute intervals using audience response technology. In this way, emotional events were identified for video analysis. The findings of this study highlighted that the activities pre-service teachers engaged in matter to them. Positive emotional climate was identified in activities involving students' presentations using video clips and models, coteaching, and interactive whole class discussions. Decreases in emotional climate were identified during formal lectures and when unprepared presenters led presentations. Emotions such as frustration and disappointment characterized classes with negative emotional climate. The enabling conditions to sustain a positive emotional climate are identified. Implications for sustaining macro-social policy about Gross National Happiness are considered in light of the climate that develops in science teacher education classes.
FIRRE command and control station (C2)
NASA Astrophysics Data System (ADS)
Laird, R. T.; Kramer, T. A.; Cruickshanks, J. R.; Curd, K. M.; Thomas, K. M.; Moneyhun, J.
2006-05-01
The Family of Integrated Rapid Response Equipment (FIRRE) is an advanced technology demonstration program intended to develop a family of affordable, scalable, modular, and logistically supportable unmanned systems to meet urgent operational force protection needs and requirements worldwide. The near-term goal is to provide the best available unmanned ground systems to the warfighter in Iraq and Afghanistan. The overarching long-term goal is to develop a fully-integrated, layered force protection system of systems for our forward deployed forces that is networked with the future force C4ISR systems architecture. The intent of the FIRRE program is to reduce manpower requirements, enhance force protection capabilities, and reduce casualties through the use of unmanned systems. FIRRE is sponsored by the Office of the Under Secretary of Defense, Acquisitions, Technology and Logistics (OUSD AT&L), and is managed by the Product Manager, Force Protection Systems (PM-FPS). The FIRRE Command and Control (C2) Station supports two operators, hosts the Joint Battlespace Command and Control Software for Manned and Unmanned Assets (JBC2S), and will be able to host Mission Planning and Rehearsal (MPR) software. The C2 Station consists of an M1152 HMMWV fitted with an S-788 TYPE I shelter. The C2 Station employs five 24" LCD monitors for display of JBC2S software [1], MPR software, and live video feeds from unmanned systems. An audio distribution system allows each operator to select between various audio sources including: AN/PRC-117F tactical radio (SINCGARS compatible), audio prompts from JBC2S software, audio from unmanned systems, audio from other operators, and audio from external sources such as an intercom in an adjacent Tactical Operations Center (TOC). A power distribution system provides battery backup for momentary outages. The Ethernet network, audio distribution system, and audio/video feeds are available for use outside the C2 Station.
Delivering Instruction via Streaming Media: A Higher Education Perspective.
ERIC Educational Resources Information Center
Mortensen, Mark; Schlieve, Paul; Young, Jon
2000-01-01
Describes streaming media, an audio/video presentation that is delivered across a network so that it is viewed while being downloaded onto the user's computer, including a continuous stream of video that can be pre-recorded or live. Discusses its use for nontraditional students in higher education and reports on implementation experiences. (LRW)
Equipment Issues regarding the Collection of Video Data for Research
ERIC Educational Resources Information Center
Kung, Rebecca Lippmann; Kung, Peter; Linder, Cedric
2005-01-01
Physics education research increasingly makes use of video data for analysis of student learning and teaching practice. Collection of these data is conceptually simple but execution is often fraught with costly and time-consuming complications. This pragmatic paper discusses the development of systems to record and permanently archive audio and…
Teaching Shakespeare: Materials and Outcomes for Web-Based Instruction and Class Adjunct.
ERIC Educational Resources Information Center
Schwartz, Helen J.
Multimedia hypertext materials have instructional advantages when used as adjuncts in traditional classes and as the primary means of instruction, as illustrated in this case study of college-level Shakespeare classes. Plays become more accessible through use of audio and video resources, including video clips from play productions. Student work…
ERIC Educational Resources Information Center
Blikstad-Balas, Marte
2017-01-01
Audio- and video-recordings are increasingly popular data sources in contemporary qualitative research, making discussions about methodological implications of such recordings timelier than ever. This article goes beyond discussing practical issues and issues of "camera effect" and reactivity to identify three major challenges of using…
Code of Federal Regulations, 2011 CFR
2011-10-01
... antenna that is: (A) Used to receive video programming services via multipoint distribution services... radio, amateur (“HAM”) radio, Citizen's Band (CB) radio, and Digital Audio Radio Service (DARS) signals... reception of video programming services or devices used to receive or transmit fixed wireless signals shall...
Producing a College Video: The Sweat (and Success) Is in the Details.
ERIC Educational Resources Information Center
Hays, Tim
1994-01-01
Introduces specifics related to production elements and message elements of college videos. Outlines aspects of lighting, audio, narration, backing music, and performance music. Discusses elements of pace, physical plant, people, and programs with regard to marketing. Suggests the goal is to create a unified vision to attract the target audience.…
Students' Perceptions on Using Different Listening Assessment Methods: Audio-Only and Video Media
ERIC Educational Resources Information Center
Sulaiman, Norazean; Muhammad, Ahmad Mazli; Ganapathy, Nurul Nadiah Dewi Faizul; Khairuddin, Zulaikha; Othman, Salwa
2017-01-01
The importance and usefulness of incorporating video media elements to teach listening have become part of the general understanding and commonplace in the academia nowadays (Alonso, 2013; Macwan, 2015; Garcia, 2012). Hence, it is of vital importance that students are taught effectively and assessed accordingly on their listening skills. The…
77 FR 64514 - Sunshine Act Meeting; Open Commission Meeting; Wednesday, October 17, 2012
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-22
.../Video coverage of the meeting will be broadcast live with open captioning over the Internet from the FCC... format and alternative media, including large print/ type; digital disk; and audio and video tape. Best.... 2012-26060 Filed 10-18-12; 4:15 pm] BILLING CODE 6712-01-P ...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-20
... Apparatus Requirements for Emergency Information and Video Description: Implementation of the Twenty- First... of apparatus covered by the CVAA to provide access to the secondary audio stream used for audible... availability of accessible equipment and, if so, what those notification requirements should be. The Commission...
ISO-IEC MPEG-2 software video codec
NASA Astrophysics Data System (ADS)
Eckart, Stefan; Fogg, Chad E.
1995-04-01
Part 5 of the International Standard ISO/IEC 13818, 'Generic Coding of Moving Pictures and Associated Audio' (MPEG-2), is a Technical Report: a sample software implementation of the procedures in parts 1, 2 and 3 of the standard (systems, video, and audio). This paper focuses on the video software, which gives an example of a fully compliant implementation of the standard and of a good-quality video encoder, and serves as a tool for compliance testing. The implementation and some of the development aspects of the codec are described. The encoder is based on Test Model 5 (TM5), one of the best published, non-proprietary coding models, which was used during the MPEG-2 collaborative stage to evaluate proposed algorithms and to verify the syntax. The most important part of the Test Model is controlling the quantization parameter based on the image content and bit-rate constraints, under both signal-to-noise and psycho-optical aspects. The decoder has been successfully tested for compliance with the MPEG-2 standard, using the ISO/IEC MPEG verification and compliance bitstream test suites as stimuli.
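The quantizer control the abstract highlights works by tracking a virtual buffer: the quantization scale rises when the encoder spends bits faster than the picture's linear bit budget allows. A minimal sketch of this idea, simplified from the Test Model description (TM5 additionally keeps per-picture-type buffers and applies an activity-masking step, both omitted here; the function and argument names are illustrative):

```python
def tm5_quantiser(bits_spent, target_bits, mb_index, mb_count,
                  initial_fullness, reaction):
    """Macroblock quantization scale from virtual-buffer fullness,
    in the spirit of TM5 rate control (simplified)."""
    # The buffer fills when actual bits outrun the linear share of the
    # picture's bit budget up to this macroblock.
    fullness = initial_fullness + bits_spent - target_bits * mb_index / mb_count
    # Map fullness to the MPEG quantization scale range 1..31.
    q_scale = max(1, min(31, round(fullness * 31 / reaction)))
    return q_scale, fullness
```

Overspending relative to the budget raises the fullness, hence a coarser quantizer and fewer bits for subsequent macroblocks, which is how the encoder holds its bit-rate constraint.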
Cross-Modal Approach for Karaoke Artifacts Correction
NASA Astrophysics Data System (ADS)
Yan, Wei-Qi; Kankanhalli, Mohan S.
In this chapter, we combine adaptive sampling with video analogies (VA) to correct the audio stream in the karaoke environment κ = {κ(t) : κ(t) = (U(t), K(t)), t ∈ (t_s, t_e)}, where t_s and t_e are the start and end times respectively and U(t) is the user multimedia data. We employ multiple streams from the karaoke data K(t) = (K_V(t), K_M(t), K_S(t)), where K_V(t), K_M(t) and K_S(t) are the video, the musical accompaniment and the original singer's rendition respectively, along with the user multimedia data U(t) = (U_A(t), U_V(t)), where U_V(t) is the user video captured with a camera and U_A(t) is the user's rendition of the song. We analyze the audio and video streaming features Ψ(κ) = {Ψ(U(t), K(t))} = {Ψ(U(t)), Ψ(K(t))} = {Ψ_U(t), Ψ_K(t)} to produce the corrected singing, namely the output U′(t), which is made as close as possible to the original singer's rendition. Note that Ψ represents any kind of feature processing.
ERIC Educational Resources Information Center
Diambra, Henry M.; And Others
VIDAC (Video Audio Compressed), a new technology based upon non-real-time transmission of audiovisual information via conventional television systems, has been invented by the Westinghouse Electric Corporation. This system permits time compression, during storage and transmission of the audio component of a still visual-narrative audio…
Challenges of Using Audio-Visual Aids as Warm-Up Activity in Teaching Aviation English
ERIC Educational Resources Information Center
Sahin, Mehmet; Sule, St.; Seçer, Y. E.
2016-01-01
This study aims to find out the challenges encountered in the use of video as audio-visual material as a warm-up activity in aviation English course at high school level. This study is based on a qualitative study in which focus group interview is used as the data collection procedure. The participants of focus group are four instructors teaching…
Oh, Pok-Ja; Kim, Il-Ok; Shin, Sung-Rae; Jung, Hoe-Kyung
2004-10-01
This study was conducted to develop Web-based multimedia content for Physical Examination and Health Assessment. The multimedia content was developed based on Jung's teaching and learning structure plan model, using the following 5 processes: 1) Analysis Stage, 2) Planning Stage, 3) Storyboard Framing and Production Stage, 4) Program Operation Stage, and 5) Final Evaluation Stage. The web-based multimedia content consisted of an intro movie, a main page and sub pages. On the main page, there were 6 menu bars consisting of Announcement center, Information of professors, Lecture guide, Cyber lecture, Q&A, and Data centers, and a site map which introduced 15 weeks of lectures. In the operation of the web-based multimedia content, HTML, JavaScript, Flash, and multimedia technology (audio and video) were utilized, and the content consisted of text, interactive content, animation, and audio & video. Consultation with experts in content, computer engineering, and educational technology was utilized in the development of these processes. Web-based multimedia content is expected to offer individualized and tailored learning opportunities that maximize and facilitate the effectiveness of the teaching and learning process. Therefore, multimedia content should be utilized concurrently with lectures in Physical Examination and Health Assessment classes as a vital teaching aid to make up for the weaknesses of the face-to-face teaching-learning method.
NASA Astrophysics Data System (ADS)
Vest, K. F.; Jones, P.; French, D.; Sachs, H.; Clements, F.
1985-11-01
This report discusses the design of a two-node, color video-teleconferencing system for the U.S. Navy and its installation at sites in Suitland, Maryland, and Pearl Harbor. It details the development of the audio, video, and fast-facsimile parts of the system; integration of the system into the communications network; design of a teleconference room; and installation of the system.
Code of Federal Regulations, 2011 CFR
2011-10-01
... broadcast stations, digital broadcast stations, analog cable systems, digital cable systems, wireline video systems, wireless cable systems, Direct Broadcast Satellite (DBS) services, Satellite Digital Audio Radio...
NASA Astrophysics Data System (ADS)
Palmer, Kristin Cartwright
This study examined opportunities for participation and learning for "struggling" readers in a sixth grade science classroom. Literacy practices, language differences, activity structures, and the social and cultural identities and associated practices and everyday funds of knowledge of both "struggling" and nonstruggling readers in one sixth grade science classroom were documented and analyzed using a qualitative research design. Over sixteen hours of audio and video recordings as well as numerous student work samples were transcribed and analyzed. Analyses of the classroom interactions and artifacts documented in this study revealed several important affordances available in the context of this classroom related to opportunities for speaking and listening, some uses of print texts, and student agency in interactions. Student learning was found to be constrained by macrocontextual factors, text difficulty, and student history.
User-oriented summary extraction for soccer video based on multimodal analysis
NASA Astrophysics Data System (ADS)
Liu, Huayong; Jiang, Shanshan; He, Tingting
2011-11-01
An advanced user-oriented summary extraction method for soccer video is proposed in this work. First, an algorithm for user-oriented summary extraction from soccer video is introduced: a novel approach that integrates multimodal analysis, including extraction and analysis of stadium features, moving-object features, audio features and text features. From these features, the semantics of the soccer video and its highlight model are obtained. Highlight positions can then be located and assembled according to their highlight degrees to obtain the video summary. The experimental results on sports video of World Cup soccer games indicate that multimodal analysis is effective for soccer video browsing and retrieval.
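The assembly step of such a summarizer can be illustrated schematically: score each candidate segment by a weighted combination of its multimodal feature scores, keep the highest-scoring segments within a duration budget, and restore timeline order. The feature names, weights, and data layout below are hypothetical, not taken from the paper:

```python
def summarize(segments, weights, budget):
    """Rank video segments by a weighted 'highlight degree' over multimodal
    feature scores and keep the top ones within a duration budget."""
    def degree(seg):
        return sum(weights[name] * seg["scores"][name] for name in weights)

    ranked = sorted(segments, key=degree, reverse=True)
    summary, used = [], 0.0
    for seg in ranked:
        if used + seg["duration"] <= budget:
            summary.append(seg)
            used += seg["duration"]
    # Present selected highlights in their original timeline order.
    return sorted(summary, key=lambda s: s["start"])

# Hypothetical modality weights for stadium, motion, audio and text cues.
weights = {"stadium": 0.2, "motion": 0.3, "audio": 0.3, "text": 0.2}
```

A user-oriented variant would simply expose `weights` to the viewer, so that, say, a fan of goal replays can upweight the audio (crowd-noise) cue.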
The power of digital audio in interactive instruction: An unexploited medium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pratt, J.; Trainor, M.
1989-01-01
Widespread use of audio in computer-based training (CBT) arrived with the advent of interactive videodisc technology. This paper discusses the alternative of digital audio, which, unlike videodisc audio, enables one to rapidly revise the audio used in the CBT and which may be used in non-video CBT applications as well. We also discuss techniques used in audio script writing, editing, and production. Results from evaluations indicate a high degree of user satisfaction. 4 refs.
The construction of power in family medicine bedside teaching: a video observation study.
Rees, Charlotte E; Ajjawi, Rola; Monrouxe, Lynn V
2013-02-01
Bedside teaching is essential for helping students develop skills, reasoning and professionalism, and involves the learning triad of student, patient and clinical teacher. Although current rhetoric espouses the sharing of power, the medical workplace is imbued with power asymmetries. Power is context-specific and although previous research has explored some elements of the enactment and resistance of power within bedside teaching, this exploration has been conducted within hospital rather than general practice settings. Furthermore, previous research has employed audio-recorded rather than video-recorded observation and has therefore focused on language and para-language at the expense of non-verbal communication and human-material interaction. A qualitative design was adopted employing video- and audio-recorded observations of seven bedside teaching encounters (BTEs), followed by short individual interviews with students, patients and clinical teachers. Thematic and discourse analyses of BTEs were conducted. Power is constructed by students, patients and clinical teachers throughout different BTE activities through the use of linguistic, para-linguistic and non-verbal communication. In terms of language, participants construct power through the use of questions, orders, advice, pronouns and medical/health belief talk. With reference to para-language, participants construct power through the use of interruption and laughter. In terms of non-verbal communication, participants construct power through physical positioning and the possession or control of medical materials such as the stethoscope. Using this paper as a trigger for discussion, we encourage students and clinical teachers to reflect critically on how their verbal and non-verbal communication constructs power in bedside teaching. 
Students and clinical teachers need to develop their awareness of what power is, how it can be constructed and shared, and what it means for the student-patient-doctor relationship within bedside teaching. © Blackwell Publishing Ltd 2013.
A functional video-based anthropometric measuring system
NASA Technical Reports Server (NTRS)
Nixon, J. H.; Cater, J. P.
1982-01-01
A high-speed anthropometric three-dimensional measurement system using the Selcom Selspot motion tracking instrument for visual data acquisition is discussed. A three-dimensional scanning system was created which collects video, audio, and performance data on a single standard video cassette recorder. Recording rates of 1 megabit per second for periods of up to two hours are possible with the system design. A high-speed, off-the-shelf motion analysis system was used for collecting optical information. The video recording adapter (VRA) is interfaced to the Selspot data acquisition system.
ERIC Educational Resources Information Center
Purifoy, George R., Jr.
This report presents a detailed description of the methods by which airborne video recording will be utilized in training Air Force pilots, and presents the format for an experiment testing the effectiveness of such training. Portable airborne recording with ground playback permits more economical and efficient teaching of the critical visual and…
ERIC Educational Resources Information Center
Ajjawi, Rola; Rees, Charlotte; Monrouxe, Lynn V.
2015-01-01
Purpose: This paper aims to explore how opportunities for learning clinical skills are negotiated within bedside teaching encounters (BTEs). Bedside teaching, within the medical workplace, is considered essential for helping students develop their clinical skills. Design/methodology/approach: An audio and/or video observational study examining…
Making the Decision to Provide Enhanced Podcasts to Post-Secondary Science Students
ERIC Educational Resources Information Center
Holbrook, Jane; Dupont, Christine
2011-01-01
Providing students with supplementary course materials such as audio podcasts, enhanced podcasts, video podcasts and other forms of lecture-capture video files after a lecture is now a common occurrence in many post-secondary courses. We used an online questionnaire to ask students how helpful enhanced podcasts were for a variety of course…
Vocabulary Learning through Viewing Video: The Effect of Two Enhancement Techniques
ERIC Educational Resources Information Center
Montero Perez, Maribel; Peters, Elke; Desmet, Piet
2018-01-01
While most studies on L2 vocabulary learning through input have addressed learners' vocabulary uptake from written text, this study focuses on audio-visual input. In particular, we investigate the effects of enhancing video by (1) adding different types of L2 subtitling (i.e. no captioning, full captioning, keyword captioning, and glossed keyword…
Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard
2013-01-01
Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.
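Mean average precision (MAP), the figure of merit quoted above, averages per-query "average precision" over a set of queries, where each query's AP is the mean of the precision values at the ranks of its relevant results. A minimal, generic sketch of the metric itself (not of the authors' violence-detection system):

```python
def average_precision(ranked_relevance):
    """AP for one query: ranked_relevance is a list of 0/1 relevance flags
    in rank order (best-ranked result first)."""
    hits, precision_sum = 0, 0.0
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precision_sum += hits / rank  # precision at this relevant rank
    return precision_sum / hits if hits else 0.0

def mean_average_precision(queries):
    """MAP over several queries' ranked relevance lists."""
    return sum(average_precision(q) for q in queries) / len(queries)
```

A perfect ranking (all relevant items first) scores 1.0, so the .398 MAP reported above should be read against that ceiling on a hard, fully realistic test set.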
PMID: 24391704
Gorresen, Paulo Marcos; Cryan, Paul; Montoya-Aiona, Kristina; Bonaccorso, Frank
2017-01-01
Bats vocalize during flight as part of the sensory modality called echolocation, but very little is known about whether flying bats consistently call. Occasional vocal silence during flight when bats approach prey or conspecifics has been documented for relatively few species and situations. Bats flying alone in clutter-free airspace are not known to forgo vocalization, yet prior observations suggested possible silent behavior in certain, unexpected situations. Determining when, why, and where silent behavior occurs in bats will help evaluate major assumptions of a primary monitoring method for bats used in ecological research, management, and conservation. In this study, we recorded flight activity of Hawaiian hoary bats (Lasiurus cinereus semotus) under seminatural conditions using both thermal video cameras and acoustic detectors. Simultaneous video and audio recordings from 20 nights of observation at 10 sites were analyzed for correspondence between detection methods, with a focus on video observations in three distance categories for which accompanying vocalizations were detected. Comparison of video and audio detections revealed that a high proportion of Hawaiian hoary bats “seen” on video were not simultaneously “heard.” On average, only about one in three visual detections within a night had an accompanying call detection, but this varied greatly among nights. Bats flying on curved flight paths and individuals nearer the cameras were more likely to be detected by both methods. Feeding and social calls were detected, but no clear pattern emerged from the small number of observations involving closely interacting bats. These results may indicate that flying Hawaiian hoary bats often forgo echolocation, or do not always vocalize in a way that is detectable with common sampling and monitoring methods. 
Possible reasons for the low correspondence between visual and acoustic detections range from methodological to biological and include a number of biases associated with the propagation and detection of sound, cryptic foraging strategies, or conspecific presence. Silent flight behavior may be more prevalent in echolocating bats than previously appreciated, has profound implications for ecological research, and deserves further characterization and study.
Engineering a Live UHD Program from the International Space Station
NASA Technical Reports Server (NTRS)
Grubbs, Rodney; George, Sandy
2017-01-01
The first-ever live downlink of Ultra-High Definition (UHD) video from the International Space Station (ISS) was the highlight of a “Super Session” at the National Association of Broadcasters (NAB) Show in April 2017. Ultra-High Definition is four times the resolution of “full HD” or “1080P” video. Also referred to as “4K”, the Ultra-High Definition video downlink from the ISS all the way to the Las Vegas Convention Center required considerable planning, pushed the limits of conventional video distribution from a spacecraft, and was the first use of High Efficiency Video Coding (HEVC) from a spacecraft. The live event at NAB will serve as a pathfinder for more routine downlinks of UHD as well as use of HEVC for conventional HD downlinks to save bandwidth. A similar demonstration was conducted in 2006 with the Discovery Channel to demonstrate the ability to stream HDTV from the ISS. This paper will describe the overall workflow and routing of the UHD video, how audio was synchronized even though the video and audio were received many seconds apart from each other, and how the demonstration paves the way for not only more efficient video distribution from the ISS, but also serves as a pathfinder for more complex video distribution from deep space. The paper will also describe how a “live” event was staged when the UHD video coming from the ISS had a latency of 10+ seconds. In addition, the paper will touch on the unique collaboration between the inherently governmental aspects of the ISS, commercial partners Amazon and Elemental, and the National Association of Broadcasters.
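The synchronization challenge described above (audio and video arriving many seconds apart) is commonly handled by buffering both streams and pairing elements on embedded timestamps rather than arrival order. A hedged sketch of that general idea (the function name and data are illustrative assumptions, not details of the actual NASA pipeline):

```python
def align_streams(video_frames, audio_samples):
    """Pair each buffered video frame with the audio chunk whose embedded
    timestamp is closest, regardless of the order in which they arrived.
    Each element is a (timestamp_seconds, payload) tuple."""
    audio_sorted = sorted(audio_samples)
    pairs = []
    for v_ts, v_payload in sorted(video_frames):
        # nearest audio timestamp (linear scan; adequate for a sketch)
        a_ts, a_payload = min(audio_sorted, key=lambda a: abs(a[0] - v_ts))
        pairs.append((v_ts, v_payload, a_payload))
    return pairs
```

Because pairing is driven by timestamps, a frame that arrives ten seconds late still lands next to the audio recorded at the same moment, at the cost of the buffering delay that made the NAB event “live” only after a fixed latency.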
NASA Astrophysics Data System (ADS)
Liu, Lei
The dissertation aims to achieve two goals. First, it attempts to establish a new theoretical framework---the collaborative scientific conceptual change model, which explicitly attends to social factors and the epistemic practices of science, to understand conceptual change. Second, it reports the findings of a classroom study investigating how to apply this theoretical framework to examine the trajectories of collaborative scientific conceptual change in a CSCL environment and provide pedagogical implications. Two simulations were designed to help students make connections between the macroscopic substances and the aperceptual microscopic entities and underlying processes. The reported study focused on analyzing the aggregated data from all participants and the video and audio data from twenty focal groups' collaborative activities and the process of their conceptual development in two classroom settings. Mixed quantitative and qualitative analyses were applied to the video/audio data. Overall, participants showed significant improvements from pretest to posttest on system understanding. Group and teacher effects as well as group variability were detected in both students' posttest performance and their collaborative activities, and variability emerged in group interaction. Multiple data analyses found that attributes of collaborative discourse and epistemic practices made a difference in student learning. Generating warranted claims in discourse, as well as predicting, coordinating theory and evidence, and modifying knowledge in epistemic practices, had an impact on students' conceptual understanding. However, modifying knowledge was found to be negatively related to students' learning effect. The case studies show how groups differed in using the computer tools as a medium to conduct collaborative discourse and epistemic practices.
Only with certain combination of discourse features and epistemic practices can the group interaction lead to successful convergent understanding. The results of the study imply that the collaborative scientific conceptual change model is an effective framework to study conceptual change and the simulation environment may mediate the development of successful collaborative interactions (including collaborative discourse and epistemic practices) that lead to collaborative scientific conceptual change.
Xiao, Y; MacKenzie, C; Orasanu, J; Spencer, R; Rahman, A; Gunawardane, V
1999-01-01
To determine what information sources are used during a remote diagnosis task. Experienced trauma care providers viewed segments of videotaped initial trauma patient resuscitation and airway management. Experiment 1 collected responses from anesthesiologists to probing questions during and after the presentation of recorded video materials. Experiment 2 collected the responses from three types of care providers (anesthesiologists, nurses, and surgeons). Written and verbal responses were scored according to detection of critical events in video materials and categorized according to their content. Experiment 3 collected visual scanning data using an eyetracker during the viewing of recorded video materials from the three types of care providers. Eye-gaze data were analyzed in terms of focus on various parts of the videotaped materials. Care providers were found to be unable to detect several critical events. The three groups of subjects studied (anesthesiologists, nurses, and surgeons) focused on different aspects of videotaped materials. When the remote events and activities are multidisciplinary and rapidly changing, experts linked with audio-video-data connections may encounter difficulties in comprehending remote activities, and their information usage may be biased. Special training is needed for the remote decision-maker to appreciate tasks outside his or her speciality and beyond the boundaries of traditional divisions of labor.
Recording and reading of information on optical disks
NASA Astrophysics Data System (ADS)
Bouwhuis, G.; Braat, J. J. M.
In storing video-program information in a spiral track on a disk, difficulties arise because the bandwidth for video is much greater than for audio signals. An attractive solution was found in optical storage. The optical noncontact method is free of wear and allows for fast random access. Initial problems regarding a suitable light source could be overcome with the aid of appropriate laser devices. The basic concepts of optical storage on disks are treated insofar as they are relevant for the optical arrangement. A general description is provided of a video, a digital audio, and a data storage system. Scanning spot microscopy for recording and reading of optical disks is discussed, giving attention to recording of the signal, the readout of optical disks, the readout of digitally encoded signals, and cross talk. Tracking systems are also considered, taking into account the generation of error signals for radial tracking and the generation of focus error signals.
Maximizing ship-to-shore connections via telepresence technologies
NASA Astrophysics Data System (ADS)
Fundis, A. T.; Kelley, D. S.; Proskurowski, G.; Delaney, J. R.
2012-12-01
Live connections to offshore oceanographic research via telepresence technologies enable onshore scientists, students, and the public to observe and participate in active research as it is happening. As part of the ongoing construction effort of the NSF's Ocean Observatories Initiative's cabled network, the VISIONS'12 expedition included a wide breadth of activities to allow the public, students, and scientists to interact with a sea-going expedition. Here we describe our successes and lessons learned in engaging these onshore audiences through the various outreach efforts employed during the expedition including: 1) live high-resolution video and audio streams from the seafloor and ship; 2) live connections to science centers, aquaria, movie theaters, and undergraduate classrooms; 3) social media interactions; and 4) an onboard immersion experience for undergraduate and graduate students.
NASA Astrophysics Data System (ADS)
Newman, R. L.
2002-12-01
How many images can you display at one time with PowerPoint without getting "postage stamps"? Do you have fantastic datasets that you cannot view because your computer is too slow/small? Do you assume a few 2-D images of a 3-D picture are sufficient? High-end visualization centers can minimize and often eliminate these problems. The new visualization center [http://siovizcenter.ucsd.edu] at Scripps Institution of Oceanography [SIO] immerses users into a virtual world by projecting 3-D images onto a Panoram GVR-120E wall-sized floor-to-ceiling curved screen [7' x 23'] that has 3.2 mega-pixels of resolution. The Infinite Reality graphics subsystem is driven by a single-pipe SGI Onyx 3400 with a system bandwidth of 44 Gbps. The Onyx is powered by 16 MIPS R12K processors and 16 GB of addressable memory. The system is also equipped with transmitters and LCD shutter glasses which permit stereographic 3-D viewing of high-resolution images. This center is ideal for groups of up to 60 people who can simultaneously view these large-format images. A wide range of hardware and software is available, giving the users a totally immersive working environment in which to display, analyze, and discuss large datasets. The system enables simultaneous display of video and audio streams from sources such as SGI megadesktop and stereo megadesktop, S-VHS video, DVD video, and video from a Macintosh or PC. For instance, one-third of the screen might be displaying S-VHS video from a remotely-operated-vehicle [ROV], while the remaining portion of the screen might be used for an interactive 3-D flight over the same parcel of seafloor. The video and audio combinations using this system are numerous, allowing users to combine and explore data and images in innovative ways, greatly enhancing scientists' ability to visualize, understand and collaborate on complex datasets.
In the not-distant future, with the rapid growth in networking speeds in the US, it will be possible for Earth Sciences Departments to collaborate effectively while limiting the amount of physical travel required. This includes porting visualization content to the popular, low-cost Geowall visualization systems, and providing web-based access to databanks filled with stock geoscience visualizations.
High performance MPEG-audio decoder IC
NASA Technical Reports Server (NTRS)
Thorn, M.; Benbassat, G.; Cyr, K.; Li, S.; Gill, M.; Kam, D.; Walker, K.; Look, P.; Eldridge, C.; Ng, P.
1993-01-01
The emerging digital audio and video compression technology brings both an opportunity and a new challenge to IC design. The pervasive application of compression technology to consumer electronics will require high volume, low cost IC's and fast time to market of the prototypes and production units. At the same time, the algorithms used in the compression technology result in complex VLSI IC's. The conflicting challenges of algorithm complexity, low cost, and fast time to market have an impact on device architecture and design methodology. The work presented in this paper is about the design of a dedicated, high precision, Moving Picture Experts Group (MPEG) audio decoder.
Audio-video decision support for patients: the documentary genre as a basis for decision aids.
Volandes, Angelo E; Barry, Michael J; Wood, Fiona; Elwyn, Glyn
2013-09-01
Decision support tools are increasingly using audio-visual materials. However, disagreement exists about the use of audio-visual materials as they may be subjective and biased. This is a literature review of the major texts for documentary film studies to extrapolate issues of objectivity and bias from film to decision support tools. The key features of documentary films are that they attempt to portray real events and that the attempted reality is always filtered through the lens of the filmmaker. The same key features can be said of decision support tools that use audio-visual materials. Three concerns arising from documentary film studies as they apply to the use of audio-visual materials in decision support tools include whose perspective matters (stakeholder bias), how to choose among audio-visual materials (selection bias) and how to ensure objectivity (editorial bias). Decision science needs to start a debate about how audio-visual materials are to be used in decision support tools. Simply because audio-visual materials may be subjective and open to bias does not mean that we should not use them. Methods need to be found to ensure consensus around balance and editorial control, such that audio-visual materials can be used. © 2011 John Wiley & Sons Ltd.
A Scalable Multimedia Streaming Scheme with CBR-Transmission of VBR-Encoded Videos over the Internet
ERIC Educational Resources Information Center
Kabir, Md. H.; Shoja, Gholamali C.; Manning, Eric G.
2006-01-01
Streaming audio/video contents over the Internet requires large network bandwidth and timely delivery of media data. A streaming session is generally long and also needs a large I/O bandwidth at the streaming server. A streaming server, however, has limited network and I/O bandwidth. For this reason, a streaming server alone cannot scale a…
ERIC Educational Resources Information Center
Mirriahi, Negin; Jovanovic, Jelena; Dawson, Shane; Gaševic, Dragan; Pardo, Abelardo
2018-01-01
The rapid growth of blended and online learning models in higher education has resulted in a parallel increase in the use of audio-visual resources among students and teachers. Despite the heavy adoption of video resources, there have been few studies investigating their effect on learning processes and even less so in the context of academic…
VID-R and SCAN: Tools and Methods for the Automated Analysis of Visual Records.
ERIC Educational Resources Information Center
Ekman, Paul; And Others
The VID-R (Visual Information Display and Retrieval) system that enables computer-aided analysis of visual records is composed of a film-to-television chain, two videotape recorders with complete remote control of functions, a video-disc recorder, three high-resolution television monitors, a teletype, a PDP-8, a video and audio interface, three…
ERIC Educational Resources Information Center
Wei, Ming
2011-01-01
This study investigated the use of discourse markers (DMs) by college learners of English in China. It compared the use of DMs for four discourse functions by students at different proficiency levels. An audio-video instrument called Video Oral Communication Instrument was conducted to elicit ratable speech samples. Fraser's (1999) taxonomy was…
Students' Aesthetic Experiences of Playing Exergames: A Practical Epistemology Analysis of Learning
ERIC Educational Resources Information Center
Maivorsdotter, Ninitha; Quennerstedt, Mikael; Öhman, Marie
2015-01-01
The aim of this study was to explore Swedish junior high school students meaning-making of participating in exergaming in school based on their aesthetic judgments during game play. A transactional approach, drawing on the work of John Dewey, was used in the study and the data consisted of video- and audio recordings of ongoing video gaming. A…
The Moving Image in Education Research: Reassembling the Body in Classroom Video Data
ERIC Educational Resources Information Center
de Freitas, Elizabeth
2016-01-01
While audio recordings and observation might have dominated past decades of classroom research, video data is now the dominant form of data in the field. Ubiquitous videography is standard practice today in archiving the body of both the teacher and the student, and vast amounts of classroom and experiment clips are stored in online archives. Yet…
Investigating Young Children's Talk about the Media
ERIC Educational Resources Information Center
Grace, Donna J.; Henward, Allison S.
2013-01-01
This study was an investigation into the ways in which two classes of six- and seven-year-old children in Hawaii talked about the media. The children were shown video clips from a variety of media and asked to respond both orally and in writing. The qualitative data gathered in this study were researcher notes, video and audio-taped focus group…
Twenty-Five Years of Dynamic Growth.
ERIC Educational Resources Information Center
Pipes, Lana
1980-01-01
Discusses developments in instructional technology in the past 25 years in the areas of audio, video, micro-electronics, social evolution, the space race, and living with rapidly changing technology. (CMV)
Recording Technologies: Sights & Sounds. Resources in Technology.
ERIC Educational Resources Information Center
Deal, Walter F., III
1994-01-01
Provides information on recording technologies such as laser disks, audio and videotape, and video cameras. Presents a design brief that includes objectives, student outcomes, and a student quiz. (JOW)
Chelyabinsk: Portrait of an asteroid airburst
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kring, David A.; Boslough, Mark
Video and audio from hundreds of smartphones and dashboard cameras combined with seismic, acoustic, and satellite measurements provide the first precise documentation of a 10 000-ton asteroid explosion.
Next Generation Integrated Environment for Collaborative Work Across Internets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harvey B. Newman
2009-02-24
We are now well-advanced in our development, prototyping and deployment of a high performance next generation Integrated Environment for Collaborative Work. The system, aimed at using the capability of ESnet and Internet2 for rapid data exchange, is based on the Virtual Room Videoconferencing System (VRVS) developed by Caltech. The VRVS system has been chosen by the Internet2 Digital Video (I2-DV) Initiative as a preferred foundation for the development of advanced video, audio and multimedia collaborative applications by the Internet2 community. Today, the system supports high-end, broadcast-quality interactivity, while enabling a wide variety of clients (Mbone, H.323) to participate in the same conference by running different standard protocols in different contexts with different bandwidth connection limitations, has a fully Web-integrated user interface, developers and administrative APIs, a widely scalable video network topology based on both multicast domains and unicast tunnels, and demonstrated multiplatform support. This has led to its rapidly expanding production use for national and international scientific collaborations in more than 60 countries. We are also in the process of creating a 'testbed video network' and developing the necessary middleware to support a set of new and essential requirements for rapid data exchange, and a high level of interactivity in large-scale scientific collaborations. These include a set of tunable, scalable differentiated network services adapted to each of the data streams associated with a large number of collaborative sessions, policy-based and network state-based resource scheduling, authentication, and optional encryption to maintain confidentiality of inter-personal communications. High performance testbed video networks will be established in ESnet and Internet2 to test and tune the implementation, using a few target application-sets.
YouTube as a patient-information source for root canal treatment.
Nason, K; Donnelly, A; Duncan, H F
2016-12-01
To assess the content and completeness of YouTube™ as an information source for patients undergoing root canal treatment procedures. YouTube™ (https://www.youtube.com/) was searched for information using three relevant treatment search terms ('endodontics', 'root canal' and 'root canal treatment'). After exclusions (language, no audio, >15 min, duplicates), 20 videos per search term were selected. General video assessment included duration, ownership, views, age, likes/dislikes, target audience and video/audio quality, whilst content was analysed under six categories ('aetiology', 'anatomy', 'symptoms', 'procedure', 'postoperative course' and 'prognosis'). Content was scored for completeness level and statistically analysed using ANOVA and post hoc Tukey's test (P < 0.05). To obtain 60 acceptable videos, 124 were assessed. Depending on the search term employed, the video content and ownership differed markedly. There was wide variation in both the number of video views and 'likes/dislikes'. The average video age was 788 days. In total, 46% of videos were 'posted' by a dentist/specialist source; however, this was search term specific, rising to 70% of uploads for the search 'endodontic', whilst laypersons contributed 18% of uploads for the search 'root canal treatment'. Every video lacked content in the designated six categories, although 'procedure' details were covered more frequently and in better detail than other categories. Videos posted by dental professionals (P = 0.046) and commercial sources (P = 0.009) were significantly more complete than videos posted by laypeople. YouTube™ videos for endodontic search terms varied significantly by source and content and were generally incomplete. The danger of patient reliance on YouTube™ is highlighted, as is the need for endodontic professionals to play an active role in directing patients towards alternative high-quality information sources. © 2015 International Endodontic Journal.
Published by John Wiley & Sons Ltd.
Code of Federal Regulations, 2011 CFR
2011-01-01
... as that term is defined in Section 4 of the Stevenson-Wydler Technology Innovation Act of 1980, as..., software, audio/video production, technology application assessment generated pursuant to Section 11(c) of...
Ikkatai, Yuko; Okanoya, Kazuo; Seki, Yoshimasa
2016-07-01
Humans communicate with one another not only face-to-face but also via modern telecommunication methods such as television and video conferencing. We readily detect the difference between people actively communicating with us and people merely acting via a broadcasting system. We developed an animal model of this novel communication method seen in humans to determine whether animals also make this distinction. We built a system for two animals to interact via audio-visual equipment in real-time, to compare behavioral differences between two conditions, an "interactive two-way condition" and a "non-interactive (one-way) condition." We measured birds' responses to stimuli which appeared in these two conditions. We used budgerigars, which are small, gregarious birds, and found that the frequency of vocal interaction with other individuals did not differ between the two conditions. However, body synchrony between the two birds was observed more often in the interactive condition, suggesting budgerigars recognized the difference between these interactive and non-interactive conditions on some level. Copyright © 2016 Elsevier B.V. All rights reserved.
Perceiving referential intent: Dynamics of reference in natural parent-child interactions
Trueswell, John C.; Lin, Yi; Armstrong, Benjamin; Cartmill, Erica A.; Goldin-Meadow, Susan; Gleitman, Lila R.
2016-01-01
Two studies are presented which examined the temporal dynamics of the social-attentive behaviors that co-occur with referent identification during natural parent-child interactions in the home. Study 1 focused on 6.2 hours of videos of 56 parents interacting during everyday activities with their 14–18 month-olds, during which parents uttered common nouns as parts of spontaneously occurring utterances. Trained coders recorded, on a second-by-second basis, parent and child attentional behaviors relevant to reference in the period (40 sec.) immediately surrounding parental naming. The referential transparency of each interaction was independently assessed by having naïve adult participants guess what word the parent had uttered in these video segments, but with the audio turned off, forcing them to use only non-linguistic evidence available in the ongoing stream of events. We found a great deal of ambiguity in the input along with a few potent moments of word-referent transparency; these transparent moments have a particular temporal signature with respect to parent and child attentive behavior: it was the object’s appearance and/or the fact that it captured parent/child attention at the moment the word was uttered, not the presence of the object throughout the video, that predicted observers’ accuracy. Study 2 experimentally investigated the precision of the timing relation, and whether it has an effect on observer accuracy, by disrupting the timing between when the word was uttered and the behaviors present in the videos as they were originally recorded. Disrupting timing by only +/− 1 to 2 sec. reduced participant confidence and significantly decreased their accuracy in word identification. The results enhance an expanding literature on how dyadic attentional factors can influence early vocabulary growth. 
By hypothesis, this kind of time-sensitive data-selection process operates as a filter on input, removing many extraneous and ill-supported word-meaning hypotheses from consideration during children’s early vocabulary learning. PMID:26775159
... because it was fifth in a list of historical classifications of common skin rash illnesses in children. ...
Code of Federal Regulations, 2012 CFR
2012-07-01
... scores, stock ticker information, extended program associated data, video and photographic images, and... digital audio radio services as defined in 17 U.S.C. 114(j)(10). Term means the period commencing January...
Code of Federal Regulations, 2011 CFR
2011-07-01
... scores, stock ticker information, extended program associated data, video and photographic images, and... digital audio radio services as defined in 17 U.S.C. 114(j)(10). Term means the period commencing January...
Visual communication and the content and style of conversation.
Rutter, D R; Stephenson, G M; Dewey, M E
1981-02-01
Previous research suggests that visual communication plays a number of important roles in social interaction. In particular, it appears to influence the content of what people say in discussions, the style of their speech, and the outcomes they reach. However, the findings are based exclusively on comparisons between face-to-face conversations and audio conversations, in which subjects sit in separate rooms and speak over a microphone-headphone intercom which precludes visual communication. Interpretation is difficult, because visual communication is confounded with physical presence, which itself makes available certain cues denied to audio subjects. The purpose of this paper is to report two experiments in which the variables were separated and content and style were re-examined. The first made use of blind subjects, and again compared the face-to-face and audio conditions. The second returned to sighted subjects, and examined four experimental conditions: face-to-face; audio; a curtain condition in which subjects sat in the same room but without visual communication; and a video condition in which they sat in separate rooms and communicated over a television link. Neither visual communication nor physical presence proved to be the critical variable. Instead, the two sources of cues combined, such that content and style were influenced by the aggregate of available cues. The more cueless the settings, the more task-oriented, depersonalized and unspontaneous the conversation. The findings also suggested that the primary effect of cuelessness is to influence verbal content, and that its influence on both style and outcome occurs indirectly, through the mediation of content.
Low-cost mm-wave Doppler/FMCW transceivers for ground surveillance applications
NASA Astrophysics Data System (ADS)
Hansen, H. J.; Lindop, R. W.; Majstorovic, D.
2005-12-01
A 35 GHz Doppler CW/FMCW transceiver (Equivalent Radiated Power ERP = 30 dBm) has been assembled and its operation described. Both instantaneous beat signals (relating to range in FMCW mode) and Doppler signals (relating to targets moving at ~1.5 m/s) exhibit audio frequencies. Consequently, the radar processing is provided by a laptop PC using its inbuilt video-audio media system with appropriate MathWorks software. The implications of radar-on-chip developments are addressed.
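The audio-frequency beat and Doppler signals noted above follow from the standard linear-FMCW and Doppler relations. A hedged sketch with illustrative parameter values (the sweep bandwidth and chirp time below are assumptions for the example, not specifications of the described radar):

```python
C = 3.0e8  # speed of light, m/s

def range_from_beat(f_beat_hz, sweep_bw_hz, chirp_time_s):
    """Linear FMCW: beat frequency f_b = 2*B*R / (c*T), so R = f_b*c*T / (2*B)."""
    return f_beat_hz * C * chirp_time_s / (2.0 * sweep_bw_hz)

def doppler_shift(v_ms, f_carrier_hz=35e9):
    """CW Doppler: f_d = 2*v*f0 / c. At a 35 GHz carrier, a target moving
    at ~1.5 m/s produces a shift of ~350 Hz -- squarely in the audio band."""
    return 2.0 * v_ms * f_carrier_hz / C
```

With these relations, a 1.5 m/s target yields a 350 Hz Doppler tone, which is why an ordinary PC audio input suffices to digitize the radar output.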
Electrophysiological evidence for Audio-visuo-lingual speech integration.
Treille, Avril; Vilain, Coriandre; Schwartz, Jean-Luc; Hueber, Thomas; Sato, Marc
2018-01-31
Recent neurophysiological studies demonstrate that audio-visual speech integration partly operates through temporal expectations and speech-specific predictions. From these results, one common view is that the binding of auditory and visual, lipread, speech cues relies on their joint probability and prior associative audio-visual experience. The present EEG study examined whether visual tongue movements integrate with relevant speech sounds, despite little associative audio-visual experience between the two modalities. A second objective was to determine possible similarities and differences of audio-visual speech integration between unusual audio-visuo-lingual and classical audio-visuo-labial modalities. To this aim, participants were presented with auditory, visual, and audio-visual isolated syllables, with the visual presentation related to either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, with lingual and facial movements previously recorded by an ultrasound imaging system and a video camera. In line with previous EEG studies, our results revealed an amplitude decrease and a latency facilitation of P2 auditory evoked potentials in both audio-visuo-lingual and audio-visuo-labial conditions compared to the sum of unimodal conditions. These results argue against the view that auditory and visual speech cues solely integrate based on prior associative audio-visual perceptual experience. Rather, they suggest that dynamic and phonetic informational cues are sharable across sensory modalities, possibly through a cross-modal transfer of implicit articulatory motor knowledge. Copyright © 2017 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Calandra, Brendan; Brantley-Dias, Laurie; Yerby, Johnathan; Demir, Kadir
2018-01-01
A group of preservice science teachers edited video footage of their practice teaching to identify and isolate critical incidents. They then wrote guided reflection papers on those critical incidents using different forms of media prompts while they wrote. The authors used a counterbalanced research design to compare the quality of writing that…
1989-11-27
drive against pornography, and it has also achieved new breakthroughs and progress in eradicating pornographic materials in certain localities...September, more than 45,000 law enforcement personnel in the province made more than 5,900 inspections of bookstores and audio and video shops and stalls...on 3 October. Second, the sources of Shishi City's illegal and pornographic videotapes have been ascertained. Third, the channels through which
Reference Model for Project Support Environments Version 1.0
1993-02-28
relationship with the framework's Process Support services and with the Lifecycle Process Engineering services. Examples: * ORCA (Object-based...Design services. Examples: * ORCA (Object-based Requirements Capture and Analysis). * RETRAC (REquirements TRACeability). 4.3 Life-Cycle Process..."traditional" computer tools. Operations: Examples of audio and video processing operations include: * Create, modify, and delete sound and video data
The Uses of Media in Early Childhood Education
ERIC Educational Resources Information Center
Grossman, Bruce
1976-01-01
This article discusses the educational benefits of involving young children in the media arts and presents suggestions for using still cameras, movie cameras, audio tape recorders, and video tape recorders. (SB)
Audio-visual aid in teaching "fatty liver".
Dash, Sambit; Kamath, Ullas; Rao, Guruprasad; Prakash, Jay; Mishra, Snigdha
2016-05-06
Use of audio-visual tools to aid medical education is ever on the rise. Our study intends to find the efficacy of a video prepared on "fatty liver," a topic that is often a challenge for pre-clinical teachers, in enhancing cognitive processing and ultimately learning. We prepared a video presentation of 11:36 min, incorporating various concepts of the topic, while keeping in view Mayer's and Ellaway's guidelines for multimedia presentation. A pre-/post-test study on subject knowledge was conducted for 100 students with the video shown as the intervention. A retrospective pre-study was conducted as a survey that inquired about students' understanding of the key concepts of the topic, and feedback on our video was taken. Students performed significantly better in the post-test (mean score 8.52 vs. 5.45 in the pre-test), responded positively in the retrospective pre-test, and gave positive feedback on our video presentation. Well-designed multimedia tools can aid cognitive processing and enhance working memory capacity, as shown in our study. In times when "smart" device penetration is high, information and communication tools in medical education, which can act as an essential aid and not as a replacement for traditional curricula, can benefit students. © 2015 by The International Union of Biochemistry and Molecular Biology, 44:241-245, 2016.
Code of Federal Regulations, 2011 CFR
2011-07-01
... exceeding these size requirements (a “bulky specimen”), the Office will create a digital facsimile of the... bulky specimen. (3) In the absence of non-bulky alternatives, the Office may accept an audio or video...
Code of Federal Regulations, 2013 CFR
2013-07-01
... associated data, video and photographic images, and such other telematics and/or data services as may exist... the preexisting satellite digital audio radio services as defined in 17 U.S.C. 114(j)(10). Term means...
Code of Federal Regulations, 2014 CFR
2014-07-01
... associated data, video and photographic images, and such other telematics and/or data services as may exist... the preexisting satellite digital audio radio services as defined in 17 U.S.C. 114(j)(10). Term means...
Fiber optic multiplex optical transmission system
NASA Technical Reports Server (NTRS)
Bell, C. H. (Inventor)
1977-01-01
A multiplex optical transmission system which minimizes external interference while simultaneously receiving and transmitting video, digital data, and audio signals is described. Signals are received into subgroup mixers for blocking into respective frequency ranges. The outputs of these mixers are in turn fed to a master mixer which produces a composite electrical signal. An optical transmitter connected to the master mixer converts the composite signal into an optical signal and transmits it over a fiber optic cable to an optical receiver which receives the signal and converts it back to a composite electrical signal. A de-multiplexer is coupled to the output of the receiver for separating the composite signal back into composite video, digital data, and audio signals. A programmable optic patch board is interposed in the fiber optic cables for selectively connecting the optical signals to various receivers and transmitters.
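The subgroup-mixer/master-mixer chain described above is a frequency-division multiplexing scheme. A minimal numerical sketch (the sample rate and slot carrier frequencies are invented for illustration, not taken from the patent) shows how signals blocked into separate frequency ranges survive summation into one composite and remain separable:

```python
import numpy as np

FS = 100_000                   # sample rate in Hz (illustrative only)
t = np.arange(0, 0.1, 1 / FS)  # 0.1 s of signal

# Two subgroups blocked into distinct frequency slots, then summed
# into a single composite signal (the master-mixer step).
AUDIO_SLOT, VIDEO_SLOT = 5_000, 20_000  # Hz, hypothetical slot centres
composite = (np.sin(2 * np.pi * AUDIO_SLOT * t)
             + np.sin(2 * np.pi * VIDEO_SLOT * t))

# De-multiplexing by spectral separation: both slots are still
# recoverable as distinct peaks in the composite's spectrum.
spectrum = np.abs(np.fft.rfft(composite))
freqs = np.fft.rfftfreq(len(composite), 1 / FS)
slot_freqs = freqs[spectrum > 0.5 * spectrum.max()]
```

In the patent the same separation is done in analog hardware by the de-multiplexer after optical-to-electrical conversion; the spectrum here just demonstrates why non-overlapping frequency slots make that separation possible.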
Frid, Emma; Bresin, Roberto; Alborno, Paolo; Elblaus, Ludvig
2016-01-01
In this paper we present three studies focusing on the effect of different sound models in interactive sonification of bodily movement. We hypothesized that a sound model characterized by continuous smooth sounds would be associated with different movement characteristics than a model characterized by abrupt variations in amplitude, and that these associations could be reflected in spontaneous movement characteristics. Three subsequent studies were conducted to investigate the relationship between properties of bodily movement and sound: (1) a motion capture experiment involving interactive sonification of a group of children spontaneously moving in a room, (2) an experiment involving perceptual ratings of sonified movement data and (3) an experiment involving matching between sonified movements and their visualizations in the form of abstract drawings. In (1) we used a system consisting of 17 IR cameras tracking passive reflective markers. The head positions in the horizontal plane of 3–4 children were simultaneously tracked and sonified, producing 3–4 sound sources spatially displayed through an 8-channel loudspeaker system. We analyzed children's spontaneous movement in terms of energy-, smoothness- and directness-index. Despite large inter-participant variability and group-specific effects caused by interaction among children when engaging in the spontaneous movement task, we found a small but significant effect of sound model. Results from (2) indicate that different sound models can be rated differently on a set of motion-related perceptual scales (e.g., expressivity and fluidity). Also, results imply that audio-only stimuli can evoke stronger perceived properties of movement (e.g., energetic, impulsive) than stimuli involving both audio and video representations. Findings in (3) suggest that sounds portraying bodily movement can be represented using abstract drawings in a meaningful way.
We argue that the results from these studies support the existence of a cross-modal mapping of body motion qualities from bodily movement to sounds. Sound can be translated and understood from bodily motion, conveyed through sound visualizations in the shape of drawings and translated back from sound visualizations to audio. The work underlines the potential of using interactive sonification to communicate high-level features of human movement data. PMID:27891074
The development and preliminary effectiveness of a nursing case management e-learning program.
Liu, Wen-I; Chu, Kuo-Chung; Chen, Shing-Chia
2014-07-01
The purpose of this article was to describe the development and preliminary effectiveness of a digital case management education program. The e-learning program was built through the collaboration of a nurse educator and an informatics professor. The program was then developed according to the following steps: (1) building a visual interface, (2) scripting each unit, (3) preparing the course material and assessment tests, (4) using teaching software to record audio and video courses, (5) editing the audio recordings, (6) using instructional media or hyperlinks to finalize the interactions, (7) creating the assessment and obtaining feedback, and (8) testing the overall operation. The digital program consisted of five learning modules, self-assessment questions, learning cases, sharing experiences, and learning resources. Forty nurses participated in this study and fully completed the questionnaires both before and after the program. The knowledge and confidence levels in the experimental group were significantly higher over time than those of the comparison group. The results supported the use of educational technology to provide a more flexible and effective presentation method for continuing education programs.
Streetwise sales and the social order of city streets.
Llewellyn, Nick; Burrow, Robin
2008-09-01
This paper analyses how a Big Issue vendor approached passers-by and how they responded: how recognizable courses of social and economic activity were interactionally produced from initiation through to some conclusion. The paper recovers how the vendor's work was contextually embedded in the urban landscape, and how it was constrained by, and actively shaped, the social order of the street. Drawing on video-audio recordings, the paper contributes to a growing body of ethnographic and ethnomethodological research that has emphasized the embodied, contingent and interactional character of economic activity. By examining such materials, the paper is well positioned to describe how the vendor found his market on the street through social interventions that propelled passers-by into buying behaviour. The paper sheds light on now-familiar encounters which occur millions of times each week in the UK and beyond.
"What Are You Viewing?" Exploring the Pervasive Social TV Experience
NASA Astrophysics Data System (ADS)
Schatz, Raimund; Baillie, Lynne; Fröhlich, Peter; Egger, Sebastian; Grechenig, Thomas
The vision of pervasive TV foresees users engaging with interactive video services across a variety of contexts and user interfaces. Following this idea, this chapter extends traditional Social TV toward the notion of pervasive Social TV (PSTV) by including mobile viewing scenarios. We discuss social interaction enablers that integrate TV content consumption and communication in the context of two case studies that evaluate Social TV on mobile smartphones as well as the traditional set-top-box-based setup. We report on the impact of social features such as text-chat, audio-chat, and synchronized channel-choice on the end-user's media experience. By analyzing the commonalities and the differences between mobile and living-room Social TV that we found, we provide guidance on the design of pervasive Social TV systems as well as on future research issues.
Teaching assistant-student interactions in a modified SCALE-UP classroom
NASA Astrophysics Data System (ADS)
DeBeck, George; Demaree, Dedra
2012-02-01
In the spring term of 2010, Oregon State University (OSU) began using a SCALE-UP style classroom in the instruction of the introductory calculus-based physics series. Instruction in this classroom was conducted in weekly two-hour sessions facilitated by the primary professor and either two graduate teaching assistants (GTAs) or a graduate teaching assistant and an undergraduate learning assistant (LA). During the course of instruction, two of the eight tables in the room were audio and video recorded. We examine the practices of the GTAs in interacting with the students through both qualitative and quantitative analyses of these recordings. Quantitatively, significant differences are seen between the most experienced GTA and the rest. A major difference in confidence is also observed in the qualitative analysis of this GTA compared to a less experienced GTA.
Stochastic Packet Loss Model to Evaluate QoE Impairments
NASA Astrophysics Data System (ADS)
Hohlfeld, Oliver
With the provisioning of broadband access for the mass market, even in wireless and mobile networks, multimedia content, especially real-time streaming of high-quality audio and video, is extensively viewed and exchanged over the Internet. Quality of Experience (QoE), describing the service quality perceived by the user, is a vital factor in ensuring customer satisfaction in today's communication networks. Frameworks for assessing quality degradations in streamed video are currently investigated as a complex multi-layered research topic, involving network traffic load, codec functions and measures of user perception of video quality.
2009-06-01
visualisation tool. These tools are currently in use at the Surveillance and Control Training Unit (SACTU) in Williamtown, New South Wales, and the School...itself by facilitating the brevity and sharpness of learning points. The playback of video and audio was considered an extremely useful method of...The task assessor’s comments were supported by wall projections and audio replays of relevant mission segments that were controlled by an AAR
The Feasibility and Acceptability of Google Glass for Teletoxicology Consults.
Chai, Peter R; Babu, Kavita M; Boyer, Edward W
2015-09-01
Teletoxicology offers the potential for toxicologists to assist in providing medical care at remote locations, via remote, interactive augmented audiovisual technology. This study examined the feasibility of using Google Glass, a head-mounted device that incorporates a webcam, viewing prism, and wireless connectivity, to assess the poisoned patient by a medical toxicology consult staff. Emergency medicine residents (resident toxicology consultants) rotating on the toxicology service wore Glass during bedside evaluation of poisoned patients; Glass transmitted real-time video of patients' physical examination findings to toxicology fellows and attendings (supervisory consultants), who reviewed these findings. We evaluated the usability (e.g., quality of connectivity and video feeds) of Glass by supervisory consultants, as well as attitudes towards use of Glass. Resident toxicology consultants and supervisory consultants completed 18 consults through Glass. Toxicologists viewing the video stream found the quality of audio and visual transmission usable in 89 % of cases. Toxicologists reported their management of the patient changed after viewing the patient through Glass in 56 % of cases. Based on findings obtained through Glass, toxicologists recommended specific antidotes in six cases. Head-mounted devices like Google Glass may be effective tools for real-time teletoxicology consultation.
Improving the Capture and Re-Use of Data with Wearable Computers
NASA Technical Reports Server (NTRS)
Pfarr, Barbara; Fating, Curtis C.; Green, Daniel; Powers, Edward I. (Technical Monitor)
2001-01-01
At the Goddard Space Flight Center, members of the Real-Time Software Engineering Branch are developing a wearable, wireless, voice-activated computer for use in a wide range of crosscutting space applications that would benefit from having instant Internet, network, and computer access with complete mobility and hands-free operations. These applications can be applied across many fields and disciplines including spacecraft fabrication, integration and testing (including environmental testing), and astronaut on-orbit control and monitoring of experiments with ground based experimenters. To satisfy the needs of NASA customers, this wearable computer needs to be connected to a wireless network, to transmit and receive real-time video over the network, and to receive updated documents via the Internet or NASA servers. The voice-activated computer, with a unique vocabulary, will allow the users to access documentation in a hands free environment and interact in real-time with remote users. We will discuss wearable computer development, hardware and software issues, wireless network limitations, video/audio solutions and difficulties in language development.
YouTube as an information source for pediatric adenotonsillectomy and ear tube surgery.
Sorensen, Jeffrey A; Pusz, Max D; Brietzke, Scott E
2014-01-01
Assess the overall quality of information on adenotonsillectomy and ear tube surgery presented on YouTube (www.youtube.com) from the perspective of a parent or patient searching for information on surgery. The YouTube website was systematically searched on select dates with a formal search strategy to identify videos pertaining to pediatric adenotonsillectomy and ear tube surgery. Only videos with at least 5 (ear tube surgery) or 10 (adenotonsillectomy) views per day were included. Each video was viewed and scored by two independent scorers. Videos were categorized by goal and scored for video/audio quality, accuracy, comprehensiveness, and procedure-specific content. Cross-sectional study. Public domain website. Fifty-five videos were scored for adenotonsillectomy and forty-seven for ear tube surgery. The most common category was educational (65.3%) followed by testimonial (28.4%), and news program (9.8%). Testimonials were more common for adenotonsillectomy than ear tube surgery (41.8% vs. 12.8%, p=0.001). Testimonials had a significantly lower mean accuracy (2.23 vs. 2.62, p=0.02), comprehensiveness (1.71 vs. 2.22, p=0.007), and TA specific content (0.64 vs. 1.69, p=0.001) score than educational type videos. Only six videos (5.9%) received high scores in both video/audio quality and accuracy/comprehensiveness of content. There was no significant association between the accuracy and comprehensive score and views, posted "likes", posted "dislikes", and likes/dislikes ratio. There was an association between "likes" and mean video quality (Spearman's rho=0.262, p=0.008). Parents/patients searching YouTube for information on pediatric adenotonsillectomy and ear tube surgery will generally encounter low quality information with testimonials being common but of significantly lower quality. Viewer perceived quality ("likes") did not correlate to formally scored content quality. Published by Elsevier Ireland Ltd.
Audio in Courseware: Design Knowledge Issues.
ERIC Educational Resources Information Center
Aarntzen, Diana
1993-01-01
Considers issues that need to be addressed when incorporating audio in courseware design. Topics discussed include functions of audio in courseware; the relationship between auditive and visual information; learner characteristics in relation to audio; events of instruction; and audio characteristics, including interactivity and speech technology.…
Application of a robot for critical care rounding in small rural hospitals.
Murray, Cindy; Ortiz, Elizabeth; Kubin, Cay
2014-12-01
The purpose of this article is to present an option for a model of care that allows small rural hospitals to be able to provide specialty physicians for critical care patient needs in lieu of on-site critical care physician coverage. A real-time, 2-way audio and video remote presence robot is used to bring a specialist to the bedside to interact with patients. This article discusses improvements in quality and finance outcomes as well as care team and patient satisfaction associated with this model. Discussion also includes expansion of the care model to the emergency department for acute stroke care. Copyright © 2014 Elsevier Inc. All rights reserved.
Quo vadimus? The 21st Century and multimedia
NASA Technical Reports Server (NTRS)
Kuhn, Allan D.
1991-01-01
The concept of computer-driven multimedia is related to the NASA Scientific and Technical Information Program (STIP). Multimedia is defined here as computer integration and output of text, animation, audio, video, and graphics. Multimedia is the stage of computer-based information that allows access to experience. The concepts of hypermedia, intermedia, interactive multimedia, hypertext, imaging, cyberspace, and virtual reality are also drawn in. Examples of these technology developments are given for NASA, private industry, and academia. Examples of concurrent technology developments and implementations are given to show how these technologies, along with multimedia, have put us at the threshold of the 21st century. The STI Program sees multimedia as an opportunity for revolutionizing the way STI is managed.
ERIC Educational Resources Information Center
Culver, Patti; Culbert, Angie; McEntyre, Judy; Clifton, Patrick; Herring, Donna F.; Notar, Charles E.
2009-01-01
The article is about the collaboration between two classrooms that enabled a second grade class to participate in a high school biology class. Through the use of modern video conferencing equipment, Mrs. Culbert, with the help of the Dalton State College Educational Technology Training Center (ETTC), set up a live, two way video and audio feed of…
Atomization of metal (Materials Preparation Center)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2010-01-01
Atomization of metal requires high pressure gas and specialized chambers for cooling and collecting the powders without contamination. The critical step for morphological control is the impingement of the gas on the melt stream. The video is a color video of a liquid metal stream being atomized by high pressure gas. This material was cast at the Ames Laboratory's Materials Preparation Center http://www.mpc.ameslab.gov WARNING - AUDIO IS LOUD.
ERIC Educational Resources Information Center
Dupuis, Josee; Coutu, Josee; Laneuville, Odette
2013-01-01
In higher education, many of the new teaching interventions are introduced in the format of audio-visual files distributed through the Internet. A pedagogical tool consisting of questions listed as learning objectives and answers presented using online videos was designed as a supplement for a molecular biology course and made available to a large…
Watermarking 3D Objects for Verification
1999-01-01
signal (audio/image/video) processing and steganography fields, and even newer to the computer graphics community. Inherently, digital watermarking of...quality images, and digital video. The field of digital watermarking is relatively new, and many of its terms have not been well defined. Among the different media types, watermarking of 2D still images is comparatively better studied. Inherently, digital watermarking of 3D objects remains a
Ebola (Ebola Virus Disease): Treatment
... Guidance for Cleaning, Disinfection, and Waste Disposal in Commercial Passenger Aircraft Notes on the Interim U.S. Guidance for Monitoring and Movement of Persons with Potential Ebola Virus Exposure Communication Resources Videos Audio Infographics & Illustrations Factsheets Posters Virus ...
Ebola (Ebola Virus Disease): Prevention
Ebola (Ebola Virus Disease): Transmission
Ebola (Ebola Virus Disease): Diagnosis
Digital Documentation: Using Computers to Create Multimedia Reports.
ERIC Educational Resources Information Center
Speitel, Tom; And Others
1996-01-01
Describes methods for creating integrated multimedia documents using recent advances in print, audio, and video digitization that bring added usefulness to computers as data acquisition, processing, and presentation tools. Discusses advantages of digital documentation. (JRH)
Smoking and Tobacco Use Health Effects
ERIC Educational Resources Information Center
Web Feet K-8, 2001
2001-01-01
This annotated subject guide to Web sites and additional resources focuses on biomes. Specifies age levels for resources that include Web sites, CD-ROMs and software, videos, books, audios, and magazines; includes professional resources; and presents a relevant class activity. (LRW)
77 FR 4321 - Sunshine Act Meeting; Open Commission Meeting; January 31, 2012
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-27
... Americans while minimizing the universal service contribution burden, including by eliminating waste, fraud... print/type; digital disk; and audio and video tape. Best Copy and Printing, Inc. may be reached by...
NASA Astrophysics Data System (ADS)
Radhakrishnan, Regunathan; Divakaran, Ajay; Xiong, Ziyou; Otsuka, Isao
2006-12-01
We propose a content-adaptive analysis and representation framework to discover events using audio features from "unscripted" multimedia such as sports and surveillance for summarization. The proposed analysis framework performs an inlier/outlier-based temporal segmentation of the content. It is motivated by the observation that "interesting" events in unscripted multimedia occur sparsely in a background of usual or "uninteresting" events. We treat the sequence of low/mid-level features extracted from the audio as a time series and identify subsequences that are outliers. The outlier detection is based on eigenvector analysis of the affinity matrix constructed from statistical models estimated from the subsequences of the time series. We define the confidence measure on each of the detected outliers as the probability that it is an outlier. Then, we establish a relationship between the parameters of the proposed framework and the confidence measure. Furthermore, we use the confidence measure to rank the detected outliers in terms of their departures from the background process. Our experimental results with sequences of low- and mid-level audio features extracted from sports video show that "highlight" events can be extracted effectively as outliers from a background process using the proposed framework. We proceed to show the effectiveness of the proposed framework in bringing out suspicious events from surveillance videos without any a priori knowledge. We show that such temporal segmentation into background and outliers, along with the ranking based on the departure from the background, can be used to generate content summaries of any desired length. Finally, we also show that the proposed framework can be used to systematically select "key audio classes" that are indicative of events of interest in the chosen domain.
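The inlier/outlier temporal segmentation described above can be sketched in a few lines. The summary statistics used below are a crude stand-in for the statistical models the paper estimates from subsequences, and the window length is an invented parameter; only the overall pipeline (subsequences, affinity matrix, eigenvector analysis) follows the abstract.

```python
import numpy as np

def outlier_scores(series: np.ndarray, win: int = 20) -> np.ndarray:
    """Per-subsequence scores; small values mark outliers against the background."""
    # Cut the low/mid-level feature time series into non-overlapping subsequences.
    n = len(series) // win
    subs = series[: n * win].reshape(n, win)
    # Simple per-subsequence "models": mean and spread (a stand-in for the
    # statistical models estimated in the paper).
    feats = np.stack([subs.mean(axis=1), subs.std(axis=1)], axis=1)
    # Affinity matrix from pairwise distances between subsequence models.
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    affinity = np.exp(-d2 / (d2.mean() + 1e-12))
    # The dominant eigenvector concentrates its weight on the dense cluster of
    # "usual" background subsequences; sparse outliers receive little weight.
    _, vecs = np.linalg.eigh(affinity)
    return np.abs(vecs[:, -1])
```

For example, a flat feature series with one burst scores the burst's subsequence lowest, which is how a sparse "highlight" event would surface against a uniform background.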
Perceptual congruency of audio-visual speech affects ventriloquism with bilateral visual stimuli.
Kanaya, Shoko; Yokosawa, Kazuhiko
2011-02-01
Many studies on multisensory processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. However, these results cannot necessarily be applied to explain our perceptual behavior in natural scenes where various signals exist within one sensory modality. We investigated the role of audio-visual syllable congruency on participants' auditory localization bias or the ventriloquism effect using spoken utterances and two videos of a talking face. Salience of facial movements was also manipulated. Results indicated that more salient visual utterances attracted participants' auditory localization. Congruent pairing of audio-visual utterances elicited greater localization bias than incongruent pairing, while previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference on auditory localization. Multisensory performance appears more flexible and adaptive in this complex environment than in previous studies.
Breaking the news on mobile TV: user requirements of a popular mobile content
NASA Astrophysics Data System (ADS)
Knoche, Hendrik O.; Sasse, M. Angela
2006-02-01
This paper presents the results from three lab-based studies that investigated different ways of delivering Mobile TV News by measuring user responses to different encoding bitrates, image resolutions and text quality. All studies were carried out with participants watching News content on mobile devices, with a total of 216 participants rating the acceptability of the viewing experience. Study 1 compared the acceptability of a 15-second video clip at different video and audio encoding bit rates on a 3G phone at a resolution of 176x144 and an iPAQ PDA (240x180). Study 2 measured the acceptability of video quality of full feature news clips of 2.5 minutes which were recorded from broadcast TV, encoded at resolutions ranging from 120x90 to 240x180, and combined with different encoding bit rates and audio qualities presented on an iPAQ. Study 3 improved the legibility of the text included in the video simulating a separate text delivery. The acceptability of News' video quality was greatly reduced at a resolution of 120x90. The legibility of text was a decisive factor in the participants' assessment of the video quality. Resolutions of 168x126 and higher were substantially more acceptable when they were accompanied by optimized high quality text compared to proportionally scaled inline text. When accompanied by high quality text TV news clips were acceptable to the vast majority of participants at resolutions as small as 168x126 for video encoding bitrates of 160kbps and higher. Service designers and operators can apply this knowledge to design a cost-effective mobile TV experience.
Allavena, Rachel E; Schaffer-White, Andrea B; Long, Hanna; Alawneh, John I
The goal of the study was to evaluate alternative student-centered approaches that could replace autopsy sessions and live demonstrations, and to explore refinements in assessment procedures for standardized cardiac dissection. Simulators and videos were identified as feasible, economical, student-centered teaching methods for technical skills training in medical contexts, and a direct comparison was undertaken. A low-fidelity, anatomically correct simulator approximately the size of a horse's heart, with embedded dissection pathways, was constructed and used with a series of laminated photographs of a standardized cardiac dissection. A video of a standardized cardiac dissection of a normal horse's heart was recorded and presented with audio commentary. Students were allowed to nominate a preference for learning method, and students who indicated no preference were randomly allocated to keep group numbers even. Objective performance data from an objective structured assessment and student perception data on confidence and competency from surveys showed that both innovations were similarly effective. Evaluator reflections were recorded, and usage logs tracked patterns of student use. A strong selection preference was identified: kinesthetic learners chose the simulator and visual learners chose the video. Students in the video cohort were better at articulating the reasons for the dissection procedures and sequence, owing to the audio commentary, and student satisfaction was higher with the video. The major conclusion of this study was that both methods are effective tools for technical skills training, but consideration should be given to the preferred learning style of adult learners to maximize educational outcomes.
Nuvvula, S; Alahari, S; Kamatham, R; Challa, R R
2015-02-01
To determine the effect of three-dimensional (3D) audiovisual (AV) distraction in reducing dental anxiety of children. A randomised clinical trial with a parallel design was carried out on 90 children (49 boys and 41 girls) aged between 7 and 10 years (mean age of 8.4 years) to ascertain the comparative efficacy of audio (music) and AV (3D video glasses) distraction in reducing the dental anxiety of children during local analgesia (LA) administration. Ninety children were randomly divided into three groups: control (basic behaviour guidance techniques without distraction), audio (basic techniques plus music) and AV (basic techniques plus 3D AV) distraction groups. All the children experienced LA administration with/without distraction, and anxiety was assessed using a combination of measures: MCDAS(f) (self-report), pulse rate (physiological), behaviour (using Wright's modification of the Frankl behaviour rating scale and the Houpt scale) and the children's preferences. All 90 children completed the study. A highly significant reduction in anxiety was observed in the audiovisual group, as shown by the MCDAS(f) values (p<0.001) and the Houpt scale (p=0.003), whereas pulse rate showed a statistically significant increase (p<0.001) in all three groups irrespective of distraction. The children's preferences also affirmed the use of 3D video glasses. LA administration with music or 3D video glasses distraction had an added advantage in a majority of children, with 3D video glasses being superior to music. High levels of satisfaction from children who experienced treatment with 3D video glasses were also observed.
Post Game Analysis: Using Video-Based Coaching for Continuous Professional Development
Hu, Yue-Yung; Peyre, Sarah E.; Arriaga, Alexander F.; Osteen, Robert T.; Corso, Katherine A.; Weiser, Thomas G.; Swanson, Richard S.; Ashley, Stanley W.; Raut, Chandrajit P.; Zinner, Michael J.; Gawande, Atul A.; Greenberg, Caprice C.
2011-01-01
Background The surgical learning curve persists for years after training, yet existing CME efforts targeting this are limited. We describe a pilot study of a scalable video-based intervention, providing individualized feedback on intra-operative performance. Study Design Four complex operations performed by surgeons of varying experience – a chief resident accompanied by the operating senior surgeon, a surgeon with <10 years in practice, another with 20–30 years, and a surgeon with >30 years of experience – were video-recorded. Video playback formed the basis of 1-hour coaching sessions with a peer-judged surgical expert. These sessions were audio-recorded, transcribed, and thematically coded. Results The sessions focused on operative technique, both technical aspects and decision-making. With increasing seniority, more discussion was devoted to the optimization of teaching and facilitation of the resident’s technical performance. Coaching sessions with senior surgeons were peer-to-peer interactions, with each discussing his preferred approach. The coach alternated between directing the session (asking probing questions) and responding to specific questions brought by the surgeons, depending on learning style. At all experience levels, video review proved valuable in identifying episodes of failure-to-progress and troubleshooting alternative approaches. All agreed this tool is a powerful one. Inclusion of trainees seems most appropriate when coaching senior surgeons; it may restrict the dialogue of more junior attendings. Conclusions Video-based coaching is an educational modality that targets intra-operative judgment, technique, and teaching. Surgeons of all levels found it highly instructive. This may provide a practical, much needed approach for continuous professional development. PMID:22192924
"Tuberculosis Case Management" Training.
ERIC Educational Resources Information Center
Knebel, Elisa; Kolodner, Jennifer
2001-01-01
The need to provide isolated health providers with critical knowledge in tuberculosis (TB) case management prompted the development of the "Tuberculosis Case Management" CD-ROM. Features include "Learning Center," "Examination Room," and "Library." The combination of audio, video, and graphics allows participants to…
Code of Federal Regulations, 2010 CFR
2010-10-01
... broadcast stations, digital broadcast stations, analog cable systems, digital cable systems, wireline video systems, wireless cable systems, Direct Broadcast Satellite (DBS) services, Satellite Digital Audio Radio... local government, or their designated representatives, with a means of emergency communication with the...
ERIC Educational Resources Information Center
Web Feet K-8, 2001
2001-01-01
This annotated subject guide to Web sites and additional resources focuses on mythology. Specific age levels are given for resources that include Web sites, CD-ROMs and software, videos, books, audios, and magazines; offers professional resources; and presents a relevant class activity. (LRW)
ERIC Educational Resources Information Center
Web Feet K-8, 2001
2001-01-01
This annotated subject guide to Web sites and additional resources focuses on space and astronomy. Specifies age levels for resources that include Web sites, CD-ROMS and software, videos, books, audios, and magazines; offers professional resources; and presents a relevant class activity. (LRW)
Software tools for developing an acoustics multimedia CD-ROM
NASA Astrophysics Data System (ADS)
Bigelow, Todd W.; Wheeler, Paul A.
2003-10-01
A multimedia CD-ROM was developed to accompany the textbook, Science of Sound, by Tom Rossing. This paper discusses the multimedia elements included in the CD-ROM and the various software packages used to create them. PowerPoint presentations with an audio-track background were converted to web pages using Impatica. Animations of acoustic examples and quizzes were developed using Flash by Macromedia. Vegas Video and Sound Forge by Sonic Foundry were used for editing video and audio clips, while Cleaner by Discreet was used to compress the clips for use over the internet. Math tutorials were presented as whiteboard presentations using Hitachi's StarBoard to create the graphics and TechSmith's Camtasia Studio to record the presentations. The CD-ROM is in a web-page format created with Macromedia's Dreamweaver. All of these elements are integrated into a single course supplement that can be viewed by any computer with a web browser.
Kelley, Frances J; Klopf, Maria Ignacia
2008-10-01
To describe the Clinical Communication Program developed to integrate second language learning (L2), multimedia, Web-based technologies, and the Internet in an advanced practice nursing education program. Electronic recording devices as well as audio, video editing, Web design, and programming software were used as tools for developing L2 scenarios for practice in clinical settings. The Clinical Communication Program offers opportunities to support both students and faculty members to develop their linguistic and cultural competence skills to serve better their patients, in general, and their students who speak a language other than English, in particular. The program provided 24 h on-demand access for using audio, video, and text exercises via the Internet. L2 education for healthcare providers includes linguistic (listening, speaking, reading, and writing) experiences as well as cultural competence and practices inside and outside the classroom environment as well as online and offline the Internet realm.
Blaettler, M; Bruegger, A; Forster, I C; Lehareinger, Y
1988-03-01
The design of an analog interface to a digital audio signal processor (DASP)-video cassette recorder (VCR) system is described. The complete system represents a low-cost alternative to both FM instrumentation tape recorders and multi-channel chart recorders. The interface or DASP input-output unit described in this paper enables the recording and playback of up to 12 analog channels with a maximum of 12 bit resolution and a bandwidth of 2 kHz per channel. Internal control and timing in the recording component of the interface is performed using ROMs which can be reprogrammed to suit different analog-to-digital converter hardware. Improvement in the bandwidth specifications is possible by connecting channels in parallel. A parallel 16 bit data output port is provided for direct transfer of the digitized data to a computer.
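The throughput implied by the interface's specifications can be worked out in a few lines. A sketch under stated assumptions: the paper gives only the channel count, resolution, and per-channel bandwidth, so the 2x Nyquist sample rate and all names below are ours.

```python
# Back-of-the-envelope throughput for the DASP input-output unit:
# 12 channels of 12-bit samples, each channel band-limited to 2 kHz.
CHANNELS = 12
BITS_PER_SAMPLE = 12
BANDWIDTH_HZ = 2_000
SAMPLE_RATE_HZ = 2 * BANDWIDTH_HZ          # Nyquist minimum per channel

aggregate_bps = CHANNELS * BITS_PER_SAMPLE * SAMPLE_RATE_HZ
print(aggregate_bps)  # 576000 bits/s across all 12 channels

def per_channel_bandwidth_hz(channels_used, total=CHANNELS, base=BANDWIDTH_HZ):
    """Paralleling channels trades channel count for per-channel bandwidth,
    assuming the aggregate rate stays fixed (our reading of the paper's note
    that bandwidth can be improved by connecting channels in parallel)."""
    return base * (total // channels_used)

print(per_channel_bandwidth_hz(6))  # 4000 Hz when channels are paired
```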
Wireless augmented reality communication system
NASA Technical Reports Server (NTRS)
Devereaux, Ann (Inventor); Agan, Martin (Inventor); Jedrey, Thomas (Inventor)
2006-01-01
The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.
Wireless Augmented Reality Communication System
NASA Technical Reports Server (NTRS)
Jedrey, Thomas (Inventor); Agan, Martin (Inventor); Devereaux, Ann (Inventor)
2014-01-01
The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.
Wireless Augmented Reality Communication System
NASA Technical Reports Server (NTRS)
Agan, Martin (Inventor); Devereaux, Ann (Inventor); Jedrey, Thomas (Inventor)
2016-01-01
The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.
Digital Watermarking: From Concepts to Real-Time Video Applications
1999-01-01
includes still-image, video, audio, and geometry data among others, the fundamental concept of steganography can be transferred from the field of... size of the message, which should be as small as possible. Some commercially available algorithms for image watermarking forego the secure-watermarking... image compression.’ The image’s luminance component is divided into 8 x 8 pixel blocks. The algorithm selects a sequence of blocks and applies the
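The fragment above describes block-based embedding: select a pseudo-random sequence of 8x8 luminance blocks and modulate each one to carry a bit. A dependency-free sketch of that idea, quantizing block means rather than DCT coefficients (a simplified spatial-domain stand-in, not the commercial algorithm; the function names, step size q, and seed-as-key scheme are our assumptions):

```python
import numpy as np

def embed_bits(img, bits, block=8, q=8.0, seed=42):
    """Hide bits in pseudo-randomly selected blocks by quantizing each
    block's mean onto a lattice whose offset encodes the bit (QIM)."""
    out = img.astype(float).copy()
    h, w = out.shape
    per_row = w // block
    order = np.random.default_rng(seed).permutation((h // block) * per_row)
    for idx, bit in zip(order[:len(bits)], bits):
        by, bx = divmod(idx, per_row)
        sl = out[by*block:(by+1)*block, bx*block:(bx+1)*block]
        m = sl.mean()
        # nearest lattice point: multiples of q (bit 0) or q/2 + multiples (bit 1)
        target = np.round((m - bit * q/2) / q) * q + bit * q/2
        sl += target - m                   # shift the block mean onto the lattice
    return out

def extract_bits(img, n_bits, block=8, q=8.0, seed=42):
    """Recover bits by checking which lattice each selected block's mean lies on."""
    h, w = img.shape
    per_row = w // block
    order = np.random.default_rng(seed).permutation((h // block) * per_row)
    bits = []
    for idx in order[:n_bits]:
        by, bx = divmod(idx, per_row)
        m = img[by*block:(by+1)*block, bx*block:(bx+1)*block].mean()
        r = m % q
        bits.append(0 if min(r, q - r) < abs(r - q/2) else 1)
    return bits

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (64, 64)).astype(float)
marked = embed_bits(frame, [1, 0, 1, 1, 0])
print(extract_bits(marked, 5))  # [1, 0, 1, 1, 0]
```

The per-pixel distortion is bounded by q/2, which mirrors the article's point that watermark strength trades off against visibility.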
CONARC Soft Skills Training Conference.
1973-04-05
videocassette) Script of video tape (audio portion only): USAMPS Presents DYNAMICS OF HUMAN BEHAVIOR: EGO DEFENSE MECHANISMS... prepared for distribution on request to CONARC Training Aids Agency, Fort Eustis, Virginia 23604. In order to secure said presentation a 60-minute video... potential critical situations with which a driver may have to cope. In order to identify the specific purposes and situations which constitute a given job
Combinatorial Markov Random Fields and Their Applications to Information Organization
2008-02-01
titles, part-of-speech tags; • Image processing: images, colors, texture, blobs, interest points, caption words; • Video processing: video signal, audio... McGurk and MacDonald published their pioneering work [80] that revealed the multi-modal nature of speech perception: sound and moving lips compose one... Part-of-Speech (POS) n-grams (that correspond to the syntactic structure of text). POS n-grams are extracted from sentences in an incremental manner: the first n
Audio-vocal interaction in single neurons of the monkey ventrolateral prefrontal cortex.
Hage, Steffen R; Nieder, Andreas
2015-05-06
Complex audio-vocal integration systems depend on a strong interconnection between the auditory and the vocal motor system. To gain cognitive control over audio-vocal interaction during vocal motor control, the PFC needs to be involved. Neurons in the ventrolateral PFC (VLPFC) have been shown to separately encode the sensory perceptions and motor production of vocalizations. It is unknown, however, whether single neurons in the PFC reflect audio-vocal interactions. We therefore recorded single-unit activity in the VLPFC of rhesus monkeys (Macaca mulatta) while they produced vocalizations on command or passively listened to monkey calls. We found that 12% of randomly selected neurons in VLPFC modulated their discharge rate in response to acoustic stimulation with species-specific calls. Almost three-fourths of these auditory neurons showed an additional modulation of their discharge rates either before and/or during the monkeys' motor production of vocalization. Based on these audio-vocal interactions, the VLPFC might be well positioned to combine higher order auditory processing with cognitive control of the vocal motor output. Such audio-vocal integration processes in the VLPFC might constitute a precursor for the evolution of complex learned audio-vocal integration systems, ultimately giving rise to human speech.
Context-specific effects of musical expertise on audiovisual integration
Bishop, Laura; Goebl, Werner
2014-01-01
Ensemble musicians exchange auditory and visual signals that can facilitate interpersonal synchronization. Musical expertise improves how precisely auditory and visual signals are perceptually integrated and increases sensitivity to asynchrony between them. Whether expertise improves sensitivity to audiovisual asynchrony in all instrumental contexts or only in those using sound-producing gestures that are within an observer's own motor repertoire is unclear. This study tested the hypothesis that musicians are more sensitive to audiovisual asynchrony in performances featuring their own instrument than in performances featuring other instruments. Short clips were extracted from audio-video recordings of clarinet, piano, and violin performances and presented to highly-skilled clarinetists, pianists, and violinists. Clips either maintained the audiovisual synchrony present in the original recording or were modified so that the video led or lagged behind the audio. Participants indicated whether the audio and video channels in each clip were synchronized. The range of asynchronies most often endorsed as synchronized was assessed as a measure of participants' sensitivities to audiovisual asynchrony. A positive relationship was observed between musical training and sensitivity, with data pooled across stimuli. While participants across expertise groups detected asynchronies most readily in piano stimuli and least readily in violin stimuli, pianists showed significantly better performance for piano stimuli than for either clarinet or violin. These findings suggest that, to an extent, the effects of expertise on audiovisual integration can be instrument-specific; however, the nature of the sound-producing gestures that are observed has a substantial effect on how readily asynchrony is detected as well. PMID:25324819
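The sensitivity measure used in this study, the range of audio-video offsets most often endorsed as "synchronized," can be estimated directly from response proportions. A minimal sketch (the function name, threshold, and toy data are ours, not the study's):

```python
import numpy as np

def synchrony_window(offsets_ms, p_sync, threshold=0.5):
    """Return the (min, max) audio-video offset endorsed as synchronized at
    least `threshold` of the time; a narrower window means higher sensitivity
    to asynchrony."""
    offsets = np.asarray(offsets_ms, float)
    p = np.asarray(p_sync, float)
    inside = offsets[p >= threshold]
    if inside.size == 0:
        raise ValueError("no offset reaches the threshold")
    return float(inside.min()), float(inside.max())

# Toy data: negative offsets mean the video leads, positive means it lags.
offsets = [-400, -300, -200, -100, 0, 100, 200, 300, 400]
p_sync  = [0.05, 0.20, 0.55, 0.90, 0.95, 0.80, 0.45, 0.15, 0.05]
print(synchrony_window(offsets, p_sync))  # (-200.0, 100.0)
```

The asymmetry in the toy data reflects the common finding that observers tolerate video-lead offsets more readily than audio-lead ones.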
Handschu, René; Littmann, Rebekka; Reulbach, Udo; Gaul, Charly; Heckmann, Josef G; Neundörfer, Bernhard; Scibor, Mateusz
2003-12-01
In acute stroke care, rapid but careful evaluation of patients is mandatory but requires an experienced stroke neurologist. Telemedicine offers the possibility of bringing such expertise quickly to more patients. This study tested for the first time whether remote video examination is feasible and reliable when applied in emergency stroke care using the National Institutes of Health Stroke Scale (NIHSS). We used a novel multimedia telesupport system for transfer of real-time video sequences and audio data. The remote examiner could direct the set-top camera and zoom from distant overviews to close-ups from the personal computer in his office. Acute stroke patients admitted to our stroke unit were examined on admission in the emergency room. Standardized examination was performed by use of the NIHSS (German version) via telemedicine and compared with bedside application. In this pilot study, 41 patients were examined. Total examination time was 11.4 minutes on average (range, 8 to 18 minutes). None of the examinations had to be stopped or interrupted for technical reasons, although minor problems (brightness, audio quality) with influence on the examination process occurred in 2 sessions. Unweighted kappa coefficients ranged from 0.44 to 0.89; weighted kappa coefficients, from 0.85 to 0.99. Remote examination of acute stroke patients with a computer-based telesupport system is feasible and reliable when applied in the emergency room; interrater agreement was good to excellent in all items. For more widespread use, some problems that emerge from details like brightness, optimal camera position, and audio quality should be solved.
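The interrater-agreement statistics this study reports, unweighted and weighted kappa coefficients, can be reproduced with a short routine. A hand-rolled sketch needing only NumPy (the quadratic weighting below is our assumption; the abstract does not state which weighting scheme was used):

```python
import numpy as np

def cohens_kappa(rater1, rater2, n_levels, weights=None):
    """Cohen's kappa between two raters on a categorical/ordinal scale.
    weights=None gives unweighted kappa; 'quadratic' gives weighted kappa."""
    conf = np.zeros((n_levels, n_levels))
    for a, b in zip(rater1, rater2):
        conf[a, b] += 1
    conf /= conf.sum()
    expected = np.outer(conf.sum(axis=1), conf.sum(axis=0))
    if weights is None:
        w = 1.0 - np.eye(n_levels)               # every disagreement weighs 1
    else:
        i, j = np.indices((n_levels, n_levels))
        w = (i - j) ** 2 / (n_levels - 1) ** 2   # quadratic penalty by distance
    return 1.0 - (w * conf).sum() / (w * expected).sum()

# Perfect agreement on an NIHSS-style item yields kappa = 1.
print(cohens_kappa([0, 1, 2, 0, 1, 2], [0, 1, 2, 0, 1, 2], 3))  # 1.0
```

Weighted kappa credits near-misses on ordinal items, which is why the study's weighted coefficients (0.85 to 0.99) run higher than the unweighted ones (0.44 to 0.89).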
Two-way digital communications
NASA Astrophysics Data System (ADS)
Glenn, William E.; Daly, Ed
1996-03-01
The communications industry has been rapidly converting from analog to digital communications for audio, video, and data. The initial applications have been concentrating on point-to-multipoint transmission. Currently, a new revolution is occurring in which two-way point-to-point transmission is a rapidly growing market. The system designs for video compression developed for point-to-multipoint transmission are unsuitable for this new market as well as for satellite based video encoding. A new system developed by the Space Communications Technology Center has been designed to address both of these newer applications. An update on the system performance and design will be given.
Simple video format for mobile applications
NASA Astrophysics Data System (ADS)
Smith, John R.; Miao, Zhourong; Li, Chung-Sheng
2000-04-01
With the advent of pervasive computing, there is a growing demand for enabling multimedia applications on mobile devices. Large numbers of pervasive computing devices, such as personal digital assistants (PDAs), hand-held computers (HHCs), smart phones, portable audio players, automotive computing devices, and wearable computers are gaining access to online information sources. However, the pervasive computing devices are often constrained along a number of dimensions, such as processing power, local storage, display size and depth, connectivity, and communication bandwidth, which makes it difficult to access rich image and video content. In this paper, we report on our initial efforts in designing a simple scalable video format with low decoding and transcoding complexity for pervasive computing. The goal is to enable image and video access for mobile applications such as electronic catalog shopping, video conferencing, remote surveillance and video mail using pervasive computing devices.
Everyday bat vocalizations contain information about emitter, addressee, context, and behavior
Prat, Yosef; Taub, Mor; Yovel, Yossi
2016-01-01
Animal vocal communication is often diverse and structured. Yet, the information concealed in animal vocalizations remains elusive. Several studies have shown that animal calls convey information about their emitter and the context. Often, these studies focus on specific types of calls, as it is rarely possible to probe an entire vocal repertoire at once. In this study, we continuously monitored Egyptian fruit bats for months, recording audio and video around-the-clock. We analyzed almost 15,000 vocalizations, which accompanied the everyday interactions of the bats, and were all directed toward specific individuals, rather than broadcast. We found that bat vocalizations carry ample information about the identity of the emitter, the context of the call, the behavioral response to the call, and even the call’s addressee. Our results underline the importance of studying the mundane, pairwise, directed, vocal interactions of animals. PMID:28005079
The GuideView System for Interactive, Structured, Multi-modal Delivery of Clinical Guidelines
NASA Technical Reports Server (NTRS)
Iyengar, Sriram; Florez-Arango, Jose; Garcia, Carlos Andres
2009-01-01
GuideView is a computerized clinical guideline system which delivers clinical guidelines in an easy-to-understand and easy-to-use package. It may potentially enhance the quality of medical care or allow non-medical personnel to provide acceptable levels of care in situations where physicians or nurses may not be available. Such a system can be very valuable during space flight missions when a physician is not readily available, or perhaps the designated medical personnel are unable to provide care. Complex clinical guidelines are broken into simple steps. At each step, clinical information is presented in multiple modes, including voice, audio, text, pictures, and video. Users can respond via mouse clicks or via voice navigation. GuideView can also interact with medical sensors using wireless or wired connections. The system's interface is illustrated and the results of a usability study are presented.
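The stepwise, branching structure described above can be modeled as a small tree of steps, each carrying its multi-modal assets. A sketch of one plausible data model (the field names, asset names, and example guideline are our guesses, not GuideView's actual design):

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class GuidelineStep:
    """One step of a clinical guideline, presented in several modes at once."""
    instruction: str
    audio_file: Optional[str] = None
    image_file: Optional[str] = None
    video_file: Optional[str] = None
    responses: Dict[str, "GuidelineStep"] = field(default_factory=dict)

    def next(self, user_response: str) -> "GuidelineStep":
        """Advance to the step linked to the user's response (mouse or voice)."""
        return self.responses[user_response]

# A hypothetical two-step fragment:
check_pulse = GuidelineStep("Check the patient's pulse for 10 seconds.")
start = GuidelineStep(
    "Is the patient responsive?",
    audio_file="responsive_prompt.wav",   # hypothetical asset name
    responses={"yes": GuidelineStep("Monitor and reassess."),
               "no": check_pulse},
)
print(start.next("no").instruction)  # Check the patient's pulse for 10 seconds.
```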
Kushniruk, Andre W; Kan, Min-Yem; McKeown, Kathleen; Klavans, Judith; Jordan, Desmond; LaFlamme, Mark; Patel, Vimla L
2002-01-01
This paper describes the comparative evaluation of an experimental automated text summarization system, Centrifuser and three conventional search engines - Google, Yahoo and About.com. Centrifuser provides information to patients and families relevant to their questions about specific health conditions. It then produces a multidocument summary of articles retrieved by a standard search engine, tailored to the user's question. Subjects, consisting of friends or family of hospitalized patients, were asked to "think aloud" as they interacted with the four systems. The evaluation involved audio- and video recording of subject interactions with the interfaces in situ at a hospital. Results of the evaluation show that subjects found Centrifuser's summarization capability useful and easy to understand. In comparing Centrifuser to the three search engines, subjects' ratings varied; however, specific interface features were deemed useful across interfaces. We conclude with a discussion of the implications for engineering Web-based retrieval systems.
Bibliography of Citizenship Materials
ERIC Educational Resources Information Center
CASAS - Comprehensive Adult Student Assessment Systems (NJ1), 2008
2008-01-01
The 2008 CASAS "Bibliography of Citizenship Materials" lists available instructional resources for citizenship education. It focuses on materials appropriate for preparing people for the naturalization process and the standardized citizenship examination. Resources include textbooks, audio materials, software and Videos/DVDs. The bibliography also…
42 CFR 482.13 - Condition of participation: Patient's rights.
Code of Federal Regulations, 2011 CFR
2011-10-01
... renewed in accordance with the following limits for up to a total of 24 hours: (A) 4 hours for adults 18..., trained staff member; or (ii) By trained staff using both video and audio equipment. This monitoring must...
42 CFR 482.13 - Condition of participation: Patient's rights.
Code of Federal Regulations, 2010 CFR
2010-10-01
... renewed in accordance with the following limits for up to a total of 24 hours: (A) 4 hours for adults 18..., trained staff member; or (ii) By trained staff using both video and audio equipment. This monitoring must...
42 CFR 482.13 - Condition of participation: Patient's rights.
Code of Federal Regulations, 2013 CFR
2013-10-01
... renewed in accordance with the following limits for up to a total of 24 hours: (A) 4 hours for adults 18..., trained staff member; or (ii) By trained staff using both video and audio equipment. This monitoring must...
75 FR 25185 - Broadband Initiatives Program
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-07
..., excluding desktop or laptop computers, computer hardware and software (including anti-virus, anti-spyware, and other security software), audio or video equipment, computer network components... 10 desktop or laptop computers and individual workstations to be located within the rural library...
Enabling technology for human collaboration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, Tim Andrew; Jones, Wendell Bruce; Warner, David Jay
2003-11-01
This report summarizes the results of a five-month LDRD late-start project which explored the potential of enabling technology to improve the performance of small groups. The purpose was to investigate and develop new methods to assist groups working in high-consequence, high-stress, ambiguous and time-critical situations, especially those for which it is impractical to adequately train or prepare. A testbed was constructed for exploratory analysis of a small group engaged in tasks with high cognitive and communication performance requirements. The system consisted of five computer stations, four with special devices equipped to collect physiologic, somatic, audio and video data. Test subjects were recruited and engaged in a cooperative video game. Each team member was provided with a sensor array for physiologic and somatic data collection while playing the video game. We explored the potential for real-time signal analysis to provide information that enables emergent and desirable group behavior and improved task performance. The data collected in this study included audio, video, game scores, physiological, somatic, keystroke, and mouse movement data. The use of self-organizing maps (SOMs) was explored to search for emergent trends in the physiological data as it correlated with the video, audio and game scores. This exploration resulted in the development of two approaches for analysis, to be used concurrently: an individual SOM and a group SOM. The individual SOM was trained using the unique data of each person and was used to monitor the effectiveness and stress level of each member of the group. The group SOM was trained using the data of the entire group and was used to monitor group effectiveness and dynamics. Results suggested that both types of SOMs were required to adequately track evolutions and shifts in group effectiveness. Four subjects were used in the data collection and development of these tools. This report documents a proof-of-concept study, and its observations are preliminary. Its main purpose is to demonstrate the potential for the tools developed here to improve the effectiveness of groups, and to suggest possible hypotheses for future exploration.
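The SOM analysis the report describes can be prototyped with the classic online SOM update rule. A minimal sketch (grid size, learning-rate and neighborhood schedules, and function names are our choices; the report does not specify its training parameters):

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Train a rectangular-grid SOM with the classic online update:
    find the best-matching unit (BMU), then pull nearby units toward the
    sample with a Gaussian neighborhood that shrinks over time."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            t = step / n_steps
            lr = lr0 * (1 - t)                       # decaying learning rate
            sigma = sigma0 * (1 - t) + 1e-2          # shrinking neighborhood
            bmu = np.unravel_index(
                np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
            dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
            g = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
            weights += lr * g * (x - weights)
            step += 1
    return weights

def bmu_of(weights, x):
    """Map a sample (e.g. one time-slice of physiological features) to a unit."""
    return np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)),
                            weights.shape[:2])
```

In the report's scheme, one such map would be trained per individual and another on the pooled group data, with a sample's BMU trajectory over time serving as the monitored state.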
Brief Interventions for Tobacco Users: Using the Internet to Train Healthcare Providers
Carpenter, Kelly M.; Cohn, Leslie G.; Glynn, Lisa H.; Stoner, Susan A.
2011-01-01
One fifth of Americans smoke; many have no plans to quit. Motivational Interviewing (MI) is an effective approach to intervention with precontemplative smokers, yet a substantial number of healthcare practitioners lack training in this approach. Two interactive online tutorials were developed to teach practitioners to deliver brief tobacco cessation interventions grounded in the MI approach. The tutorials emphasized the unique aspects of working with precontemplative smokers, incorporating audio and video examples of best practices, interactive exercises, targeted feedback, and practice opportunities. One hundred and fifty-two healthcare providers-in-training were randomly assigned to use the online tutorials or to read training material that was matched for content. A virtual standardized patient evaluation was given before and after the training. Both groups improved their scores from pre- to posttest; however, the tutorial group scored significantly better than the reading group at posttest. The results of this study demonstrate the promise of interactive online tutorials as an efficient and effective way to deliver clinical education. PMID:22096413
Illustrating Geology With Customized Video in Introductory Geoscience Courses
NASA Astrophysics Data System (ADS)
Magloughlin, J. F.
2008-12-01
For the past several years, I have been creating short videos for use in large-enrollment introductory physical geology classes. The motivation for this project included 1) lack of appropriate depth in existing videos, 2) engagement of non-science students, 3) student indifference to traditional textbooks, 4) a desire to share the visual splendor of geology through virtual field trips, and 5) a desire to meld photography, animation, narration, and videography in self-contained experiences. These (HD) videos are information-intensive but short, allowing a focus on relatively narrow topics from numerous subdisciplines, incorporation into lectures to help create variety while minimally interrupting flow and holding students' attention, and manageable file sizes. Nearly all involve one or more field locations, including sites throughout the western and central continental U.S., as well as Hawaii, Italy, New Zealand, and Scotland. The limited scope of the project and motivations mentioned preclude a comprehensive treatment of geology. Instead, videos address geologic processes, locations, features, and interactions with humans. The videos have been made available via DVD and on-line streaming. Such a project requires an array of video and audio equipment and software, a broad knowledge of geology, very good computing power, adequate time, creativity, a substantial travel budget, liability insurance, elucidation of the separation (or non-separation) between such a project and other responsibilities, and, preferably but not essentially, the support of one's supervisor or academic unit. Involving students in such projects entails risks, but involving necessary technical expertise is virtually unavoidable. In my own courses, some videos are used in class and/or made available on-line as simply another aspect of the educational experience. 
Student response has been overwhelmingly positive, particularly when expectations of students regarding the content of the videos are made clear and appropriate materials accompany the videos. Retention of primary concepts presented within videos is at least as high as for ordinary lecture material, and student questions reference the videos more than any other material. Use of the videos has created more variety in the course, a better connection to real-world geology, and a more palatable experience for students who increasingly describe themselves as visual learners.
Robust media processing on programmable power-constrained systems
NASA Astrophysics Data System (ADS)
McVeigh, Jeff
2005-03-01
To achieve consumer-level quality, media systems must process continuous streams of audio and video data while maintaining exacting tolerances on sampling rate, jitter, synchronization, and latency. While it is relatively straightforward to design fixed-function hardware implementations to satisfy worst-case conditions, there is a growing trend to utilize programmable multi-tasking solutions for media applications. The flexibility of these systems enables support for multiple current and future media formats, which can reduce design costs and time-to-market. This paper provides practical engineering solutions to achieve robust media processing on such systems, with specific attention given to power-constrained platforms. The techniques covered in this article utilize the fundamental concepts of algorithm and software optimization, software/hardware partitioning, stream buffering, hierarchical prioritization, and system resource and power management. A novel enhancement to dynamically adjust processor voltage and frequency based on buffer fullness to reduce system power consumption is examined in detail. The application of these techniques is provided in a case study of a portable video player implementation based on a general-purpose processor running a non real-time operating system that achieves robust playback of synchronized H.264 video and MP3 audio from local storage and streaming over 802.11.
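The paper's buffer-driven voltage/frequency enhancement is not spelled out in this abstract; the following is a hedged sketch of the general idea (a proportional policy mapping output-buffer fullness to a discrete frequency level), with the level table, watermarks, and interpolation all illustrative assumptions rather than the authors' actual algorithm:

```python
# Hypothetical discrete frequency levels (MHz); real platforms expose
# their own P-state tables.
LEVELS = [200, 400, 600, 800, 1000]

def pick_frequency(buffer_fullness, low_mark=0.25, high_mark=0.75):
    """Choose a CPU frequency from decoded-output buffer fullness (0.0-1.0).

    A full buffer means decoding is running ahead of playback, so the
    processor can slow down (and drop voltage) to save power; a draining
    buffer means it must speed up to avoid underflow and dropped frames.
    """
    if buffer_fullness <= low_mark:
        return LEVELS[-1]          # near underflow: run at full speed
    if buffer_fullness >= high_mark:
        return LEVELS[0]           # comfortably ahead: slowest level
    # Linearly interpolate between the extremes inside the band.
    frac = (high_mark - buffer_fullness) / (high_mark - low_mark)
    idx = round(frac * (len(LEVELS) - 1))
    return LEVELS[idx]
```

The watermarks act as a hysteresis band, so the policy avoids oscillating between levels on small buffer fluctuations while still tracking sustained changes in decode load.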
Machine-assisted editing of user-generated content
NASA Astrophysics Data System (ADS)
Cremer, Markus; Cook, Randall
2009-02-01
Over recent years user-generated content has become ubiquitously available and an attractive entertainment source for millions of end-users. Particularly for larger events, where many people use their devices to capture the action, a great number of short video clips are made available through appropriate web services. The objective of this presentation is to describe a way to combine these clips by analyzing them, and automatically reconstruct the time line in which the individual video clips were captured. This will enable people to easily create a compelling multimedia experience by leveraging multiple clips taken by different users from different angles, and across different time spans. The user will be able to shift into the role of a movie director mastering a multi-camera recording of the event. To achieve this goal, the audio portion of the video clips is analyzed, and waveform characteristics are computed with high temporal granularity in order to facilitate precise time alignment and overlap computation of the user-generated clips. Special care has to be given not only to the robustness of the selected audio features against ambient noise and various distortions, but also to the matching algorithm used to align the user-generated clips properly.
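The presentation's actual audio features and matching algorithm are not detailed in the abstract; a minimal illustration of the time-alignment step it describes is plain cross-correlation of normalized waveforms, sketched below (real systems would use noise-robust features rather than raw samples, as the abstract notes):

```python
import numpy as np

def align_offset(ref, clip, sample_rate):
    """Estimate where `clip` starts relative to `ref` (mono audio arrays
    at the same sample rate) by locating the cross-correlation peak.
    Returns the offset in seconds (positive: clip starts after ref)."""
    # Normalize both signals to reduce sensitivity to level differences.
    a = (ref - ref.mean()) / (ref.std() + 1e-12)
    b = (clip - clip.mean()) / (clip.std() + 1e-12)
    corr = np.correlate(a, b, mode="full")
    # In 'full' mode, index len(b)-1 corresponds to zero lag.
    lag = int(np.argmax(corr)) - (len(b) - 1)
    return lag / sample_rate
```

Given pairwise offsets like this, overlapping user clips can be placed on a common timeline, which is the reconstruction the abstract describes.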
NASA Technical Reports Server (NTRS)
1975-01-01
A descriptive handbook for the CTE splitter (RCA part No. 8673734-503) was presented. This unit is designed to extract time data from an interleaved video/audio signal. It is a rack mounting unit 7 inches high, 19 inches wide, 20 inches deep, mounted on slides for retracting from the rack, and weighs approximately 40 pounds. The following information is provided: installation, operation, principles of operation, maintenance, schematics and parts lists.
NASA Technical Reports Server (NTRS)
1975-01-01
A descriptive handbook for the CTE splitter (RCA part No. 8673734-50A) was presented. This unit is designed to extract time data from an interleaved video/audio signal. It is a rack mounting unit 7 inches high, 19 inches wide, 20 inches deep, mounted on slides for retracting from the rack, and weighs approximately 40 pounds. The following information is provided: installation, operation, principles of operation, maintenance, schematics and parts lists.
Flux of Kilogram-sized Meteoroids from Lunar Impact Monitoring. Supplemental Movies
NASA Technical Reports Server (NTRS)
Suggs, Robert; Cooke, William; Suggs, Ron; McNamara, Heather; Swift, Wesley; Moser, Danielle; Diekmann, Anne
2008-01-01
These videos and audio accompany the slide presentation "Flux of Kilogram-sized Meteoroids from Lunar Impact Monitoring." The slide presentation reviews the routine lunar impact monitoring that has harvested over 110 impacts in 2 years of observations using telescopes and low-light-level video cameras. The night side of the lunar surface provides a large collecting area for detecting these impacts and allows estimation of the flux of meteoroids down to a limiting luminous energy.
Voxel-based Immersive Environments
2000-05-31
3D accelerated hardware. While this method lends itself well to modern hardware, the quality of the resulting images was low due to the coarse sampling ... pipes. We will use MPEG video compression when sending video over a T1 line, whereas for a 56K bit Internet connection, we can use one of the more ... sent over the communication line. The ultimate goal is to send the immersive environment over the 56K bps Internet. Since we need to send audio and
Strategies for Transporting Data Between Classified and Unclassified Networks
2016-03-01
datagram protocol (UDP) must be used. The UDP is typically used when speed is a higher priority than data integrity, such as in music or video streaming ... and the exit point of data are separate and can be tightly controlled. This does effectively prevent the commingling of data and is used in industry to ... perform functions such as streaming video and audio from secure to insecure networks (ref. 1). A second disadvantage lies in the fact that the
Testing the Theory of Electronic Propinquity: Organizational Teleconferencing.
ERIC Educational Resources Information Center
Korzenny, Felipe; Bauer, Connie
1981-01-01
Studied the determinants of psychological propinquity and communication satisfaction in face-to-face, audio, and video conferences. Assessed the effect of a number of variables. Confirmed the importance of feedback in promoting communication satisfaction and the feeling of spatial closeness. (PD)
Comparison of Multidimensional Decoding of Affect from Audio, Video and Audiovideo Recordings
ERIC Educational Resources Information Center
Berman, Harry J.; And Others
1976-01-01
Some encoders showed variations in feelings principally through visually mediated stimuli, others through the tone of the voice. These results are discussed in the context of quantitative versus qualitative differences among the communication channels. (Author/DEP)