Sample records for video based analysis

  1. Content-based TV sports video retrieval using multimodal analysis

    NASA Astrophysics Data System (ADS)

    Yu, Yiqing; Liu, Huayong; Wang, Hongbin; Zhou, Dongru

    2003-09-01

    In this paper, we propose content-based video retrieval, which retrieves video by its semantic content. Because video data is composed of multimodal information streams, such as visual, auditory and textual streams, we describe a strategy that uses multimodal analysis to parse sports video automatically. The paper first defines the basic structure of a sports video database system, and then introduces a new approach that integrates visual stream analysis, speech recognition, speech signal processing and text extraction to realize video retrieval. The experimental results for TV sports video of football games indicate that multimodal analysis is effective for video retrieval, allowing users to quickly browse tree-like video clips or input keywords within a predefined domain.

  2. Content-based analysis of news video

    NASA Astrophysics Data System (ADS)

    Yu, Junqing; Zhou, Dongru; Liu, Huayong; Cai, Bo

    2001-09-01

    In this paper, we present a schema for content-based analysis of broadcast news video. First, we separate commercials from news using audiovisual features. Then, we automatically organize news programs into a content hierarchy at various levels of abstraction via effective integration of the video, audio, and text data available from the news programs. Based on these news video structure and content analysis technologies, a TV news video library is generated, from which users can retrieve specific news stories according to their demands.

  3. Player-Driven Video Analysis to Enhance Reflective Soccer Practice in Talent Development

    ERIC Educational Resources Information Center

    Hjort, Anders; Henriksen, Kristoffer; Elbæk, Lars

    2018-01-01

    In the present article, we investigate the introduction of a cloud-based video analysis platform called Player Universe (PU). Video analysis is not a new performance-enhancing element in sports, but PU is innovative in how it facilitates reflective learning. Video analysis is executed in the PU platform by involving the players in the analysis…

  4. Video content analysis of surgical procedures.

    PubMed

    Loukas, Constantinos

    2018-02-01

    In addition to its therapeutic benefits, minimally invasive surgery offers the potential for video recording of the operation. The videos may be archived and used later for reasons such as cognitive training, skills assessment, and workflow analysis. Methods from the major field of video content analysis and representation are increasingly applied in the surgical domain. In this paper, we review recent developments and analyze future directions in the field of content-based video analysis of surgical operations. The reviewed articles were obtained from PubMed and Google Scholar searches on combinations of the following keywords: 'surgery', 'video', 'phase', 'task', 'skills', 'event', 'shot', 'analysis', 'retrieval', 'detection', 'classification', and 'recognition'. The collected articles were categorized and reviewed based on the technical goal sought, type of surgery performed, and structure of the operation. A total of 81 articles were included. The publication activity is constantly increasing; more than 50% of these articles were published in the last 3 years. Significant research has been performed for video task detection and retrieval in eye surgery. In endoscopic surgery, the research activity is more diverse: gesture/task classification, skills assessment, tool type recognition, shot/event detection and retrieval. Recent works employ deep neural networks for phase and tool recognition as well as shot detection. Content-based video analysis of surgical operations is a rapidly expanding field. Several future prospects for research exist including, inter alia, shot boundary detection, keyframe extraction, video summarization, pattern discovery, and video annotation. The development of publicly available benchmark datasets to evaluate and compare task-specific algorithms is essential.

  5. Test-retest reliability of computer-based video analysis of general movements in healthy term-born infants.

    PubMed

    Valle, Susanne Collier; Støen, Ragnhild; Sæther, Rannei; Jensenius, Alexander Refsum; Adde, Lars

    2015-10-01

    A computer-based video analysis has recently been presented for quantitative assessment of general movements (GMs). This method's test-retest reliability, however, has not yet been evaluated. The aim of the current study was to evaluate the test-retest reliability of computer-based video analysis of GMs, and to explore the association between computer-based video analysis and the temporal organization of fidgety movements (FMs). Test-retest reliability study. 75 healthy, term-born infants were recorded twice the same day during the FMs period using a standardized video set-up. The computer-based movement variables "quantity of motion mean" (Qmean), "quantity of motion standard deviation" (QSD) and "centroid of motion standard deviation" (CSD) were analyzed, reflecting the amount of motion (Qmean, QSD) and the variability of the spatial center of motion (CSD) of the infant. In addition, the association between the variable CSD and the temporal organization of FMs was explored. Intraclass correlation coefficients (ICC 1.1 and ICC 3.1) were calculated to assess test-retest reliability. The ICC values for the variables CSD, Qmean and QSD were 0.80, 0.80 and 0.86 for ICC (1.1), respectively; and 0.80, 0.86 and 0.90 for ICC (3.1), respectively. There were significantly lower CSD values in the recordings with continual FMs compared to the recordings with intermittent FMs (p<0.05). This study showed high test-retest reliability of computer-based video analysis of GMs, and a significant association between our computer-based video analysis and the temporal organization of FMs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
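
    The ICC(1,1) and ICC(3,1) coefficients reported above follow the standard Shrout-Fleiss mean-square formulas, which can be sketched as below. This is an illustrative computation, not the study's actual pipeline; the function name and toy data are assumptions.

```python
import numpy as np

def icc_1_1_and_3_1(data):
    """Shrout-Fleiss ICC(1,1) and ICC(3,1) for an (n_subjects, k_sessions)
    array of repeated measurements."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    subj_means = data.mean(axis=1)
    sess_means = data.mean(axis=0)
    ss_between = k * ((subj_means - grand) ** 2).sum()     # between subjects
    ss_within = ((data - subj_means[:, None]) ** 2).sum()  # within subjects
    ss_sessions = n * ((sess_means - grand) ** 2).sum()    # systematic session effect
    ss_error = ss_within - ss_sessions                     # residual
    bms = ss_between / (n - 1)
    wms = ss_within / (n * (k - 1))
    ems = ss_error / ((n - 1) * (k - 1))
    icc11 = (bms - wms) / (bms + (k - 1) * wms)  # one-way random, ICC(1,1)
    icc31 = (bms - ems) / (bms + (k - 1) * ems)  # two-way mixed, ICC(3,1)
    return icc11, icc31

# Toy example: a constant shift between sessions lowers ICC(1,1) but not ICC(3,1),
# since ICC(3,1) measures consistency rather than absolute agreement.
test = np.arange(1.0, 7.0)
icc11, icc31 = icc_1_1_and_3_1(np.column_stack([test, test + 0.5]))
```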

  6. A Web-Based Video Digitizing System for the Study of Projectile Motion.

    ERIC Educational Resources Information Center

    Chow, John W.; Carlton, Les G.; Ekkekakis, Panteleimon; Hay, James G.

    2000-01-01

    Discusses advantages of a video-based, digitized image system for the study and analysis of projectile motion in the physics laboratory. Describes the implementation of a web-based digitized video system. (WRM)

  7. Levels of Interaction and Proximity: Content Analysis of Video-Based Classroom Cases

    ERIC Educational Resources Information Center

    Kale, Ugur

    2008-01-01

    This study employed content analysis techniques to examine video-based cases of two websites that exemplify learner-centered pedagogies for pre-service teachers to carry out in their teaching practices. The study focused on interaction types and physical proximity levels between students and teachers observed in the videos. The findings regarding…

  8. User-oriented summary extraction for soccer video based on multimodal analysis

    NASA Astrophysics Data System (ADS)

    Liu, Huayong; Jiang, Shanshan; He, Tingting

    2011-11-01

    An advanced user-oriented summary extraction method for soccer video is proposed in this work. First, an algorithm for user-oriented summary extraction from soccer video is introduced: a novel approach that integrates multimodal analysis, including extraction and analysis of stadium features, moving-object features, audio features and text features. From these features, the semantics of the soccer video and the highlight model are obtained. The highlight positions can then be located and assembled according to their highlight degree to produce the video summary. The experimental results for sports video of World Cup soccer games indicate that multimodal analysis is effective for soccer video browsing and retrieval.

  9. Advanced Video Analysis Needs for Human Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Campbell, Paul D.

    1994-01-01

    Evaluators of human task performance in space missions make use of video as a primary source of data. Extraction of relevant human performance information from video is often a labor-intensive process requiring a large amount of time on the part of the evaluator. Based on the experiences of several human performance evaluators, needs were defined for advanced tools which could aid in the analysis of video data from space missions. Such tools should increase the efficiency with which useful information is retrieved from large quantities of raw video. They should also provide the evaluator with new analytical functions which are not present in currently used methods. Video analysis tools based on the needs defined by this study would also have uses in U.S. industry and education. Evaluation of human performance from video data can be a valuable technique in many industrial and institutional settings where humans are involved in operational systems and processes.

  10. Constructing storyboards based on hierarchical clustering analysis

    NASA Astrophysics Data System (ADS)

    Hasebe, Satoshi; Sami, Mustafa M.; Muramatsu, Shogo; Kikuchi, Hisakazu

    2005-07-01

    There is a growing need for quick preview of video contents, both to improve the accessibility of video archives and to reduce network traffic. In this paper, a storyboard that contains a user-specified number of keyframes is produced from a given video sequence. It is based on hierarchical cluster analysis of feature vectors derived from wavelet coefficients of video frames. Consistent reuse of the extracted feature vectors is the key to avoiding repeated, computationally intensive parsing of the same video sequence. Experimental results suggest that a significant reduction in computational time is gained by this strategy.
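
    The keyframe-selection step described above can be sketched as follows, assuming per-frame feature vectors (e.g. from wavelet coefficients) are already computed. The function name is illustrative, and Ward linkage is one concrete choice; the abstract does not specify the paper's linkage criterion.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def storyboard_keyframes(features, n_keyframes):
    """Cluster per-frame feature vectors hierarchically and return one
    representative frame index per cluster (the member nearest the cluster
    mean), sorted in temporal order."""
    Z = linkage(features, method="ward")  # hierarchical clustering of frames
    labels = fcluster(Z, t=n_keyframes, criterion="maxclust")
    keyframes = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = features[idx].mean(axis=0)
        # keyframe = cluster member closest to the cluster centroid
        keyframes.append(idx[np.argmin(np.linalg.norm(features[idx] - centroid, axis=1))])
    return sorted(int(k) for k in keyframes)
```

    On a toy sequence with two visually distinct groups of frames, the function returns one frame from each group.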

  11. The Physics of Osmos

    ERIC Educational Resources Information Center

    Vanden Heuvel, Andrew

    2016-01-01

    We describe an analysis of the conservation of momentum in the video game Osmos, which demonstrates that the potential of video game analysis extends far beyond kinematics. This analysis can serve as the basis of an inquiry momentum lab that combines interesting derivations, video-based data collection, and insights into the subtle decisions that…

  12. Motion based parsing for video from observational psychology

    NASA Astrophysics Data System (ADS)

    Kokaram, Anil; Doyle, Erika; Lennon, Daire; Joyeux, Laurent; Fuller, Ray

    2006-01-01

    In psychology it is common to conduct studies involving the observation of humans undertaking some task. The sessions are typically recorded on video and used for subjective visual analysis. The subjective analysis is tedious and time consuming, not only because much useless video material is recorded but also because subjective measures of human behaviour are not necessarily repeatable. This paper presents tools using content-based video analysis that allow automated parsing of video from one such study involving dyslexia. The tools rely on implicit measures of human motion that can be generalised to other applications in the domain of human observation. Results comparing quantitative assessment of human motion with subjective assessment are also presented, illustrating that the system is a useful scientific tool.

  13. Development of a web-based video management and application processing system

    NASA Astrophysics Data System (ADS)

    Chan, Shermann S.; Wu, Yi; Li, Qing; Zhuang, Yueting

    2001-07-01

    Facilitating efficient video manipulation and access in a web-based environment is becoming a popular trend for video applications. In this paper, we present a web-oriented video management and application processing system, based on our previous work on multimedia databases and content-based retrieval. In particular, we extend the VideoMAP architecture with specific web-oriented mechanisms, which include: (1) Concurrency control facilities for the editing of video data among different types of users, such as Video Administrator, Video Producer, Video Editor, and Video Query Client; different users are assigned various priority levels for different operations on the database. (2) A versatile video retrieval mechanism which employs a hybrid approach by integrating a query-based (database) mechanism with content-based retrieval (CBR) functions; its specific language (CAROL/ST with CBR) supports spatio-temporal semantics of video objects, and also offers an improved mechanism to describe the visual content of videos by a content-based analysis method. (3) A query profiling database which records the 'histories' of various clients' query activities; such profiles can be used to provide the default query template when a similar query is encountered by the same kind of users. An experimental prototype system is being developed based on the existing VideoMAP prototype system, using Java and VC++ on the PC platform.

  14. Robust video copy detection approach based on local tangent space alignment

    NASA Astrophysics Data System (ADS)

    Nie, Xiushan; Qiao, Qianping

    2012-04-01

    We propose a robust content-based video copy detection approach based on local tangent space alignment (LTSA), an efficient dimensionality reduction algorithm. The idea is motivated by the fact that video content is becoming richer and its dimensionality higher; this high dimensionality leaves no natural tools for video analysis and understanding. The proposed approach reduces the dimensionality of the video content using LTSA and then generates video fingerprints in the low-dimensional space for copy detection. Furthermore, a dynamic sliding window is applied to fingerprint matching. Experimental results show that the video copy detection approach has good robustness and discrimination.
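
    The reduce-then-match pipeline above can be sketched as follows. As assumptions: truncated SVD (PCA) stands in for LTSA (LTSA itself is available as sklearn.manifold.LocallyLinearEmbedding with method="ltsa"), the matching window is fixed-length rather than the paper's dynamic sliding window, and all names are illustrative.

```python
import numpy as np

def reduce_dim(frame_features, n_components=8):
    """Stand-in dimensionality reduction via truncated SVD (PCA); any reducer
    that maps high-dimensional frame features to a low-dimensional fingerprint
    slots in here."""
    X = frame_features - frame_features.mean(axis=0)  # center the features
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :n_components] * s[:n_components]     # per-frame fingerprint

def best_match(query_fp, ref_fp):
    """Slide the query fingerprint along the reference fingerprint and return
    the offset with the smallest frame-to-frame distance."""
    q = len(query_fp)
    dists = [np.linalg.norm(ref_fp[o:o + q] - query_fp)
             for o in range(len(ref_fp) - q + 1)]
    return int(np.argmin(dists))
```

    A copied clip's fingerprint then matches the reference at the offset it was taken from, even after the features have been reduced to a few dimensions.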

  15. Retrospective Video Analysis: A Reflective Tool for Teachers and Teacher Educators

    ERIC Educational Resources Information Center

    Mosley Wetzel, Melissa; Maloch, Beth; Hoffman, James V.

    2017-01-01

    Teachers may need tools to use video for reflection toward ongoing education and teacher leadership. Based on Goodman's (1996) notion of retrospective miscue analysis, a method of reading instruction that revalues the reader and his or her strategies, retrospective video analysis guides teachers in appreciating and understanding their own…

  16. Motion video analysis using planar parallax

    NASA Astrophysics Data System (ADS)

    Sawhney, Harpreet S.

    1994-04-01

    Motion and structure analysis in video sequences can lead to efficient descriptions of objects and their motions. Interesting events in videos can be detected using such an analysis--for instance, independent object motion when the camera itself is moving, or figure-ground segregation based on the saliency of a structure compared to its surroundings. In this paper we present a method for 3D motion and structure analysis that uses a planar surface in the environment as a reference coordinate system to describe a video sequence. The motion in the video sequence is described as the motion of the reference plane, and the parallax motion of all the non-planar components of the scene. It is shown how this method simplifies the otherwise hard general 3D motion analysis problem. In addition, a natural coordinate system in the environment is used to describe the scene, which can simplify motion-based segmentation. This work is part of an ongoing effort in our group towards video annotation and analysis for indexing and retrieval. Results from a demonstration system being developed are presented.

  17. A coach's political use of video-based feedback: a case study in elite-level academy soccer.

    PubMed

    Booroff, Michael; Nelson, Lee; Potrac, Paul

    2016-01-01

    This paper examines the video-based pedagogical practices of Terry (pseudonym), a head coach of a professional junior academy squad. Data were collected through 6 in-depth, semi-structured interviews and 10 field observations of Terry's video-based coaching in situ. Three embracing categories were generated from the data. These demonstrated that Terry's video-based coaching was far from apolitical. Rather, Terry strategically used performance analysis technologies to help fulfil various objectives and outcomes that he understood to be expected of him within the club environment. Kelchtermans' micropolitical perspective, Callero's work addressing role and Groom et al.'s grounded theory were primarily utilised to make sense of Terry's perceptions and actions. The findings point to the value of developing contextually grounded understandings of coaches' uses of video-based performance analysis technology. Doing so could better prepare coaches for this aspect of their coaching practice.

  18. An Evidence-Based Videotaped Running Biomechanics Analysis.

    PubMed

    Souza, Richard B

    2016-02-01

    Running biomechanics play an important role in the development of injuries. Performing a running biomechanics analysis on injured runners can help to develop treatment strategies. This article provides a framework for a systematic video-based running biomechanics analysis plan based on the current evidence on running injuries, using 2-dimensional (2D) video and readily available tools. Fourteen measurements are proposed in this analysis plan from lateral and posterior video. Identifying simple 2D surrogates for 3D biomechanical variables of interest allows for widespread translation of best practices and offers the best opportunity to impact the highly prevalent problem of the injured runner. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Automated Video Based Facial Expression Analysis of Neuropsychiatric Disorders

    PubMed Central

    Wang, Peng; Barrett, Frederick; Martin, Elizabeth; Milanova, Marina; Gur, Raquel E.; Gur, Ruben C.; Kohler, Christian; Verma, Ragini

    2008-01-01

    Deficits in emotional expression are prominent in several neuropsychiatric disorders, including schizophrenia. Available clinical facial expression evaluations provide subjective and qualitative measurements, which are based on static 2D images that do not capture the temporal dynamics and subtleties of expression changes. Therefore, there is a need for automated, objective and quantitative measurements of facial expressions captured using videos. This paper presents a computational framework that creates probabilistic expression profiles for video data and can potentially help to automatically quantify emotional expression differences between patients with neuropsychiatric disorders and healthy controls. Our method automatically detects and tracks facial landmarks in videos, and then extracts geometric features to characterize facial expression changes. To analyze temporal facial expression changes, we employ probabilistic classifiers that analyze facial expressions in individual frames, and then propagate the probabilities throughout the video to capture the temporal characteristics of facial expressions. The applications of our method to healthy controls and case studies of patients with schizophrenia and Asperger’s syndrome demonstrate the capability of the video-based expression analysis method in capturing subtleties of facial expression. Such results can pave the way for a video based method for quantitative analysis of facial expressions in clinical research of disorders that cause affective deficits. PMID:18045693

  20. Web Video Event Recognition by Semantic Analysis From Ubiquitous Documents.

    PubMed

    Yu, Litao; Yang, Yang; Huang, Zi; Wang, Peng; Song, Jingkuan; Shen, Heng Tao

    2016-12-01

    In recent years, the task of event recognition from videos has attracted increasing interest in the multimedia area. While most of the existing research was mainly focused on exploring visual cues to handle relatively small-granular events, it is difficult to directly analyze video content without any prior knowledge. Therefore, synthesizing both the visual and semantic analysis is a natural way for video event understanding. In this paper, we study the problem of Web video event recognition, where Web videos often describe large-granular events and carry limited textual information. Key challenges include how to accurately represent event semantics from incomplete textual information and how to effectively explore the correlation between visual and textual cues for video event understanding. We propose a novel framework to perform complex event recognition from Web videos. In order to compensate for the insufficient expressive power of visual cues, we construct an event knowledge base by deeply mining semantic information from ubiquitous Web documents. This event knowledge base is capable of describing each event with comprehensive semantics. By utilizing this base, the textual cues for a video can be significantly enriched. Furthermore, we introduce a two-view adaptive regression model, which explores the intrinsic correlation between the visual and textual cues of the videos to learn reliable classifiers. Extensive experiments on two real-world video data sets show the effectiveness of our proposed framework and prove that the event knowledge base indeed helps improve the performance of Web video event recognition.

  1. Efficient subtle motion detection from high-speed video for sound recovery and vibration analysis using singular value decomposition-based approach

    NASA Astrophysics Data System (ADS)

    Zhang, Dashan; Guo, Jie; Jin, Yi; Zhu, Chang'an

    2017-09-01

    High-speed cameras provide full-field measurement of structure motions and have been applied in nondestructive testing and noncontact structure monitoring. Recently, a phase-based method has been proposed to extract sound-induced vibrations from phase variations in videos, and this method provides insights into the study of remote sound surveillance and material analysis. An efficient singular value decomposition (SVD)-based approach is introduced to detect sound-induced subtle motions from pixel intensities in silent high-speed videos. A high-speed camera is initially applied to capture a video of the vibrating objects stimulated by sound fluctuations. Then, subimages collected from a small region of the captured video are reshaped into vectors and assembled into a matrix. Orthonormal image bases (OIBs) are obtained from the SVD of the matrix; the vibration signal can then be obtained by projecting subsequent subimages onto specific OIBs. A simulation test is conducted to validate the effectiveness and efficiency of the proposed method. Two experiments are conducted to demonstrate the potential applications in sound recovery and material analysis. Results show that the proposed method efficiently detects subtle motions from the video.
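
    The reshape-SVD-project pipeline described in the abstract can be sketched in a few lines. This is a minimal reading of the method: after mean removal the first left singular vector captures the dominant motion pattern, whereas the paper's selection of the informative OIB is more involved.

```python
import numpy as np

def svd_motion_signal(subimages, basis_index=0):
    """Flatten a small region of each high-speed frame into a column vector,
    stack the columns into a matrix, take the left singular vectors as
    orthonormal image bases (OIBs), and recover a motion signal by projecting
    every frame onto one chosen OIB."""
    M = np.stack([im.ravel().astype(float) for im in subimages], axis=1)  # pixels x frames
    M -= M.mean(axis=1, keepdims=True)  # remove the static background
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    oib = U[:, basis_index]             # one orthonormal image basis
    return M.T @ oib                    # per-frame projection = vibration signal
```

    On synthetic frames where a fixed spatial pattern oscillates sinusoidally over a static background, the recovered projection reproduces the driving sinusoid up to sign and scale.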

  2. The Impact of Video Review on Supervisory Conferencing

    ERIC Educational Resources Information Center

    Baecher, Laura; McCormack, Bede

    2015-01-01

    This study investigated how video-based observation may alter the nature of post-observation talk between supervisors and teacher candidates. Audio-recorded post-observation conversations were coded using a conversation analysis framework and interpreted through the lens of interactional sociology. Findings suggest that video-based observations…

  3. The Physics of Osmos

    NASA Astrophysics Data System (ADS)

    Vanden Heuvel, Andrew

    2016-03-01

    We describe an analysis of the conservation of momentum in the video game Osmos, which demonstrates that the potential of video game analysis extends far beyond kinematics. This analysis can serve as the basis of an inquiry momentum lab that combines interesting derivations, video-based data collection, and insights into the subtle decisions that game developers must make to balance realistic physics and enjoyable gameplay.

  4. Review of passive-blind detection in digital video forgery based on sensing and imaging techniques

    NASA Astrophysics Data System (ADS)

    Tao, Junjie; Jia, Lili; You, Ying

    2016-01-01

    Advances in digital video compression and IP communication technologies have raised new issues and challenges concerning the integrity and authenticity of surveillance videos. It is important that the system ensure that, once recorded, the video cannot be altered, so that the audit trail is intact for evidential purposes. This paper gives an overview of passive techniques of digital video forensics, which are based on intrinsic fingerprints inherent in digital surveillance videos. We performed a thorough survey of the literature on video manipulation detection methods that accomplish blind authentication without referring to any auxiliary information, and we present a review of the various existing methods. Much more work needs to be done in this field of video forensics based on video data analysis and observation of surveillance systems.

  5. Aquatic Toxic Analysis by Monitoring Fish Behavior Using Computer Vision: A Recent Progress

    PubMed Central

    Fu, Longwen; Liu, Zuoyi

    2018-01-01

    Video-tracking-based biological early warning systems have achieved great progress with advanced computer vision and machine learning methods. The ability to video-track multiple biological organisms has been greatly improved in recent years. Video-based behavioral monitoring has become a common tool for acquiring quantified behavioral data for aquatic risk assessment. Investigation of behavioral responses under chemical and environmental stress has been boosted by rapidly developing machine learning and artificial intelligence. In this paper, we introduce the fundamentals of video tracking and present the pioneering works in precise tracking of a group of individuals in 2D and 3D space. Technical and practical issues encountered in video tracking are explained. Subsequently, toxicity analysis based on fish behavioral data is summarized. Frequently used computational methods and machine learning are explained along with their applications in aquatic toxicity detection and abnormal pattern analysis. Finally, the advantages of recently developed deep learning approaches in toxicity prediction are presented. PMID:29849612

  6. Behavior analysis of video object in complicated background

    NASA Astrophysics Data System (ADS)

    Zhao, Wenting; Wang, Shigang; Liang, Chao; Wu, Wei; Lu, Yang

    2016-10-01

    This paper aims to achieve robust behavior recognition of video objects in complicated backgrounds. Features of the video object are described and modeled according to the depth information of three-dimensional video. Multi-dimensional eigenvectors are constructed and used to process the high-dimensional data. Stable object tracking in complex scenes can be achieved with multi-feature-based behavior analysis, so as to obtain the motion trail. Subsequently, effective behavior recognition of the video object is obtained according to the decision criteria. Moreover, both the real-time performance of the algorithms and the accuracy of the analysis are greatly improved. The theory and methods for behavior analysis of video objects in real scenes put forward by this project have broad application prospects and important practical significance in security, counter-terrorism, military and many other fields.

  7. Content-based management service for medical videos.

    PubMed

    Mendi, Engin; Bayrak, Coskun; Cecen, Songul; Ermisoglu, Emre

    2013-01-01

    Development of health information technology has had a dramatic impact on improving the efficiency and quality of medical care. Developing interoperable health information systems for healthcare providers has the potential to improve the quality and equitability of patient-centered healthcare. In this article, we describe an automated content-based medical video analysis and management service that provides convenience and ease in accessing relevant medical video content without sequential scanning. The system facilitates effective temporal video segmentation and content-based visual information retrieval that enable a more reliable understanding of medical video content. The system is implemented as a Web- and mobile-based service and has the potential to offer a knowledge-sharing platform for efficient medical video content access.

  8. The Systems Engineering Design of a Smart Forward Operating Base Surveillance System for Forward Operating Base Protection

    DTIC Science & Technology

    2013-06-01

    …fixed sensors located along the perimeter of the FOB. The video is analyzed for facial recognition to alert the Network Operations Center (NOC)… the UAV is processed on board for facial recognition, and video for behavior analysis is sent directly to the Network Operations Center (NOC). Video… captured by the fixed sensors is sent directly to the NOC for facial recognition and behavior analysis processing. The multi-directional signal…

  9. Analysis-Preserving Video Microscopy Compression via Correlation and Mathematical Morphology

    PubMed Central

    Shao, Chong; Zhong, Alfred; Cribb, Jeremy; Osborne, Lukas D.; O’Brien, E. Timothy; Superfine, Richard; Mayer-Patel, Ketan; Taylor, Russell M.

    2015-01-01

    The large amount of video data produced by multi-channel, high-resolution microscopy systems drives the need for a new high-performance, domain-specific video compression technique. We describe a novel compression method for video microscopy data based on Pearson's correlation and mathematical morphology. The method makes use of the point-spread function (PSF) in the microscopy video acquisition phase. We compare our method to other lossless compression methods and to lossy JPEG, JPEG2000 and H.264 compression for various kinds of video microscopy data, including fluorescence video and brightfield video. We find that for certain data sets, the new method compresses much better than lossless compression with no impact on analysis results. It achieved a best compressed size of 0.77% of the original size, 25× smaller than the best lossless technique (which yields 20% for the same video). The compressed size scales with the video's scientific data content. Further testing showed that existing lossy algorithms greatly impacted data analysis at similar compression sizes. PMID:26435032
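
    The general keep-only-the-features idea can be sketched as below. Note the assumptions: a simple background-difference threshold stands in for the paper's Pearson-correlation test, and the fixed 'threshold' and 'dilation' parameters are illustrative, whereas the paper sizes the morphological halo from the PSF.

```python
import numpy as np
from scipy import ndimage

def compress_frame(frame, background, threshold=10, dilation=2):
    """Keep only pixels that differ from the background, then dilate the kept
    mask (mathematical morphology) so each feature retains a surrounding halo.
    Returns the boolean mask plus the kept pixel values."""
    mask = np.abs(frame.astype(int) - background.astype(int)) > threshold
    mask = ndimage.binary_dilation(mask, iterations=dilation)
    return mask, frame[mask]

def decompress_frame(mask, values, background):
    """Reconstruct a frame by pasting the kept pixels over the background."""
    out = background.copy()
    out[mask] = values
    return out
```

    For a frame that equals the background everywhere except a small bright feature, the round trip is exact while storing only a small fraction of the pixels.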

  10. Quantitative assessment of human motion using video motion analysis

    NASA Technical Reports Server (NTRS)

    Probe, John D.

    1993-01-01

    In the study of the dynamics and kinematics of the human body a wide variety of technologies has been developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development, coupled with recent advances in video technology, have allowed video based motion analysis systems to emerge as a cost effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video based Ariel Performance Analysis System (APAS) to develop data on shirtsleeved and space-suited human performance in order to plan efficient on-orbit intravehicular and extravehicular activities. APAS is a fully integrated system of hardware and software for biomechanics and the analysis of human performance and generalized motion measurement. Major components of the complete system include the video system, the AT compatible computer, and the proprietary software.

  11. Blurry-frame detection and shot segmentation in colonoscopy videos

    NASA Astrophysics Data System (ADS)

    Oh, JungHwan; Hwang, Sae; Tavanapong, Wallapak; de Groen, Piet C.; Wong, Johnny

    2003-12-01

    Colonoscopy is an important screening procedure for colorectal cancer. During this procedure, the endoscopist visually inspects the colon. Human inspection, however, is not without error. We hypothesize that colonoscopy videos may contain additional valuable information missed by the endoscopist. Video segmentation is the first necessary step in content-based video analysis and retrieval, providing efficient access to important images and video segments in a large colonoscopy video database. Based on the unique characteristics of colonoscopy videos, we introduce a new scheme to detect and remove blurry frames and to segment the videos into shots based on their contents. Our experimental results show that the average precision and recall of the proposed scheme are over 90% for the detection of non-blurry images. The proposed method of blurry-frame detection and shot segmentation is extensible to videos captured from other endoscopic procedures such as upper gastrointestinal endoscopy, enteroscopy, cystoscopy, and laparoscopy.

  12. An Examination of the Effects of a Video-Based Training Package on Professional Staff's Implementation of a Brief Functional Analysis and Data Analysis

    ERIC Educational Resources Information Center

    Fleming, Courtney V.

    2011-01-01

    Minimal research has investigated training packages used to teach professional staff how to implement functional analysis procedures and to interpret data gathered during functional analysis. The current investigation used video-based training with role-play and feedback to teach six professionals in a clinical setting to implement procedures of a…

  13. Quantitative assessment of human motion using video motion analysis

    NASA Technical Reports Server (NTRS)

    Probe, John D.

    1990-01-01

    In the study of the dynamics and kinematics of the human body, a wide variety of technologies was developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development, coupled with recent advances in video technology, have allowed video-based motion analysis systems to emerge as a cost-effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video-based Ariel Performance Analysis System to develop data on shirt-sleeved and space-suited human performance in order to plan efficient on-orbit intravehicular and extravehicular activities. The system is described.

  14. YouTube as a source of information on skin bleaching: a content analysis.

    PubMed

    Basch, C H; Brown, A A; Fullwood, M D; Clark, A; Fung, I C-H; Yin, J

    2018-06-01

    Skin bleaching is a common, yet potentially harmful, body modification practice. This study describes the characteristics of the most widely viewed YouTube™ videos related to skin bleaching. The search term 'skin bleaching' was used to identify the 100 most popular English-language YouTube videos relating to the topic. Both descriptive and specific information were noted. Among the 100 manually coded skin-bleaching YouTube videos in English, there were 21 consumer-created videos, 45 internet-based news videos, 30 television news videos and 4 professional videos. Excluding the 4 professional videos, we limited our content categorization and regression analysis to 96 videos. Approximately 93% (89/96) of the most widely viewed videos mentioned changing how you look, and 74% (71/96) focused on bleaching the whole body. Of the 96 videos, 63 (66%) showed or mentioned a transformation. Only about 14% (13/96) mentioned that skin bleaching is unsafe. The likelihood of a video selling a skin-bleaching product was 17 times higher in internet videos compared with consumer videos (OR = 17.00, 95% CI 4.58-63.09, P < 0.001). Consumer-generated videos were about seven times more likely to mention making bleaching products at home compared with internet-based news videos (OR = 6.86, 95% CI 1.77-26.59, P < 0.01). The most viewed YouTube video on skin bleaching was uploaded by an internet source. Videos made by television sources mentioned more information about skin bleaching being unsafe, while consumer-generated videos focused more on making skin-bleaching products at home. © 2017 British Association of Dermatologists.
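    The odds ratios quoted above come from the study's regression models. For illustration only, a crude odds ratio with a Wald 95% confidence interval can be computed from a 2×2 table of counts; the counts below are hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a/b = exposed with/without the outcome, c/d = unexposed with/without it."""
    orr = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(orr) - z * se)
    hi = math.exp(math.log(orr) + z * se)
    return orr, lo, hi

# Hypothetical counts: 20 of 45 internet videos vs. 2 of 21 consumer videos
# sell a product.
orr, lo, hi = odds_ratio_ci(20, 25, 2, 19)
print(f"OR = {orr:.1f} (95% CI {lo:.2f}-{hi:.2f})")
```

    The paper's reported ORs are adjusted estimates from regression, so a crude 2×2 calculation like this would not reproduce them exactly.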

  15. Acquisition and Analysis of Dynamic Responses of a Historic Pedestrian Bridge using Video Image Processing

    NASA Astrophysics Data System (ADS)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; O'Donnell, Deirdre; Wright, Robert; Pakrashi, Vikram

    2015-07-01

    Video-based tracking is capable of analysing bridge vibrations that are characterised by large amplitudes and low frequencies. This paper presents the use of video images and associated image processing techniques to obtain the dynamic response of a pedestrian suspension bridge in Cork, Ireland. This historic structure is one of the four suspension bridges in Ireland and is notable for its dynamic nature. A video camera was mounted on the river bank and the dynamic responses of the bridge were measured from the video images. The dynamic response is assessed without the need for a reflector on the bridge and in the presence of various forms of luminous complexity in the video image scenes. Vertical deformations of the bridge were measured in this regard. The video image tracking for the measurement of dynamic responses of the bridge was based on correlating patches in time-lagged scenes in video images and utilising a zero-mean normalised cross-correlation (ZNCC) metric. The bridge was excited by designed pedestrian movement and by individual cyclists traversing the bridge. The time series data of dynamic displacement responses of the bridge were analysed to obtain the frequency domain response. Frequencies obtained from video analysis were checked against accelerometer data obtained from the bridge while carrying out the same set of experiments used for video image based recognition.
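    The ZNCC patch-matching step described above can be sketched in a few lines of numpy. This is a minimal brute-force illustration, not the authors' implementation; the helper names and the ±5-pixel search window are assumptions:

```python
import numpy as np

def zncc(patch, candidate):
    """Zero-mean normalised cross-correlation between two equal-size patches."""
    p = patch.astype(float) - patch.mean()
    c = candidate.astype(float) - candidate.mean()
    denom = np.sqrt((p ** 2).sum() * (c ** 2).sum())
    return float((p * c).sum() / denom) if denom else 0.0

def track_patch(patch, frame, search=5):
    """Brute-force search for the integer displacement of `patch` inside
    `frame` that maximises ZNCC over a +/-`search`-pixel window."""
    h, w = patch.shape
    best_score, best_disp = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = search + dy, search + dx
            score = zncc(patch, frame[y0:y0 + h, x0:x0 + w])
            if score > best_score:
                best_score, best_disp = score, (dy, dx)
    return best_disp, best_score

# Synthetic check: embed the patch at a (2, -1) offset from the window centre.
rng = np.random.default_rng(1)
patch = rng.random((16, 16))
frame = rng.random((26, 26))
frame[7:23, 4:20] = patch
print(track_patch(patch, frame))  # best displacement is (2, -1), score ~1.0
```

    Tracking such sub-window displacements frame by frame yields the displacement time series that the paper converts to a frequency-domain response.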

  17. The Role of Lesson Analysis in Pre-Service Teacher Education: An Empirical Investigation of Teacher Learning from a Virtual Video-Based Field Experience

    ERIC Educational Resources Information Center

    Santagata, Rossella; Zannoni, Claudia; Stigler, James W.

    2007-01-01

    A video-based program on lesson analysis for pre-service mathematics teachers was implemented for two consecutive years as part of a teacher education program at the University of Lazio, Italy. Two questions were addressed: What can preservice teachers learn from the analysis of videotaped lessons? How can preservice teachers' analysis ability,…

  18. Video content parsing based on combined audio and visual information

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1999-08-01

    While previous research on audiovisual data segmentation and indexing primarily focuses on the pictorial part, significant clues contained in the accompanying audio flow are often ignored. A fully functional system for video content parsing can be achieved more successfully through a proper combination of audio and visual information. By investigating the data structure of different video types, we present tools for both audio and visual content analysis and a scheme for video segmentation and annotation in this research. In the proposed system, video data are segmented into audio scenes and visual shots by detecting abrupt changes in audio and visual features, respectively. Then, each audio scene is categorized and indexed as one of the basic audio types, while a visual shot is represented by keyframes and associated image features. An index table is then generated automatically for each video clip based on the integration of outputs from audio and visual analysis. It is shown that the proposed system provides satisfactory video indexing results.

  19. A Meta-Analysis of Video Modeling Interventions for Children and Adolescents with Emotional/Behavioral Disorders

    ERIC Educational Resources Information Center

    Clinton, Elias

    2016-01-01

    Video modeling is a non-punitive, evidence-based intervention that has been proven effective for teaching functional life skills and social skills to individuals with autism and developmental disabilities. Compared to the literature base on using video modeling for students with autism and developmental disabilities, fewer studies have examined…

  20. Integrated approach to multimodal media content analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1999-12-01

    In this work, we present a system for the automatic segmentation, indexing and retrieval of audiovisual data based on the combination of audio, visual and textual content analysis. The video stream is demultiplexed into audio, image and caption components. Then, a semantic segmentation of the audio signal based on audio content analysis is conducted, and each segment is indexed as one of the basic audio types. The image sequence is segmented into shots based on visual information analysis, and keyframes are extracted from each shot. Meanwhile, keywords are detected from the closed caption. Index tables are designed for both linear and non-linear access to the video. It is shown by experiments that the proposed methods for multimodal media content analysis are effective and that the integrated framework achieves satisfactory results for video information filtering and retrieval.

  1. Validation of a new method for finding the rotational axes of the knee using both marker-based roentgen stereophotogrammetric analysis and 3D video-based motion analysis for kinematic measurements.

    PubMed

    Roland, Michelle; Hull, M L; Howell, S M

    2011-05-01

    In a previous paper, we reported the virtual axis finder, a new method for finding the rotational axes of the knee. The virtual axis finder was validated through simulations that were subject to limitations. Hence, the objective of the present study was to perform a mechanical validation with two measurement modalities: 3D video-based motion analysis and marker-based roentgen stereophotogrammetric analysis (RSA). A two-rotational-axis mechanism was developed, which simulated internal-external (or longitudinal) and flexion-extension (FE) rotations. The actual axes of rotation were known with respect to the motion analysis and RSA markers to within ±0.0006 deg and ±0.036 mm, and ±0.0001 deg and ±0.016 mm, respectively. The orientation and position root mean squared errors for identifying the longitudinal rotation (LR) and FE axes with video-based motion analysis (0.26 deg, 0.28 mm, 0.36 deg, and 0.25 mm, respectively) were smaller than with RSA (1.04 deg, 0.84 mm, 0.82 deg, and 0.32 mm, respectively). The random error, or precision, in the orientation and position was significantly better (p=0.01 and p=0.02, respectively) in identifying the LR axis with video-based motion analysis (0.23 deg and 0.24 mm) than with RSA (0.95 deg and 0.76 mm). There was no significant difference in the bias errors between measurement modalities. The virtual validations produced errors comparable to those of the mechanical validation. The only significant difference between the errors of the mechanical and virtual validations was the precision in the position of the LR axis while simulating video-based motion analysis (0.24 mm vs. 0.78 mm, p=0.019). These results indicate that video-based motion analysis with the equipment used in this study is the superior measurement modality for use with the virtual axis finder, but both measurement modalities produce satisfactory results. The lack of significant differences between validation techniques suggests that the previously performed virtual sensitivity analysis was appropriately modeled. Thus, the virtual axis finder can be applied, with a thorough understanding of its errors, in a variety of test conditions.

  2. Estimation of low back moments from video analysis: a validation study.

    PubMed

    Coenen, Pieter; Kingma, Idsart; Boot, Cécile R L; Faber, Gert S; Xu, Xu; Bongers, Paulien M; van Dieën, Jaap H

    2011-09-02

    This study aimed to develop, compare and validate two versions of a video analysis method for assessing low back moments during occupational lifting tasks, since epidemiological studies and ergonomic practice require relatively cheap and easily applicable methods to assess low back loads. Ten healthy subjects participated in a protocol comprising 12 lifting conditions. Low back moments were assessed using two variants of a video analysis method and a lab-based reference method. Repeated-measures ANOVAs showed no overall differences in peak moments between the two versions of the video analysis method and the reference method. However, two conditions showed a minor overestimation of moments by one of the video analysis methods. Standard deviations were considerable, suggesting that errors in the video analysis were random. Furthermore, there was a small underestimation of the dynamic components and an overestimation of the static components of the moments. Intraclass correlation coefficients for peak moments showed high correspondence (>0.85) of the video analyses with the reference method. It is concluded that, when a sufficient number of measurements can be taken, the video analysis method for assessment of low back loads during lifting tasks provides valid estimates of low back moments in ergonomic practice and epidemiological studies for lifts up to a moderate level of asymmetry. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Evolution-based Virtual Content Insertion with Visually Virtual Interactions in Videos

    NASA Astrophysics Data System (ADS)

    Chang, Chia-Hu; Wu, Ja-Ling

    With the development of content-based multimedia analysis, virtual content insertion has been widely used and studied for video enrichment and multimedia advertising. However, how to automatically insert a user-selected virtual content into personal videos in a less-intrusive manner, with an attractive representation, is a challenging problem. In this chapter, we present an evolution-based virtual content insertion system which can insert virtual contents into videos with evolved animations according to predefined behaviors emulating the characteristics of evolutionary biology. The videos are considered not only as carriers of message conveyed by the virtual content but also as the environment in which the lifelike virtual contents live. Thus, the inserted virtual content will be affected by the videos to trigger a series of artificial evolutions and evolve its appearances and behaviors while interacting with video contents. By inserting virtual contents into videos through the system, users can easily create entertaining storylines and turn their personal videos into visually appealing ones. In addition, it would bring a new opportunity to increase the advertising revenue for video assets of the media industry and online video-sharing websites.

  4. Web-video-mining-supported workflow modeling for laparoscopic surgeries.

    PubMed

    Liu, Rui; Zhang, Xiaoli; Zhang, Hao

    2016-11-01

    As quality assurance is of strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge such as the surgical workflow model (SWM) to support intuitive cooperation with surgeons. Generating a robust and reliable SWM requires a large amount of training data. However, training data collected by physically recording surgical operations is often limited, and such data collection is time-consuming and labor-intensive, severely limiting the knowledge scalability of surgical systems. The objective of this research is to solve the knowledge scalability problem in surgical workflow modeling in a low-cost, labor-efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A novel video quality analysis method based on topic analysis and sentiment analysis techniques is developed to select high-quality videos from abundant and noisy web videos. A statistical learning method is then used to build the workflow model based on the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for the robotic cholecystectomy surgery. The generated workflow was evaluated with 4 web-retrieved videos and 4 operation-room-recorded videos, respectively. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) proved the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time with low labor cost. Satisfactory performance in mining web videos and learning surgery-related knowledge shows that the webSWM method is promising for scaling knowledge for intelligent surgical systems. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Prediction of advertisement preference by fusing EEG response and sentiment analysis.

    PubMed

    Gauba, Himaanshu; Kumar, Pradeep; Roy, Partha Pratim; Singh, Priyanka; Dogra, Debi Prosad; Raman, Balasubramanian

    2017-08-01

    This paper presents a novel approach to predicting the rating of video advertisements based on a multimodal framework combining physiological analysis of the user and global sentiment ratings available on the internet. We fused Electroencephalogram (EEG) waves of the user and the corresponding global textual comments on the video to understand the user's preference more precisely. In our framework, users were asked to watch a video advertisement while EEG signals were simultaneously recorded. Valence scores were obtained by self-report for each video; a higher valence corresponds to greater intrinsic attractiveness of the video to the user. Furthermore, the multimedia data comprising the comments posted by global viewers were retrieved and processed using Natural Language Processing (NLP) techniques for sentiment analysis. Textual content from review comments was analyzed to obtain a score capturing the sentiment toward the video. A regression technique based on random forests was used to predict the rating of an advertisement from EEG data. Finally, the EEG-based rating is combined with the NLP-based sentiment score to improve the overall prediction. The study was carried out using 15 video clips of advertisements available online, with twenty-five participants involved in evaluating our proposed system. The results are encouraging and suggest that the proposed multimodal approach can achieve lower RMSE in rating prediction compared to prediction using EEG data alone. Copyright © 2017 Elsevier Ltd. All rights reserved.
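    The abstract does not say how the EEG prediction and the sentiment score are combined. One simple late-fusion scheme consistent with the description is a weighted average of the two predictions on a common rating scale; the sketch below is an assumption, not the paper's method, and all numbers are made up:

```python
import numpy as np

def rmse(pred, truth):
    """Root-mean-square error between predicted and self-reported ratings."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

def fuse(eeg_pred, sentiment_score, w=0.5):
    """Late fusion: weighted average of an EEG-based rating prediction and a
    sentiment score already mapped onto the same rating scale."""
    return w * np.asarray(eeg_pred, float) + (1 - w) * np.asarray(sentiment_score, float)

# Made-up ratings on a 1-5 scale, purely to exercise the functions.
truth = [4, 2, 5, 3]
eeg = [3.5, 2.5, 4.0, 3.5]
sentiment = [4.5, 1.5, 5.0, 2.5]
print(rmse(eeg, truth), rmse(fuse(eeg, sentiment), truth))
```

    When the two modalities make partially uncorrelated errors, as in this toy example, the fused prediction achieves lower RMSE than either alone, which mirrors the paper's reported finding.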

  6. Descriptive analysis of YouTube music therapy videos.

    PubMed

    Gooding, Lori F; Gregory, Dianne

    2011-01-01

    The purpose of this study was to conduct a descriptive analysis of music therapy-related videos on YouTube. Preliminary searches using the keywords music therapy, music therapy session, and "music therapy session" resulted in listings of 5000, 767, and 59 videos, respectively. The narrowed-down listing of 59 videos was divided between two investigators and reviewed in order to determine their relationship to actual music therapy practice. A total of 32 videos were determined to be depictions of music therapy sessions. These videos were analyzed using a 16-item investigator-created rubric that examined both video-specific and therapy-specific information. Results of the analysis indicated that audio and visual quality was adequate, while narrative descriptions and identification information were ineffective in the majority of the videos. The top 5 videos (based on the highest number of viewings in the sample) were selected for further analysis in order to investigate demonstration of the Professional Level of Practice Competencies set forth in the American Music Therapy Association (AMTA) Professional Competencies (AMTA, 2008). Four of the five videos met basic competency criteria, with the quality of the fifth video precluding evaluation of content. Of particular interest is the fact that none of the videos included credentialing information. Results of this study suggest the need to consider ways to ensure accurate dissemination of music therapy-related information in the YouTube environment, ethical standards when posting music therapy session videos, and the possibility of creating AMTA standards for posting music therapy-related videos.

  7. Automatic video segmentation and indexing

    NASA Astrophysics Data System (ADS)

    Chahir, Youssef; Chen, Liming

    1999-08-01

    Indexing is an important aspect of video database management. Video indexing involves the analysis of video sequences, which is a computationally intensive process, and effective management of digital video requires robust indexing techniques. The main purpose of our proposed video segmentation is twofold. First, we develop an algorithm that identifies camera shot boundaries, based on a combination of color histograms and a block-based technique. Next, each temporal segment is represented by a color reference frame, which captures shot similarity and is used in the constitution of scenes. Experimental results using a variety of videos selected from the corpus of the French Audiovisual National Institute are presented to demonstrate the effectiveness of shot detection, content characterization of shots, and scene constitution.
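    A minimal version of histogram-based shot-boundary detection, in the spirit of (but not identical to) the approach described above, can be sketched as follows; the bin count and threshold are arbitrary choices for the sketch:

```python
import numpy as np

def color_hist(frame, bins=16):
    """Concatenated per-channel intensity histogram, normalised to sum to 1."""
    hists = [np.histogram(frame[..., ch], bins=bins, range=(0, 256))[0]
             for ch in range(frame.shape[-1])]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def shot_boundaries(frames, threshold=0.5):
    """Indices i where frame i starts a new shot, judged by the L1 distance
    between consecutive colour histograms exceeding `threshold`."""
    cuts = []
    prev = color_hist(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = color_hist(frame)
        if np.abs(cur - prev).sum() > threshold:
            cuts.append(i)
        prev = cur
    return cuts

# Synthetic clip: five dark frames followed by five bright frames.
dark = np.full((8, 8, 3), 10, dtype=np.uint8)
bright = np.full((8, 8, 3), 200, dtype=np.uint8)
frames = [dark] * 5 + [bright] * 5
print(shot_boundaries(frames))  # [5]
```

    Global histograms like these are robust to motion within a shot but miss gradual transitions; the block-based comparison mentioned in the abstract adds spatial locality to reduce such misses.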

  8. No-Reference Video Quality Assessment Based on Statistical Analysis in 3D-DCT Domain.

    PubMed

    Li, Xuelong; Guo, Qun; Lu, Xiaoqiang

    2016-05-13

    It is an important task to design models for universal no-reference video quality assessment (NR-VQA) in multiple video processing and computer vision applications. However, most existing NR-VQA metrics are designed for specific distortion types, which are often not known in practical applications. A further deficiency is that the spatial and temporal information of videos is rarely considered simultaneously. In this paper, we propose a new NR-VQA metric based on spatiotemporal natural video statistics (NVS) in the 3D discrete cosine transform (3D-DCT) domain. In the proposed method, a set of features is first extracted from a statistical analysis of the 3D-DCT coefficients to characterize the spatiotemporal statistics of videos in different views. These features are then used to predict the perceived video quality via an efficient linear support vector regression (SVR) model. The contributions of this paper are: 1) we explore the spatiotemporal statistics of videos in the 3D-DCT domain, which has an inherent spatiotemporal encoding advantage over other widely used 2D transformations; 2) we extract a small set of simple but effective statistical features for video visual quality prediction; 3) the proposed method is universal across multiple types of distortions and robust to different databases. The proposed method is tested on four widely used video databases. Extensive experimental results demonstrate that the proposed method is competitive with state-of-the-art NR-VQA metrics and the top-performing FR-VQA and RR-VQA metrics.
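    As a hedged sketch of the kind of feature extraction described (not the paper's exact NVS features or its SVR stage), a separable 3D DCT-II and a few toy coefficient statistics can be computed with numpy alone:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def dct3(block):
    """Separable 3D DCT-II of a (t, y, x) video block."""
    t, y, x = block.shape
    out = np.tensordot(dct_matrix(t), block, axes=(1, 0))                   # transform t
    out = np.tensordot(dct_matrix(y), out, axes=(1, 1)).transpose(1, 0, 2)  # transform y
    out = np.tensordot(dct_matrix(x), out, axes=(1, 2)).transpose(1, 2, 0)  # transform x
    return out

def spatiotemporal_features(block):
    """Toy statistics of the AC coefficients: mean magnitude, spread, kurtosis."""
    c = dct3(block.astype(float))
    ac = c.ravel()[1:]  # drop the DC coefficient c[0, 0, 0]
    z = (ac - ac.mean()) / ac.std()
    return np.array([np.abs(ac).mean(), ac.std(), (z ** 4).mean()])
```

    Because the transform is orthonormal, it preserves the block's energy, and distortions such as blocking or blur reshape the distribution of AC coefficients; statistics of that distribution are what an SVR-style regressor would consume.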

  9. Automated video-based assessment of surgical skills for training and evaluation in medical schools.

    PubMed

    Zia, Aneeq; Sharma, Yachna; Bettadapura, Vinay; Sarin, Eric L; Ploetz, Thomas; Clements, Mark A; Essa, Irfan

    2016-09-01

    Routine evaluation of basic surgical skills in medical schools requires considerable time and effort from supervising faculty, since a supervisor has to observe each surgical trainee in person. Alternatively, supervisors may use training videos, which reduces some of the logistical overhead, but all of these approaches remain time-consuming and subject to human bias. In this paper, we present an automated system for surgical skills assessment that analyzes video data of surgical activities. We compare different techniques for video-based surgical skill evaluation: capturing motion information at a coarser granularity using symbols or words, extracting motion dynamics using textural patterns in a frame kernel matrix, and analyzing fine-grained motion information using frequency analysis. We were able to classify surgeons into different skill levels with high accuracy. Our results indicate that fine-grained analysis of motion dynamics via frequency analysis is most effective in capturing the skill-relevant information in surgical videos. Our evaluations show that frequency features perform better than motion texture features, which in turn perform better than symbol- and word-based features. Put succinctly, skill classification accuracy is positively correlated with motion granularity, as demonstrated by our results on two challenging video datasets.

  10. Examining Feedback in an Instructional Video Game Using Process Data and Error Analysis. CRESST Report 817

    ERIC Educational Resources Information Center

    Buschang, Rebecca E.; Kerr, Deirdre S.; Chung, Gregory K. W. K.

    2012-01-01

    Appropriately designed technology-based learning environments such as video games can be used to give immediate and individualized feedback to students. However, little is known about the design and use of feedback in instructional video games. This study investigated how feedback used in a mathematics video game about fractions impacted student…

  11. Moderating Factors of Video-Modeling with Other as Model: A Meta-Analysis of Single-Case Studies

    ERIC Educational Resources Information Center

    Mason, Rose A.; Ganz, Jennifer B.; Parker, Richard I.; Burke, Mack D.; Camargo, Siglia P.

    2012-01-01

    Video modeling with other as model (VMO) is a more practical method for implementing video-based modeling techniques, such as video self-modeling, which requires significantly more editing. Despite this, identification of contextual factors such as participant characteristics and targeted outcomes that moderate the effectiveness of VMO has not…

  12. Learning to Notice Mathematics Instruction: Using Video to Develop Preservice Teachers' Vision of Ambitious Pedagogy

    ERIC Educational Resources Information Center

    van Es, Elizabeth A.; Cashen, Mary; Barnhart, Tara; Auger, Anamarie

    2017-01-01

    Video is used extensively in teacher preparation, raising questions about what and how preservice teachers learn through video observation and analysis. We investigate the development of candidates' noticing of ambitious mathematics pedagogy in the context of a video-based course designed to cultivate ways of seeing and interpreting classroom…

  13. Concept indexing and expansion for social multimedia websites based on semantic processing and graph analysis

    NASA Astrophysics Data System (ADS)

    Lin, Po-Chuan; Chen, Bo-Wei; Chang, Hangbae

    2016-07-01

    This study presents a human-centric technique for social video expansion based on semantic processing and graph analysis. The objective is to increase metadata of an online video and to explore related information, thereby facilitating user browsing activities. To analyze the semantic meaning of a video, shots and scenes are firstly extracted from the video on the server side. Subsequently, this study uses annotations along with ConceptNet to establish the underlying framework. Detailed metadata, including visual objects and audio events among the predefined categories, are indexed by using the proposed method. Furthermore, relevant online media associated with each category are also analyzed to enrich the existing content. With the above-mentioned information, users can easily browse and search the content according to the link analysis and its complementary knowledge. Experiments on a video dataset are conducted for evaluation. The results show that our system can achieve satisfactory performance, thereby demonstrating the feasibility of the proposed idea.

  14. Instructional analysis of lecture video recordings and its application for quality improvement of medical lectures.

    PubMed

    Baek, Sunyong; Im, Sun Ju; Lee, Sun Hee; Kam, Beesung; Yune, So Joung; Lee, Sang Soo; Lee, Jung A; Lee, Yuna; Lee, Sang Yeoup

    2011-12-01

    The lecture is a technique for delivering knowledge and information cost-effectively to large classes in medical education. The aim of this study was to analyze teaching quality, based on a triangle analysis of video recordings of medical lectures, in order to strengthen teaching competency in medical school. The subjects of this study were 13 medical professors who taught 1st- and 2nd-year medical students and agreed to a triangle analysis of video recordings of their lectures. We first performed the triangle analysis, which consisted of a professional analysis of the video recordings, self-assessment by the teaching professors, and feedback from students; the data were cross-checked by five school consultants for reliability and consistency. Most of the distress that teachers experienced during lectures occurred in uniform teaching environments, such as large lecture classes. Large lectures that relied primarily on PowerPoint to deliver information resulted in poor interaction with students. Other distressing factors were personal characteristics and a lack of strategic faculty development. Triangle analysis of video recordings of medical lectures gives teachers an opportunity and a motive to improve teaching quality. Faculty development and various improvement strategies based on this analysis are expected to help teachers succeed as effective, efficient, and attractive lecturers while improving the quality of large lecture classes.

  15. Watch what happens: using a web-based multimedia platform to enhance intraoperative learning and development of clinical reasoning.

    PubMed

    Fingeret, Abbey L; Martinez, Rebecca H; Hsieh, Christine; Downey, Peter; Nowygrod, Roman

    2016-02-01

    We aim to determine whether observed operations or internet-based video review predict improved performance in the surgery clerkship. A retrospective review of students' usage of surgical videos, observed operations, evaluations, and examination scores was used to construct an exploratory principal component analysis. Multivariate regression was used to determine factors predictive of clerkship performance. Case log data for 231 students revealed a median of 25 observed cases. Students accessed the web-based video platform a median of 15 times. Principal component analysis yielded 4 factors contributing 74% of the variability, with a Kaiser-Meyer-Olkin coefficient of .83. Multivariate regression identified shelf score (P < .0001), internal clinical skills examination score (P < .0001), subjective evaluations (P < .001), and video website utilization (P < .001), but not observed cases, as significantly associated with overall performance. Utilization of a web-based operative video platform during a surgical clerkship is independently associated with improved clinical reasoning, fund of knowledge, and overall evaluation. Thus, this modality can serve as a useful adjunct to live observation. Copyright © 2016 Elsevier Inc. All rights reserved.
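The exploratory principal component analysis used above can be sketched with a plain SVD. This is a minimal illustration, not the study's actual analysis: the feature matrix, its column meanings, and the 74% target are illustrative stand-ins.

```python
import numpy as np

# Hypothetical per-student feature matrix (231 students x 4 features,
# e.g. video logins, observed cases, shelf score, skills score).
rng = np.random.default_rng(0)
X = rng.normal(size=(231, 4))
X[:, 1] += 0.8 * X[:, 0]               # induce some correlation

# Center the data, then PCA via SVD.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)        # variance ratio per component

# Number of components needed to capture ~74% of the variability.
cum = np.cumsum(explained)
k = int(np.searchsorted(cum, 0.74)) + 1
```

The Kaiser-Meyer-Olkin statistic reported in the abstract measures sampling adequacy and would be computed separately from the correlation matrix.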

  16. Application of Integral Optical Flow for Determining Crowd Movement from Video Images Obtained Using Video Surveillance Systems

    NASA Astrophysics Data System (ADS)

    Chen, H.; Ye, Sh.; Nedzvedz, O. V.; Ablameyko, S. V.

    2018-03-01

    Study of crowd movement is an important practical problem, and its solution is used in video surveillance systems for preventing various emergency situations. In the general case, a group of fast-moving people is of more interest than a group of stationary or slow-moving people. We propose a new method for crowd movement analysis using a video sequence, based on integral optical flow. We have determined several characteristics of a moving crowd such as density, speed, direction of motion, symmetry, and in/out index. These characteristics are used for further analysis of a video scene.
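The crowd characteristics named above (density, speed, direction, in/out index) can be derived from any dense optical-flow field. A minimal sketch, assuming such a field is already available from an optical-flow estimator; the synthetic field and the region of interest are illustrative:

```python
import numpy as np

# `flow` stands in for a dense optical-flow field (H x W x 2, dx/dy per
# pixel) produced by any estimator; here a synthetic uniform field.
H, W = 120, 160
flow = np.zeros((H, W, 2))
flow[:, :, 0] = 2.0                           # 2 px/frame to the right

speed = np.linalg.norm(flow, axis=2)          # per-pixel speed
mean_speed = speed.mean()
density = np.mean(speed > 0.5)                # fraction of moving pixels

# Dominant direction: angle of the mean flow vector.
mean_vec = flow.reshape(-1, 2).mean(axis=0)
direction = np.degrees(np.arctan2(mean_vec[1], mean_vec[0]))

# In/out index for a region of interest: net horizontal flow inside it.
roi = flow[40:80, 60:100]
in_out = roi.reshape(-1, 2).mean(axis=0)[0]   # > 0: net rightward motion
```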

  17. Colonoscopy video quality assessment using hidden Markov random fields

    NASA Astrophysics Data System (ADS)

    Park, Sun Young; Sargent, Dusty; Spofford, Inbar; Vosburgh, Kirby

    2011-03-01

    With colonoscopy becoming a common procedure for individuals aged 50 or more who are at risk of developing colorectal cancer (CRC), colon video data is being accumulated at an ever increasing rate. However, the clinically valuable information contained in these videos is not being maximally exploited to improve patient care and accelerate the development of new screening methods. One of the well-known difficulties in colonoscopy video analysis is the abundance of frames with no diagnostic information. Approximately 40%-50% of the frames in a colonoscopy video are contaminated by noise, acquisition errors, glare, blur, and uneven illumination. Therefore, filtering out low quality frames containing no diagnostic information can significantly improve the efficiency of colonoscopy video analysis. To address this challenge, we present a quality assessment algorithm to detect and remove low quality, uninformative frames. The goal of our algorithm is to discard low quality frames while retaining all diagnostically relevant information. Our algorithm is based on a hidden Markov model (HMM) in combination with two measures of data quality to filter out uninformative frames. Furthermore, we present a two-level framework based on an embedded hidden Markov model (EHMM) to incorporate the proposed quality assessment algorithm into a complete, automated diagnostic image analysis system for colonoscopy video.
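The HMM-based filtering step can be sketched as a two-state decoding problem (state 0: uninformative, state 1: informative) over per-frame quality observations. This is an outline only: the binary observations, transition matrix, and emission probabilities below are illustrative, not the paper's trained model.

```python
import numpy as np

def viterbi(obs, trans, emit, init):
    """Most likely state path for a discrete-observation HMM."""
    n_states, T = trans.shape[0], len(obs)
    logp = np.log(init) + np.log(emit[:, obs[0]])
    back = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        scores = logp[:, None] + np.log(trans)   # prev-state x next-state
        back[t] = scores.argmax(axis=0)
        logp = scores.max(axis=0) + np.log(emit[:, obs[t]])
    path = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Observations: 1 if a frame passes a simple quality check (e.g. sharpness).
obs = np.array([1, 1, 0, 1, 1, 0, 0, 0, 0, 1])
trans = np.array([[0.8, 0.2], [0.2, 0.8]])       # sticky states
emit = np.array([[0.7, 0.3], [0.2, 0.8]])        # P(obs | state)
init = np.array([0.5, 0.5])
states = viterbi(obs, trans, emit, init)
```

The sticky transitions smooth over isolated noisy observations, so a single bad frame inside a good run is not discarded.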

  18. Further Exploration of the Classroom Video Analysis (CVA) Instrument as a Measure of Usable Knowledge for Teaching Mathematics: Taking a Knowledge System Perspective

    ERIC Educational Resources Information Center

    Kersting, Nicole B.; Sutton, Taliesin; Kalinec-Craig, Crystal; Stoehr, Kathleen Jablon; Heshmati, Saeideh; Lozano, Guadalupe; Stigler, James W.

    2016-01-01

    In this article we report further explorations of the classroom video analysis instrument (CVA), a measure of usable teacher knowledge based on scoring teachers' written analyses of classroom video clips. Like other researchers, our work thus far has attempted to identify and measure separable components of teacher knowledge. In this study we take…

  19. Computer-based video analysis identifies infants with absence of fidgety movements.

    PubMed

    Støen, Ragnhild; Songstad, Nils Thomas; Silberg, Inger Elisabeth; Fjørtoft, Toril; Jensenius, Alexander Refsum; Adde, Lars

    2017-10-01

    Background: Absence of fidgety movements (FMs) at 3 months' corrected age is a strong predictor of cerebral palsy (CP) in high-risk infants. This study evaluates the association between computer-based video analysis and the temporal organization of FMs assessed with the General Movement Assessment (GMA). Methods: Infants were eligible for this prospective cohort study if referred to a high-risk follow-up program in a participating hospital. Video recordings taken at 10-15 weeks post-term age were used for GMA and computer-based analysis. The variation of the spatial center of motion, derived from differences between subsequent video frames, was used for quantitative analysis. Results: Of 241 recordings from 150 infants, 48 (24.1%) were classified with absence of FMs or sporadic FMs using the GMA. The variation of the spatial center of motion (C_SD) during a recording was significantly lower in infants with normal (0.320; 95% confidence interval (CI) 0.309, 0.330) vs. absence of or sporadic (0.380; 95% CI 0.361, 0.398) FMs (P < 0.001). A triage model with C_SD thresholds chosen for a sensitivity of 90% and specificity of 80% gave a 40% referral rate for GMA. Conclusion: Quantitative video analysis during the FMs' period can be used to triage infants at high risk of CP to early intervention or observational GMA.
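The quantitative idea behind C_SD can be illustrated as follows: compute the centroid of changed pixels between consecutive frames, then take the variability of that centroid over the recording. The threshold, frame sizes, and synthetic frames below are stand-ins for the study's recordings and parameters.

```python
import numpy as np

def center_of_motion(prev, curr, thresh=10):
    """Centroid (x, y) of pixels that changed between two frames."""
    diff = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    ys, xs = np.nonzero(diff)
    if len(xs) == 0:
        return None
    return np.array([xs.mean(), ys.mean()])

rng = np.random.default_rng(1)
frames = rng.integers(0, 256, size=(50, 60, 80), dtype=np.uint8)  # T x H x W
centers = [c for c in (center_of_motion(a, b)
                       for a, b in zip(frames, frames[1:])) if c is not None]
centers = np.array(centers)

# Normalise by image size, then take the std as the variability measure.
norm = centers / np.array([80, 60])
c_sd = norm.std(axis=0).mean()
```

A lower value indicates motion concentrated around a stable centre, which in the study was associated with normal fidgety movements.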

  20. Using computer-based video analysis in the study of fidgety movements.

    PubMed

    Adde, Lars; Helbostad, Jorunn L; Jensenius, Alexander Refsum; Taraldsen, Gunnar; Støen, Ragnhild

    2009-09-01

    Absence of fidgety movements (FM) in high-risk infants is a strong marker for later cerebral palsy (CP). FMs can be classified by the General Movement Assessment (GMA), based on Gestalt perception of the infant's movement pattern. More objective movement analysis may be provided by computer-based technology. The aim of this study was to explore the feasibility of a computer-based video analysis of infants' spontaneous movements in classifying non-fidgety versus fidgety movements. GMA was performed from video material of the fidgety period in 82 term and preterm infants at low and high risk of developing CP. The same videos were analysed using custom-developed software called the General Movement Toolbox (GMT), with visualisation of the infant's movements for qualitative analyses. Variables derived from the calculation of the displacement of pixels from one video frame to the next were used for quantitative analyses. Visual representations from GMT showed easily recognisable patterns of FMs. Of the eight quantitative variables derived, the variability in the displacement of a spatial centre of active pixels in the image had the highest sensitivity (81.5%) and specificity (70.0%) in classifying FMs. By setting triage thresholds at 90% sensitivity and specificity for FM, the need for further referral was reduced by 70%. Video recordings can be used for qualitative and quantitative analyses of FMs provided by GMT. GMT is easy to implement in clinical practice and may provide assistance in detecting infants without FMs.

  1. Zika Virus on YouTube: An Analysis of English-language Video Content by Source

    PubMed Central

    2017-01-01

    Objectives The purpose of this study was to describe the source, length, number of views, and content of the most widely viewed Zika virus (ZIKV)-related YouTube videos. We hypothesized that ZIKV-related videos uploaded by different sources contained different content. Methods The 100 most viewed English ZIKV-related videos were manually coded and analyzed statistically. Results Among the 100 videos, there were 43 consumer-generated videos, 38 Internet-based news videos, 15 TV-based news videos, and 4 professional videos. Internet news sources captured over two-thirds of the total of 8 894 505 views. Compared with consumer-generated videos, Internet-based news videos were more likely to mention the impact of ZIKV on babies (odds ratio [OR], 6.25; 95% confidence interval [CI], 1.64 to 23.76), the number of cases in Latin America (OR, 5.63; 95% CI, 1.47 to 21.52); and ZIKV in Africa (OR, 2.56; 95% CI, 1.04 to 6.31). Compared with consumer-generated videos, TV-based news videos were more likely to express anxiety or fear of catching ZIKV (OR, 6.67; 95% CI, 1.36 to 32.70); to highlight fear of ZIKV among members of the public (OR, 7.45; 95% CI, 1.20 to 46.16); and to discuss avoiding pregnancy (OR, 3.88; 95% CI, 1.13 to 13.25). Conclusions Public health agencies should establish a larger presence on YouTube to reach more people with evidence-based information about ZIKV. PMID:28372356
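The odds ratios with 95% confidence intervals reported above are computed from 2×2 contingency counts. A minimal sketch using the standard log (Woolf) method; the counts below are made up for illustration and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with 95% CI from a 2x2 table.
    a/b: outcome present/absent in group 1; c/d: same in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative counts: videos mentioning a topic vs. not, by source type.
or_, lo, hi = odds_ratio_ci(20, 18, 6, 37)
```

An interval that excludes 1.0 indicates a statistically significant difference between the two sources at the 5% level.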

  2. Bring It to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis.

    PubMed

    Stein, Manuel; Janetzko, Halldor; Lamprecht, Andreas; Breitkreutz, Thorsten; Zimmermann, Philipp; Goldlucke, Bastian; Schreck, Tobias; Andrienko, Gennady; Grossniklaus, Michael; Keim, Daniel A

    2018-01-01

    Analysts in professional team sport regularly perform analysis to gain strategic and tactical insights into player and team behavior. Goals of team sport analysis regularly include identification of weaknesses of opposing teams, or assessing performance and improvement potential of a coached team. Current analysis workflows are typically based on the analysis of team videos. Also, analysts can rely on techniques from Information Visualization, to depict e.g., player or ball trajectories. However, video analysis is typically a time-consuming process, where the analyst needs to memorize and annotate scenes. In contrast, visualization typically relies on an abstract data model, often using abstract visual mappings, and is not directly linked to the observed movement context anymore. We propose a visual analytics system that tightly integrates team sport video recordings with abstract visualization of underlying trajectory data. We apply appropriate computer vision techniques to extract trajectory data from video input. Furthermore, we apply advanced trajectory and movement analysis techniques to derive relevant team sport analytic measures for region, event and player analysis in the case of soccer analysis. Our system seamlessly integrates video and visualization modalities, enabling analysts to draw on the advantages of both analysis forms. Several expert studies conducted with team sport analysts indicate the effectiveness of our integrated approach.

  3. Hierarchical structure for audio-video based semantic classification of sports video sequences

    NASA Astrophysics Data System (ADS)

    Kolekar, M. H.; Sengupta, S.

    2005-07-01

    A hierarchical structure for sports event classification based on audio and video content analysis is proposed in this paper. Compared to the event classifications in other games, those of cricket are very challenging and yet unexplored. We have successfully solved cricket video classification problem using a six level hierarchical structure. The first level performs event detection based on audio energy and Zero Crossing Rate (ZCR) of short-time audio signal. In the subsequent levels, we classify the events based on video features using a Hidden Markov Model implemented through Dynamic Programming (HMM-DP) using color or motion as a likelihood function. For some of the game-specific decisions, a rule-based classification is also performed. Our proposed hierarchical structure can easily be applied to any other sports. Our results are very promising and we have moved a step forward towards addressing semantic classification problems in general.
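The first-level audio features named above (short-time energy and zero-crossing rate) are straightforward to compute. A minimal sketch with illustrative frame/hop lengths and synthetic audio in place of a broadcast soundtrack:

```python
import numpy as np

def short_time_features(x, frame=400, hop=200):
    """Per-frame energy and zero-crossing rate of a 1-D signal."""
    energies, zcrs = [], []
    for start in range(0, len(x) - frame + 1, hop):
        w = x[start:start + frame]
        energies.append(float(np.mean(w ** 2)))
        zcrs.append(float(np.mean(np.abs(np.diff(np.sign(w))) > 0)))
    return np.array(energies), np.array(zcrs)

# Synthetic stand-ins: quiet low-frequency ambience vs. loud excited audio.
t = np.arange(8000) / 8000.0
quiet = 0.01 * np.sin(2 * np.pi * 100 * t)
loud = 0.5 * np.sin(2 * np.pi * 2000 * t)
e_q, z_q = short_time_features(quiet)
e_l, z_l = short_time_features(loud)
```

Frames with simultaneously high energy and high ZCR are the kind of excited-commentary candidates that the first hierarchy level would flag for the video-based classifiers.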

  4. The Effect of Motion Analysis Activities in a Video-Based Laboratory in Students' Understanding of Position, Velocity and Frames of Reference

    ERIC Educational Resources Information Center

    Koleza, Eugenia; Pappas, John

    2008-01-01

    In this article, we present the results of a qualitative research project on the effect of motion analysis activities in a Video-Based Laboratory (VBL) on students' understanding of position, velocity and frames of reference. The participants in our research were 48 pre-service teachers enrolled in Education Departments with no previous strong…

  5. An Evaluation of a Diabetes Self-Management Education (DSME) Intervention Delivered Using Avatar-Based Technology: Certified Diabetes Educators' Ratings and Perceptions.

    PubMed

    Duncan-Carnesciali, Joanne; Wallace, Barbara C; Odlum, Michelle

    2018-06-01

    Purpose: The purpose of this study was to evaluate the perceptions that certified diabetes educators (CDEs) of diverse health professions have of a culturally appropriate e-health intervention that used avatar-based technology. Methods: Cross-sectional, survey-based design using quantitative and qualitative paradigms. A logic model framed the study, centered on the broad and general concepts leading to study outcomes. In total, 198 CDEs participated in the evaluation. Participants were mostly female and ranged in age from 26 to 76 years. The most represented profession in the sample was registered nursing. The study setting and data collection occurred at https://www.surveymonkey.com/r/AvatarVideoSurvey-for-Certified_Diabetes_Educators . The study instruments were the Basic Demographics Survey (BD-13), Educational Material Use and Rating of Quality Scale (EMU-ROQ-9), Marlowe-Crowne Social Desirability Survey (MS-SOC-DES-CDE-13), Quality of Avatar Video Rating Scale (QAVRS-7), Recommend Avatar to Patients Scale (RAVTPS-3), Recommend Avatar Video to Health Professionals Scale (RAVTHP-3), and Avatar Video Applications Scale (AVAPP-1). Statistical analyses included t tests, Pearson product-moment correlations, backward stepwise regression, and content/thematic analysis. Results: Age and ethnicity (Arab/Middle Eastern, Asian, and white/European descent) were significant predictors of a high quality rating of the video. Thematic and content analysis of the data revealed an overall positive perception of the video. Conclusions: An e-health intervention grounded in evidence-based health behavior theories has the potential to increase access to diabetes self-management education, as evidenced by CDEs' ratings and perceptions of the video.

  6. Complementing Operating Room Teaching With Video-Based Coaching.

    PubMed

    Hu, Yue-Yung; Mazer, Laura M; Yule, Steven J; Arriaga, Alexander F; Greenberg, Caprice C; Lipsitz, Stuart R; Gawande, Atul A; Smink, Douglas S

    2017-04-01

    Surgical expertise demands technical and nontechnical skills. Traditionally, surgical trainees acquired these skills in the operating room; however, operative time for residents has decreased with duty hour restrictions. As in other professions, video analysis may help maximize the learning experience. To develop and evaluate a postoperative video-based coaching intervention for residents. In this mixed methods analysis, 10 senior (postgraduate year 4 and 5) residents were videorecorded operating with an attending surgeon at an academic tertiary care hospital. Each video formed the basis of a 1-hour one-on-one coaching session conducted by the operative attending; although a coaching framework was provided, participants determined the specific content collaboratively. Teaching points were identified in the operating room and the video-based coaching sessions; iterative inductive coding, followed by thematic analysis, was performed. Teaching points made in the operating room were compared with those in the video-based coaching sessions with respect to initiator, content, and teaching technique, adjusting for time. Among 10 cases, surgeons made more teaching points per unit time (63.0 vs 102.7 per hour) while coaching. Teaching in the video-based coaching sessions was more resident centered; attendings were more inquisitive about residents' learning needs (3.30 vs 0.28, P = .04), and residents took more initiative to direct their education (27% [198 of 729 teaching points] vs 17% [331 of 1977 teaching points], P < .001). Surgeons also more frequently validated residents' experiences (8.40 vs 1.81, P < .01), and they tended to ask more questions to promote critical thinking (9.30 vs 3.32, P = .07) and set more learning goals (2.90 vs 0.28, P = .11). 
More complex topics, including intraoperative decision making (mean, 9.70 vs 2.77 instances per hour, P = .03) and failure to progress (mean, 1.20 vs 0.13 instances per hour, P = .04) were addressed, and they were more thoroughly developed and explored. Excerpts of dialogue are presented to illustrate these findings. Video-based coaching is a novel and feasible modality for supplementing intraoperative learning. Objective evaluation demonstrates that video-based coaching may be particularly useful for teaching higher-level concepts, such as decision making, and for individualizing instruction and feedback to each resident.

  7. Analysis and segmentation of images in case of solving problems of detecting and tracing objects on real-time video

    NASA Astrophysics Data System (ADS)

    Ezhova, Kseniia; Fedorenko, Dmitriy; Chuhlamov, Anton

    2016-04-01

    The article deals with methods of image segmentation based on color space conversion, which allow efficient detection of a single color against a complex background under varying lighting, as well as detection of objects on a homogeneous background. We analyze segmentation algorithms of this type and the possibility of implementing them in software. The implemented algorithm is computationally expensive, which limits its applicability to video analysis; however, it solves the problem of analyzing objects in an image when no image dictionary or knowledge base is available, as well as the problem of choosing optimal frame-quantization parameters for video analysis.
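The core of single-colour detection can be sketched as thresholding the distance to a target colour in a chosen colour space. The sketch below stays in plain RGB for brevity (the paper converts colour spaces first); the target colour, threshold, and tiny test image are illustrative:

```python
import numpy as np

def color_mask(img, target, thresh=60.0):
    """Boolean mask of pixels within `thresh` of `target` (Euclidean)."""
    dist = np.linalg.norm(img.astype(float) - np.array(target, float), axis=2)
    return dist < thresh

# A red patch on a black background.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[2:5, 3:7] = (200, 30, 30)
mask = color_mask(img, target=(255, 0, 0), thresh=90.0)
```

Converting to a space that separates chroma from luma (e.g. HSV) before thresholding makes the same idea far more robust to the lighting variation the abstract mentions.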

  8. Multilevel analysis of sports video sequences

    NASA Astrophysics Data System (ADS)

    Han, Jungong; Farin, Dirk; de With, Peter H. N.

    2006-01-01

    We propose a fully automatic and flexible framework for analysis and summarization of tennis broadcast video sequences, using visual features and specific game-context knowledge. Our framework can analyze a tennis video sequence at three levels, which provides a broad range of different analysis results. The proposed framework includes novel pixel-level and object-level tennis video processing algorithms, such as a moving-player detection taking both the color and the court (playing-field) information into account, and a player-position tracking algorithm based on a 3-D camera model. Additionally, we employ scene-level models for detecting events, like service, base-line rally and net-approach, based on a number of real-world visual features. The system can summarize three forms of information: (1) all court-view playing frames in a game, (2) the moving trajectory and real speed of each player, as well as the relative position between the player and the court, (3) the semantic event segments in a game. The proposed framework is flexible in choosing the level of analysis that is desired. It is effective because the framework makes use of several visual cues obtained from the real-world domain to model important events like service, thereby increasing the accuracy of the scene-level analysis. The paper presents attractive experimental results highlighting the system's efficiency and analysis capabilities.

  9. Obesity in the new media: a content analysis of obesity videos on YouTube.

    PubMed

    Yoo, Jina H; Kim, Junghyun

    2012-01-01

    This study examines (1) how the topics of obesity are framed and (2) how obese persons are portrayed on YouTube video clips. The analysis of 417 obesity videos revealed that a newer medium like YouTube, similar to traditional media, appeared to assign responsibility and solutions for obesity mainly to individuals and their behaviors, although there was a tendency that some video categories have started to show other causal claims or solutions. However, due to the prevailing emphasis on personal causes and solutions, numerous YouTube videos had a theme of weight-based teasing, or showed obese persons engaging in stereotypical eating behaviors. We discuss a potential impact of YouTube videos on shaping viewers' perceptions about obesity and further reinforcing stigmatization of obese persons.

  10. Overlaid caption extraction in news video based on SVM

    NASA Astrophysics Data System (ADS)

    Liu, Manman; Su, Yuting; Ji, Zhong

    2007-11-01

    Overlaid captions in news video often carry condensed semantic information that provides key cues for content-based video indexing and retrieval. However, extracting captions from video remains challenging because of complex backgrounds and low resolution. In this paper, we propose an effective overlaid caption extraction approach for news video. We first scan the video key frames using a small window, and then classify the blocks into text and non-text ones via a support vector machine (SVM), with statistical features extracted from the gray-level co-occurrence matrices, the LH and HL sub-band wavelet coefficients, and the oriented edge intensity ratios. Finally, morphological filtering and projection profile analysis are employed to localize and refine the candidate caption regions. Experiments show high performance on four 30-minute news video programs.
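The gray-level co-occurrence matrix (GLCM) features fed to the SVM can be sketched for one offset. The 8-level quantisation, the horizontal (0, 1) offset, and the two statistics (contrast, energy) below are illustrative choices, not necessarily the paper's exact configuration:

```python
import numpy as np

def glcm(img, levels=8):
    """Normalised GLCM for horizontally adjacent pixel pairs."""
    q = img.astype(int) * levels // 256          # quantise to `levels` bins
    m = np.zeros((levels, levels))
    a, b = q[:, :-1].ravel(), q[:, 1:].ravel()   # (left, right) neighbours
    np.add.at(m, (a, b), 1)
    return m / m.sum()

def glcm_stats(m):
    i, j = np.indices(m.shape)
    contrast = float(((i - j) ** 2 * m).sum())   # high for busy texture
    energy = float((m ** 2).sum())               # high for uniform texture
    return contrast, energy

flat = np.full((16, 16), 128, dtype=np.uint8)               # uniform block
stripes = np.tile(np.array([0, 255], dtype=np.uint8), (16, 8))
c_flat, e_flat = glcm_stats(glcm(flat))
c_str, e_str = glcm_stats(glcm(stripes))
```

Text blocks, with their dense stroke edges, tend toward the high-contrast/low-energy end, which is what makes these statistics discriminative for the SVM.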

  11. 2011 Tohoku tsunami hydrographs, currents, flow velocities and ship tracks based on video and TLS measurements

    NASA Astrophysics Data System (ADS)

    Fritz, Hermann M.; Phillips, David A.; Okayasu, Akio; Shimozono, Takenori; Liu, Haijiang; Takeda, Seiichi; Mohammed, Fahad; Skanavis, Vassilis; Synolakis, Costas E.; Takahashi, Tomoyuki

    2013-04-01

    The March 11, 2011, magnitude Mw 9.0 earthquake off the Tohoku coast of Japan caused catastrophic damage and loss of life to a tsunami-aware population. The mid-afternoon tsunami arrival combined with survivors equipped with cameras on top of vertical evacuation buildings provided fragmented spatially and temporally resolved inundation recordings. This report focuses on the surveys at 9 tsunami eyewitness video recording locations in Miyako, Kamaishi, Kesennuma and Yoriisohama along Japan's Sanriku coast and the subsequent video image calibration, processing, tsunami hydrograph and flow velocity analysis. Selected tsunami video recording sites were explored, eyewitnesses interviewed and some ground control points recorded during the initial tsunami reconnaissance in April, 2011. A follow-up survey in June, 2011 focused on terrestrial laser scanning (TLS) at locations with high quality eyewitness videos. We acquired precise topographic data using TLS at the video sites producing a 3-dimensional "point cloud" dataset. A camera mounted on the Riegl VZ-400 scanner yields photorealistic 3D images. Integrated GPS measurements allow accurate georeferencing. The original video recordings were recovered from eyewitnesses and the Japanese Coast Guard (JCG). The analysis of the tsunami videos follows an adapted four-step procedure originally developed for the analysis of 2004 Indian Ocean tsunami videos at Banda Aceh, Indonesia (Fritz et al., 2006). The first step requires the calibration of the sector of view present in the eyewitness video recording based on ground control points measured in the LiDAR data. In a second step the video image motion induced by the panning of the video camera was determined from subsequent images by particle image velocimetry (PIV) applied to fixed objects. The third step involves the transformation of the raw tsunami video images from image coordinates to world coordinates with a direct linear transformation (DLT) procedure. 
Finally, the instantaneous tsunami surface current and flooding velocity vector maps are determined by applying the digital PIV analysis method to the rectified tsunami video images with floating debris clusters. Tsunami currents up to 11 m/s were measured in Kesennuma Bay, making navigation impossible (Fritz et al., 2012). Tsunami hydrographs are derived from the videos based on water surface elevations at surface-piercing objects identified in the acquired topographic TLS data. Apart from a dominant tsunami crest, the hydrograph at Kamaishi also reveals a subsequent drawdown to minus 10 m, exposing the harbor bottom. In some cases ship moorings resisted the main tsunami crest only to be broken by the extreme drawdown, setting vessels adrift for hours. Further, we discuss the complex effects of coastal structures on inundation and outflow hydrographs and flow velocities. Lastly, a perspective on the recovery and reconstruction process is provided based on numerous revisits of identical sites between April 2011 and July 2012.
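The core of the PIV step is finding the shift of an image patch between two frames by maximising cross-correlation over candidate displacements. A minimal exhaustive-search sketch; the patch sizes, search range, and synthetic "debris" pattern are illustrative:

```python
import numpy as np

def piv_displacement(a, b, max_shift=5):
    """Integer (dx, dy) shift of `b` relative to `a` that maximises
    the cross-correlation (exhaustive search with wraparound)."""
    best, best_dxdy = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(b, -dy, axis=0), -dx, axis=1)
            score = float((a * shifted).sum())
            if score > best:
                best, best_dxdy = score, (dx, dy)
    return best_dxdy

# Synthetic frames: a random texture moved 2 px right and 3 px down.
rng = np.random.default_rng(2)
frame1 = rng.random((32, 32))
frame2 = np.roll(np.roll(frame1, 3, axis=0), 2, axis=1)
dx, dy = piv_displacement(frame1, frame2)
```

Production PIV codes evaluate this correlation with FFTs and refine the peak to sub-pixel accuracy, but the displacement-search principle is the same.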

  12. Selecting salient frames for spatiotemporal video modeling and segmentation.

    PubMed

    Song, Xiaomu; Fan, Guoliang

    2007-12-01

    We propose a new statistical generative model for spatiotemporal video segmentation. The objective is to partition a video sequence into homogeneous segments that can be used as "building blocks" for semantic video segmentation. The baseline framework is a Gaussian mixture model (GMM)-based video modeling approach that involves a six-dimensional spatiotemporal feature space. Specifically, we introduce the concept of frame saliency to quantify the relevancy of a video frame to the GMM-based spatiotemporal video modeling. This helps us use a small set of salient frames to facilitate the model training by reducing data redundancy and irrelevance. A modified expectation maximization algorithm is developed for simultaneous GMM training and frame saliency estimation, and the frames with the highest saliency values are extracted to refine the GMM estimation for video segmentation. Moreover, it is interesting to find that frame saliency can imply some object behaviors. This makes the proposed method also applicable to other frame-related video analysis tasks, such as key-frame extraction, video skimming, etc. Experiments on real videos demonstrate the effectiveness and efficiency of the proposed method.

  13. Improving Video Based Heart Rate Monitoring.

    PubMed

    Lin, Jian; Rozado, David; Duenser, Andreas

    2015-01-01

    Non-contact measurements of cardiac pulse can provide robust measurement of heart rate (HR) without the annoyance of attaching electrodes to the body. In this paper we explore a novel and reliable method to carry out video-based HR estimation and propose various performance improvements over existing approaches. The investigated method uses Independent Component Analysis (ICA) to detect the underlying HR signal from the mixed source signal present in the RGB channels of the image. The original ICA algorithm was implemented and several modifications were explored in order to determine which could be optimal for accurate HR estimation. Using statistical analysis, we compared the cardiac pulse rate estimates of the different methods on the recorded videos against a commercially available oximeter. We found that some of these methods are quite effective and efficient in terms of improving the accuracy and latency of the system. We have made the code of our algorithms openly available to the scientific community so that other researchers can explore how to integrate video-based HR monitoring in novel health technology applications. We conclude by noting that recent advances in video-based HR monitoring permit computers to be aware of a user's psychophysiological status in real time.
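After ICA isolates a pulse-like component, the heart rate is typically read off as the dominant spectral peak within a physiologically plausible band. A minimal sketch of that final step; the 30 fps rate and the synthetic 72-bpm trace standing in for the recovered ICA component are illustrative:

```python
import numpy as np

fps = 30.0
t = np.arange(0, 30, 1 / fps)                  # 30 s of frame samples

# Synthetic pulse component: 1.2 Hz (= 72 bpm) plus noise.
rng = np.random.default_rng(3)
signal = np.sin(2 * np.pi * 1.2 * t) + 0.3 * rng.normal(size=t.size)

# Spectral peak within the 45-240 bpm physiological band.
freqs = np.fft.rfftfreq(signal.size, d=1 / fps)
power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
band = (freqs >= 0.75) & (freqs <= 4.0)
peak_hz = freqs[band][np.argmax(power[band])]
bpm = 60.0 * peak_hz
```

Restricting the search to the physiological band is what keeps slow illumination drift and high-frequency sensor noise from being mistaken for the pulse.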

  14. Is it acceptable to video-record palliative care consultations for research and training purposes? A qualitative interview study exploring the views of hospice patients, carers and clinical staff.

    PubMed

    Pino, Marco; Parry, Ruth; Feathers, Luke; Faull, Christina

    2017-09-01

    Research using video recordings can advance understanding of healthcare communication and improve care, but making and using video recordings carries risks. To explore views of hospice patients, carers and clinical staff about whether videoing patient-doctor consultations is acceptable for research and training purposes. We used semi-structured group and individual interviews to gather hospice patients, carers and clinical staff views. We used Braun and Clarke's thematic analysis. Interviews were conducted at one English hospice to inform the development of a larger video-based study. We invited patients with capacity to consent and whom the care team judged were neither acutely unwell nor severely distressed (11), carers of current or past patients (5), palliative medicine doctors (7), senior nurses (4) and communication skills educators (5). Participants viewed video-based research on communication as valuable because of its potential to improve communication, care and staff training. Video-based research raised concerns including its potential to affect the nature and content of the consultation and threats to confidentiality; however, these were not seen as sufficient grounds for rejecting video-based research. Video-based research was seen as acceptable and useful providing that measures are taken to reduce possible risks across the recruitment, recording and dissemination phases of the research process. Video-based research is an acceptable and worthwhile way of investigating communication in palliative medicine. Situated judgements should be made about when it is appropriate to involve individual patients and carers in video-based research on the basis of their level of vulnerability and ability to freely consent.

  15. Is it acceptable to video-record palliative care consultations for research and training purposes? A qualitative interview study exploring the views of hospice patients, carers and clinical staff

    PubMed Central

    Pino, Marco; Parry, Ruth; Feathers, Luke; Faull, Christina

    2017-01-01

    Background: Research using video recordings can advance understanding of healthcare communication and improve care, but making and using video recordings carries risks. Aim: To explore views of hospice patients, carers and clinical staff about whether videoing patient–doctor consultations is acceptable for research and training purposes. Design: We used semi-structured group and individual interviews to gather hospice patients, carers and clinical staff views. We used Braun and Clarke's thematic analysis. Setting/participants: Interviews were conducted at one English hospice to inform the development of a larger video-based study. We invited patients with capacity to consent and whom the care team judged were neither acutely unwell nor severely distressed (11), carers of current or past patients (5), palliative medicine doctors (7), senior nurses (4) and communication skills educators (5). Results: Participants viewed video-based research on communication as valuable because of its potential to improve communication, care and staff training. Video-based research raised concerns including its potential to affect the nature and content of the consultation and threats to confidentiality; however, these were not seen as sufficient grounds for rejecting video-based research. Video-based research was seen as acceptable and useful providing that measures are taken to reduce possible risks across the recruitment, recording and dissemination phases of the research process. Conclusion: Video-based research is an acceptable and worthwhile way of investigating communication in palliative medicine. Situated judgements should be made about when it is appropriate to involve individual patients and carers in video-based research on the basis of their level of vulnerability and ability to freely consent. PMID:28590153

  16. Detection of goal events in soccer videos

    NASA Astrophysics Data System (ADS)

    Kim, Hyoung-Gook; Roeber, Steffen; Samour, Amjad; Sikora, Thomas

    2005-01-01

In this paper, we present an automatic extraction of goal events in soccer videos by using audio track features alone, without relying on expensive-to-compute video track features. The extracted goal events can be used for high-level indexing and selective browsing of soccer videos. The detection of soccer video highlights using audio contents comprises three steps: 1) extraction of audio features from a video sequence, 2) candidate detection of highlight events based on the extracted features and a Hidden Markov Model (HMM), 3) goal event selection to finally determine the video intervals to be included in the summary. For this purpose we compared the performance of the well-known Mel-scale Frequency Cepstral Coefficients (MFCC) feature extraction method vs. the MPEG-7 Audio Spectrum Projection (ASP) feature extraction method based on three different decomposition methods, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-Negative Matrix Factorization (NMF). To evaluate our system we collected five soccer game videos from various sources. In total we have seven hours of soccer games consisting of eight gigabytes of data. One of the five soccer games is used as the training data (e.g., announcers' excited speech, ambient audience noise, audience clapping, environmental sounds). Our goal event detection results are encouraging.
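
    The HMM-based candidate detection in step 2 can be sketched with a toy discrete-observation model. All probabilities, state labels and observation classes below are illustrative assumptions, not the paper's trained parameters; a Viterbi decode over classifier outputs flags sustained runs of excited audio as highlight candidates.

```python
import numpy as np

# Hypothetical 2-state HMM: state 0 = "normal play", state 1 = "excited".
# Observations are discrete frame-level audio classes from a classifier:
# 0 = speech, 1 = ambient noise, 2 = cheering/clapping. All probabilities
# below are illustrative, not trained values.
A = np.array([[0.9, 0.1],          # state transition probabilities
              [0.2, 0.8]])
B = np.array([[0.5, 0.4, 0.1],     # emission probabilities per state
              [0.1, 0.2, 0.7]])
pi0 = np.array([0.9, 0.1])         # initial state distribution

def viterbi(obs, A, B, pi0):
    """Most likely hidden-state sequence for discrete observations (log domain)."""
    T = len(obs)
    delta = np.log(pi0) + np.log(B[:, obs[0]])
    psi = np.zeros((T, len(pi0)), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(A)     # scores[i, j]: transition i -> j
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):               # backtrack
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# A sustained run of cheering decodes as the "excited" state, marking a
# candidate highlight (e.g., goal) interval.
obs = [0, 0, 1, 2, 2, 2, 2, 0, 0]
print(viterbi(obs, A, B, pi0))   # [0, 0, 0, 1, 1, 1, 1, 0, 0]
```

    The decoded excited interval would then be passed to the goal-event selection stage.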

  17. EVA: laparoscopic instrument tracking based on Endoscopic Video Analysis for psychomotor skills assessment.

    PubMed

    Oropesa, Ignacio; Sánchez-González, Patricia; Chmarra, Magdalena K; Lamata, Pablo; Fernández, Alvaro; Sánchez-Margallo, Juan A; Jansen, Frank Willem; Dankelman, Jenny; Sánchez-Margallo, Francisco M; Gómez, Enrique J

    2013-03-01

The EVA (Endoscopic Video Analysis) tracking system is a new system for extracting motions of laparoscopic instruments based on nonobtrusive video tracking. The feasibility of using EVA in laparoscopic settings has been tested in a box trainer setup. EVA makes use of an algorithm that employs information from the laparoscopic instrument's shaft edges in the image, the instrument's insertion point, and the camera's optical center to track the three-dimensional position of the instrument tip. A validation study of EVA comprised a comparison of the measurements achieved with EVA and the TrEndo tracking system. To this end, 42 participants (16 novices, 22 residents, and 4 experts) were asked to perform a peg transfer task in a box trainer. Ten motion-based metrics were used to assess their performance. Construct validation of the EVA has been obtained for seven motion-based metrics. Concurrent validation revealed that there is a strong correlation between the results obtained by EVA and the TrEndo for metrics such as path length (ρ = 0.97), average speed (ρ = 0.94), or economy of volume (ρ = 0.85), proving the viability of EVA. EVA has been successfully validated in a box trainer setup, showing the potential of endoscopic video analysis to assess laparoscopic psychomotor skills. The results encourage further implementation of video tracking in training setups and image-guided surgery.
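
    As a rough illustration of the motion-based metrics and the concurrent-validity check, the sketch below computes path length from 3-D tip positions and a Spearman rank correlation between two metric series. The data are synthetic and the function names are this sketch's assumptions, not EVA's implementation.

```python
import numpy as np

def path_length(tip_positions):
    """Total distance travelled by the instrument tip (Nx3 array of 3-D positions)."""
    diffs = np.diff(tip_positions, axis=0)
    return float(np.sum(np.linalg.norm(diffs, axis=1)))

def spearman_rho(x, y):
    """Spearman rank correlation (no-ties case), as used to compare EVA vs. TrEndo."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(np.sum(rx * ry) / np.sqrt(np.sum(rx**2) * np.sum(ry**2)))

# Synthetic example: the tip moves along the x-axis in 1 mm steps.
tip = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]])
print(path_length(tip))            # 3.0

# Two hypothetical path-length series from 5 participants, one per system.
eva    = np.array([10.0, 25.0, 18.0, 40.0, 33.0])
trendo = np.array([12.0, 27.0, 20.0, 42.0, 30.0])
print(spearman_rho(eva, trendo))   # identical rank order -> 1.0
```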

  18. Two novel motion-based algorithms for surveillance video analysis on embedded platforms

    NASA Astrophysics Data System (ADS)

    Vijverberg, Julien A.; Loomans, Marijn J. H.; Koeleman, Cornelis J.; de With, Peter H. N.

    2010-05-01

    This paper proposes two novel motion-vector based techniques for target detection and target tracking in surveillance videos. The algorithms are designed to operate on a resource-constrained device, such as a surveillance camera, and to reuse the motion vectors generated by the video encoder. The first novel algorithm for target detection uses motion vectors to construct a consistent motion mask, which is combined with a simple background segmentation technique to obtain a segmentation mask. The second proposed algorithm aims at multi-target tracking and uses motion vectors to assign blocks to targets employing five features. The weights of these features are adapted based on the interaction between targets. These algorithms are combined in one complete analysis application. The performance of this application for target detection has been evaluated for the i-LIDS sterile zone dataset and achieves an F1-score of 0.40-0.69. The performance of the analysis algorithm for multi-target tracking has been evaluated using the CAVIAR dataset and achieves an MOTP of around 9.7 and MOTA of 0.17-0.25. On a selection of targets in videos from other datasets, the achieved MOTP and MOTA are 8.8-10.5 and 0.32-0.49 respectively. The execution time on a PC-based platform is 36 ms. This includes the 20 ms for generating motion vectors, which are also required by the video encoder.
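
    The first algorithm's idea of reusing encoder motion vectors can be sketched as thresholding block motion-vector magnitudes into a motion mask and scoring it with the F1 measure. The array shapes, threshold and values below are illustrative assumptions, not the paper's consistent-motion-mask construction.

```python
import numpy as np

def motion_mask(mvs, mag_thresh=1.0):
    """Binary mask of blocks whose motion-vector magnitude exceeds a threshold.
    mvs: HxWx2 array of per-block motion vectors (as reused from the encoder)."""
    mag = np.linalg.norm(mvs, axis=2)
    return mag > mag_thresh

def f1_score(pred, truth):
    """F1-score of a binary segmentation mask against ground truth."""
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Synthetic 4x4 block grid: a "target" moves in the top-left 2x2 region.
mvs = np.zeros((4, 4, 2))
mvs[0:2, 0:2] = [3.0, 1.0]           # moving blocks
truth = np.zeros((4, 4), dtype=bool)
truth[0:2, 0:3] = True               # ground-truth target is slightly larger
pred = motion_mask(mvs)
print(round(f1_score(pred, truth), 3))   # 0.8
```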

  19. Effects of video-based, online education on behavioral and knowledge outcomes in sunscreen use: a randomized controlled trial.

    PubMed

    Armstrong, April W; Idriss, Nayla Z; Kim, Randie H

    2011-05-01

To compare online video and pamphlet education in improving patient comprehension and adherence to sunscreen use, and to assess patient satisfaction with the two educational approaches. In a randomized controlled trial, 94 participants received either online, video-based education or pamphlet-based education that described the importance and proper use of sunscreen. Sun protective knowledge and sunscreen application behaviors were assessed at baseline and 12 weeks after group-specific intervention. Participants in both groups had similar levels of baseline sunscreen knowledge. Post-study analysis revealed significantly greater improvement in the knowledge scores of video group members compared to the pamphlet group (p=0.003). More importantly, video group participants reported greater sunscreen adherence (p<0.001). Finally, the video group rated their education vehicle as more useful and appealing than the pamphlet group did (p<0.001), and video group participants referred to the video more frequently (p=0.018). Video-based learning is a more effective educational tool for teaching sun protective knowledge and encouraging sunscreen use than written materials. More effective patient educational methods to encourage sun protection activities, such as regular sunscreen use, have the potential to increase awareness and foster positive, preventative health behaviors against skin cancers. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  20. 2011 Tohoku tsunami video and TLS based measurements: hydrographs, currents, inundation flow velocities, and ship tracks

    NASA Astrophysics Data System (ADS)

    Fritz, H. M.; Phillips, D. A.; Okayasu, A.; Shimozono, T.; Liu, H.; Takeda, S.; Mohammed, F.; Skanavis, V.; Synolakis, C. E.; Takahashi, T.

    2012-12-01

The March 11, 2011, magnitude Mw 9.0 earthquake off the coast of the Tohoku region caused catastrophic damage and loss of life in Japan. The mid-afternoon tsunami arrival combined with survivors equipped with cameras on top of vertical evacuation buildings provided spontaneous spatially and temporally resolved inundation recordings. This report focuses on the surveys at 9 tsunami eyewitness video recording locations in Miyako, Kamaishi, Kesennuma and Yoriisohama along Japan's Sanriku coast and the subsequent video image calibration, processing, tsunami hydrograph and flow velocity analysis. Selected tsunami video recording sites were explored, eyewitnesses interviewed and some ground control points recorded during the initial tsunami reconnaissance in April 2011. A follow-up survey in June 2011 focused on terrestrial laser scanning (TLS) at locations with high quality eyewitness videos. We acquired precise topographic data using TLS at the video sites producing a 3-dimensional "point cloud" dataset. A camera mounted on the Riegl VZ-400 scanner yields photorealistic 3D images. Integrated GPS measurements allow accurate georeferencing. The original video recordings were recovered from eyewitnesses and the Japanese Coast Guard (JCG). The analysis of the tsunami videos follows an adapted four step procedure originally developed for the analysis of 2004 Indian Ocean tsunami videos at Banda Aceh, Indonesia (Fritz et al., 2006). The first step requires the calibration of the sector of view present in the eyewitness video recording based on ground control points measured in the LiDAR data. In the second step the video image motion induced by the panning of the video camera was determined from subsequent images by particle image velocimetry (PIV) applied to fixed objects. The third step involves the transformation of the raw tsunami video images from image coordinates to world coordinates with a direct linear transformation (DLT) procedure. 
Finally, the instantaneous tsunami surface current and flooding velocity vector maps are determined by applying the digital PIV analysis method to the rectified tsunami video images with floating debris clusters. Tsunami currents up to 11 m/s were measured in Kesennuma Bay, making navigation impossible. Tsunami hydrographs are derived from the videos based on water surface elevations at surface-piercing objects identified in the acquired topographic TLS data. Apart from a dominant tsunami crest, the hydrograph at Kamaishi also reveals a subsequent drawdown to -10 m exposing the harbor bottom. In some cases ship moorings resisted the main tsunami crest only to be broken by the extreme drawdown, setting vessels adrift for hours. Further, we discuss the complex effects of coastal structures on inundation and outflow hydrographs and flow velocities.
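
    The third step's direct linear transformation can be sketched as estimating a planar homography from ground control points (assuming the mapped points lie on a single plane, such as the water surface). The point correspondences below are hypothetical, not surveyed values.

```python
import numpy as np

def estimate_homography(img_pts, world_pts):
    """Direct linear transformation: planar homography H (image -> world plane)
    from >= 4 point correspondences, solved via the SVD null vector."""
    rows = []
    for (x, y), (X, Y) in zip(img_pts, world_pts):
        rows.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
        rows.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def to_world(H, x, y):
    """Map an image pixel to world coordinates on the calibrated plane."""
    X, Y, W = H @ np.array([x, y, 1.0])
    return X / W, Y / W

# Hypothetical correspondences: image pixels vs. world metres on the water plane
# (in the study these would come from TLS-surveyed ground control points).
img = [(100, 200), (400, 210), (420, 380), (90, 400)]
world = [(0.0, 0.0), (30.0, 0.0), (30.0, 20.0), (0.0, 20.0)]
H = estimate_homography(img, world)
print(tuple(round(v, 6) for v in to_world(H, *img[2])))   # maps back to (30.0, 20.0)
```

    With the rectification in hand, PIV displacements in world coordinates divided by the inter-frame time give the surface current vectors.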

  1. Long Term Activity Analysis in Surveillance Video Archives

    ERIC Educational Resources Information Center

    Chen, Ming-yu

    2010-01-01

    Surveillance video recording is becoming ubiquitous in daily life for public areas such as supermarkets, banks, and airports. The rate at which surveillance video is being generated has accelerated demand for machine understanding to enable better content-based search capabilities. Analyzing human activity is one of the key tasks to understand and…

  2. No-reference video quality measurement: added value of machine learning

    NASA Astrophysics Data System (ADS)

    Mocanu, Decebal Constantin; Pokhrel, Jeevan; Garella, Juan Pablo; Seppänen, Janne; Liotou, Eirini; Narwaria, Manish

    2015-11-01

Video quality measurement is an important component in the end-to-end video delivery chain. Video quality is, however, subjective, and thus, there will always be interobserver differences in the subjective opinion about the visual quality of the same video. Despite this, most existing works on objective quality measurement typically focus only on predicting a single score and evaluate their prediction accuracy by how close that score is to the mean opinion score (or similar average-based ratings). Clearly, such an approach ignores the underlying diversities in the subjective scoring process and, as a result, does not allow further analysis on how reliable the objective prediction is in terms of subjective variability. Consequently, the aim of this paper is to analyze this issue and present a machine learning-based solution to address it. We demonstrate the utility of our ideas by considering the practical scenario of video broadcast transmissions with focus on digital terrestrial television (DTT) and proposing a no-reference objective video quality estimator for such application. We conducted meaningful verification studies on different video content (including video clips recorded from real DTT broadcast transmissions) in order to verify the performance of the proposed solution.

  3. A Meta-Analysis of Video-Modeling Based Interventions for Reduction of Challenging Behaviors for Students with EBD

    ERIC Educational Resources Information Center

    Losinski, Mickey; Wiseman, Nicole; White, Sherry A.; Balluch, Felicity

    2016-01-01

    The current study examined the use of video modeling (VM)-based interventions to reduce the challenging behaviors of students with emotional or behavioral disorders. Each study was evaluated using Council for Exceptional Children's (CEC's) quality indicators for evidence-based practices. In addition, study effects were calculated along the three…

  4. The reliability and validity of video analysis for the assessment of the clinical signs of concussion in Australian football.

    PubMed

    Makdissi, Michael; Davis, Gavin

    2016-10-01

    The objective of this study was to determine the reliability and validity of identifying clinical signs of concussion using video analysis in Australian football. Prospective cohort study. All impacts and collisions potentially resulting in a concussion were identified during 2012 and 2013 Australian Football League seasons. Consensus definitions were developed for clinical signs associated with concussion. For intra- and inter-rater reliability analysis, two experienced clinicians independently assessed 102 randomly selected videos on two occasions. Sensitivity, specificity, positive and negative predictive values were calculated based on the diagnosis provided by team medical staff. 212 incidents resulting in possible concussion were identified in 414 Australian Football League games. The intra-rater reliability of the video-based identification of signs associated with concussion was good to excellent. Inter-rater reliability was good to excellent for impact seizure, slow to get up, motor incoordination, ragdoll appearance (2 of 4 analyses), clutching at head and facial injury. Inter-rater reliability for loss of responsiveness and blank and vacant look was only fair and did not reach statistical significance. The feature with the highest sensitivity was slow to get up (87%), but this sign had a low specificity (19%). Other video signs had a high specificity but low sensitivity. Blank and vacant look (100%) and motor incoordination (81%) had the highest positive predictive value. Video analysis may be a useful adjunct to the side-line assessment of a possible concussion. Video analysis however should not replace the need for a thorough multimodal clinical assessment. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
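
    The screening statistics reported above follow from a standard 2x2 confusion table against the team doctors' diagnoses. A minimal sketch, with illustrative counts chosen to mimic a sensitive-but-unspecific sign such as "slow to get up" (not the study's raw data):

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV of a video sign vs. the clinical diagnosis."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts (hypothetical, not the paper's data): the sign is present
# in most concussed players but also in many non-concussed ones.
stats = diagnostic_stats(tp=26, fp=140, fn=4, tn=33)
print({k: round(v, 2) for k, v in stats.items()})
# {'sensitivity': 0.87, 'specificity': 0.19, 'ppv': 0.16, 'npv': 0.89}
```

    These hypothetical counts reproduce the pattern the study reports: high sensitivity with low specificity makes a sign a useful screening trigger but a poor stand-alone diagnostic.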

  5. Video attention deviation estimation using inter-frame visual saliency map analysis

    NASA Astrophysics Data System (ADS)

    Feng, Yunlong; Cheung, Gene; Le Callet, Patrick; Ji, Yusheng

    2012-01-01

A viewer's visual attention during video playback is the matching of his eye gaze movement to the changing video content over time. If the gaze movement matches the video content (e.g., follows a rolling soccer ball), then the viewer keeps his visual attention. If the gaze location moves from one video object to another, then the viewer shifts his visual attention. A video that causes a viewer to shift his attention often is a "busy" video. Determination of which video content is busy is an important practical problem: a busy video makes it difficult for an encoder to deploy region-of-interest (ROI)-based bit allocation, and hard for a content provider to insert additional overlays like advertisements, which would make the video even busier. One way to determine the busyness of video content is to conduct eye gaze experiments with a sizable group of test subjects, but this is time-consuming and cost-ineffective. In this paper, we propose an alternative method to determine the busyness of video, formally called video attention deviation (VAD): analyze the spatial visual saliency maps of the video frames across time. We first derive transition probabilities of a Markov model for eye gaze using saliency maps of a number of consecutive frames. We then compute the steady state probability of the saccade state in the model, which is our estimate of VAD. We demonstrate that the computed steady state probability for saccade using saliency map analysis matches that computed using actual gaze traces for a range of videos with different degrees of busyness. Further, our analysis can also be used to segment video into shorter clips of different degrees of busyness by computing the Kullback-Leibler divergence using consecutive motion compensated saliency maps.
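
    The steady-state computation at the core of the VAD estimate can be sketched with a toy two-state gaze model. The transition probabilities below are illustrative assumptions; in the paper they would be derived from saliency maps of consecutive frames.

```python
import numpy as np

# Hypothetical 2-state gaze model: state 0 = fixation/smooth pursuit, state 1 = saccade.
# Illustrative transition probabilities for a moderately "busy" video.
P = np.array([[0.85, 0.15],
              [0.60, 0.40]])

def steady_state(P):
    """Stationary distribution pi with pi = pi P, via the unit-eigenvalue
    left eigenvector (eigenvector of P.T), normalized to sum to 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

pi = steady_state(P)
print(round(float(pi[1]), 3))   # 0.2 -- steady-state saccade probability, the VAD estimate
```

    A busier video would have larger fixation-to-saccade probabilities and hence a higher steady-state saccade probability.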

  6. Real-time color image processing for forensic fiber investigations

    NASA Astrophysics Data System (ADS)

    Paulsson, Nils

    1995-09-01

This paper describes a system for automatic fiber debris detection based on color identification. The properties of the system are fast analysis and high selectivity, a necessity when analyzing forensic fiber samples. An ordinary investigation yields well above 100,000 video images to analyze. The system is based on standard techniques such as a CCD camera, a motorized sample table, and an IBM-compatible PC/AT with add-on boards for video frame digitalization and stepping motor control as the main parts. It is possible to operate the instrument at full video rate (25 images/s) with the aid of the HSI color system (hue-saturation-intensity) and software optimization. High selectivity is achieved by separating the analysis into several steps. The first step is fast direct color identification of objects in the analyzed video images; the second step subjects detected objects to a more complex and time-consuming analysis to identify single fiber fragments for subsequent examination with more selective techniques.
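
    The HSI representation used for fast color identification can be sketched with a standard RGB-to-HSI conversion; this is the textbook formula, not necessarily the exact variant implemented in the described system.

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert 8-bit RGB to HSI (hue in degrees, saturation 0-1, intensity 0-255)."""
    total = r + g + b
    if total == 0:
        return 0.0, 0.0, 0.0           # black: hue and saturation undefined
    i = total / 3.0
    s = 1.0 - 3.0 * min(r, g, b) / total
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        return 0.0, s, i               # gray: hue undefined
    h = math.degrees(math.acos(num / den))
    if b > g:
        h = 360.0 - h
    return h, s, i

print(rgb_to_hsi(255, 0, 0))    # pure red  -> (0.0, 1.0, 85.0)
print(rgb_to_hsi(0, 0, 255))    # pure blue -> hue ~240 degrees
```

    Matching on hue makes the identification largely insensitive to brightness, which is what allows a simple threshold test to run at full video rate.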

  7. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    NASA Astrophysics Data System (ADS)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

In recent years, music video data has been increasing at an astonishing speed. Shot segmentation and keyframe extraction are fundamental steps in organizing, indexing and retrieving video content. In this paper a unified framework is proposed to detect shot boundaries and extract the keyframe of each shot. The music video is first segmented into shots using an illumination-invariant chromaticity histogram in an independent component (IC) analysis feature space. Then we present a new metric, image complexity, computed from the ICs, to extract the keyframe of a shot. Experimental results show the framework is effective and has good performance.
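
    The shot-segmentation step can be sketched with a plain r-g chromaticity histogram compared by histogram intersection; this simplification omits the paper's independent-component feature space, and the threshold and frame data are assumptions.

```python
import numpy as np

def chromaticity_hist(frame, bins=8):
    """Illumination-invariant r-g chromaticity histogram of an HxWx3 RGB frame."""
    f = frame.astype(float)
    total = f.sum(axis=2) + 1e-9          # avoid division by zero for black pixels
    r = (f[:, :, 0] / total).ravel()
    g = (f[:, :, 1] / total).ravel()
    h, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 1], [0, 1]])
    return h / h.sum()

def shot_boundaries(frames, thresh=0.5):
    """Flag a cut where the histogram intersection between consecutive frames drops."""
    hists = [chromaticity_hist(f) for f in frames]
    cuts = []
    for t in range(1, len(hists)):
        intersection = np.minimum(hists[t - 1], hists[t]).sum()
        if intersection < thresh:
            cuts.append(t)
    return cuts

# Synthetic "music video": 3 reddish frames, then a cut to 3 bluish frames.
red = np.zeros((16, 16, 3)); red[:, :, 0] = 200; red[:, :, 1] = 40
blue = np.zeros((16, 16, 3)); blue[:, :, 2] = 200; blue[:, :, 1] = 40
frames = [red] * 3 + [blue] * 3
print(shot_boundaries(frames))   # [3]
```

    Because chromaticity divides out overall brightness, the flashing-light changes common in music videos perturb the histogram far less than a genuine cut does.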

  8. Shaking video stabilization with content completion

    NASA Astrophysics Data System (ADS)

    Peng, Yi; Ye, Qixiang; Liu, Yanmei; Jiao, Jianbin

    2009-01-01

A new stabilization algorithm to counterbalance the shaking motion in a video, based on the classical Kanade-Lucas-Tomasi (KLT) method, is presented in this paper. Feature points are evaluated with the law of large numbers and a clustering algorithm to reduce the side effect of the moving foreground. An analysis of changes in motion direction is also carried out to detect the existence of shaking. For video clips with detected shaking, an affine transformation is performed to warp the current frame to the reference one. In addition, content missing from a frame during stabilization is completed with optical flow analysis and a mosaicking operation. Experiments on video clips demonstrate the effectiveness of the proposed algorithm.
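
    The per-frame affine warp can be estimated from tracked KLT feature correspondences by least squares. A minimal sketch with synthetic correspondences (the feature clustering and shake-detection steps are omitted):

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2-D affine transform mapping tracked feature points src -> dst,
    used to warp the current frame back onto the reference frame."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # N x 3 design matrix [x, y, 1]
    # Solve A @ M0 ~= dst; M0 stacks the linear part and the translation row.
    M0, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M0.T                                    # 2 x 3 affine matrix [R | t]

# Synthetic shake: the frame is rotated slightly and translated by (5, -3).
theta = 0.02
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
src = np.array([[10.0, 10], [200, 15], [30, 180], [190, 170]])
dst = src @ R.T + np.array([5.0, -3.0])
M = estimate_affine(src, dst)
print(np.round(M[:, 2], 3))   # recovered translation, approximately [5, -3]
```

    Inverting the recovered transform and applying it to the current frame cancels the shake; the least-squares fit also tolerates moderate tracking noise.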

  9. A scheme for racquet sports video analysis with the combination of audio-visual information

    NASA Astrophysics Data System (ADS)

    Xing, Liyuan; Ye, Qixiang; Zhang, Weigang; Huang, Qingming; Yu, Hua

    2005-07-01

As a very important category in sports video, racquet sports video, e.g. table tennis, tennis and badminton, has been paid little attention in the past years. Considering the characteristics of this kind of sports video, we propose a new scheme for structure indexing and highlight generating based on the combination of audio and visual information. Firstly, a supervised classification method is employed to detect important audio symbols including impact (ball hit), audience cheers, commentator speech, etc.; meanwhile, an unsupervised algorithm is proposed to group video shots into various clusters. Secondly, by taking advantage of the temporal relationship between audio and visual signals, we can label the scene clusters with semantic labels including rally scenes and break scenes. Thirdly, a refinement procedure is developed to reduce false rally scenes by further audio analysis. Finally, an excitement model is proposed to rank the detected rally scenes, from which many exciting video clips such as game (match) points can be correctly retrieved. Experiments on two types of representative racquet sports video, table tennis video and tennis video, demonstrate encouraging results.

  10. Using Videos and Multimodal Discourse Analysis to Study How Students Learn a Trade

    ERIC Educational Resources Information Center

    Chan, Selena

    2013-01-01

    The use of video to assist with ethnographical-based research is not a new phenomenon. Recent advances in technology have reduced the costs and technical expertise required to use videos for gathering research data. Audio-visual records of learning activities as they take place, allow for many non-vocal and inter-personal communication…

  11. The Influence of Video Technology in Adolescence. Media Panel Report No. 27.

    ERIC Educational Resources Information Center

    Roe, Keith

    This report provides a detailed analysis of the video use and preferences of Swedish adolescents based on data drawn from the Media Panel project, a three-wave, longitudinal research program on video use conducted at the Department of Sociology, The University of Lund, and the Department for Information Techniques, the University College of Vaxjo,…

  12. Developing Prospective Teachers' Diagnostic Skills through Collaborative Video Analysis: Focus on L2 Reading

    ERIC Educational Resources Information Center

    Finkbeiner, Claudia; Schluer, Jennifer

    2017-01-01

    This paper contains a collaborative video-based approach to foster prospective teachers' diagnostic skills with respect to pupils' L2 reading processes. Together with a peer, the prospective teachers watched, systematically selected, analysed and commented on clips from a comprehensive video corpus on L2 reading strategies. In order to assist the…

  13. Developing Interactional Competence through Video-Based Computer-Mediated Conversations: Beginning Learners of Spanish

    ERIC Educational Resources Information Center

    Tecedor Cabrero, Marta

    2013-01-01

    This dissertation examines the discourse produced by beginning learners of Spanish using social media. Specifically, it looks at the use and development of interactional resources during two video-mediated conversations. Through a combination of Conversation Analysis tools and quantitative data analysis, the use of turn-taking strategies, repair…

  14. Video-based eye tracking for neuropsychiatric assessment.

    PubMed

    Adhikari, Sam; Stark, David E

    2017-01-01

    This paper presents a video-based eye-tracking method, ideally deployed via a mobile device or laptop-based webcam, as a tool for measuring brain function. Eye movements and pupillary motility are tightly regulated by brain circuits, are subtly perturbed by many disease states, and are measurable using video-based methods. Quantitative measurement of eye movement by readily available webcams may enable early detection and diagnosis, as well as remote/serial monitoring, of neurological and neuropsychiatric disorders. We successfully extracted computational and semantic features for 14 testing sessions, comprising 42 individual video blocks and approximately 17,000 image frames generated across several days of testing. Here, we demonstrate the feasibility of collecting video-based eye-tracking data from a standard webcam in order to assess psychomotor function. Furthermore, we were able to demonstrate through systematic analysis of this data set that eye-tracking features (in particular, radial and tangential variance on a circular visual-tracking paradigm) predict performance on well-validated psychomotor tests. © 2017 New York Academy of Sciences.
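
    The radial and tangential variance features mentioned above can be sketched for a circular tracking paradigm as follows; the exact decomposition and all synthetic data are this sketch's assumptions, not the paper's feature definitions.

```python
import numpy as np

def radial_tangential_variance(gaze_xy, target_angles, center, radius):
    """Decompose gaze-tracking error on a circular visual-tracking paradigm into
    radial variance (drift off the circle) and tangential variance (lead/lag
    along the circle). gaze_xy: Nx2 gaze estimates; target_angles: target phase (rad)."""
    d = gaze_xy - np.asarray(center, float)
    radial_err = np.linalg.norm(d, axis=1) - radius
    gaze_angles = np.arctan2(d[:, 1], d[:, 0])
    # Wrapped angular error between gaze and target, converted to arc length.
    ang_err = np.angle(np.exp(1j * (gaze_angles - target_angles)))
    return float(np.var(radial_err)), float(np.var(radius * ang_err))

# Synthetic session: gaze follows the target around a 100 px circle with
# radial jitter (std 2 px) and no angular lag.
rng = np.random.default_rng(0)
angles = np.linspace(0, 2 * np.pi, 50, endpoint=False)
r = 100 + rng.normal(0, 2.0, 50)
gaze = np.column_stack([r * np.cos(angles), r * np.sin(angles)])
rad_var, tan_var = radial_tangential_variance(gaze, angles, (0, 0), 100)
print(rad_var > tan_var)   # True: this subject's error is radial, not tangential
```

    Features of this kind, computed per session, are what the study feeds into its comparison against standard psychomotor test scores.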

  15. Extracting 3d Semantic Information from Video Surveillance System Using Deep Learning

    NASA Astrophysics Data System (ADS)

    Zhang, J. S.; Cao, J.; Mao, B.; Shen, D. Q.

    2018-04-01

At present, intelligent video analysis technology is widely used in various fields. Object tracking is an important part of intelligent video surveillance, but traditional target tracking based on the pixel coordinate system of images still has some unavoidable problems: tracking based on pixel coordinates cannot reflect the real position of targets, and it is difficult to track objects across scenes. Based on an analysis of Zhengyou Zhang's camera calibration method, this paper presents a method of target tracking in the target's spatial coordinate system, converting the 2-D image coordinates of the target into 3-D coordinates. The experimental results show that our method recovers the real position changes of targets well, and also accurately obtains the trajectory of the target in space.

  16. The experiments and analysis of several selective video encryption methods

    NASA Astrophysics Data System (ADS)

    Zhang, Yue; Yang, Cheng; Wang, Lei

    2013-07-01

This paper presents four methods for selective video encryption based on MPEG-2 video compression, encrypting respectively the slices, the I-frames, the motion vectors, and the DCT coefficients. We simulate the four methods with AES encryption on the VS2010 platform, and compare the visual effect and the per-frame processing speed of the encrypted video. The encryption depth can be selected arbitrarily; we design it using the double-limit counting method, so the accuracy can be increased.
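
    Selective encryption of only the I-frames can be sketched on a toy frame stream. The paper's experiments use AES; since Python's standard library has no AES, the sketch below substitutes a SHA-256 counter-mode keystream as a clearly-labeled stand-in for AES-CTR.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Counter-mode keystream built from SHA-256 -- a stand-in for AES-CTR,
    used here only because the standard library provides no AES."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def selective_encrypt(frames, key):
    """Encrypt only the I-frames of a (frame_type, payload) stream; P/B-frames,
    which are unusable without their reference I-frames, stay in the clear."""
    out = []
    for idx, (ftype, payload) in enumerate(frames):
        if ftype == "I":
            ks = keystream(key, idx.to_bytes(4, "big"), len(payload))
            payload = bytes(a ^ b for a, b in zip(payload, ks))
        out.append((ftype, payload))
    return out

# Toy GOP: one I-frame followed by two P-frames.
stream = [("I", b"intra-coded data"), ("P", b"motion residual"), ("P", b"residual 2")]
enc = selective_encrypt(stream, b"secret-key")
dec = selective_encrypt(enc, b"secret-key")    # XOR keystream is its own inverse
print(enc[1][1] == stream[1][1], dec == stream)   # True True
```

    Encrypting only the I-frames keeps the per-frame cost low while still rendering the decoded video unusable, which is the trade-off selective encryption aims for.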

  17. An automated assay for the assessment of cardiac arrest in fish embryo.

    PubMed

    Puybareau, Elodie; Genest, Diane; Barbeau, Emilie; Léonard, Marc; Talbot, Hugues

    2017-02-01

Fish embryo models are widely used in several research fields, including drug discovery and environmental toxicology. In this article, we propose an entirely automated assay to detect cardiac arrest in Medaka (Oryzias latipes) based on image analysis. We propose a multi-scale pipeline based on mathematical morphology. Starting from video sequences of entire wells in 24-well plates, we focus on the embryo, detect its heart, and ascertain whether or not the heart is beating based on intensity variation analysis. Our image analysis pipeline only uses commonly available operators. It has a low computational cost, allowing analysis at the same rate as acquisition. From an initial dataset of 3192 videos, 660 were discarded as unusable (20.7%), 655 of them correctly so (99.25%) and only 5 incorrectly so (0.75%). The 2532 remaining videos were used for our test. On these, 45 errors were made, leading to a success rate of 98.23%. Copyright © 2016 Elsevier Ltd. All rights reserved.
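
    The intensity-variation test for a beating heart can be sketched as thresholding the relative temporal variation of the mean intensity over the detected heart region. The threshold and the synthetic clips below are assumptions, not the paper's pipeline.

```python
import numpy as np

def heart_is_beating(roi_frames, rel_thresh=0.01):
    """Decide cardiac arrest from a clip of the heart region: a beating heart
    produces periodic intensity variation, an arrested one a near-constant signal.
    roi_frames: T x H x W grayscale stack cropped to the detected heart."""
    signal = roi_frames.reshape(len(roi_frames), -1).mean(axis=1)
    # Relative temporal variation of the mean ROI intensity.
    return signal.std() / (signal.mean() + 1e-9) > rel_thresh

# Synthetic clips: a "beating" ROI pulses at ~2 Hz; an "arrested" ROI has only noise.
rng = np.random.default_rng(1)
t = np.arange(200) / 25.0                      # 8 s at 25 fps
base = np.full((200, 8, 8), 120.0)
beating = base + 10 * np.sin(2 * np.pi * 2.0 * t)[:, None, None]
arrested = base + rng.normal(0, 0.1, (200, 8, 8))
print(heart_is_beating(beating), heart_is_beating(arrested))   # True False
```

    Averaging over the ROI before thresholding suppresses pixel noise, which is why the arrested clip's sensor noise does not trigger a false "beating" call.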

  18. Video and accelerometer-based motion analysis for automated surgical skills assessment.

    PubMed

    Zia, Aneeq; Sharma, Yachna; Bettadapura, Vinay; Sarin, Eric L; Essa, Irfan

    2018-03-01

Basic surgical skills of suturing and knot tying are an essential part of medical training. Having an automated system for surgical skills assessment could help save experts time and improve training efficiency. There have been some recent attempts at automated surgical skills assessment using either video analysis or acceleration data. In this paper, we present a novel approach for automated assessment of OSATS-like surgical skills and provide an analysis of different features on multi-modal data (video and accelerometer data). We conduct a large study for basic surgical skill assessment on a dataset that contained video and accelerometer data for suturing and knot-tying tasks. We introduce "entropy-based" features, approximate entropy and cross-approximate entropy, which quantify the amount of predictability and regularity of fluctuations in time series data. The proposed features are compared to the existing Sequential Motion Texture, Discrete Cosine Transform and Discrete Fourier Transform methods for surgical skills assessment. We report average performance of different features across all applicable OSATS-like criteria for suturing and knot-tying tasks. Our analysis shows that the proposed entropy-based features outperform previous state-of-the-art methods using video data, achieving average classification accuracies of 95.1 and 92.2% for suturing and knot tying, respectively. For accelerometer data, our method performs better for suturing, achieving 86.8% average accuracy. We also show that fusion of video and acceleration features can improve overall performance for skill assessment. Automated surgical skills assessment can be achieved with high accuracy using the proposed entropy features. Such a system can significantly improve the efficiency of surgical training in medical schools and teaching hospitals.
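
    Approximate entropy, the core of the proposed features, can be sketched directly from its standard definition (embedding dimension m, tolerance r); the demo series below are synthetic stand-ins for real motion data.

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy of a 1-D time series: low for regular, predictable
    motion, higher for irregular motion. Standard ApEn definition, not the
    paper's exact implementation."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()                      # common default tolerance

    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])      # length-m windows
        # Chebyshev distances between every pair of windows.
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (dist <= r).mean(axis=1)           # fraction of windows within tolerance
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
t = np.arange(300)
regular = np.sin(2 * np.pi * t / 25)           # smooth periodic motion (expert-like)
irregular = rng.normal(size=300)               # erratic motion (novice-like)
print(approximate_entropy(regular) < approximate_entropy(irregular))   # True
```

    The intuition matching the paper's finding: smoother, more economical expert motion is more predictable, so it scores lower entropy than jerky novice motion.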

  19. Exploring Techniques for Vision Based Human Activity Recognition: Methods, Systems, and Evaluation

    PubMed Central

    Xu, Xin; Tang, Jinshan; Zhang, Xiaolong; Liu, Xiaoming; Zhang, Hong; Qiu, Yimin

    2013-01-01

With the wide application of vision-based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activities, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of the recent development of the techniques, including methods, systems, and quantitative evaluation of the performance of human activity recognition. PMID:23353144

  20. The compressed average image intensity metric for stereoscopic video quality assessment

    NASA Astrophysics Data System (ADS)

    Wilczewski, Grzegorz

    2016-09-01

The following article presents the design, creation and testing of a new metric for 3DTV video quality evaluation. The Compressed Average Image Intensity (CAII) mechanism is based upon stereoscopic video content analysis and is designed to serve as a versatile tool for effective 3DTV service quality assessment. Being an objective quality metric, it may be utilized as a reliable source of information about the actual performance of a given 3DTV system under a provider's strict evaluation. Concerning testing and the overall performance analysis of the CAII metric, this paper presents a comprehensive study of results gathered across several testing routines on a selected set of stereoscopic video content samples. As a result, the designed method for stereoscopic video quality evaluation is investigated across a range of synthetic visual impairments injected into the original video stream.

  1. Collaborative real-time motion video analysis by human observer and image exploitation algorithms

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2015-05-01

Motion video analysis is a challenging task, especially in real-time applications. In most safety- and security-critical applications, a human observer is an obligatory part of the overall analysis system. Over the last years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into current video exploitation systems. In this paper, a system design is introduced which strives to combine both the qualities of the human observer's perception and the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer by means of a user interface which utilizes the human visual focus of attention revealed by the eye gaze direction for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving target acquisition in video images than traditional computer mouse selection. The system design also builds on prior work we did on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented, where we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy-to-use interaction technique when performing selection operations on moving targets in videos in order to initialize an object tracking function.

  2. Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification

    NASA Astrophysics Data System (ADS)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-02-01

    Experimental or operational modal analysis traditionally requires physically attached wired or wireless sensors for vibration measurement of structures. This instrumentation can result in mass-loading on lightweight structures, and is costly and time-consuming to install and maintain on large civil structures, especially for long-term applications (e.g., structural health monitoring) that require significant maintenance for cabling (wired sensors) or periodic replacement of the energy supply (wireless sensors). Moreover, these sensors are typically placed at a limited number of discrete locations, providing low spatial sensing resolution that is hardly sufficient for modal-based damage localization, or for model correlation and updating of larger-scale structures. Non-contact measurement methods such as scanning laser vibrometers provide high-resolution sensing capacity without the mass-loading effect; however, they make sequential measurements that require considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost and agile, and provide simultaneous, high-spatial-resolution measurements. Combined with vision-based algorithms (e.g., image correlation, optical flow), video camera-based measurements have been successfully used for vibration measurement and subsequent modal analysis, based on techniques such as digital image correlation (DIC) and point tracking. However, these typically require a speckle pattern or high-contrast markers to be placed on the surface of structures, which poses challenges when the measurement area is large or inaccessible. 
This work explores advanced computer vision and video processing algorithms to develop a novel video measurement and vision-based operational (output-only) modal analysis method that alleviates the need for the structural surface preparation associated with existing vision-based methods and can be implemented in a relatively efficient and autonomous manner with little user supervision and calibration. First, a multi-scale image processing method is applied to the frames of the video of a vibrating structure to extract the local pixel phases that encode local structural vibration, establishing a full-field spatiotemporal motion matrix. Then a high-spatial-dimensional, yet low-modal-dimensional, over-complete model is used to represent the extracted full-field motion matrix using modal superposition, which is physically connected to, and manipulated by, a family of unsupervised learning models and techniques. Thus, the proposed method is able to blindly extract modal frequencies, damping ratios, and full-field (as many points as pixels in the video frame) mode shapes from line-of-sight video measurements of the structure. The method is validated by laboratory experiments on a bench-scale building structure and a cantilever beam. Its ability to perform output-only (video measurements) identification and visualization of weakly excited modes is demonstrated, and several implementation issues are discussed.
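
    A minimal sketch of the output-only identification idea, assuming the full-field motion matrix has already been extracted. Here the matrix is synthesized from two assumed mode shapes and modal frequencies, and PCA via SVD plus a spectral peak-pick stands in for the paper's unsupervised-learning pipeline:

    ```python
    import numpy as np

    # Hedged sketch of output-only modal identification from a motion matrix
    # (pixels x frames). The frame rate, mode shapes, and frequencies below are
    # illustrative assumptions, not the authors' actual data.
    fs = 240.0                           # assumed camera frame rate (Hz)
    t = np.arange(0, 4.0, 1.0 / fs)
    x = np.linspace(0, 1, 200)           # 200 "pixel" locations along the structure

    phi1, phi2 = np.sin(np.pi * x / 2), np.sin(3 * np.pi * x / 2)  # mode shapes
    q1 = np.sin(2 * np.pi * 3.0 * t)           # 3 Hz modal coordinate
    q2 = 0.3 * np.sin(2 * np.pi * 11.0 * t)    # 11 Hz, weakly excited

    # Full-field spatiotemporal motion matrix (what phase extraction would yield)
    D = np.outer(phi1, q1) + np.outer(phi2, q2)

    # PCA (via SVD) gives the low-modal-dimensional representation
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    coords = Vt[:2]                      # dominant modal coordinates

    # Modal frequencies from the spectra of the recovered coordinates
    freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
    f_id = sorted(freqs[np.abs(np.fft.rfft(c)).argmax()] for c in coords)
    print(f_id)  # ~[3.0, 11.0]: both modes recovered, including the weak one
    ```

    The columns of U corresponding to the two dominant singular values would serve as the full-field mode shape estimates.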

  3. In Pursuit of Reciprocity: Researchers, Teachers, and School Reformers Engaged in Collaborative Analysis of Video Records

    ERIC Educational Resources Information Center

    Curry, Marnie W.

    2012-01-01

    In the ideal, reciprocity in qualitative inquiry occurs when there is give-and-take between researchers and the researched; however, the demands of the academy and resource constraints often make the pursuit of reciprocity difficult. Drawing on two video-based, qualitative studies in which researchers utilized video records as resources to enhance…

  4. Connecting Classroom Practice to Concepts of Culturally Responsive Teaching: Video Analysis in an Online Teacher Education Course

    ERIC Educational Resources Information Center

    Lopez, Leslie Ann

    2013-01-01

    Video has been shown to be an effective tool for synthesizing theory and connecting theory to practice in meaningful ways. This design-based research study examined how localized video of a practicing teacher impacted pre-service teachers' ability to learn culturally responsive teaching (CRT) methods and targeted strategies in an online…

  5. Using learning analytics to evaluate a video-based lecture series.

    PubMed

    Lau, K H Vincent; Farooque, Pue; Leydon, Gary; Schwartz, Michael L; Sadler, R Mark; Moeller, Jeremy J

    2018-01-01

    The video-based lecture (VBL), an important component of the flipped classroom (FC) and massive open online course (MOOC) approaches to medical education, has primarily been evaluated through direct learner feedback. Evaluation may be enhanced through learning analytics (LA): the analysis of quantitative audience usage data generated by video-sharing platforms. We applied LA to an experimental series of ten VBLs on electroencephalography (EEG) interpretation, uploaded to YouTube in the model of a publicly accessible MOOC. Trends in view count, total percentage of video viewed, and audience retention (AR) (the percentage of viewers still watching at a given time point, relative to the initial total) were examined. The pattern of average AR decline was characterized using regression analysis, revealing a uniform linear decline in viewership for each video, with no evidence of an optimal VBL length. Segments with transient increases in AR corresponded to those focused on core concepts, indicative of content requiring more detailed evaluation. We propose a model for applying LA at four levels: global, series, video, and feedback. LA may be a useful tool in evaluating a VBL series. Our proposed model combines analytics data and learner self-report for comprehensive evaluation.
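
    The regression characterization of audience-retention decline can be sketched as follows; the retention values are synthetic stand-ins for YouTube analytics data, not numbers from the actual VBL series:

    ```python
    import numpy as np

    # Hedged sketch: fit a line to audience retention (AR) over video time and
    # check how well a uniform linear decline explains it.
    minutes = np.arange(0, 10)                          # time point within the video
    ar = np.array([100, 91, 83, 74, 66, 57, 49, 40, 32, 23], dtype=float)

    slope, intercept = np.polyfit(minutes, ar, 1)       # first-degree fit
    r = np.corrcoef(minutes, ar)[0, 1]

    print(round(slope, 2), round(r ** 2, 3))            # steep, near-perfectly linear decline
    ```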

  6. Action recognition in depth video from RGB perspective: A knowledge transfer manner

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Xiao, Yang; Cao, Zhiguo; Fang, Zhiwen

    2018-03-01

    Using different video modalities for human action recognition has become a highly promising trend in video analysis. In this paper, we propose a method for human action recognition in depth video that uses domain adaptation to transfer features learned from RGB videos. More specifically, our approach involves three steps. First, because video is more complex than a still image, carrying both spatial and temporal information, the dynamic image method is used to encode each RGB or depth video as a single image; on this basis, most image feature-extraction methods become applicable to video. Second, with videos represented as images, a standard CNN model can be used for training and testing and, given its powerful expressive ability, also for feature extraction. Third, because RGB and depth videos belong to two different domains, domain adaptation is applied, for the first time in this setting, to increase the similarity of the two feature domains, so that features learned from the RGB video model can be used directly for depth video classification. We evaluate the proposed method on a complex RGB-D action dataset (NTU RGB-D), where domain adaptation from RGB to depth yields an accuracy improvement of more than 2%.
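
    The dynamic-image step can be sketched with the simple linear-ramp form of approximate rank pooling (the published formulation uses harmonic-number coefficients; this linear variant is a common simplification). Frame contents are random stand-ins for real RGB or depth frames:

    ```python
    import numpy as np

    # Hedged sketch: collapse a video into one "dynamic image" so that image
    # CNNs can be applied, as the abstract describes.
    def dynamic_image(frames):
        T = len(frames)
        # Linear-ramp approximate rank-pooling weights for frame t (1-based);
        # they sum to zero, so a static video maps to an all-zero image.
        alphas = np.array([2 * t - T - 1 for t in range(1, T + 1)], dtype=float)
        return np.tensordot(alphas, np.asarray(frames, dtype=float), axes=1)

    rng = np.random.default_rng(0)
    video = rng.random((8, 32, 32, 3))   # 8 frames, 32x32 RGB (synthetic)
    di = dynamic_image(video)
    print(di.shape)  # (32, 32, 3): one image per video, ready for a CNN
    ```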

  7. Using High Speed Smartphone Cameras and Video Analysis Techniques to Teach Mechanical Wave Physics

    ERIC Educational Resources Information Center

    Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano

    2017-01-01

    We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allows one to measure, in a simple yet rigorous way, the speed of pulses…
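
    The core measurement in such an activity reduces to distance over time recovered from frames; the frame rate, pixel scale, and pulse positions below are illustrative assumptions, not data from the article's experiments:

    ```python
    # Hedged sketch: pulse speed from slow-motion video frames.
    fps = 240                      # assumed slow-motion capture rate (frames/s)
    scale = 0.002                  # assumed metres per pixel, e.g. from a ruler in shot
    x_pixels = [100, 140, 180, 220, 260]  # pulse-front position in successive frames

    dx = (x_pixels[-1] - x_pixels[0]) * scale   # metres travelled
    dt = (len(x_pixels) - 1) / fps              # elapsed time in seconds
    speed = dx / dt
    print(round(speed, 1))                      # pulse speed in m/s
    ```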

  8. A video method to study Drosophila sleep.

    PubMed

    Zimmerman, John E; Raizen, David M; Maycock, Matthew H; Maislin, Greg; Pack, Allan I

    2008-11-01

    To use video to determine the accuracy of the infrared beam-splitting method for measuring sleep in Drosophila, and to determine the effect of time of day, sex, genotype, and age on sleep measurements. A digital image analysis method based on the frame-subtraction principle was developed to distinguish a quiescent from a moving fly. Data obtained using this method were compared with data obtained using the Drosophila Activity Monitoring System (DAMS). The location of the fly was identified based on its centroid location in the subtracted images. The error associated with the identification of total sleep using DAMS ranged from 7% to 95% and depended on genotype, sex, age, and time of day. The degree of the total sleep error was dependent on genotype during the daytime (P < 0.001) and was dependent on age during both the daytime and the nighttime (P < 0.001 for both). The DAMS method overestimated sleep bout duration during both the day and night, and the degree of these errors was genotype dependent (P < 0.001). Brief movements that occur during sleep bouts can be accurately identified using video. Both video and DAMS detected a homeostatic response to sleep deprivation. Digital video analysis is more accurate than DAMS for fly sleep measurements. In particular, conclusions drawn from DAMS measurements regarding daytime sleep and sleep architecture should be made with caution. Video analysis also permits the assessment of fly position and brief movements during sleep.
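
    The frame-subtraction and centroid steps can be sketched as follows; the pixel threshold, fly size, and synthetic frames are illustrative assumptions:

    ```python
    import numpy as np

    # Hedged sketch: detect movement by subtracting consecutive frames, and
    # locate the fly by the centroid of bright pixels.
    def moved(prev, curr, thresh=10, min_pixels=3):
        diff = np.abs(curr.astype(int) - prev.astype(int))  # avoid uint8 wraparound
        return int((diff > thresh).sum()) >= min_pixels

    def centroid(frame, bg_level=0):
        ys, xs = np.nonzero(frame > bg_level)
        return ys.mean(), xs.mean()

    a = np.zeros((20, 20), dtype=np.uint8)
    b = np.zeros((20, 20), dtype=np.uint8)
    a[5:7, 5:7] = 255      # fly at one position...
    b[5:7, 9:11] = 255     # ...and shifted right in the next frame

    print(moved(a, b), centroid(b))  # True (5.5, 9.5)
    ```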

  9. Video-processing-based system for automated pedestrian data collection and analysis when crossing the street

    NASA Astrophysics Data System (ADS)

    Mansouri, Nabila; Watelain, Eric; Ben Jemaa, Yousra; Motamed, Cina

    2018-03-01

    Computer-vision techniques for pedestrian detection and tracking have progressed considerably and become widely used in several applications. However, a quick glance at the literature shows minimal use of these techniques in pedestrian behavior and safety analysis, which might be due to the technical complexities of processing pedestrian videos. To extract pedestrian trajectories from a video automatically, all road users must be detected and tracked throughout the sequence, which is a challenging task, especially in a congested open-outdoor urban space. A multipedestrian tracker based on an interframe detection-association process was proposed and evaluated. The tracker results are used to implement an automatic, video-processing-based tool for collecting data on pedestrians crossing the street. Variations in instantaneous speed allowed the detection of the street-crossing phases (approach, waiting, and crossing), which were addressed for the first time in pedestrian road-safety analysis to illustrate the causal relationship between pedestrian behaviors in the different phases. A comparison with a manual data collection method, by computing the root mean square error and the Pearson correlation coefficient, confirmed that the proposed procedures have significant potential to automate the data collection process.
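
    The speed-based phase segmentation can be sketched as a simple threshold rule; the speed profile and the waiting threshold are illustrative assumptions, not the paper's actual detector:

    ```python
    # Hedged sketch: label trajectory samples as approach / waiting / crossing
    # from instantaneous speed, treating movement after a stop as crossing.
    def label_phases(speeds, wait_thresh=0.2):
        labels = []
        stopped_once = False
        for v in speeds:
            if v < wait_thresh:
                labels.append("waiting")
                stopped_once = True
            else:
                labels.append("crossing" if stopped_once else "approach")
        return labels

    speeds = [1.3, 1.2, 1.1, 0.1, 0.05, 0.1, 1.4, 1.5]   # m/s, synthetic
    print(label_phases(speeds))
    ```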

  10. An intelligent crowdsourcing system for forensic analysis of surveillance video

    NASA Astrophysics Data System (ADS)

    Tahboub, Khalid; Gadgil, Neeraj; Ribera, Javier; Delgado, Blanca; Delp, Edward J.

    2015-03-01

    Video surveillance systems are of great value for public safety. With an exponential increase in the number of cameras, videos obtained from surveillance systems are often archived for forensic purposes. Many automatic methods have been proposed for video analytics such as anomaly detection and human activity recognition. However, such methods face significant challenges due to object occlusions, shadows, and scene illumination changes. In recent years, crowdsourcing has become an effective tool that utilizes human intelligence to perform tasks that are challenging for machines. In this paper, we present an intelligent crowdsourcing system for forensic analysis of surveillance video, including video recorded as part of search-and-rescue missions and large-scale investigation tasks. We describe a method to enhance crowdsourcing by incorporating human detection, re-identification, and tracking. At the core of our system, we use a hierarchical pyramid model to distinguish the crowd members based on their ability, experience, and performance record. Our proposed system operates in an autonomous fashion and produces a final output of the crowdsourcing analysis consisting of a set of video segments detailing the events of interest as one storyline.
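
    One way to realize tier-weighted aggregation in the spirit of the hierarchical pyramid model is a weighted vote; the tiers, weights, and votes below are illustrative assumptions, not the paper's actual scheme:

    ```python
    # Hedged sketch: combine crowd answers, weighting each member by an assumed
    # pyramid tier (ability / experience / performance record).
    def weighted_vote(votes, weights):
        # votes: list of (member_tier, label); weights: tier -> weight
        tally = {}
        for tier, label in votes:
            tally[label] = tally.get(label, 0.0) + weights[tier]
        return max(tally, key=tally.get)

    weights = {"expert": 3.0, "experienced": 2.0, "novice": 1.0}
    votes = [("expert", "person"), ("novice", "shadow"), ("novice", "shadow"),
             ("experienced", "person")]
    print(weighted_vote(votes, weights))  # "person": weight 5.0 beats 2.0
    ```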

  11. Video-task acquisition in rhesus monkeys (Macaca mulatta) and chimpanzees (Pan troglodytes): a comparative analysis

    NASA Technical Reports Server (NTRS)

    Hopkins, W. D.; Washburn, D. A.; Hyatt, C. W.; Rumbaugh, D. M. (Principal Investigator)

    1996-01-01

    This study describes video-task acquisition in two nonhuman primate species. The subjects were seven rhesus monkeys (Macaca mulatta) and seven chimpanzees (Pan troglodytes). All subjects were trained to manipulate a joystick which controlled a cursor displayed on a computer monitor. Two criterion levels were used: one based on conceptual knowledge of the task and one based on motor performance. Chimpanzees and rhesus monkeys attained criterion in a comparable number of trials using a conceptually based criterion. However, using a criterion based on motor performance, chimpanzees reached criterion significantly faster than rhesus monkeys. Analysis of error patterns and latency indicated that the rhesus monkeys had a larger asymmetry in response bias and were significantly slower in responding than the chimpanzees. The results are discussed in terms of the relation between object manipulation skills and video-task acquisition.

  12. Text Detection, Tracking and Recognition in Video: A Comprehensive Survey.

    PubMed

    Yin, Xu-Cheng; Zuo, Ze-Yu; Tian, Shu; Liu, Cheng-Lin

    2016-04-14

    Intelligent analysis of video data is currently in wide demand because video is a major source of sensory data in our lives. Text is a prominent and direct source of information in video, yet recent surveys of text detection and recognition in imagery [1], [2] focus mainly on text extraction from scene images. This paper presents a comprehensive survey of text detection, tracking, and recognition in video, with three major contributions. First, a generic framework is proposed for video text extraction that uniformly describes detection, tracking, recognition, and their relations and interactions. Second, within this framework, a variety of methods, systems, and evaluation protocols for video text extraction are summarized, compared, and analyzed. Existing text-tracking techniques and tracking-based detection and recognition techniques are specifically highlighted. Third, related applications, prominent challenges, and future directions for video text extraction (especially from scene videos and web videos) are thoroughly discussed.

  13. Video fingerprinting for copy identification: from research to industry applications

    NASA Astrophysics Data System (ADS)

    Lu, Jian

    2009-02-01

    Research that began a decade ago in video copy detection has developed into a technology known as "video fingerprinting". Today, video fingerprinting is an essential and enabling tool adopted by the industry for video content identification and management in online video distribution. This paper provides a comprehensive review of video fingerprinting technology and its applications in identifying, tracking, and managing copyrighted content on the Internet. The review includes a survey on video fingerprinting algorithms and some fundamental design considerations, such as robustness, discriminability, and compactness. It also discusses fingerprint matching algorithms, including complexity analysis, and approximation and optimization for fast fingerprint matching. On the application side, it provides an overview of a number of industry-driven applications that rely on video fingerprinting. Examples are given based on real-world systems and workflows to demonstrate applications in detecting and managing copyrighted content, and in monitoring and tracking video distribution on the Internet.
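
    A toy version of fingerprint extraction and Hamming-distance matching, touching the robustness/discriminability/compactness trade-off the review discusses. Real systems use far more robust features; the block grid and inputs here are illustrative:

    ```python
    import numpy as np

    # Hedged sketch: a compact 64-bit per-frame fingerprint from mean-thresholded
    # downsampled blocks, matched by Hamming distance.
    def fingerprint(frame, grid=8):
        h, w = frame.shape
        crop = frame[: h - h % grid, : w - w % grid]
        blocks = crop.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
        return (blocks > blocks.mean()).astype(np.uint8).ravel()  # grid*grid bits

    def hamming(fp1, fp2):
        return int(np.count_nonzero(fp1 != fp2))

    rng = np.random.default_rng(1)
    original = rng.random((64, 64))
    copy = np.clip(original + rng.normal(0, 0.02, original.shape), 0, 1)  # mild noise
    unrelated = rng.random((64, 64))

    d_copy = hamming(fingerprint(original), fingerprint(copy))
    d_other = hamming(fingerprint(original), fingerprint(unrelated))
    print(d_copy < d_other)  # the near-duplicate is much closer than unrelated content
    ```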

  14. Teachers' Reports of Learning and Application to Pedagogy Based on Engagement in Collaborative Peer Video Analysis

    ERIC Educational Resources Information Center

    Christ, Tanya; Arya, Poonam; Chiu, Ming Ming

    2014-01-01

    Given international use of video-based reflective discussions in teacher education, and the limited knowledge about whether teachers apply learning from these discussions, we explored teachers' learning of new ideas about pedagogy and their self-reported application of this learning. Nine inservice and 48 preservice teachers participated in…

  15. Anomaly Detection in Moving-Camera Video Sequences Using Principal Subspace Analysis

    DOE PAGES

    Thomaz, Lucas A.; Jardim, Eric; da Silva, Allan F.; ...

    2017-10-16

    This study presents a family of algorithms based on sparse decompositions that detect anomalies in video sequences obtained from slow moving cameras. These algorithms start by computing the union of subspaces that best represents all the frames from a reference (anomaly free) video as a low-rank projection plus a sparse residue. Then, they perform a low-rank representation of a target (possibly anomalous) video by taking advantage of both the union of subspaces and the sparse residue computed from the reference video. Such algorithms provide good detection results while at the same time obviating the need for previous video synchronization. However, this is obtained at the cost of a large computational complexity, which hinders their applicability. Another contribution of this paper approaches this problem by using intrinsic properties of the obtained data representation in order to restrict the search space to the most relevant subspaces, providing computational complexity gains of up to two orders of magnitude. The developed algorithms are shown to cope well with videos acquired in challenging scenarios, as verified by the analysis of 59 videos from the VDAO database that comprises videos with abandoned objects in a cluttered industrial scenario.
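
    The low-rank-plus-sparse split at the heart of these detectors can be sketched with a simple alternating SVD / soft-thresholding scheme. This is not the authors' union-of-subspaces algorithm, and the synthetic "video" matrix (frames as columns) is illustrative:

    ```python
    import numpy as np

    # Hedged sketch: split D into a low-rank part L (background) and a sparse
    # part S (anomalies) by alternating a rank-r projection with soft-thresholding.
    def lowrank_sparse(D, rank=1, lam=0.5, iters=30):
        S = np.zeros_like(D)
        for _ in range(iters):
            U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
            L = (U[:, :rank] * s[:rank]) @ Vt[:rank]                 # rank-r projection
            S = np.sign(D - L) * np.maximum(np.abs(D - L) - lam, 0)  # soft-threshold
        return L, S

    rng = np.random.default_rng(2)
    background = np.outer(rng.random(100), np.ones(40))  # static background, rank 1
    D = background.copy()
    D[10:15, 25] += 5.0                                  # an "abandoned object" in frame 25
    L, S = lowrank_sparse(D)
    cols = sorted(set(np.argwhere(np.abs(S) > 1)[:, 1].tolist()))
    print(cols)  # the anomaly is isolated in a single frame column
    ```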

  17. A comparison of face to face and video-based education on attitude related to diet and fluids: Adherence in hemodialysis patients.

    PubMed

    Karimi Moonaghi, Hossein; Hasanzadeh, Farzaneh; Shamsoddini, Somayyeh; Emamimoghadam, Zahra; Ebrahimzadeh, Saeed

    2012-07-01

    Adherence to diet and fluid restrictions is a cornerstone of care for patients undergoing hemodialysis. Educating hemodialysis patients can help them maintain a proper diet and reduce mortality and the complications of toxin accumulation. Face-to-face education is one of the most common training methods in the health care system, but video-based education, although virtual, has the advantages of being simple and cost-effective. Seventy-five hemodialysis patients were randomly divided into face-to-face and video-based education groups. A training manual was designed based on Orem's self-care model; its content was the same in both groups. In the face-to-face group, two educational sessions were conducted during dialysis, one week apart. In the video-based education group, a produced film, divided into two episodes, was presented during dialysis, also one week apart. An attitude questionnaire was completed as a pretest and at the end of weeks 2 and 4. SPSS software version 11.5 was used for analysis. Attitudes toward fluid and diet adherence at the end of weeks 2 and 4 did not differ significantly between the face-to-face and video-based education groups. Within the face-to-face group, patients' attitudes differed significantly across the three study phases (pre-intervention and 2 and 4 weeks post-intervention); the same result was obtained across the three phases of the video-based education group. Our findings show that video-based education can be as effective as the face-to-face method. We recommend that more investment be devoted to video-based education.

  18. Automated detection of pain from facial expressions: a rule-based approach using AAM

    NASA Astrophysics Data System (ADS)

    Chen, Zhanli; Ansari, Rashid; Wilkie, Diana J.

    2012-02-01

    In this paper, we examine the problem of using video analysis to assess pain, an important problem especially for critically ill, non-communicative patients and people with dementia. We propose and evaluate an automated method to detect the presence of pain manifested in patient videos, using a unique and large collection of cancer patient videos captured in patient homes. The method is based on detecting pain-related facial action units defined in the Facial Action Coding System (FACS), which is widely used for objective assessment in pain analysis. In our research, a person-specific Active Appearance Model (AAM) based on the Project-Out Inverse Compositional Method is trained individually for each patient. A flexible representation of the shape model is used in a rule-based method that is better suited than the more commonly used classifier-based methods for application to the cancer patient videos, in which pain-related facial actions occur infrequently and more subtly. The rule-based method relies on feature points that provide facial action cues and are extracted from the shape vertices of the AAM, which have a natural correspondence to facial muscular movement. In this paper, we investigate the detection of a commonly used set of pain-related action units in both the upper and lower face. Our detection results show good agreement with the results obtained by three trained FACS coders who independently reviewed and scored the action units in the cancer patient videos.
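
    A rule-based AU check on shape vertices can be sketched as below; the landmark names, the AU4 (brow lowerer) rule, and the threshold are illustrative assumptions, not the paper's actual rules:

    ```python
    # Hedged sketch: flag AU4 when the brow-to-eye gap shrinks relative to a
    # neutral frame, with distances normalized by inter-ocular span.
    def au4_brow_lowerer(landmarks, neutral, thresh=0.05):
        # landmarks: dict name -> (x, y) from the AAM shape vertices
        def brow_eye_gap(pts):
            iod = abs(pts["right_eye"][0] - pts["left_eye"][0])
            return (pts["left_eye"][1] - pts["left_brow"][1]) / iod
        return brow_eye_gap(neutral) - brow_eye_gap(landmarks) > thresh

    neutral = {"left_brow": (30, 40), "left_eye": (30, 55), "right_eye": (70, 55)}
    frame = {"left_brow": (30, 49), "left_eye": (30, 55), "right_eye": (70, 55)}
    print(au4_brow_lowerer(frame, neutral))  # True: the brow has lowered toward the eye
    ```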

  19. Medical Student and Tutor Perceptions of Video Versus Text in an Interactive Online Virtual Patient for Problem-Based Learning: A Pilot Study.

    PubMed

    Woodham, Luke A; Ellaway, Rachel H; Round, Jonathan; Vaughan, Sophie; Poulton, Terry; Zary, Nabil

    2015-06-18

    The impact of the use of video resources in primarily paper-based problem-based learning (PBL) settings has been widely explored. Although it can provide many benefits, the use of video can also hamper the critical thinking of learners in contexts where learners are developing clinical reasoning. However, the use of video has not been explored in the context of interactive virtual patients for PBL. A pilot study was conducted to explore how undergraduate medical students interpreted and evaluated information from video- and text-based materials presented in the context of a branched interactive online virtual patient designed for PBL. The goal was to inform the development and use of virtual patients for PBL and to inform future research in this area. An existing virtual patient for PBL was adapted for use in video and provided as an intervention to students in the transition year of the undergraduate medicine course at St George's, University of London. Survey instruments were used to capture student and PBL tutor experiences and perceptions of the intervention, and a formative review meeting was run with PBL tutors. Descriptive statistics were generated for the structured responses and a thematic analysis was used to identify emergent themes in the unstructured responses. Analysis of student responses (n=119) and tutor comments (n=18) yielded 8 distinct themes relating to the perceived educational efficacy of information presented in video and text formats in a PBL context. Although some students found some characteristics of the videos beneficial, when asked to express a preference for video or text the majority of those that responded to the question (65%, 65/100) expressed a preference for text. Student responses indicated that the use of video slowed the pace of PBL and impeded students' ability to review and critically appraise the presented information. 
Our findings suggest that text was perceived to be a better source of information than video in virtual patients for PBL. More specifically, the use of video was perceived as beneficial for providing details, visual information, and context where text was unable to do so. However, learner acceptance of text was higher in the context of PBL, particularly when targeting clinical reasoning skills. This pilot study has provided the foundation for further research into the effectiveness of different virtual patient designs for PBL.

  20. HealthTrust: a social network approach for retrieving online health videos.

    PubMed

    Fernandez-Luque, Luis; Karlsen, Randi; Melton, Genevieve B

    2012-01-31

    Social media are becoming mainstream in the health domain. Despite the large volume of accurate and trustworthy health information available on social media platforms, finding good-quality health information can be difficult. Misleading health information can often be popular (eg, antivaccination videos) and therefore highly rated by general search engines. We believe that community wisdom about the quality of health information can be harnessed to help create tools for retrieving good-quality social media content. To explore approaches for extracting metrics about authoritativeness in online health communities and how these metrics positively correlate with the quality of the content. We designed a metric, called HealthTrust, that estimates the trustworthiness of social media content (eg, blog posts or videos) in a health community. The HealthTrust metric calculates reputation in an online health community based on link analysis. We used the metric to retrieve YouTube videos and channels about diabetes. In two different experiments, health consumers provided 427 ratings of 17 videos and professionals gave 162 ratings of 23 videos. In addition, two professionals reviewed 30 diabetes channels. HealthTrust may be used for retrieving online videos on diabetes, since it performed better than YouTube Search in most cases. Overall, of 20 potential channels, HealthTrust's filtering allowed only 3 bad channels (15%) versus 8 (40%) on the YouTube list. Misleading and graphic videos (eg, featuring amputations) were more commonly found by YouTube Search than by searches based on HealthTrust. However, some videos from trusted sources had low HealthTrust scores, mostly from general health content providers, and therefore not highly connected in the diabetes community. 
When comparing video ratings from our reviewers, we found that HealthTrust achieved a positive and statistically significant correlation with professionals (Pearson r₁₀ = .65, P = .02) and a trend toward significance with health consumers (r₇ = .65, P = .06) for videos on hemoglobin A1c, but it did not perform as well with diabetic foot videos. The trust-based metric HealthTrust showed promising results when used to retrieve diabetes content from YouTube. Our research indicates that social network analysis may be used to identify trustworthy social media in health communities.
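
    A link-analysis reputation computation of the kind HealthTrust builds on can be sketched with PageRank on a small subscription graph. The graph, damping factor, and iteration count are illustrative; the metric's exact formulation is in the paper:

    ```python
    # Hedged sketch: PageRank-style reputation over channels, where an edge
    # points from a subscriber to the channel it subscribes to.
    def pagerank(links, d=0.85, iters=50):
        nodes = sorted(links)
        pr = {n: 1.0 / len(nodes) for n in nodes}
        for _ in range(iters):
            new = {}
            for n in nodes:
                inbound = sum(pr[m] / len(links[m]) for m in nodes if n in links[m])
                new[n] = (1 - d) / len(nodes) + d * inbound
            pr = new
        return pr

    # channels A..D; C is endorsed by three others, so it accumulates trust
    links = {"A": ["C"], "B": ["C"], "C": ["D"], "D": ["C"]}
    pr = pagerank(links)
    print(max(pr, key=pr.get))  # "C"
    ```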

  1. Encryption for confidentiality of the network and influence of this to the quality of streaming video through network

    NASA Astrophysics Data System (ADS)

    Sevcik, L.; Uhrin, D.; Frnda, J.; Voznak, M.; Toral-Cruz, Homer; Mikulec, M.; Jakovlev, Sergej

    2015-05-01

    Nowadays, interest in real-time services such as audio and video is growing. These services are mostly transmitted over packet networks based on the IP protocol, and analyses of these services and their behavior in such networks are therefore becoming more frequent. Video has become a significant part of all data traffic sent via IP networks. In general, video is a one-way service (except for, e.g., video calls), and network delay is not as important a factor as it is for voice services. The dominant network factors that influence final video quality are packet loss, delay variation, and the capacity of the transmission links. Analysis of video quality concentrates on the resistance of video codecs to packet loss in the network, which causes artefacts in the video. IPsec provides security in terms of confidentiality, integrity, and non-repudiation (here using HMAC-SHA1 for authentication and 3DES or AES in CBC mode for encryption) via the Authentication Header and ESP (Encapsulating Security Payload). The paper gives a detailed view of the performance of video streaming over an IP-based network. We compared the quality of video under packet loss and encryption as well. The measured results demonstrate the relation of the video codec type and bitrate to the final video quality.
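
    Objective comparisons of "final video quality" of this kind commonly rest on PSNR between reference and degraded frames, which can be sketched as follows (the pixel samples are illustrative 8-bit values):

    ```python
    import math

    # Hedged sketch: peak signal-to-noise ratio (dB) between a reference and a
    # degraded signal, a standard objective video-quality measure.
    def psnr(ref, deg, peak=255.0):
        mse = sum((a - b) ** 2 for a, b in zip(ref, deg)) / len(ref)
        return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

    ref = [100, 120, 130, 90]
    deg = [101, 118, 131, 92]   # mild distortion, e.g. after lossy transport
    print(round(psnr(ref, deg), 1))  # ~44.2 dB
    ```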

  2. Compression Algorithm Analysis of In-Situ (S)TEM Video: Towards Automatic Event Detection and Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teuton, Jeremy R.; Griswold, Richard L.; Mehdi, Beata L.

    Precise analysis of both (S)TEM images and video are time and labor intensive processes. As an example, determining when crystal growth and shrinkage occurs during the dynamic process of Li dendrite deposition and stripping involves manually scanning through each frame in the video to extract a specific set of frames/images. For large numbers of images, this process can be very time consuming, so a fast and accurate automated method is desirable. Given this need, we developed software that uses analysis of video compression statistics for detecting and characterizing events in large data sets. This software works by converting the data into a series of images which it compresses into an MPEG-2 video using the open source "avconv" utility [1]. The software does not use the video itself, but rather analyzes the video statistics from the first pass of the video encoding that avconv records in the log file. This file contains statistics for each frame of the video including the frame quality, intra-texture and predicted texture bits, and forward and backward motion vector resolution, among others. In all, avconv records 15 statistics for each frame. By combining different statistics, we have been able to detect events in various types of data. We have developed an interactive tool for exploring the data and the statistics that aids the analyst in selecting useful statistics for each analysis. Going forward, an algorithm for detecting and possibly describing events automatically can be written based on statistic(s) for each data type.
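
    Event detection on per-frame encoder statistics can be sketched as simple outlier flagging. The numbers are synthetic stand-ins for values parsed from an avconv/ffmpeg first-pass log, and the z-score rule is an illustrative choice, not the software's actual algorithm:

    ```python
    # Hedged sketch: flag frames whose statistic (e.g. predicted-texture bits)
    # deviates strongly from the series mean.
    def flag_events(stat, z=2.5):
        n = len(stat)
        mean = sum(stat) / n
        var = sum((x - mean) ** 2 for x in stat) / n
        std = var ** 0.5
        return [i for i, x in enumerate(stat) if std and abs(x - mean) / std > z]

    bits = [1000, 1010, 995, 1005, 4000, 1002, 998, 1003]  # frame 4: abrupt change
    print(flag_events(bits))  # [4]
    ```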

  3. Using Image Analysis to Explore Changes In Bacterial Mat Coverage at the Base of a Hydrothermal Vent within the Caldera of Axial Seamount

    NASA Astrophysics Data System (ADS)

    Knuth, F.; Crone, T. J.; Marburg, A.

    2017-12-01

    The Ocean Observatories Initiative's (OOI) Cabled Array is delivering real-time high-definition video data from an HD video camera (CAMHD), installed at the Mushroom hydrothermal vent in the ASHES hydrothermal vent field within the caldera of Axial Seamount, an active submarine volcano located approximately 450 kilometers off the coast of Washington at a depth of 1,542 m. Every three hours the camera pans, zooms and focuses in on nine distinct scenes of scientific interest across the vent, producing 14-minute-long videos during each run. This standardized video sampling routine enables scientists to programmatically analyze the content of the video using automated image analysis techniques. Each scene-specific time series dataset can service a wide range of scientific investigations, including the estimation of bacterial flux into the system by quantifying chemosynthetic bacterial clusters (floc) present in the water column, relating periodicity in hydrothermal vent fluid flow to earth tides, measuring vent chimney growth in response to changing hydrothermal fluid flow rates, or mapping the patterns of fauna colonization, distribution and composition across the vent over time. We are currently investigating the seventh scene in the sampling routine, focused on the bacterial mat covering the seafloor at the base of the vent. We quantify the change in bacterial mat coverage over time using image analysis techniques, and examine the relationship between mat coverage, fluid flow processes, episodic chimney collapse events, and other processes observed by Cabled Array instrumentation. This analysis is being conducted using cloud-enabled computer vision processing techniques, programmatic image analysis, and time-lapse video data collected over the course of the first CAMHD deployment, from November 2015 to July 2016.
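
    A simple proxy for the coverage estimate described above is the fraction of bright pixels in a grayscale frame. This is a sketch with an assumed intensity threshold; the actual analysis uses more sophisticated computer vision segmentation.

```python
def mat_coverage(gray_frame, threshold=200):
    """Fraction of pixels at or above `threshold`, treating bright
    pixels as white bacterial mat on darker seafloor."""
    total = sum(len(row) for row in gray_frame)
    covered = sum(1 for row in gray_frame for px in row if px >= threshold)
    return covered / total

# Tiny synthetic frame: bright mat on the left, dark basalt on the right.
frame = [
    [250, 250, 40, 30],
    [240, 210, 20, 10],
    [230, 190, 15, 12],
]
```

    Tracking this fraction across the standardized three-hourly scenes yields the kind of coverage time series the abstract describes.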

  4. Mechanisms and situations of anterior cruciate ligament injuries in professional male soccer players: a YouTube-based video analysis.

    PubMed

    Grassi, Alberto; Smiley, Stephen Paul; Roberti di Sarsina, Tommaso; Signorelli, Cecilia; Marcheggiani Muccioli, Giulio Maria; Bondi, Alice; Romagnoli, Matteo; Agostini, Alessandra; Zaffagnini, Stefano

    2017-10-01

    Soccer is considered the most popular sport in the world in terms of both audience and athlete participation, and the incidence of ACL injury in this sport is high. An understanding of injury situations and mechanisms could serve as a substratum for preventive actions. The aim was to conduct a video analysis evaluating the situations and mechanisms of ACL injury in a homogeneous population of professional male soccer players, through a search performed entirely on the YouTube.com Web site and focusing on the most recent years. A video analysis was conducted by obtaining videos of ACL injuries in professional male soccer players from the Web site YouTube. Details regarding the injured players, events and situations were obtained. The mechanism of injury was defined on the basis of the action, duel type, contact or non-contact injury, and the hip, knee and foot position. Thirty-four videos were analyzed, mostly from the 2014-2015 season. Injuries occurred mostly in the first 9 min of the match (26%), in the penalty area (32%) or near the side-lines (44%), and in non-rainy conditions (97%). Non-contact injuries occurred in 44% of cases, while indirect injuries occurred in 65%, mostly during pressing, dribbling or tackling. The most recurrent mechanism involved an abducted and flexed hip, with the knee in the first degrees of flexion and under valgus stress. Through a YouTube-based video analysis, it was possible to delineate recurrent temporal, spatial and mechanical characteristics of ACL injury in male professional soccer players. Level IV, case series.

  5. Is knee pain information on YouTube videos perceived to be helpful? An analysis of user comments and implications for dissemination on social media.

    PubMed

    Meldrum, Sarah; Savarimuthu, Bastin Tr; Licorish, Sherlock; Tahir, Amjed; Bosu, Michael; Jayakaran, Prasath

    2017-01-01

    There is little research that characterises knee pain related information disseminated via social media. However, variances in the content and quality of such sources could compromise optimal patient care. This study explored the nature of the comments on YouTube videos related to non-specific knee pain, to determine their helpfulness to the users. A systematic search identified 900 videos related to knee pain on the YouTube database. A total of 3537 comments from 58 videos were included in the study. A categorisation scheme was developed and 1000 randomly selected comments were analysed according to this scheme. The most common category was the users providing personal information or describing a personal situation (19%), followed by appreciation or acknowledgement of others' inputs (17%) and asking questions (15%). Of the questions, 33% were related to seeking help in relation to a specific situation. Over 10% of the comments contained negativity or disagreement; while 4.4% of comments reported they intended to pursue an action, based on the information presented in the video and/or from user comments. It was observed that individuals commenting on YouTube videos on knee pain were most often soliciting advice and information specific to their condition. The analysis of comments from the most commented videos using a keyword-based search approach suggests that the YouTube videos can be used for disseminating general advice on knee pain.

  6. Is knee pain information on YouTube videos perceived to be helpful? An analysis of user comments and implications for dissemination on social media

    PubMed Central

    Meldrum, Sarah; Savarimuthu, Bastin TR; Licorish, Sherlock; Tahir, Amjed; Bosu, Michael; Jayakaran, Prasath

    2017-01-01

    Objective There is little research that characterises knee pain related information disseminated via social media. However, variances in the content and quality of such sources could compromise optimal patient care. This study explored the nature of the comments on YouTube videos related to non-specific knee pain, to determine their helpfulness to the users. Methods A systematic search identified 900 videos related to knee pain on the YouTube database. A total of 3537 comments from 58 videos were included in the study. A categorisation scheme was developed and 1000 randomly selected comments were analysed according to this scheme. Results The most common category was the users providing personal information or describing a personal situation (19%), followed by appreciation or acknowledgement of others’ inputs (17%) and asking questions (15%). Of the questions, 33% were related to seeking help in relation to a specific situation. Over 10% of the comments contained negativity or disagreement; while 4.4% of comments reported they intended to pursue an action, based on the information presented in the video and/or from user comments. Conclusion It was observed that individuals commenting on YouTube videos on knee pain were most often soliciting advice and information specific to their condition. The analysis of comments from the most commented videos using a keyword-based search approach suggests that the YouTube videos can be used for disseminating general advice on knee pain. PMID:29942583

  7. Non-contact cardiac pulse rate estimation based on web-camera

    NASA Astrophysics Data System (ADS)

    Wang, Yingzhi; Han, Tailin

    2015-12-01

    In this paper, we introduce a new methodology for non-contact cardiac pulse rate estimation based on imaging photoplethysmography (iPPG) and blind source separation. This novel approach can be applied to color video recordings of the human face and is based on automatic face tracking combined with blind source separation of the RGB color channels into three component signals. First, pre-processing steps such as normalization and sphering are applied to the data extracted from the color video. The cardiac pulse rate is then estimated by spectral analysis after Independent Component Analysis (ICA) using the JADE algorithm. Using Bland-Altman and correlation analysis, we compared the cardiac pulse rate extracted from videos recorded by a basic webcam to that of a commercial pulse oximetry sensor and achieved high accuracy and correlation. The root mean square error of the estimated results is 2.06 bpm, which indicates that the algorithm can realize non-contact measurement of the cardiac pulse rate.
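
    After ICA isolates a pulse-dominated component, the rate itself comes from spectral analysis. That frequency-peak step can be sketched with a naive DFT over a physiological band (illustrative only; the separation step via the JADE algorithm is omitted here, and the band limits are assumptions):

```python
import math

def dominant_bpm(signal, fs, lo_hz=0.75, hi_hz=4.0):
    """Return the dominant frequency (in beats per minute) of `signal`
    sampled at `fs` Hz, searching a physiological band via a naive DFT."""
    n = len(signal)
    best_k, best_mag = None, -1.0
    for k in range(1, n // 2):
        f = k * fs / n
        if not (lo_hz <= f <= hi_hz):
            continue
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = re * re + im * im  # squared magnitude of DFT bin k
        if mag > best_mag:
            best_mag, best_k = mag, k
    return best_k * fs / n * 60.0

# Synthetic "pulse component": a 1.2 Hz oscillation sampled at 30 fps
# for 10 s, standing in for the ICA output of a face video.
fs = 30.0
trace = [math.sin(2 * math.pi * 1.2 * t / fs) for t in range(300)]
```

    Restricting the search to 0.75-4 Hz (45-240 bpm) rejects peaks from respiration and slow illumination drift.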

  8. A unified and efficient framework for court-net sports video analysis using 3D camera modeling

    NASA Astrophysics Data System (ADS)

    Han, Jungong; de With, Peter H. N.

    2007-01-01

    The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for different user-friendly applications, such as smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable to more than one sports type in order to arrive at a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between the object-level and the scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two planes which are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e. players), which can track up to four players simultaneously. The complete system contributes to summarization through various forms of information, of which the most important are the moving trajectory and real speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it on a variety of court-net sports videos containing badminton, tennis and volleyball, and we show that the feature detection performance is above 92% and the event detection performance is about 90%.

  9. Socio-phenomenology and conversation analysis: interpreting video lifeworld healthcare interactions.

    PubMed

    Bickerton, Jane; Procter, Sue; Johnson, Barbara; Medina, Angel

    2011-10-01

    This article uses a socio-phenomenological methodology to develop knowledge and understanding of the healthcare consultation based on the concept of the lifeworld. It concentrates its attention on social action rather than strategic action and a systems approach. This article argues that patient-centred care is more effective when it is informed through a lifeworld conception of human mutual shared interaction. Videos offer an opportunity for a wide audience to experience the many kinds of conversations and dynamics that take place in consultations. Visual sociology used in this article provides a method to organize video emotional, knowledge and action conversations as well as dynamic typical consultation situations. These interactions are experienced through the video materials themselves unlike conversation analysis where video materials are first transcribed and then analysed. Both approaches have the potential to support intersubjective learning but this article argues that a video lifeworld schema is more accessible to health professionals and the general public. The typical interaction situations are constructed through the analysis of video materials of consultations in a London walk-in centre. Further studies are planned in the future to extend and replicate results in other healthcare services. This method of analysis focuses on the ways in which the everyday lifeworld informs face-to-face person-centred health care and supports social action as a significant factor underpinning strategic action and a systems approach to consultation practice. © 2011 Blackwell Publishing Ltd.

  10. Physics and Video Analysis

    NASA Astrophysics Data System (ADS)

    Allain, Rhett

    2016-05-01

    We currently live in a world filled with videos. There are videos on YouTube, feature movies and even videos recorded with our own cameras and smartphones. These videos present an excellent opportunity to not only explore physical concepts, but also inspire others to investigate physics ideas. With video analysis, we can explore the fantasy world in science-fiction films. We can also look at online videos to determine if they are genuine or fake. Video analysis can be used in the introductory physics lab and it can even be used to explore the make-believe physics embedded in video games. This book covers the basic ideas behind video analysis along with the fundamental physics principles used in video analysis. The book also includes several examples of the unique situations in which video analysis can be used.

  11. Automatic content-based analysis of georeferenced image data: Detection of Beggiatoa mats in seafloor video mosaics from the Håkon Mosby Mud Volcano

    NASA Astrophysics Data System (ADS)

    Jerosch, K.; Lüdtke, A.; Schlüter, M.; Ioannidis, G. T.

    2007-02-01

    The combination of new underwater technology such as remotely operated vehicles (ROVs), high-resolution video imagery, and software to compute georeferenced mosaics of the seafloor provides new opportunities for marine geological or biological studies and applications in the offshore industry. Even during single surveys by ROVs or towed systems, large amounts of images are compiled. While these underwater techniques are now well-engineered, there is still a lack of methods for the automatic analysis of the acquired image data. During ROV dives, more than 4200 georeferenced video mosaics were compiled for the Håkon Mosby Mud Volcano (HMMV). Mud volcanoes such as HMMV are considered significant source locations for methane, characterised by unique chemoautotrophic communities such as Beggiatoa mats. For the detection and quantification of the spatial distribution of Beggiatoa mats, an automated image analysis technique was developed, which applies watershed transformation and relaxation-based labelling of pre-segmented regions. Comparison of the data derived by visual inspection of 2840 video images with the automated image analysis revealed similarities with a precision better than 90%. We consider this a step towards a time-efficient and accurate analysis of seafloor images for the computation of geochemical budgets and the identification of habitats at the seafloor.

  12. Chaos based video encryption using maps and Ikeda time delay system

    NASA Astrophysics Data System (ADS)

    Valli, D.; Ganesan, K.

    2017-12-01

    Chaos-based cryptosystems are an efficient approach to fast, highly secure multimedia encryption because of their elegant features, such as randomness, mixing, ergodicity, and sensitivity to initial conditions and control parameters. In this paper, two chaos-based cryptosystems are proposed: one uses a higher-dimensional 12D chaotic map and the other is based on the Ikeda delay differential equation (DDE), both suitable for designing a real-time secure symmetric video encryption scheme. These encryption schemes employ a substitution box (S-box) to diffuse the relationship between pixels of the plain video and the cipher video, along with diffusion of the current input pixel with the previous cipher pixel, called cipher block chaining (CBC). The proposed method enhances robustness against statistical, differential and chosen/known plain-text attacks. Detailed analysis is carried out in this paper to demonstrate the security and uniqueness of the proposed scheme.
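
    As a toy illustration of the chaining idea only (not the 12D map or Ikeda DDE scheme of the paper, and far too weak for real use), a one-dimensional logistic map can drive a keystream that is XORed with each byte together with the previous cipher byte:

```python
def logistic_keystream(x0, r, n):
    """Toy keystream from the logistic map x <- r*x*(1-x)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) & 0xFF)
    return out

def encrypt(plain, x0=0.3141, r=3.99, iv=0x5A):
    ks = logistic_keystream(x0, r, len(plain))
    cipher, prev = [], iv
    for p, k in zip(plain, ks):
        c = p ^ k ^ prev  # CBC-style chaining with previous cipher byte
        cipher.append(c)
        prev = c
    return cipher

def decrypt(cipher, x0=0.3141, r=3.99, iv=0x5A):
    ks = logistic_keystream(x0, r, len(cipher))
    plain, prev = [], iv
    for c, k in zip(cipher, ks):
        plain.append(c ^ k ^ prev)
        prev = c
    return plain
```

    The chaining term `prev` is what makes identical plain pixels encrypt differently, which is the diffusion property the abstract attributes to CBC.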

  13. Economic evaluation of home-based telebehavioural health care compared to in-person treatment delivery for depression.

    PubMed

    Bounthavong, Mark; Pruitt, Larry D; Smolenski, Derek J; Gahm, Gregory A; Bansal, Aasthaa; Hansen, Ryan N

    2018-02-01

    Introduction Home-based telebehavioural healthcare improves access to mental health care for patients restricted by travel burden. However, there is limited evidence assessing the economic value of home-based telebehavioural health care compared to in-person care. We sought to compare the economic impact of home-based telebehavioural health care and in-person care for depression among current and former US service members. Methods We performed trial-based cost-minimisation and cost-utility analyses to assess the economic impact of home-based telebehavioural health care versus in-person behavioural care for depression. Our analyses focused on the payer perspective (Department of Defense and Department of Veterans Affairs) at three months. We also performed a scenario analysis where all patients possessed video-conferencing technology that was approved by these agencies. The cost-utility analysis evaluated the impact of different depression categories on the incremental cost-effectiveness ratio. One-way and probabilistic sensitivity analyses were performed to test the robustness of the model assumptions. Results In the base case analysis the total direct cost of home-based telebehavioural health care was higher than in-person care (US$71,974 versus US$20,322). Assuming that patients possessed government-approved video-conferencing technology, home-based telebehavioural health care was less costly compared to in-person care (US$19,177 versus US$20,322). In one-way sensitivity analyses, the proportion of patients possessing personal computers was a major driver of direct costs. In the cost-utility analysis, home-based telebehavioural health care was dominant when patients possessed video-conferencing technology. Results from probabilistic sensitivity analyses did not differ substantially from base case results. 
Discussion Home-based telebehavioural health care is dependent on the cost of supplying video-conferencing technology to patients but offers the opportunity to increase access to care. Health-care policies centred on implementation of home-based telebehavioural health care should ensure that these technologies are able to be successfully deployed on patients' existing technology.

  14. Multi-modal highlight generation for sports videos using an information-theoretic excitability measure

    NASA Astrophysics Data System (ADS)

    Hasan, Taufiq; Bořil, Hynek; Sangwan, Abhijeet; Hansen, John H. L.

    2013-12-01

    The ability to detect and organize `hot spots' representing areas of excitement within video streams is a challenging research problem when techniques rely exclusively on video content. A generic method for sports video highlight selection is presented in this study which leverages both video/image structure and audio/speech properties. Processing begins by partitioning the video into small segments and extracting several multi-modal features from each segment. Excitability is computed based on the likelihood of the segmental features residing in regions of their joint probability density function space which are considered both exciting and rare. The proposed measure is used to rank order the partitioned segments to compress the overall video sequence and produce a contiguous set of highlights. Experiments are performed on baseball videos using excitement cues in the commentators' speech, audio energy, slow-motion replay, scene-cut density, and motion activity as features. A detailed analysis of the correlation between user excitability and various speech production parameters is conducted, and an effective scheme is designed to estimate the excitement level of the commentators' speech from the sports videos. Subjective evaluation of excitability and ranking of video segments demonstrate a higher correlation with the proposed measure compared to well-established techniques, indicating the effectiveness of the overall approach.
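
    The ranking step can be caricatured with summed z-scores over segment features. This is a simplified stand-in for the paper's density-based excitability measure, and the feature values below are invented for illustration.

```python
from statistics import mean, stdev

def rank_segments(features, top_k=2):
    """Rank segments by summed z-scores of their features, so segments
    that are unusually high in audio energy, motion, etc. rank first."""
    n_feat = len(features[0])
    cols = list(zip(*features))
    mu = [mean(c) for c in cols]
    sd = [stdev(c) or 1.0 for c in cols]  # guard against zero spread
    scores = [sum((seg[j] - mu[j]) / sd[j] for j in range(n_feat))
              for seg in features]
    order = sorted(range(len(features)), key=lambda i: scores[i], reverse=True)
    return order[:top_k]

# Columns: [audio energy, motion activity, scene-cut density]
segs = [
    [0.2, 0.1, 0.0],
    [0.9, 0.8, 0.6],   # an exciting play
    [0.3, 0.2, 0.1],
    [0.7, 0.9, 0.5],   # another exciting play
]
```

    The full method instead estimates the joint density of the features and scores segments by how exciting and rare their feature combination is, which a per-feature z-score cannot fully capture.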

  15. Fire flame detection based on GICA and target tracking

    NASA Astrophysics Data System (ADS)

    Rong, Jianzhong; Zhou, Dechuang; Yao, Wei; Gao, Wei; Chen, Juan; Wang, Jian

    2013-04-01

    To improve the video fire detection rate, a robust fire detection algorithm based on the color, motion and pattern characteristics of fire targets is proposed, which achieves a satisfactory detection rate across different fire scenes. In this fire detection algorithm: (a) a rule-based generic color model was developed based on analysis of a large quantity of flame pixels; (b) from the traditional GICA (Geometrical Independent Component Analysis) model, a Cumulative Geometrical Independent Component Analysis (C-GICA) model was developed for motion detection without a static background; and (c) a BP neural network fire recognition model based on multiple features of the fire pattern was developed. Fire detection tests on benchmark fire video clips of different scenes have shown the robustness, accuracy and fast response of the algorithm.
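
    A rule-based flame color test of the kind mentioned in (a) often reduces to channel-ordering rules like the following; the thresholds here are assumptions for illustration, not the model fitted in the paper.

```python
def is_flame_pixel(r, g, b, r_min=180):
    """Generic rule-of-thumb color test: candidate flame pixels tend
    to satisfy R > G > B with a sufficiently bright red channel."""
    return r > g > b and r >= r_min

assert is_flame_pixel(230, 160, 40)      # orange flame core
assert not is_flame_pixel(90, 120, 200)  # sky-blue background
```

    Color rules alone produce false positives (sunsets, lamps), which is why the algorithm combines them with the motion (C-GICA) and pattern (BP network) stages.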

  16. A functional video-based anthropometric measuring system

    NASA Technical Reports Server (NTRS)

    Nixon, J. H.; Cater, J. P.

    1982-01-01

    A high-speed anthropometric three-dimensional measurement system using the Selcom Selspot motion tracking instrument for visual data acquisition is discussed. A three-dimensional scanning system was created which collects video, audio, and performance data on a single standard video cassette recorder. Recording rates of 1 megabit per second for periods of up to two hours are possible with the system design. A high-speed off-the-shelf motion analysis system was used for collecting optical information. The video recording adapter (VRA) is interfaced to the Selspot data acquisition system.

  17. Social media for message testing: a multilevel approach to linking favorable viewer responses with message, producer, and viewer influence on YouTube.

    PubMed

    Paek, Hye-Jin; Hove, Thomas; Jeon, Jehoon

    2013-01-01

    To explore the feasibility of social media for message testing, this study connects favorable viewer responses to antismoking videos on YouTube with the videos' message characteristics (message sensation value [MSV] and appeals), producer types, and viewer influences (viewer rating and number of viewers). Through multilevel modeling, a content analysis of 7,561 viewer comments on antismoking videos is linked with a content analysis of 87 antismoking videos. Based on a cognitive response approach, viewer comments are classified and coded as message-oriented thought, video feature-relevant thought, and audience-generated thought. The three mixed logit models indicate that videos with a greater number of viewers consistently increased the odds of favorable viewer responses, while those presenting humor appeals decreased the odds of favorable message-oriented and audience-generated thoughts. Some significant interaction effects show that videos produced by laypeople may hinder favorable viewer responses, while a greater number of viewer comments can work jointly with videos presenting threat appeals to predict favorable viewer responses. Also, for a more accurate understanding of audience responses to the messages, nuance cues should be considered together with message features and viewer influences.

  18. A web-based video annotation system for crowdsourcing surveillance videos

    NASA Astrophysics Data System (ADS)

    Gadgil, Neeraj J.; Tahboub, Khalid; Kirsh, David; Delp, Edward J.

    2014-03-01

    Video surveillance systems are of great value for preventing threats and identifying/investigating criminal activities. Manual analysis of a huge amount of video data from several cameras over a long period of time often becomes impracticable. The use of automatic detection methods can be challenging when the video contains many objects with complex motion and occlusions. Crowdsourcing has been proposed as an effective method for utilizing human intelligence to perform several tasks. Our system provides a platform for the annotation of surveillance video in an organized and controlled way. One can monitor a surveillance system using a set of tools such as training modules, roles and labels, and task management. This system can be used in a real-time streaming mode to detect any potential threats or as an investigative tool to analyze past events. Annotators can annotate video contents assigned to them for suspicious activity or criminal acts. First responders are then able to view the collective annotations and receive email alerts about a newly reported incident. They can also keep track of the annotators' training performance, manage their activities and reward their success. By providing this system, the process of video analysis is made more efficient.

  19. Video Analysis of a Plucked String: An Example of Problem-based Learning

    NASA Astrophysics Data System (ADS)

    Wentworth, Christopher D.; Buse, Eric

    2009-11-01

    Problem-based learning is a teaching methodology that grounds learning within the context of solving a real problem. Typically the problem initiates learning of concepts rather than simply being an application of the concept, and students take the lead in identifying what must be developed to solve the problem. Problem-based learning in upper-level physics courses can be challenging, because of the time and financial requirements necessary to generate real data. Here, we present a problem that motivates learning about partial differential equations and their solution in a mathematical methods for physics course. Students study a plucked elastic cord using high speed digital video. After creating video clips of the cord motion under different tensions they are asked to create a mathematical model. Ultimately, students develop and solve a model that includes damping effects that are clearly visible in the videos. The digital video files used in this project are available on the web at http://physics.doane.edu .

  20. Automated fall detection on privacy-enhanced video.

    PubMed

    Edgcomb, Alex; Vahid, Frank

    2012-01-01

    A privacy-enhanced video obscures the appearance of a person in the video. We consider four privacy enhancements: blurring of the person, silhouetting of the person, covering the person with a graphical box, and covering the person with a graphical oval. We demonstrate that an automated video-based fall detection algorithm can be as accurate on privacy-enhanced video as on raw video. The algorithm operated on video from a stationary in-home camera, using a foreground-background segmentation algorithm to extract a minimum bounding rectangle (MBR) around the motion in the video, and using time series shapelet analysis on the height and width of the rectangle to detect falls. We report accuracy applying fall detection on 23 scenarios depicted as raw video and privacy-enhanced videos involving a sole actor portraying normal activities and various falls. We found that fall detection on privacy-enhanced video, except for the common approach of blurring of the person, was competitive with raw video, and in particular that the graphical oval privacy enhancement yielded the same accuracy as raw video, namely 0.91 sensitivity and 0.92 specificity.
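
    The height/width cue from the MBR can be sketched as a simple rule: a standing person yields a tall, narrow rectangle, a fallen person a short, wide one. The paper's actual detector uses time-series shapelet analysis; the ratio threshold and persistence window below are assumptions.

```python
def detect_fall(mbr_series, ratio_drop=0.6, window=3):
    """Flag a fall when the height/width ratio of the minimum bounding
    rectangle stays below `ratio_drop` for `window` consecutive frames.
    Returns the index of the confirming frame, or None."""
    run = 0
    for i, (h, w) in enumerate(mbr_series):
        if h / w < ratio_drop:
            run += 1
            if run >= window:
                return i
        else:
            run = 0
    return None

# Standing (tall MBR) then lying on the floor (wide MBR):
track = [(180, 60), (178, 62), (120, 90), (60, 150), (55, 160), (52, 165)]
```

    Because the rule needs only the rectangle dimensions, it is unaffected by blurring, silhouetting or boxing the person, which is why privacy enhancement costs so little accuracy.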

  1. Early prediction of cerebral palsy by computer-based video analysis of general movements: a feasibility study.

    PubMed

    Adde, Lars; Helbostad, Jorunn L; Jensenius, Alexander R; Taraldsen, Gunnar; Grunewaldt, Kristine H; Støen, Ragnhild

    2010-08-01

    The aim of this study was to investigate the predictive value of a computer-based video analysis of the development of cerebral palsy (CP) in young infants. A prospective study of general movements used recordings from 30 high-risk infants (13 males, 17 females; mean gestational age 31wks, SD 6wks; range 23-42wks) between 10 and 15 weeks post term when fidgety movements should be present. Recordings were analysed using computer vision software. Movement variables, derived from differences between subsequent video frames, were used for quantitative analyses. CP status was reported at 5 years. Thirteen infants developed CP (eight hemiparetic, four quadriparetic, one dyskinetic; seven ambulatory, three non-ambulatory, and three unknown function), of whom one had fidgety movements. Variability of the centroid of motion had a sensitivity of 85% and a specificity of 71% in identifying CP. By combining this with variables reflecting the amount of motion, specificity increased to 88%. Nine out of 10 children with CP, and for whom information about functional level was available, were correctly predicted with regard to ambulatory and non-ambulatory function. Prediction of CP can be provided by computer-based video analysis in young infants. The method may serve as an objective and feasible tool for early prediction of CP in high-risk infants.

  2. Background estimation and player detection in badminton video clips using histogram of pixel values along temporal dimension

    NASA Astrophysics Data System (ADS)

    Peng, Yahui; Ma, Xiao; Gao, Xinyu; Zhou, Fangxu

    2015-12-01

    Computer vision is an important tool for sports video processing. However, its application in badminton match analysis is very limited. In this study, we proposed straightforward but robust histogram-based background estimation and player detection methods for badminton video clips, and compared the results with the naive averaging method and the mixture-of-Gaussians method, respectively. The proposed method yielded better background estimation results than the naive averaging method and more accurate player detection results than the mixture-of-Gaussians player detection method. The preliminary results indicated that the proposed histogram-based method could estimate the background and extract the players accurately. We conclude that the proposed method can be used for badminton player tracking, and further studies are warranted for automated match analysis.
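
    The histogram idea amounts to taking, at every pixel position, the most frequent value along the temporal dimension: transient players are outvoted by the static court. A minimal sketch on tiny synthetic frames (grayscale values; the paper operates on real video):

```python
from collections import Counter

def estimate_background(frames):
    """Per-pixel mode across frames: the most frequent value at each
    pixel position is taken as the background."""
    h, w = len(frames[0]), len(frames[0][0])
    bg = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            values = [f[y][x] for f in frames]
            bg[y][x] = Counter(values).most_common(1)[0][0]
    return bg

# Three frames of a 2x2 "court"; a player (value 255) briefly
# occludes one pixel in the middle frame.
frames = [
    [[10, 20], [30, 40]],
    [[10, 255], [30, 40]],
    [[10, 20], [30, 40]],
]
```

    Players are then extracted by differencing each frame against this background, which is more robust than naive averaging because outlier (player) values do not bias the mode.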

  3. Considerations in video playback design: using optic flow analysis to examine motion characteristics of live and computer-generated animation sequences.

    PubMed

    Woo, Kevin L; Rieucau, Guillaume

    2008-07-01

    The increasing use of the video playback technique in behavioural ecology reveals a growing need to ensure better control of the visual stimuli that focal animals experience. Technological advances now allow researchers to develop computer-generated animations instead of using video sequences of live-acting demonstrators. However, care must be taken to match the motion characteristics (speed and velocity) of the animation to the original video source. Here, we present a tool based on an optic flow analysis program that measures how closely the motion characteristics of computer-generated animations resemble those of videos of live-acting animals. We examined three distinct displays (tail-flick (TF), push-up body rock (PUBR), and slow arm wave (SAW)) exhibited by animations of Jacky dragons (Amphibolurus muricatus) that were compared to the original video sequences of live lizards. We found no significant differences between the motion characteristics of videos and animations across all three displays. Our results showed that our animations are similar to the speed and velocity features of each display. Researchers need to ensure that similar motion characteristics are represented in animation and video stimuli, and this feature is a critical component in the future success of the video playback technique.

  4. Eye gaze correction with stereovision for video-teleconferencing.

    PubMed

    Yang, Ruigang; Zhang, Zhengyou

    2004-07-01

    The lack of eye contact in desktop video teleconferencing substantially reduces the effectiveness of video content. While expensive and bulky hardware is available on the market to correct eye gaze, researchers have been trying to provide a practical software-based solution to bring video-teleconferencing one step closer to the mass market. This paper presents a novel approach: based on stereo analysis combined with rich domain knowledge (a personalized face model), we synthesize, using graphics hardware, a virtual video that maintains eye contact. A 3D stereo head tracker with a personalized face model is used to compute initial correspondences across two views. More correspondences are then added through template and feature matching. Finally, all the correspondence information is fused together for view synthesis using view morphing techniques. The combined methods greatly enhance the accuracy and robustness of the synthesized views. Our current system is able to generate an eye-gaze-corrected video stream at five frames per second on a commodity 1 GHz PC.

  5. Observation of wave celerity evolution in the nearshore using digital video imagery

    NASA Astrophysics Data System (ADS)

    Yoo, J.; Fritz, H. M.; Haas, K. A.; Work, P. A.; Barnes, C. F.; Cho, Y.

    2008-12-01

    Celerity of incident waves in the nearshore is observed from oblique video imagery collected at Myrtle Beach, S.C. The video camera covers a field of view with length scales of O(100) m. Celerity of waves propagating in shallow water, including the surf zone, is estimated by applying advanced image processing and analysis methods to individual video images sampled at 3 Hz. Original image sequences are processed through frame differencing and directional low-pass filtering to reduce the noise arising from foam in the surf zone. The breaking wave celerity is computed along a cross-shore transect from the wave crest tracks extracted by a Radon transform-based line detection method. The observed celerity from the nearshore video imagery is larger than the linear wave celerity computed from the measured water depths over the entire surf zone. Compared to the nonlinear shallow water wave equation (NSWE)-based celerity computed using the measured depths and wave heights, the video-based celerity generally shows good agreement over the surf zone, except near the incipient wave-breaking locations. Near the breaker points, the observed wave celerity is even larger than the NSWE-based celerity due to the transition of wave crest shapes. The observed celerity from video imagery can be used to monitor the nearshore geometry through depth inversion based on nonlinear wave celerity theories. For this purpose, the excess celerity near the breaker points needs to be corrected relative to the nonlinear wave celerity theory applied.
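The depth-inversion idea mentioned at the end can be sketched with standard shallow-water relations. The linear celerity is c = sqrt(g h), and one common nonlinear (bore-type) approximation is c = sqrt(g (h + H)) for wave height H; the paper's exact nonlinear formulation is an assumption here. Inverting the linear relation on a nonlinearly propagating wave overestimates depth, which mirrors the correction issue the abstract raises:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def linear_depth(celerity):
    """Invert the linear shallow-water relation c = sqrt(g h) for depth h."""
    return celerity ** 2 / G

def nonlinear_celerity(depth, wave_height):
    """Bore-type nonlinear celerity c = sqrt(g (h + H)); one common
    approximation, assumed here for illustration."""
    return math.sqrt(G * (depth + wave_height))

c = nonlinear_celerity(2.0, 0.5)   # 2 m depth, 0.5 m wave height
# Naive linear inversion of a nonlinear celerity overestimates depth:
assert abs(linear_depth(c) - 2.5) < 1e-9
assert linear_depth(c) > 2.0
```

This is why the observed (faster-than-linear) celerity must be corrected against a nonlinear theory before it can yield accurate bathymetry.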

  6. Analysis of the YouTube videos on basic life support and cardiopulmonary resuscitation.

    PubMed

    Tourinho, Francis Solange Vieira; de Medeiros, Kleyton Santos; Salvador, Pétala Tuani Candido De Oliveira; Castro, Grayce Loyse Tinoco; Santos, Viviane Euzébia Pereira

    2012-01-01

    To analyze videos on the YouTube video-sharing site, noting which points addressed in the videos relate to CPR and BLS, based on the 2010 Guidelines of the American Heart Association (AHA). This was an exploratory, quantitative and qualitative study performed on the YouTube sharing site, using as keywords the Portuguese expressions equivalent to the Medical Subject Headings (MeSH) terms "Cardiopulmonary Resuscitation" and "Basic Life Support", selecting videos that focused on basic life support. The search totaled 260 videos over the two queries. After applying the exclusion criteria, 61 videos remained. These are mostly posted by individuals and belong to the Education category. Moreover, most of the videos, despite being added to the site after the publication of the 2010 AHA Guidelines, still followed the older 2005 guidelines. Although the video-sharing site YouTube is widely used today, it lacks videos about CPR and BLS that comply with the most recent AHA recommendations, which may negatively influence the population that uses it.

  7. A Video Method to Study Drosophila Sleep

    PubMed Central

    Zimmerman, John E.; Raizen, David M.; Maycock, Matthew H.; Maislin, Greg; Pack, Allan I.

    2008-01-01

    Study Objectives: To use video to determine the accuracy of the infrared beam-splitting method for measuring sleep in Drosophila and to determine the effect of time of day, sex, genotype, and age on sleep measurements. Design: A digital image analysis method based on the frame-subtraction principle was developed to distinguish a quiescent from a moving fly. Data obtained using this method were compared with data obtained using the Drosophila Activity Monitoring System (DAMS). The location of the fly was identified based on its centroid location in the subtracted images. Measurements and Results: The error associated with the identification of total sleep using DAMS ranged from 7% to 95% and depended on genotype, sex, age, and time of day. The degree of the total sleep error was dependent on genotype during the daytime (P < 0.001) and was dependent on age during both the daytime and the nighttime (P < 0.001 for both). The DAMS method overestimated sleep bout duration during both the day and night, and the degree of these errors was genotype dependent (P < 0.001). Brief movements that occur during sleep bouts can be accurately identified using video. Both video and DAMS detected a homeostatic response to sleep deprivation. Conclusions: Video digital analysis is more accurate than DAMS in fly sleep measurements. In particular, conclusions drawn from DAMS measurements regarding daytime sleep and sleep architecture should be made with caution. Video analysis also permits the assessment of fly position and brief movements during sleep. Citation: Zimmerman JE; Raizen DM; Maycock MH; Maislin G; Pack AI. A video method to study drosophila sleep. SLEEP 2008;31(11):1587–1598. PMID:19014079
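The frame-subtraction approach can be sketched as follows: subtract consecutive frames, count changed pixels to classify each interval as quiescent or moving, then label sustained quiescence as sleep. The 5-minute criterion is the standard Drosophila sleep definition; the pixel and count thresholds here are illustrative assumptions, not the paper's calibrated values:

```python
def is_quiescent(prev, curr, pix_thresh=10, count_thresh=2):
    """Frame subtraction: the fly is quiescent between two frames if
    fewer than count_thresh pixels change by >= pix_thresh levels."""
    changed = sum(1 for p, c in zip(prev, curr) if abs(c - p) >= pix_thresh)
    return changed < count_thresh

def sleep_bouts(frames, fps=1, min_sleep_s=300):
    """Return durations (s) of quiescent runs lasting >= 5 min,
    the standard Drosophila sleep criterion."""
    quiet = [is_quiescent(a, b) for a, b in zip(frames, frames[1:])]
    bouts, run = [], 0
    for q in quiet + [False]:   # sentinel flushes the final run
        if q:
            run += 1
        else:
            if run / fps >= min_sleep_s:
                bouts.append(run / fps)
            run = 0
    return bouts

# 400 identical frames at 1 fps -> one long quiescent bout.
assert sleep_bouts([[0, 0]] * 400) == [399.0]
assert not is_quiescent([0, 0], [50, 50])
```

Unlike a single infrared beam, this per-pixel measure also registers brief movements and position, which is what gives video its accuracy advantage over DAMS.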

  8. Reference Model for Project Support Environments Version 1.0

    DTIC Science & Technology

    1993-02-28

    relationship with the framework’s Process Support services and with the Lifecycle Process Engineering services. Examples: * ORCA (Object-based... Design services. Examples: * ORCA (Object-based Requirements Capture and Analysis). * RETRAC (REquirements TRACeability). 4.3 Life-Cycle Process... "traditional" computer tools. Operations: Examples of audio and video processing operations include: * Create, modify, and delete sound and video data

  9. Evaluation of lens distortion errors using an underwater camera system for video-based motion analysis

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.

    1994-01-01

    Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. This data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero gravity through neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR, and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.
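The error computation described (distance from known grid coordinates to digitized coordinates, reported as a percentage of the field of view) can be sketched directly; the grid coordinates and field size below are made up for illustration:

```python
import math

def distortion_errors(known, measured):
    """Per-point Euclidean error between known grid coordinates and
    coordinates digitized from the video images."""
    return [math.hypot(kx - mx, ky - my)
            for (kx, ky), (mx, my) in zip(known, measured)]

def percent_error(known, measured, field_size):
    """Worst-case error as a percentage of the field of view."""
    return 100.0 * max(distortion_errors(known, measured)) / field_size

# Hypothetical grid: the corner point is displaced most by the lens.
known =    [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]
measured = [(0.0, 0.0), (10.0, 0.0), (10.6, 10.8)]
assert abs(percent_error(known, measured, 12.5) - 8.0) < 0.1
```

Restricting analysis to the central region of the image, where `distortion_errors` is smallest, is exactly the mitigation the study recommends.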

  10. Does improved decision-making ability reduce the physiological demands of game-based activities in field sport athletes?

    PubMed

    Gabbett, Tim J; Carius, Josh; Mulvey, Mike

    2008-11-01

    This study investigated the effects of video-based perceptual training on pattern recognition and pattern prediction ability in elite field sport athletes and determined whether enhanced perceptual skills influenced the physiological demands of game-based activities. Sixteen elite women soccer players (mean +/- SD age, 18.3 +/- 2.8 years) were allocated to either a video-based perceptual training group (N = 8) or a control group (N = 8). The video-based perceptual training group watched video footage of international women's soccer matches. Twelve training sessions, each 15 minutes in duration, were conducted during a 4-week period. Players performed assessments of speed (5-, 10-, and 20-m sprint), repeated-sprint ability (6 x 20-m sprints, with active recovery on a 15-second cycle), estimated maximal aerobic power (VO2max, multistage fitness test), and a game-specific video-based perceptual test of pattern recognition and pattern prediction before and after the 4 weeks of video-based perceptual training. The on-field assessments included time-motion analysis completed on all players during a standardized 45-minute small-sided training game, and assessments of passing, shooting, and dribbling decision-making ability. No significant changes were detected in speed, repeated-sprint ability, or estimated VO2max during the training period. However, video-based perceptual training improved decision accuracy and reduced the number of recall errors, indicating improved game awareness and decision-making ability. Importantly, the improvements in pattern recognition and prediction ability transferred to on-field improvements in passing, shooting, and dribbling decision-making skills. No differences were detected between groups for the time spent standing, walking, jogging, striding, and sprinting during the small-sided training game.
These findings demonstrate that video-based perceptual training can be used effectively to enhance the decision-making ability of field sport athletes; however, it has no effect on the physiological demands of game-based activities.

  11. 'By seeing with our own eyes, it can remain in our mind': qualitative evaluation findings suggest the ability of participatory video to reduce gender-based violence in conflict-affected settings.

    PubMed

    Gurman, Tilly A; Trappler, Regan M; Acosta, Angela; McCray, Pamella A; Cooper, Chelsea M; Goodsmith, Lauren

    2014-08-01

    Gender-based violence is pervasive and poses unique challenges in conflict-affected settings, with women and girls particularly vulnerable to its sequelae. Furthermore, widespread stigmatization of gender-based violence promotes silence among survivors and families, inhibiting access to services. Little evidence exists regarding effective gender-based violence prevention interventions in these settings. Through Our Eyes, a multi-year participatory video project, addressed gender-based violence by stimulating community dialogue and action in post-conflict settings in South Sudan, Uganda, Thailand, Liberia and Rwanda. The present qualitative analysis of project evaluation data included transcripts from 18 focus group discussions (n = 125) and key informant interviews (n = 76). Study participants included project team members, representatives from partner agencies, service providers and community members who either participated in video production or attended video screenings. Study findings revealed that the video project contributed to a growing awareness of women's rights and gender equality. The community dialogue helped to begin dismantling the culture of silence surrounding gender-based violence, encouraging survivors to access health and law enforcement services. Furthermore, both men and women reported attitudinal and behavioral changes related to topics such as wife beating, gender-based violence reporting and girls' education. Health education professionals should employ participatory video to address gender-based violence within conflict-affected settings. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  12. Short-term effects of prosocial video games on aggression: an event-related potential study

    PubMed Central

    Liu, Yanling; Teng, Zhaojun; Lan, Haiying; Zhang, Xin; Yao, Dezhong

    2015-01-01

    Previous research has shown that exposure to violent video games increases aggression, whereas exposure to prosocial video games can reduce aggressive behavior. However, little is known about the neural correlates of these behavioral effects. This work is the first to investigate the electrophysiological features of the relationship between playing a prosocial video game and inhibition of aggressive behavior. Forty-nine subjects played either a prosocial or a neutral video game for 20 min, then participated in an event-related potential (ERP) experiment based on an oddball paradigm and designed to test electrophysiological responses to prosocial and violent words. Finally, subjects completed a competitive reaction time task (CRTT), which is based on Taylor's Aggression Paradigm and uses reaction time and chosen noise intensity as measures of aggressive behavior. The results show that the prosocial video game group (compared to the neutral video game group) displayed smaller P300 amplitudes, were more accurate in distinguishing violent words, and were less aggressive as measured by the noise intensity chosen in the CRTT. A mediation analysis shows that the P300 amplitude evoked by violent words partially mediates the relationship between type of video game and subsequent aggressive behavior. The results support theories based on the General Learning Model. We provide converging behavioral and neural evidence that exposure to prosocial media may reduce aggression. PMID:26257620

  13. Short-term effects of prosocial video games on aggression: an event-related potential study.

    PubMed

    Liu, Yanling; Teng, Zhaojun; Lan, Haiying; Zhang, Xin; Yao, Dezhong

    2015-01-01

    Previous research has shown that exposure to violent video games increases aggression, whereas exposure to prosocial video games can reduce aggressive behavior. However, little is known about the neural correlates of these behavioral effects. This work is the first to investigate the electrophysiological features of the relationship between playing a prosocial video game and inhibition of aggressive behavior. Forty-nine subjects played either a prosocial or a neutral video game for 20 min, then participated in an event-related potential (ERP) experiment based on an oddball paradigm and designed to test electrophysiological responses to prosocial and violent words. Finally, subjects completed a competitive reaction time task (CRTT), which is based on Taylor's Aggression Paradigm and uses reaction time and chosen noise intensity as measures of aggressive behavior. The results show that the prosocial video game group (compared to the neutral video game group) displayed smaller P300 amplitudes, were more accurate in distinguishing violent words, and were less aggressive as measured by the noise intensity chosen in the CRTT. A mediation analysis shows that the P300 amplitude evoked by violent words partially mediates the relationship between type of video game and subsequent aggressive behavior. The results support theories based on the General Learning Model. We provide converging behavioral and neural evidence that exposure to prosocial media may reduce aggression.

  14. Automated Video Quality Assessment for Deep-Sea Video

    NASA Astrophysics Data System (ADS)

    Pirenne, B.; Hoeberechts, M.; Kalmbach, A.; Sadhu, T.; Branzan Albu, A.; Glotin, H.; Jeffries, M. A.; Bui, A. O. V.

    2015-12-01

    Video provides a rich source of data for geophysical analysis, often supplying detailed information about the environment when other instruments may not. This is especially true of deep-sea environments, where direct visual observations cannot otherwise be made. As computer vision techniques improve and volumes of video data increase, automated video analysis is emerging as a practical alternative to labor-intensive manual analysis. Automated techniques can be much more sensitive to video quality than their manual counterparts, so performing quality assessment before doing full analysis is critical to producing valid results. Ocean Networks Canada (ONC), an initiative of the University of Victoria, operates cabled ocean observatories that supply continuous power and Internet connectivity to a broad suite of subsea instruments from the coast to the deep sea, including video and still cameras. This network of ocean observatories has produced almost 20,000 hours of video (about 38 hours are recorded each day) and an additional 8,000 hours of logs from remotely operated vehicle (ROV) dives. We begin by surveying some ways in which deep-sea video poses challenges for automated analysis, including: 1. Non-uniform lighting: single, directional light sources produce uneven luminance distributions and shadows; remotely operated lighting equipment is also susceptible to technical failures. 2. Particulate noise: turbidity and marine snow are often present in underwater video; particles in the water column can have sharper focus and higher contrast than the objects of interest due to their proximity to the light source, and can also influence the camera's autofocus and auto white-balance routines. 3. Color distortion (low contrast): the rate of absorption of light in water varies by wavelength, and is higher overall than in air, altering apparent colors and lowering the contrast of objects at a distance. We also describe measures under development at ONC for detecting and mitigating these effects. These steps include filtering out unusable data, color and luminance balancing, and choosing the most appropriate image descriptors. We apply these techniques to generate automated quality assessment of video data and illustrate their utility with an example application in which we perform vision-based substrate classification.
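A first-pass usability filter of the kind described (dropping frames that are too dark or too flat before full analysis) might look like the following sketch. The thresholds and the choice of RMS contrast as the flatness measure are illustrative assumptions, not ONC's actual pipeline:

```python
def luminance_stats(frame):
    """Mean luminance and RMS contrast of one grayscale frame,
    given as a flat list of 0-255 pixel values."""
    n = len(frame)
    mean = sum(frame) / n
    rms = (sum((p - mean) ** 2 for p in frame) / n) ** 0.5
    return mean, rms

def usable(frame, min_mean=20, min_contrast=5):
    """Filter out frames too dark (lighting failure) or too flat
    (washed out / featureless) for automated analysis."""
    mean, rms = luminance_stats(frame)
    return mean >= min_mean and rms >= min_contrast

assert not usable([2, 3, 2, 3])        # too dark: lighting failure
assert not usable([120] * 4)           # uniform: no recoverable detail
assert usable([40, 80, 120, 160])      # acceptable mean and contrast
```

Running such a filter ahead of substrate classification avoids spending compute on, and drawing conclusions from, frames where the scene is not actually visible.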

  15. Medical Student and Tutor Perceptions of Video Versus Text in an Interactive Online Virtual Patient for Problem-Based Learning: A Pilot Study

    PubMed Central

    Ellaway, Rachel H; Round, Jonathan; Vaughan, Sophie; Poulton, Terry; Zary, Nabil

    2015-01-01

    Background The impact of the use of video resources in primarily paper-based problem-based learning (PBL) settings has been widely explored. Although it can provide many benefits, the use of video can also hamper the critical thinking of learners in contexts where learners are developing clinical reasoning. However, the use of video has not been explored in the context of interactive virtual patients for PBL. Objective A pilot study was conducted to explore how undergraduate medical students interpreted and evaluated information from video- and text-based materials presented in the context of a branched interactive online virtual patient designed for PBL. The goal was to inform the development and use of virtual patients for PBL and to inform future research in this area. Methods An existing virtual patient for PBL was adapted for use in video and provided as an intervention to students in the transition year of the undergraduate medicine course at St George’s, University of London. Survey instruments were used to capture student and PBL tutor experiences and perceptions of the intervention, and a formative review meeting was run with PBL tutors. Descriptive statistics were generated for the structured responses and a thematic analysis was used to identify emergent themes in the unstructured responses. Results Analysis of student responses (n=119) and tutor comments (n=18) yielded 8 distinct themes relating to the perceived educational efficacy of information presented in video and text formats in a PBL context. Although some students found some characteristics of the videos beneficial, when asked to express a preference for video or text the majority of those that responded to the question (65%, 65/100) expressed a preference for text. Student responses indicated that the use of video slowed the pace of PBL and impeded students’ ability to review and critically appraise the presented information. 
Conclusions Our findings suggest that text was perceived to be a better source of information than video in virtual patients for PBL. More specifically, the use of video was perceived as beneficial for providing details, visual information, and context where text was unable to do so. However, learner acceptance of text was higher in the context of PBL, particularly when targeting clinical reasoning skills. This pilot study has provided the foundation for further research into the effectiveness of different virtual patient designs for PBL. PMID:26088435

  16. Software for Real-Time Analysis of Subsonic Test Shot Accuracy

    DTIC Science & Technology

    2014-03-01

    used the C++ programming language, the Open Source Computer Vision (OpenCV®) software library, and Microsoft Windows® Application Programming... video for comparison through OpenCV image analysis tools. Based on the comparison, the software then computed the coordinates of each shot relative to... DWB researchers wanted to use the Open Source Computer Vision (OpenCV) software library for capturing and analyzing frames of video. OpenCV contains

  17. Development and application of traffic flow information collecting and analysis system based on multi-type video

    NASA Astrophysics Data System (ADS)

    Lu, Mujie; Shang, Wenjie; Ji, Xinkai; Hua, Mingzhuang; Cheng, Kuo

    2015-12-01

    Nowadays, the intelligent transportation system (ITS) has become the new direction of transportation development. Traffic data, as a fundamental part of intelligent transportation systems, has an increasingly crucial status. In recent years, video observation technology has been widely used in the field of traffic information collection. Traffic flow information contained in video data has many advantages: it is comprehensive and can be stored for a long time. However, there are still many problems, such as low precision and high cost, in the process of collecting this information. Aiming at these problems, this paper proposes a traffic target detection method with broad applicability. Based on three different ways of getting video data (aerial photography, fixed cameras, and handheld cameras), we develop intelligent analysis software that can extract the macroscopic and microscopic traffic flow information in the video, which can then be used for traffic analysis and transportation planning. For road intersections, the system uses the frame-difference method to extract traffic information; for freeway sections, it uses the optical flow method to track vehicles. The system was applied in Nanjing, Jiangsu province, and the application shows that the system extracts different types of traffic flow information with high accuracy; it can meet the needs of traffic engineering observations and has good application prospects.

  18. HealthTrust: A Social Network Approach for Retrieving Online Health Videos

    PubMed Central

    Karlsen, Randi; Melton, Genevieve B

    2012-01-01

    Background Social media are becoming mainstream in the health domain. Despite the large volume of accurate and trustworthy health information available on social media platforms, finding good-quality health information can be difficult. Misleading health information can often be popular (eg, antivaccination videos) and therefore highly rated by general search engines. We believe that community wisdom about the quality of health information can be harnessed to help create tools for retrieving good-quality social media content. Objectives To explore approaches for extracting metrics about authoritativeness in online health communities and how these metrics positively correlate with the quality of the content. Methods We designed a metric, called HealthTrust, that estimates the trustworthiness of social media content (eg, blog posts or videos) in a health community. The HealthTrust metric calculates reputation in an online health community based on link analysis. We used the metric to retrieve YouTube videos and channels about diabetes. In two different experiments, health consumers provided 427 ratings of 17 videos and professionals gave 162 ratings of 23 videos. In addition, two professionals reviewed 30 diabetes channels. Results HealthTrust may be used for retrieving online videos on diabetes, since it performed better than YouTube Search in most cases. Overall, of 20 potential channels, HealthTrust’s filtering allowed only 3 bad channels (15%) versus 8 (40%) on the YouTube list. Misleading and graphic videos (eg, featuring amputations) were more commonly found by YouTube Search than by searches based on HealthTrust. However, some videos from trusted sources had low HealthTrust scores, mostly from general health content providers, and therefore not highly connected in the diabetes community. 
When comparing video ratings from our reviewers, we found that HealthTrust achieved a positive and statistically significant correlation with professionals (Pearson r10 = .65, P = .02) and a trend toward significance with health consumers (r7 = .65, P = .06) for videos on hemoglobin A1c, but it did not perform as well with diabetic foot videos. Conclusions The trust-based metric HealthTrust showed promising results when used to retrieve diabetes content from YouTube. Our research indicates that social network analysis may be used to identify trustworthy social media in health communities. PMID:22356723
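The abstract says HealthTrust calculates reputation via link analysis but does not give its formula; one standard link-analysis reputation measure such a metric could build on is PageRank, sketched here over a toy channel graph (the graph, damping factor, and iteration count are all illustrative assumptions):

```python
def pagerank(links, damping=0.85, iters=50):
    """Simple PageRank over a {node: [outlinks]} graph: a node's
    reputation is the stationary share of a random surfer's visits."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, outs in links.items():
            if outs:
                share = damping * rank[n] / len(outs)
                for m in outs:
                    new[m] += share
            else:  # dangling node: spread its rank evenly
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
        rank = new
    return rank

# A channel endorsed (linked) by both peers outranks the others.
graph = {"a": ["c"], "b": ["c"], "c": []}
r = pagerank(graph)
assert r["c"] > r["a"] and r["c"] > r["b"]
```

The intuition matches the paper's finding: channels well connected within the diabetes community score highly, while isolated channels (even from trusted general-health sources) score low.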

  19. Toward automating Hammersmith pulled-to-sit examination of infants using feature point based video object tracking.

    PubMed

    Dogra, Debi P; Majumdar, Arun K; Sural, Shamik; Mukherjee, Jayanta; Mukherjee, Suchandra; Singh, Arun

    2012-01-01

    The Hammersmith Infant Neurological Examination (HINE) is a set of tests used for grading the neurological development of infants on a scale of 0 to 3. These tests help in assessing the neurophysiological development of babies, especially preterm infants born before the gestational age of 36 weeks. Such tests are often conducted in hospital follow-up clinics for grading infants with suspected disabilities. Assessment based on HINE depends on the expertise of the physicians conducting the examinations. It has been noted that some of these tests, especially pulled-to-sit and lateral tilting, are difficult to assess solely by visual observation. For example, during the pulled-to-sit examination, the examiner needs to observe the movement of the head relative to the torso while pulling the infant up by the wrists, and may find it difficult to follow the head movement from the coronal view. Automatic or semi-automatic analysis based on video object tracking can be helpful in this case. In this paper, we present a video-based method to automate the analysis of the pulled-to-sit examination. In this context, an efficient video object tracking algorithm based on dynamic programming and node pruning is proposed. Pulled-to-sit event detection is handled by the proposed tracking algorithm, which uses a 2-D geometric model of the scene. The algorithm has been tested with normal as well as marker-based videos of the examination recorded at the neuro-development clinic of the SSKM Hospital, Kolkata, India. It is found that the proposed algorithm estimates the pulled-to-sit score with a sensitivity of 80%-92% and a specificity of 89%-96%.
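The abstract names dynamic programming with node pruning but not its cost model; a minimal 1-D sketch of that idea is shown below: each frame yields several candidate detections, the tracker picks the smoothest path (minimum summed jump) through them, and candidates requiring implausibly large jumps are pruned. The cost function, 1-D positions, and pruning threshold are assumptions for illustration:

```python
def dp_track(candidates, max_jump=5):
    """Dynamic-programming tracker sketch over per-frame lists of
    candidate 1-D positions, with node pruning of jumps > max_jump."""
    INF = float("inf")
    cost = [0.0] * len(candidates[0])
    back = []
    for prev, curr in zip(candidates, candidates[1:]):
        new_cost, choices = [], []
        for c in curr:
            best, arg = INF, -1
            for i, p in enumerate(prev):
                jump = abs(c - p)
                if jump <= max_jump and cost[i] + jump < best:  # prune
                    best, arg = cost[i] + jump, i
            new_cost.append(best)
            choices.append(arg)
        cost = new_cost
        back.append(choices)
    # Trace back the cheapest path from the final frame.
    j = cost.index(min(cost))
    path = [j]
    for choices in reversed(back):
        j = choices[j]
        path.append(j)
    path.reverse()
    return [frame[i] for frame, i in zip(candidates, path)]

# The smooth track (0 -> 2 -> 4) wins; the 50-ish detections are
# pruned once they would require a jump larger than max_jump.
assert dp_track([[0, 50], [2, 49], [4, 60]]) == [0, 2, 4]
```

Pruning keeps the per-frame search small, which is what makes the dynamic program efficient enough for whole examination videos.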

  20. Action Spotting and Recognition Based on a Spatiotemporal Orientation Analysis.

    PubMed

    Derpanis, Konstantinos G; Sizintsev, Mikhail; Cannons, Kevin J; Wildes, Richard P

    2013-03-01

    This paper provides a unified framework for the interrelated topics of action spotting, the spatiotemporal detection and localization of human actions in video, and action recognition, the classification of a given video into one of several predefined categories. A novel compact local descriptor of video dynamics in the context of action spotting and recognition is introduced based on visual spacetime oriented energy measurements. This descriptor is efficiently computed directly from raw image intensity data and thereby forgoes the problems typically associated with flow-based features. Importantly, the descriptor allows for the comparison of the underlying dynamics of two spacetime video segments irrespective of spatial appearance, such as differences induced by clothing, and with robustness to clutter. An associated similarity measure is introduced that admits efficient exhaustive search for an action template, derived from a single exemplar video, across candidate video sequences. The general approach presented for action spotting and recognition is amenable to efficient implementation, which is deemed critical for many important applications. For action spotting, details of a real-time GPU-based instantiation of the proposed approach are provided. Empirical evaluation of both action spotting and action recognition on challenging datasets suggests the efficacy of the proposed approach, with state-of-the-art performance documented on standard datasets.

  1. Texture-adaptive hyperspectral video acquisition system with a spatial light modulator

    NASA Astrophysics Data System (ADS)

    Fang, Xiaojing; Feng, Jiao; Wang, Yongjin

    2014-10-01

    We present a new hybrid camera system based on a spatial light modulator (SLM) to capture texture-adaptive high-resolution hyperspectral video. The hybrid camera system records a hyperspectral video with low spatial resolution using a grayscale camera and a high-spatial-resolution video using an RGB camera. The hyperspectral video is subsampled by the SLM. The subsampled points can be adaptively selected according to the texture characteristics of the scene by combining digital image analysis and computational processing. In this paper, we propose an adaptive sampling method utilizing texture segmentation and the wavelet transform (WT). We also demonstrate the effectiveness of the sampling pattern on the SLM with the proposed method.
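The abstract does not specify the wavelet scheme; as an illustration of how a wavelet transform can drive texture-adaptive sampling, the sketch below applies a one-level Haar transform to an image row and samples densely only where the detail energy is high (even-length signals assumed; the threshold is an arbitrary choice):

```python
def haar_detail(signal):
    """One-level 1-D Haar wavelet transform of an even-length signal:
    returns (approximation, detail) coefficient lists; large |detail|
    marks textured (rapidly varying) regions."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def adaptive_mask(signal, thresh=5.0):
    """Hypothetical SLM sampling mask: True (sample densely) where
    Haar detail energy exceeds thresh, False (sample sparsely) elsewhere."""
    _, detail = haar_detail(signal)
    return [abs(d) >= thresh for d in detail]

row = [10, 10, 10, 10, 80, 10, 90, 20]   # flat left half, textured right
assert adaptive_mask(row) == [False, False, True, True]
```

Concentrating the SLM's limited sampling budget on the high-detail half of the row is the 1-D analogue of the texture-adaptive subsampling the paper proposes.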

  2. Quantitative Analysis of Color Differences within High Contrast, Low Power Reversible Electrophoretic Displays

    DOE PAGES

    Giera, Brian; Bukosky, Scott; Lee, Elaine; ...

    2018-01-23

    Here, quantitative color analysis is performed on videos of high-contrast, low-power reversible electrophoretic deposition (EPD)-based displays operated under different applied voltages. This analysis is coded in open-source software, relies on a color differentiation metric, ΔE*00, derived from digital video, and provides an intuitive relationship between the operating conditions of the devices and their performance. Time-dependent ΔE*00 color analysis reveals color relaxation behavior, recoverability for different voltage sequences, and operating conditions that can lead to optimal performance.
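The study's metric, CIEDE2000 (ΔE*00), has a lengthy formula; the older CIE76 difference ΔE*ab below conveys the same idea of a perceptual distance in L*a*b* color space (the Lab coordinates for the device states are made up for illustration):

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b* space.
    A simpler stand-in for the CIEDE2000 (dE*00) metric in the study."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical device states: dark (deposited) vs. voltage-switched bright.
dark, bright = (20.0, 5.0, -5.0), (70.0, 5.0, -5.0)
assert delta_e_ab(dark, bright) == 50.0   # large, easily visible change
assert delta_e_ab(dark, dark) == 0.0
```

Computing this difference per video frame against a reference state yields exactly the kind of time-dependent color curve the study uses to characterize relaxation and recoverability.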

  3. Quantitative Analysis of Color Differences within High Contrast, Low Power Reversible Electrophoretic Displays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giera, Brian; Bukosky, Scott; Lee, Elaine

    Here, quantitative color analysis is performed on videos of high-contrast, low-power reversible electrophoretic deposition (EPD)-based displays operated under different applied voltages. This analysis is coded in open-source software, relies on a color differentiation metric, ΔE*00, derived from digital video, and provides an intuitive relationship between the operating conditions of the devices and their performance. Time-dependent ΔE*00 color analysis reveals color relaxation behavior, recoverability for different voltage sequences, and operating conditions that can lead to optimal performance.

  4. Hardware accelerator design for tracking in smart camera

    NASA Astrophysics Data System (ADS)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil

    2011-10-01

    Smart cameras are important components in video analysis. For video analysis, smart cameras need to detect interesting moving objects, track such objects from frame to frame, and analyze object tracks in real time; real-time tracking is therefore prominent in smart cameras. A software implementation of the tracking algorithm on a general-purpose processor (such as a PowerPC) achieves a low frame rate, far from real-time requirements. This paper presents a SIMD-based hardware accelerator designed for real-time tracking of objects in a scene. The system is designed and simulated in VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA. The resulting frame rate is 30 frames per second for 250x200-resolution grayscale video.

  5. Modification of the Miyake-Apple technique for simultaneous anterior and posterior video imaging of wet laboratory-based corneal surgery.

    PubMed

    Tan, Johnson C H; Meadows, Howard; Gupta, Aanchal; Yeung, Sonia N; Moloney, Gregory

    2014-03-01

    The aim of this study was to describe a modification of the Miyake-Apple posterior video analysis for the simultaneous visualization of the anterior and posterior corneal surfaces during wet laboratory-based deep anterior lamellar keratoplasty (DALK). A human donor corneoscleral button was affixed to a microscope slide and placed onto a custom-made mounting box. A big bubble DALK was performed on the cornea in the wet laboratory. An 11-diopter intraocular lens was positioned over the aperture of the back camera of an iPhone. This served to video record the posterior view of the corneoscleral button during the big bubble formation. An overhead operating microscope with an attached video camcorder recorded the anterior view during the surgery. The anterior and posterior views of the wet laboratory-based DALK surgery were simultaneously captured and edited using video editing software. The formation of the big bubble can be studied. This video recording camera system has the potential to act as a valuable research and teaching tool in corneal lamellar surgery, especially in the behavior of the big bubble formation in DALK.

  6. Automatic generation of pictorial transcripts of video programs

    NASA Astrophysics Data System (ADS)

    Shahraray, Behzad; Gibbon, David C.

    1995-03-01

    An automatic authoring system for the generation of pictorial transcripts of video programs which are accompanied by closed caption information is presented. A number of key frames, each of which represents the visual information in a segment of the video (i.e., a scene), are selected automatically by performing a content-based sampling of the video program. The textual information is recovered from the closed caption signal and is initially segmented based on its implied temporal relationship with the video segments. The text segmentation boundaries are then adjusted, based on lexical analysis and/or caption control information, to account for synchronization errors due to possible delays in the detection of scene boundaries or the transmission of the caption information. The closed caption text is further refined through linguistic processing for conversion to lowercase with correct capitalization. The key frames and the related text generate a compact multimedia presentation of the contents of the video program which lends itself to efficient storage and transmission. This compact representation can be viewed on a computer screen, or used to generate the input to a commercial text processing package to generate a printed version of the program.
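    The content-based sampling step above can be sketched as a simple scene-change detector: a frame becomes a new key frame when it differs enough from the last key frame. The mean-absolute-difference measure and the threshold are illustrative assumptions, not the paper's actual sampling algorithm.

```python
import numpy as np

def select_key_frames(frames, threshold=10.0):
    """Content-based sampling: keep a frame as a key frame whenever its
    mean absolute difference from the last key frame exceeds a threshold."""
    key_indices = [0]                      # the first frame always starts a scene
    last = np.asarray(frames[0], dtype=float)
    for i, frame in enumerate(frames[1:], start=1):
        frame = np.asarray(frame, dtype=float)
        if np.abs(frame - last).mean() > threshold:
            key_indices.append(i)
            last = frame
    return key_indices
```

Pairing each selected key frame with the caption text of its segment yields the compact pictorial transcript described in the abstract.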

  7. An objective method for a video quality evaluation in a 3DTV service

    NASA Astrophysics Data System (ADS)

    Wilczewski, Grzegorz

    2015-09-01

    The following article describes a proposed objective method for 3DTV video quality evaluation, the Compressed Average Image Intensity (CAII) method. Identifying the nodes of the 3DTV service's content chain enables the design of a versatile, objective video quality metric based on an advanced approach to stereoscopic videostream analysis. The mechanisms behind the designed metric, as well as an evaluation of its performance under simulated environmental conditions, are discussed herein. As a result, the CAII metric might be effectively used in a variety of service quality assessment applications.
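    The abstract does not give the CAII formula, so the following is only one plausible reading of "average image intensity" applied to a stereoscopic pair: compute the mean intensity of each decoded frame and compare the two views per frame. All function names and the per-view comparison are assumptions for illustration, not the published metric.

```python
import numpy as np

def average_image_intensity(frames):
    """Mean intensity of each frame in a decoded video sequence."""
    return [float(np.asarray(f, dtype=float).mean()) for f in frames]

def caii_style_score(left_view, right_view):
    """Toy stereoscopic quality indicator: per-frame absolute difference of
    average intensity between the left- and right-eye streams."""
    la = average_image_intensity(left_view)
    ra = average_image_intensity(right_view)
    return [abs(l, ) if False else abs(l - r) for l, r in zip(la, ra)]
```

A well-matched stereo pair yields scores near zero; degradation of one view (e.g., heavier compression) pushes the per-frame score up.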

  8. A natural approach to convey numerical digits using hand activity recognition based on hand shape features

    NASA Astrophysics Data System (ADS)

    Chidananda, H.; Reddy, T. Hanumantha

    2017-06-01

    This paper presents a natural representation of numerical digits using hand activity analysis, based on the number of outstretched fingers for each digit in a sequence extracted from a video. The analysis determines a set of six features from a hand image; the most important features used from each frame are the first fingertip from the top, the palm line, the palm center, and the valley points between the fingers above the palm line. With this approach a user can convey any sequence of numerical digits, each ranging from 0 to 9, naturally in a video using the right hand, the left hand, or both. The hand(s) used to convey digits can be recognized accurately from the valley points, and from this recognition it can also be inferred whether the user is right- or left-handed in practice. In this work, the hand(s) and face are first detected using the YCbCr color space, and the face is removed using an ellipse-based method. The hand(s) are then analyzed to recognize the activity that represents a series of numerical digits in the video. The work uses a pixel-continuity algorithm based on 2D coordinate geometry and avoids the usual reliance on calculus, contours, convex hulls, and training datasets.
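    The finger-counting idea can be sketched with a binary hand mask: scan a row just above the palm line and count contiguous runs of hand pixels (fingers) separated by gaps (valley points). This is a simplified illustration of the valley-point idea under assumed inputs (a precomputed mask and palm-line row), not the paper's pixel-continuity algorithm.

```python
import numpy as np

def count_fingers(mask, palm_line_row, probe_offset=5):
    """Count extended fingers by scanning a row a few pixels above the
    palm line and counting contiguous runs of foreground (hand) pixels."""
    row = np.asarray(mask)[palm_line_row - probe_offset]
    runs, inside = 0, False
    for pixel in row:
        if pixel and not inside:   # entering a finger
            runs += 1
            inside = True
        elif not pixel:            # in a valley between fingers
            inside = False
    return runs
```

The run count directly gives the digit conveyed by one hand (with zero outstretched fingers mapping to 0, and both hands combined covering larger digits).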

  9. Identification of fidgety movements and prediction of CP by the use of computer-based video analysis is more accurate when based on two video recordings.

    PubMed

    Adde, Lars; Helbostad, Jorunn; Jensenius, Alexander R; Langaas, Mette; Støen, Ragnhild

    2013-08-01

    This study evaluates the role of postterm age at assessment and the use of one or two video recordings for the detection of fidgety movements (FMs) and prediction of cerebral palsy (CP) using computer vision software. Recordings between 9 and 17 weeks postterm age from 52 preterm and term infants (24 boys, 28 girls; 26 born preterm) were used. Recordings were analyzed using computer vision software. Movement variables, derived from differences between subsequent video frames, were used for quantitative analysis. Sensitivities, specificities, and areas under the curve were estimated for the first and second recording, or a mean of both. FMs were classified based on the Prechtl approach of general movement assessment. CP status was reported at 2 years. Nine children developed CP, and all of their recordings had absent FMs. The mean variability of the centroid of motion (CSD) from two recordings was more accurate than using only one recording, and identified all children who were diagnosed with CP at 2 years. Age at assessment did not influence the detection of FMs or prediction of CP. The accuracy of computer vision techniques in identifying FMs and predicting CP based on two recordings should be confirmed in future studies.
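    The movement variables above are derived from differences between subsequent frames; the centroid-of-motion variability can be sketched as follows. The difference threshold and the pooled-standard-deviation form of the variability are illustrative assumptions, not the exact definitions used by the study's software.

```python
import numpy as np

def motion_centroids(frames, threshold=10.0):
    """Centroid of the thresholded absolute difference between
    subsequent frames (the 'motion image')."""
    centroids = []
    prev = np.asarray(frames[0], dtype=float)
    for frame in frames[1:]:
        frame = np.asarray(frame, dtype=float)
        motion = np.abs(frame - prev) > threshold
        ys, xs = np.nonzero(motion)
        if len(ys):
            centroids.append((ys.mean(), xs.mean()))
        prev = frame
    return centroids

def centroid_variability(centroids):
    """Pooled standard deviation of the centroid of motion over time."""
    c = np.asarray(centroids, dtype=float)
    return float(np.sqrt(c.var(axis=0).sum()))
```

Low, localized centroid variability over a recording is the kind of quantity the study relates to absent fidgety movements.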

  10. Two-dimensional thermal video analysis of offshore bird and bat flight

    DOE PAGES

    Matzner, Shari; Cullinan, Valerie I.; Duberstein, Corey A.

    2015-09-11

    Thermal infrared video can provide essential information about bird and bat presence and activity for risk assessment studies, but the analysis of recorded video can be time-consuming and may not extract all of the available information. Automated processing makes continuous monitoring over extended periods of time feasible, and maximizes the information provided by video. This is especially important for collecting data in remote locations that are difficult for human observers to access, such as proposed offshore wind turbine sites. We present guidelines for selecting an appropriate thermal camera based on environmental conditions and the physical characteristics of the target animals. We developed new video image processing algorithms that automate the extraction of bird and bat flight tracks from thermal video, and that characterize the extracted tracks to support animal identification and behavior inference. The algorithms use a video peak store process followed by background masking and perceptual grouping to extract flight tracks. The extracted tracks are automatically quantified in terms that could then be used to infer animal type and possibly behavior. The developed automated processing generates results that are reproducible and verifiable, and reduces the total amount of video data that must be retained and reviewed by human experts. Finally, we suggest models for interpreting thermal imaging information.
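    The video peak store and background masking steps can be sketched directly: the peak store is a per-pixel maximum over frames, so a moving warm target leaves its whole flight track in one image, and subtracting a per-pixel background estimate isolates that track. The median background estimate and the threshold margin are illustrative assumptions.

```python
import numpy as np

def peak_store(frames):
    """Video peak store: per-pixel maximum over all frames, so a moving
    warm target leaves its entire flight track in a single image."""
    return np.max(np.stack([np.asarray(f, float) for f in frames]), axis=0)

def extract_track(frames, margin=5.0):
    """Mask the static background (per-pixel median over frames) out of
    the peak-store image and threshold what remains to recover the track."""
    stack = np.stack([np.asarray(f, float) for f in frames])
    background = np.median(stack, axis=0)
    return (peak_store(frames) - background) > margin
```

The binary track image is then what a perceptual-grouping stage would link into a single flight path for quantification.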

  11. Two-dimensional thermal video analysis of offshore bird and bat flight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzner, Shari; Cullinan, Valerie I.; Duberstein, Corey A.

    Thermal infrared video can provide essential information about bird and bat presence and activity for risk assessment studies, but the analysis of recorded video can be time-consuming and may not extract all of the available information. Automated processing makes continuous monitoring over extended periods of time feasible, and maximizes the information provided by video. This is especially important for collecting data in remote locations that are difficult for human observers to access, such as proposed offshore wind turbine sites. We present guidelines for selecting an appropriate thermal camera based on environmental conditions and the physical characteristics of the target animals. We developed new video image processing algorithms that automate the extraction of bird and bat flight tracks from thermal video, and that characterize the extracted tracks to support animal identification and behavior inference. The algorithms use a video peak store process followed by background masking and perceptual grouping to extract flight tracks. The extracted tracks are automatically quantified in terms that could then be used to infer animal type and possibly behavior. The developed automated processing generates results that are reproducible and verifiable, and reduces the total amount of video data that must be retained and reviewed by human experts. Finally, we suggest models for interpreting thermal imaging information.

  12. Use of Videos Improves Informed Consent Comprehension in Web-Based Surveys Among Internet-Using Men Who Have Sex With Men: A Randomized Controlled Trial.

    PubMed

    Hall, Eric William; Sanchez, Travis H; Stein, Aryeh D; Stephenson, Rob; Zlotorzynska, Maria; Sineath, Robert Craig; Sullivan, Patrick S

    2017-03-06

    Web-based surveys are increasingly used to capture data essential for human immunodeficiency virus (HIV) prevention research. However, there are challenges in ensuring the informed consent of Web-based research participants. The aim of our study was to develop and assess the efficacy of alternative methods of administering informed consent in Web-based HIV research with men who have sex with men (MSM). From July to September 2014, paid advertisements on Facebook were used to recruit adult MSM living in the United States for a Web-based survey about risk and preventive behaviors. Participants were randomized to one of the 4 methods of delivering informed consent: a professionally produced video, a study staff-produced video, a frequently asked questions (FAQs) text page, and a standard informed consent text page. Following the behavior survey, participants answered 15 questions about comprehension of consent information. Correct responses to each question were given a score of 1, for a total possible scale score of 15. General linear regression and post-hoc Tukey comparisons were used to assess difference (P<.001) in mean consent comprehension scores. A mediation analysis was used to examine the relationship between time spent on consent page and consent comprehension. Of the 665 MSM participants who completed the comprehension questions, 24.2% (161/665) received the standard consent, 27.1% (180/665) received the FAQ consent, 26.8% (178/665) received the professional consent video, and 22.0% (146/665) received the staff video. The overall average consent comprehension score was 6.28 (SD=2.89). The average consent comprehension score differed significantly across consent type (P<.001), age (P=.04), race or ethnicity (P<.001), and highest level of education (P=.001). 
Compared with those who received the standard consent, comprehension was significantly higher for participants who received the professional video consent (score increase=1.79; 95% CI 1.02-2.55) and participants who received the staff video consent (score increase=1.79; 95% CI 0.99-2.59). There was no significant difference in comprehension for those who received the FAQ consent. Participants spent more time on the 2 video consents (staff video median time=117 seconds; professional video median time=115 seconds) than the FAQ (median=21 seconds) and standard consents (median=37 seconds). Mediation analysis showed that though time spent on the consent page was partially responsible for some of the differences in comprehension, the direct effects of the professional video (score increase=0.93; 95% CI 0.39-1.48) and the staff-produced video (score increase=0.99; 95% CI 0.42-1.56) were still significant. Video-based consent methods improve consent comprehension of MSM participating in a Web-based HIV behavioral survey. This effect may be partially mediated through increased time spent reviewing the consent material; however, the video consent may still be superior to standard consent in improving participant comprehension of key study facts. Clinicaltrials.gov NCT02139566; https://clinicaltrials.gov/ct2/show/NCT02139566 (Archived by WebCite at http://www.webcitation.org/6oRnL261N). ©Eric William Hall, Travis H Sanchez, Aryeh D Stein, Rob Stephenson, Maria Zlotorzynska, Robert Craig Sineath, Patrick S Sullivan. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 06.03.2017.

  13. Use of Videos Improves Informed Consent Comprehension in Web-Based Surveys Among Internet-Using Men Who Have Sex With Men: A Randomized Controlled Trial

    PubMed Central

    Sanchez, Travis H; Stein, Aryeh D; Stephenson, Rob; Zlotorzynska, Maria; Sineath, Robert Craig; Sullivan, Patrick S

    2017-01-01

    Background Web-based surveys are increasingly used to capture data essential for human immunodeficiency virus (HIV) prevention research. However, there are challenges in ensuring the informed consent of Web-based research participants. Objective The aim of our study was to develop and assess the efficacy of alternative methods of administering informed consent in Web-based HIV research with men who have sex with men (MSM). Methods From July to September 2014, paid advertisements on Facebook were used to recruit adult MSM living in the United States for a Web-based survey about risk and preventive behaviors. Participants were randomized to one of the 4 methods of delivering informed consent: a professionally produced video, a study staff-produced video, a frequently asked questions (FAQs) text page, and a standard informed consent text page. Following the behavior survey, participants answered 15 questions about comprehension of consent information. Correct responses to each question were given a score of 1, for a total possible scale score of 15. General linear regression and post-hoc Tukey comparisons were used to assess difference (P<.001) in mean consent comprehension scores. A mediation analysis was used to examine the relationship between time spent on consent page and consent comprehension. Results Of the 665 MSM participants who completed the comprehension questions, 24.2% (161/665) received the standard consent, 27.1% (180/665) received the FAQ consent, 26.8% (178/665) received the professional consent video, and 22.0% (146/665) received the staff video. The overall average consent comprehension score was 6.28 (SD=2.89). The average consent comprehension score differed significantly across consent type (P<.001), age (P=.04), race or ethnicity (P<.001), and highest level of education (P=.001). 
Compared with those who received the standard consent, comprehension was significantly higher for participants who received the professional video consent (score increase=1.79; 95% CI 1.02-2.55) and participants who received the staff video consent (score increase=1.79; 95% CI 0.99-2.59). There was no significant difference in comprehension for those who received the FAQ consent. Participants spent more time on the 2 video consents (staff video median time=117 seconds; professional video median time=115 seconds) than the FAQ (median=21 seconds) and standard consents (median=37 seconds). Mediation analysis showed that though time spent on the consent page was partially responsible for some of the differences in comprehension, the direct effects of the professional video (score increase=0.93; 95% CI 0.39-1.48) and the staff-produced video (score increase=0.99; 95% CI 0.42-1.56) were still significant. Conclusions Video-based consent methods improve consent comprehension of MSM participating in a Web-based HIV behavioral survey. This effect may be partially mediated through increased time spent reviewing the consent material; however, the video consent may still be superior to standard consent in improving participant comprehension of key study facts. Trial Registration Clinicaltrials.gov NCT02139566; https://clinicaltrials.gov/ct2/show/NCT02139566 (Archived by WebCite at http://www.webcitation.org/6oRnL261N). PMID:28264794

  14. What Do Social Media Say About Makeovers? A Content Analysis of Cosmetic Surgery Videos and Viewers' Responses on YouTube.

    PubMed

    Wen, Nainan; Chia, Stella C; Hao, Xiaoming

    2015-01-01

    This study examines portrayals of cosmetic surgery on YouTube, where we found a substantial number of cosmetic surgery videos. Most of the videos came from cosmetic surgeons who appeared to be aggressively using social media in their practices. Except for videos that explained cosmetic surgery procedures, most videos in our sample emphasized the benefits of cosmetic surgery, and only a small number of the videos addressed the involved risks. We also found that tactics of persuasive communication, namely message source and message sensation value (MSV), have been used in Web-based social media to attract viewers' attention and interest. Expert sources were used predominantly, although typical-consumer sources tended to generate greater viewer interest in cosmetic surgery than other types of message sources. High MSV, moreover, was found to increase a video's popularity.

  15. A novel Kalman filter based video image processing scheme for two-photon fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Huang, Xia; Li, Chunqiang; Xiao, Chuan; Qian, Wei

    2016-03-01

    Two-photon fluorescence microscopy (TPFM) is an ideal optical imaging technique for monitoring the interaction between fast-moving viruses and their hosts. However, due to strong, unavoidable background noise from the culture, videos obtained with this technique are too noisy to reveal this fast infection process without video image processing. In this study, we developed a novel scheme to eliminate background noise, recover background bacteria images, and improve video quality. In our scheme, we modified and implemented the following methods for both host and virus videos: a correlation method, a round-identification method, tree-structured nonlinear filters, Kalman filters, and a cell-tracking method. After these procedures, most of the noise was eliminated, and host images were recovered with their moving directions and speeds highlighted in the videos. From analysis of the processed videos, 93% of bacteria and 98% of viruses were correctly detected in each frame on average.
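    The Kalman-filtering step in such a pipeline can be illustrated with a scalar Kalman filter run independently at every pixel, treating each frame as a noisy measurement of a slowly varying intensity. The process and measurement variances below are illustrative assumptions, not the paper's tuned parameters.

```python
import numpy as np

def kalman_denoise(frames, process_var=1e-3, noise_var=25.0):
    """Per-pixel scalar Kalman filter over a frame sequence: predict,
    compute the Kalman gain, then correct with the new measurement."""
    frames = [np.asarray(f, dtype=float) for f in frames]
    estimate = frames[0].copy()
    p = np.full_like(estimate, noise_var)     # estimate variance
    filtered = [estimate.copy()]
    for z in frames[1:]:
        p = p + process_var                   # predict step
        k = p / (p + noise_var)               # Kalman gain
        estimate = estimate + k * (z - estimate)   # correct with measurement z
        p = (1.0 - k) * p
        filtered.append(estimate.copy())
    return filtered
```

After a few frames the filtered sequence suppresses zero-mean background noise while still tracking slow intensity changes, which is the role the Kalman stage plays in the described scheme.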

  16. Digital image processing of bone - Problems and potentials

    NASA Technical Reports Server (NTRS)

    Morey, E. R.; Wronski, T. J.

    1980-01-01

    The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.
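    The perimeter and total-bone-area measurements above can be sketched with simple thresholding and boundary-pixel counting. The threshold value and the 4-neighbor definition of a boundary pixel are illustrative assumptions, not the actual algorithm of the PDP 11/34 system or mini-VICAR.

```python
import numpy as np

def bone_measurements(image, threshold):
    """Measure total area (pixel count) and perimeter (boundary pixel
    count) of regions brighter than a threshold."""
    mask = np.asarray(image) > threshold
    area = int(mask.sum())
    # A pixel is interior if all four 4-neighbors are also in the mask.
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((mask & ~interior).sum())
    return area, perimeter
```

Calibrating pixel counts against the microscope magnification converts these raw counts into physical areas and lengths.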

  17. Quantifying technical skills during open operations using video-based motion analysis.

    PubMed

    Glarner, Carly E; Hu, Yue-Yung; Chen, Chia-Hsiung; Radwin, Robert G; Zhao, Qianqian; Craven, Mark W; Wiegmann, Douglas A; Pugh, Carla M; Carty, Matthew J; Greenberg, Caprice C

    2014-09-01

    Objective quantification of technical operative skills in surgery remains poorly defined, although the delivery of and training in these skills is essential to the profession of surgery. Attempts to measure hand kinematics to quantify operative performance primarily have relied on electromagnetic sensors attached to the surgeon's hand or instrument. We sought to determine whether a similar motion analysis could be performed with a marker-less, video-based review, allowing for a scalable approach to performance evaluation. We recorded six reduction mammoplasty operations, a plastic surgery procedure in which the attending and resident surgeons operate in parallel. Segments representative of surgical tasks were identified with Multimedia Video Task Analysis software. Video digital processing was used to extract and analyze the spatiotemporal characteristics of hand movement. Attending plastic surgeons appear to use their nondominant hand more than residents when cutting with the scalpel, suggesting more use of countertraction. While suturing, attendings were more ambidextrous, with smaller differences in movement between their dominant and nondominant hands than residents. Attendings also seem to have more conservation of movement when performing instrument tying than residents, as demonstrated by less nondominant hand displacement. These observations were consistent within procedures and between the different attending plastic surgeons evaluated in this fashion. Video motion analysis can be used to provide objective measurement of technical skills without the need for sensors or markers. Such data could be valuable in better understanding the acquisition and degradation of operative skills, providing enhanced feedback to shorten the learning curve. Copyright © 2014 Mosby, Inc. All rights reserved.
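    Once hand positions have been extracted from video, the displacement and ambidexterity comparisons above reduce to simple kinematic summaries. These particular metrics (total path length, nondominant/dominant ratio) are plausible assumptions about how such spatiotemporal characteristics might be quantified, not the study's published measures.

```python
import numpy as np

def path_length(positions):
    """Total distance travelled by a tracked hand: the sum of
    frame-to-frame displacements."""
    p = np.asarray(positions, dtype=float)
    return float(np.linalg.norm(np.diff(p, axis=0), axis=1).sum())

def ambidexterity_ratio(dominant, nondominant):
    """Ratio of non-dominant to dominant hand movement; values near 1
    indicate a more ambidextrous technique."""
    return path_length(nondominant) / path_length(dominant)
```

Comparing such ratios between attendings and residents over matched task segments is the kind of analysis the abstract reports.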

  18. Using Videos Derived from Simulations to Support the Analysis of Spatial Awareness in Synthetic Vision Displays

    NASA Technical Reports Server (NTRS)

    Boton, Matthew L.; Bass, Ellen J.; Comstock, James R., Jr.

    2006-01-01

    The evaluation of human-centered systems can be performed using a variety of different methodologies. This paper describes a human-centered systems evaluation methodology where participants watch 5-second non-interactive videos of a system in operation before supplying judgments and subjective measures based on the information conveyed in the videos. This methodology was used to evaluate the ability of different textures and fields of view to convey spatial awareness in synthetic vision systems (SVS) displays. It produced significant results for both judgment-based and subjective measures. This method is compared to other methods commonly used to evaluate SVS displays based on cost, the amount of experimental time required, experimental flexibility, and the type of data provided.

  19. Analysis of Video-Based Training Approaches and Professional Development

    ERIC Educational Resources Information Center

    Leblanc, Serge

    2018-01-01

    The use of videos to analyze teaching practices or initial teacher training is aimed at helping build professional skills by establishing more explicit links between university education and internships and practical work in the schools. The purpose of this article is to familiarize the English-speaking community with French research via a study…

  20. Neurophysiologic Analysis of the Effects of Interactive Tailored Health Videos on Attention to Health Messages

    ERIC Educational Resources Information Center

    Lee, Jung A.

    2011-01-01

    Web-based tailored approaches hold much promise as effective means for delivering health education and improving public health. This study examines the effects of interactive tailored health videos on attention to health messages using neurophysiological changes measured by Electroencephalogram (EEG) and Electrocardiogram (EKG). Sixty-eight…

  1. Effects of viewing an evidence-based video decision aid on patients' treatment preferences for spine surgery.

    PubMed

    Lurie, Jon D; Spratt, Kevin F; Blood, Emily A; Tosteson, Tor D; Tosteson, Anna N A; Weinstein, James N

    2011-08-15

    Secondary analysis within a large clinical trial. To evaluate the changes in treatment preference before and after watching a video decision aid as part of an informed consent process. A randomized trial with a similar decision aid in herniated disc patients had shown a decreased rate of surgery in the video group, but the effect of the video on expressed preferences is not known. Subjects enrolling in the Spine Patient Outcomes Research Trial (SPORT) with intervertebral disc herniation, spinal stenosis, or degenerative spondylolisthesis at 13 multidisciplinary spine centers across the United States were given an evidence-based videotape decision aid viewed prior to enrollment as part of informed consent. Of the 2505 patients, 86% (n = 2151) watched the video and 14% (n = 354) did not. Watchers shifted their preference more often than nonwatchers (37.9% vs. 20.8%, P < 0.0001) and more often demonstrated a strengthened preference (26.2% vs. 11.1%, P < 0.0001). Among the 806 patients whose preference shifted after watching the video, 55% shifted toward surgery (P = 0.003). Among the 617 who started with no preference, after the video 27% preferred nonoperative care, 22% preferred surgery, and 51% remained uncertain. After watching the evidence-based patient decision aid (video) used in SPORT, patients with specific lumbar spine disorders formed and/or strengthened their treatment preferences in a balanced way that did not appear biased toward or away from surgery.

  2. Network-aware scalable video monitoring system for emergency situations with operator-managed fidelity control

    NASA Astrophysics Data System (ADS)

    Al Hadhrami, Tawfik; Nightingale, James M.; Wang, Qi; Grecos, Christos

    2014-05-01

    In emergency situations, the ability to remotely monitor unfolding events using high-quality video feeds will significantly improve the incident commander's understanding of the situation and thereby aid effective decision making. This paper presents a novel, adaptive video monitoring system for emergency situations where the normal communications network infrastructure has been severely impaired or is no longer operational. The proposed scheme, operating over a rapidly deployable wireless mesh network, supports real-time video feeds between first responders, forward operating bases and primary command and control centers. Video feeds captured on portable devices carried by first responders and by static visual sensors are encoded in H.264/SVC, the scalable extension to H.264/AVC, allowing efficient, standard-based temporal, spatial, and quality scalability of the video. A three-tier video delivery system is proposed, which balances the need to avoid overuse of mesh nodes with the operational requirements of the emergency management team. In the first tier, the video feeds are delivered at a low spatial and temporal resolution employing only the base layer of the H.264/SVC video stream. Routing in this mode is designed to employ all nodes across the entire mesh network. In the second tier, whenever operational considerations require that commanders or operators focus on a particular video feed, a 'fidelity control' mechanism at the monitoring station sends control messages to the routing and scheduling agents in the mesh network, which increase the quality of the received picture using SNR scalability while conserving bandwidth by maintaining a low frame rate. In this mode, routing decisions are based on reliable packet delivery with the most reliable routes being used to deliver the base and lower enhancement layers; as fidelity is increased and more scalable layers are transmitted they will be assigned to routes in descending order of reliability.
The third tier of video delivery transmits a high-quality video stream including all available scalable layers using the most reliable routes through the mesh network ensuring the highest possible video quality. The proposed scheme is implemented in a proven simulator, and the performance of the proposed system is numerically evaluated through extensive simulations. We further present an in-depth analysis of the proposed solutions and potential approaches towards supporting high-quality visual communications in such a demanding context.
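    The tiered layer-selection and reliability-ordered route-assignment logic described above can be sketched as follows. The layer labels, dictionary shapes, and function names are illustrative assumptions, not the paper's protocol messages or API.

```python
def layers_for_tier(tier, enhancement_layers):
    """Map the delivery tier to the H.264/SVC layers to transmit:
    tier 1 sends only the base layer, tier 2 adds SNR (quality) layers
    while keeping the frame rate low, tier 3 sends every available layer."""
    if tier == 1:
        return ["base"]
    if tier == 2:
        return ["base"] + [l for l in enhancement_layers if l.startswith("snr")]
    return ["base"] + list(enhancement_layers)

def assign_routes(layers, routes_by_reliability):
    """Assign layers to mesh routes in descending order of reliability,
    with the most reliable route carrying the base layer."""
    ordered = sorted(routes_by_reliability, key=routes_by_reliability.get,
                     reverse=True)
    return {layer: ordered[min(i, len(ordered) - 1)]
            for i, layer in enumerate(layers)}
```

Raising the tier therefore both widens the layer set and pushes the newly added layers onto progressively less reliable routes, matching the fidelity-control behavior the paper describes.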

  3. Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications

    NASA Astrophysics Data System (ADS)

    Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David

    2017-10-01

    The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors highly depends on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable to acquire illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for a HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications w.r.t. the display of images and w.r.t. image analysis techniques. Regarding the display we have investigated the image intensity statistics over time, and regarding image analysis we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.
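    The tone-mapping (TM) stage of such a framework can be illustrated with a simple global logarithmic curve, which echoes the sensor's logarithmic response. This is a minimal sketch, assuming a global operator; the paper's actual TM method is not specified in the abstract.

```python
import numpy as np

def log_tone_map(hdr, eps=1e-6):
    """Map a high-dynamic-range image to [0, 1] for display using a
    logarithmic curve; eps guards against log(0)."""
    hdr = np.asarray(hdr, dtype=float)
    mapped = np.log(hdr + eps) - np.log(hdr.min() + eps)
    return mapped / max(mapped.max(), eps)
```

Because the mapping is monotone, image-analysis steps such as feature matching see consistently ordered intensities across frames even when scene radiance spans several decades.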

  4. Packetized Video On MAGNET

    NASA Astrophysics Data System (ADS)

    Lazar, Aurel A.; White, John S.

    1987-07-01

    Theoretical analysis of an integrated local area network model of MAGNET, an integrated network testbed developed at Columbia University, shows that the bandwidth freed up during video and voice calls during periods of little movement in the images and periods of silence in the speech signals could be utilized efficiently for graphics and data transmission. Based on these investigations, an architecture supporting adaptive protocols that are dynamically controlled by the requirements of a fluctuating load and changing user environment has been advanced. To further analyze the behavior of the network, a real-time packetized video system has been implemented. This system is embedded in the real-time multimedia workstation EDDY, which integrates video, voice, and data traffic flows. Protocols supporting variable-bandwidth, fixed-quality packetized video transport are described in detail.

  5. Packetized video on MAGNET

    NASA Astrophysics Data System (ADS)

    Lazar, Aurel A.; White, John S.

    1986-11-01

    Theoretical analysis of an ILAN model of MAGNET, an integrated network testbed developed at Columbia University, shows that the bandwidth freed up by video and voice calls during periods of little movement in the images and silence periods in the speech signals could be utilized efficiently for graphics and data transmission. Based on these investigations, an architecture supporting adaptive protocols that are dynamically controlled by the requirements of a fluctuating load and changing user environment has been advanced. To further analyze the behavior of the network, a real-time packetized video system has been implemented. This system is embedded in the real-time multimedia workstation EDDY, which integrates video, voice, and data traffic flows. Protocols supporting variable-bandwidth, constant-quality packetized video transport are described in detail.

  6. Watermarking textures in video games

    NASA Astrophysics Data System (ADS)

    Liu, Huajian; Berchtold, Waldemar; Schäfer, Marcel; Lieb, Patrick; Steinebach, Martin

    2014-02-01

    Digital watermarking is a promising solution to video game piracy. In this paper, based on an analysis of the special challenges and requirements of watermarking textures in video games, a novel watermarking scheme for DDS textures in video games is proposed. To meet the performance requirements of video game applications, the proposed algorithm embeds the watermark message directly in the compressed stream in DDS files and can be straightforwardly applied in the watermark container technique for real-time embedding. Furthermore, the embedding approach achieves a high watermark payload, enough to handle collusion-secure fingerprinting codes of extreme length. Hence, the scheme is resistant to collusion attacks, which is indispensable in video game applications. The proposed scheme is evaluated in terms of transparency, robustness, security, and performance. In particular, in addition to classical objective evaluation, the visual quality and playing experience of watermarked games are assessed subjectively during game playing.

  7. Development and evaluation of an audiovisual information resource to promote self-management of chemotherapy side-effects.

    PubMed

    Carey, Mariko; Jefford, Michael; Schofield, Penelope; Kelly, Siobhan; Krishnasamy, Meinir; Aranda, Sanchia

    2006-04-01

    Based on a theoretical framework, we developed an audiovisual resource to promote self-management of eight common chemotherapy side-effects. A patient needs analysis identified the content domains; the best evidence on preparing patients for threatening medical procedures and a systematic review of effective self-care strategies informed the script content. Patients and health professionals were invited to complete a written evaluation of the video. A 25-min video was produced. Fifty health professionals and 37 patients completed the evaluation. All considered the video informative and easy to understand. The majority believed the video would reduce anxiety and help patients prepare for chemotherapy. Underpinned by a robust theoretical framework, we have developed an evidence-based resource that is perceived by both patients and health professionals as likely to enhance preparedness for chemotherapy.

  8. Pilot study on real-time motion detection in UAS video data by human observer and image exploitation algorithm

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Voit, Michael; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2017-05-01

    Real-time motion video analysis is a challenging and exhausting task for the human observer, particularly in safety and security critical domains. Hence, customized video analysis systems providing functions for the analysis of subtasks like motion detection or target tracking are welcome. While such automated algorithms relieve the human operators from performing basic subtasks, they impose additional interaction duties on them. Prior work shows that, e.g., for interaction with target tracking algorithms, a gaze-enhanced user interface is beneficial. In this contribution, we present an investigation on interaction with an independent motion detection (IDM) algorithm. Besides identifying an appropriate interaction technique for the user interface - again, we compare gaze-based and traditional mouse-based interaction - we focus on the benefit an IDM algorithm might provide for a UAS video analyst. In a pilot study, we exposed ten subjects to the task of moving target detection in UAS video data twice, once with automatic support and once without it. We compare the two conditions considering performance in terms of effectiveness (correct target selections). Additionally, we report perceived workload (measured using the NASA-TLX questionnaire) and user satisfaction (measured using the ISO 9241-411 questionnaire). The results show that a combination of gaze input and automated IDM algorithm provides valuable support for the human observer, increasing the number of correct target selections by up to 62% and reducing workload at the same time.

  9. Markerless video analysis for movement quantification in pediatric epilepsy monitoring.

    PubMed

    Lu, Haiping; Eng, How-Lung; Mandal, Bappaditya; Chan, Derrick W S; Ng, Yen-Ling

    2011-01-01

    This paper proposes a markerless video analytic system for quantifying body part movements in pediatric epilepsy monitoring. The system utilizes colored pajamas worn by a patient in bed to extract body part movement trajectories, from which various features can be obtained for seizure detection and analysis. Hence, it is non-intrusive and it requires no sensor/marker to be attached to the patient's body. It takes raw video sequences as input and a simple user-initialization indicates the body parts to be examined. In background/foreground modeling, Gaussian mixture models are employed in conjunction with HSV-based modeling. Body part detection follows a coarse-to-fine paradigm with graph-cut-based segmentation. Finally, body part parameters are estimated with domain knowledge guidance. Experimental studies are reported on sequences captured in an Epilepsy Monitoring Unit at a local hospital. The results demonstrate the feasibility of the proposed system in pediatric epilepsy monitoring and seizure detection.
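
    The background/foreground modelling step can be illustrated with a per-pixel running Gaussian model. The sketch below is a single-Gaussian simplification of the Gaussian mixture models used in the system; the learning rate, deviation threshold, and variance floor are illustrative assumptions:

```python
import numpy as np

class GaussianBackground:
    """Per-pixel running Gaussian background model: a single-Gaussian
    simplification of Gaussian-mixture background modelling."""

    def __init__(self, shape, lr=0.05, k=2.5, var_floor=4.0):
        self.mean = np.zeros(shape)
        self.var = np.full(shape, 225.0)
        self.lr, self.k, self.var_floor = lr, k, var_floor

    def apply(self, frame):
        frame = np.asarray(frame, dtype=float)
        d2 = (frame - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var      # pixel deviates > k sigma
        # blind update so the model adapts to slow scene changes
        self.mean += self.lr * (frame - self.mean)
        self.var = np.maximum(self.var + self.lr * (d2 - self.var),
                              self.var_floor)
        return fg

model = GaussianBackground((60, 80))
for _ in range(100):                            # learn a static background
    model.apply(np.full((60, 80), 100.0))
frame = np.full((60, 80), 100.0)
frame[20:30, 30:40] = 220.0                     # bright moving "body part"
mask = model.apply(frame)                       # True only on the bright patch
```

    The full system additionally models the pajama colors in HSV space and refines the mask with graph-cut segmentation; this sketch covers only the intensity-based foreground test.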

  10. The comparison and analysis of extracting video key frame

    NASA Astrophysics Data System (ADS)

    Ouyang, S. Z.; Zhong, L.; Luo, R. Q.

    2018-05-01

    Video key frame extraction is an important part of large-scale data processing. Building on previous work in key frame extraction, we summarize four important key frame extraction algorithms, which are largely based on comparing the difference between pairs of frames: if the difference exceeds a threshold value, the corresponding frames are taken as two different key frames. We then propose key frame extraction based on mutual information, introducing information entropy, selecting appropriate threshold values to form the initial classes, and finally taking frames whose mutual information is close to the mean as candidate key frames. In this paper, these algorithms are used to extract key frames from tunnel traffic videos. An analysis of the experimental results and a comparison of the pros and cons of the algorithms provide a sound basis for practical applications.
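
    The shared frame-differencing scheme can be sketched as follows; the grey-level histogram metric and the threshold value are illustrative assumptions, not taken from any of the four surveyed algorithms:

```python
import numpy as np

def key_frames_by_difference(frames, threshold=0.3, bins=16):
    """Select a new key frame whenever the histogram difference between
    a frame and the last key frame exceeds a threshold."""
    def hist(f):
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        return h / h.sum()

    keys = [0]                       # the first frame always starts a shot
    ref = hist(frames[0])
    for i, f in enumerate(frames[1:], start=1):
        h = hist(f)
        if 0.5 * np.abs(h - ref).sum() > threshold:   # total-variation distance
            keys.append(i)
            ref = h
    return keys

# Three nearly identical frames, then an abrupt content change
frames = [np.full((32, 32), 40), np.full((32, 32), 42),
          np.full((32, 32), 41), np.full((32, 32), 200)]
print(key_frames_by_difference(frames))   # -> [0, 3]
```

    The mutual-information variant replaces the histogram distance with the mutual information between consecutive frames, which is less sensitive to uniform illumination changes.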

  11. Astrometric and Photometric Analysis of the September 2008 ATV-1 Re-Entry Event

    NASA Technical Reports Server (NTRS)

    Mulrooney, Mark K.; Barker, Edwin S.; Maley, Paul D.; Beaulieu, Kevin R.; Stokely, Christopher L.

    2008-01-01

    NASA utilized image-intensified video cameras for ATV data acquisition from a jet flying at 12.8 km. Afterwards the video was digitized and then analyzed with a modified commercial software package, Image Systems Trackeye. Astrometric results were limited by saturation, plate scale, and the imposed linear plate solution based on field reference stars. Time-dependent fragment angular trajectories, velocities, accelerations, and luminosities were derived in each video segment. It was evident that individual fragments behave differently. Photometric accuracy was insufficient to confidently assess correlations between luminosity and fragment spatial behavior (velocity, deceleration). The use of high-resolution digital video cameras in the future should remedy this shortcoming.

  12. Tackling action-based video abstraction of animated movies for video browsing

    NASA Astrophysics Data System (ADS)

    Ionescu, Bogdan; Ott, Laurent; Lambert, Patrick; Coquin, Didier; Pacureanu, Alexandra; Buzuloiu, Vasile

    2010-07-01

    We address the issue of producing automatic video abstracts in the context of the video indexing of animated movies. For a quick browse of a movie's visual content, we propose a storyboard-like summary, which follows the movie's events by retaining one key frame for each specific scene. To capture the shot's visual activity, we use histograms of cumulative interframe distances, and the key frames are selected according to the distribution of the histogram's modes. For a preview of the movie's exciting action parts, we propose a trailer-like video highlight, whose aim is to show only the most interesting parts of the movie. Our method is based on a relatively standard approach, i.e., highlighting action through the analysis of the movie's rhythm and visual activity information. To suit every type of movie content, including predominantly static movies or movies without exciting parts, the concept of action depends on the movie's average rhythm. The efficiency of our approach is confirmed through several end-user studies.

  13. Study protocol for a framework analysis using video review to identify latent safety threats: trauma resuscitation using in situ simulation team training (TRUST)

    PubMed Central

    Petrosoniak, Andrew; Pinkney, Sonia; Hicks, Christopher; White, Kari; Almeida, Ana Paula Siquiera Silva; Campbell, Douglas; McGowan, Melissa; Gray, Alice; Trbovich, Patricia

    2016-01-01

    Introduction Errors in trauma resuscitation are common and have been attributed to breakdowns in the coordination of system elements (eg, tools/technology, physical environment and layout, individual skills/knowledge, team interaction). These breakdowns are triggered by unique circumstances and may go unrecognised by trauma team members or hospital administrators; they can be described as latent safety threats (LSTs). Retrospective approaches to identifying LSTs (ie, after they occur) are likely to be incomplete and prone to bias. To date, prospective studies have not used video review as the primary mechanism to identify any and all LSTs in trauma resuscitation. Methods and analysis A series of 12 unannounced in situ simulations (ISS) will be conducted to prospectively identify LSTs at a level 1 Canadian trauma centre (over 800 dedicated trauma team activations annually). Four scenarios have already been designed as part of this protocol, based on five recurring themes found in the hospital's mortality and morbidity process. The actual trauma team will be activated to participate in the study. Each simulation will be audio/video recorded from 4 different camera angles and transcribed to conduct a framework analysis. Video reviewers will code the videos deductively based on a priori themes of LSTs identified from the literature, and/or inductively based on the events occurring in the simulation. LSTs will be prioritised to target interventions in future work. Ethics and dissemination Institutional research ethics approval has been acquired (SMH REB #15-046). Results will be published in peer-reviewed journals and presented at relevant conferences. Findings will also be presented to key institutional stakeholders to inform mitigation strategies for improved patient safety. PMID:27821600

  14. Error analysis and algorithm implementation for an improved optical-electric tracking device based on MEMS

    NASA Astrophysics Data System (ADS)

    Sun, Hong; Wu, Qian-zhong

    2013-09-01

    To improve the precision of an optical-electric tracking device, we propose an improved device based on MEMS that addresses the tracking error and random drift of the gyroscope sensor. Following the principles of time-series analysis of random sequences, an AR model of the gyro random error is established and the gyro output signals are repeatedly filtered with a Kalman filter. An ARM microcontroller drives the servo motor under a fuzzy PID full-closed-loop control algorithm, with lead-correction and feed-forward links added to reduce the response lag to angle inputs: feed-forward makes the output follow the input closely, while the lead-compensation link shortens the response to input signals and thus reduces errors. A wireless video monitoring module and remote monitoring software (Visual Basic 6.0) display the servo motor state in real time: the module gathers video signals and sends them to the host computer, which shows the motor's running state in a Visual Basic 6.0 window. At the same time, a detailed analysis of the main error sources is given. The quantitative analysis of the errors from bandwidth and the gyro sensor makes the proportion of each error in the total error more intuitive and consequently reduces the error of the system. Simulation and experimental results show that the system has good tracking characteristics and is valuable for engineering applications.
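
    The AR-model-plus-Kalman-filter idea can be illustrated with a scalar sketch. The AR(1) order, coefficient, and noise variances below are illustrative assumptions; the paper fits the AR model from time-series analysis of the recorded gyro error:

```python
import numpy as np

def kalman_filter_ar1(z, phi=0.95, q=1e-4, r=1e-2):
    """Scalar Kalman filter with the gyro drift modelled as an AR(1)
    process x_k = phi * x_{k-1} + w_k, observed with noise variance r."""
    x, p = 0.0, 1.0
    out = np.empty(len(z))
    for k, zk in enumerate(z):
        x, p = phi * x, phi * phi * p + q           # predict
        g = p / (p + r)                             # Kalman gain
        x, p = x + g * (zk - x), (1.0 - g) * p      # update with measurement
        out[k] = x
    return out

# Simulate an AR(1) drift observed through measurement noise
rng = np.random.default_rng(0)
phi, q, r = 0.95, 1e-4, 1e-2
true = np.zeros(500)
for k in range(1, 500):
    true[k] = phi * true[k - 1] + np.sqrt(q) * rng.standard_normal()
noisy = true + np.sqrt(r) * rng.standard_normal(500)
est = kalman_filter_ar1(noisy, phi, q, r)   # markedly lower error than the raw signal
```

    With the model matched to the simulated drift, the filtered estimate has a much smaller mean-squared error than the raw measurements, which is the effect the multiple-filtering step in the abstract aims for.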

  15. Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review

    PubMed Central

    Lv, Zhuowen; Xing, Xianglei; Wang, Kejun; Guan, Donghai

    2015-01-01

    Gait is a unique perceptible biometric feature at larger distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. Class Energy Image is one of the most important appearance-based gait representation methods and has received considerable attention. In this paper, we reviewed the expressions and meanings of various Class Energy Image approaches and analyzed the information in the Class Energy Images. Furthermore, the effectiveness and robustness of these approaches were compared on the benchmark gait databases. We outlined the research challenges and provided promising future directions for the field. To the best of our knowledge, this is the first review that focuses on Class Energy Image. It can provide a useful reference in the literature of video sensor-based gait representation approaches. PMID:25574935

  16. Teaching Workflow Analysis and Lean Thinking via Simulation: A Formative Evaluation

    PubMed Central

    Campbell, Robert James; Gantt, Laura; Congdon, Tamara

    2009-01-01

    This article presents the rationale for the design and development of a video simulation used to teach lean thinking and workflow analysis to health services and health information management students enrolled in a course on the management of health information. The discussion includes a description of the design process, a brief history of the use of simulation in healthcare, and an explanation of how video simulation can be used to generate experiential learning environments. Based on the results of a survey given to 75 students as part of a formative evaluation, the video simulation was judged effective because it allowed students to visualize a real-world process (concrete experience), contemplate the scenes depicted in the video along with the concepts presented in class in a risk-free environment (reflection), develop hypotheses about why problems occurred in the workflow process (abstract conceptualization), and develop solutions to redesign a selected process (active experimentation). PMID:19412533

  17. Teaching and Learning against All Odds: A Video-Based Study of Learner-to-Instructor Interaction in International Distance Education

    ERIC Educational Resources Information Center

    Muhirwa, Jean-Marie

    2009-01-01

    Distance education and information and communication technologies (ICTs) have been marketed as cost-effective ways to rescue struggling educational institutions in developing countries, particularly in sub-Saharan Africa (SSA). This study uses classroom video analysis and follow-up interviews with teachers, students, and local tutors to analyse…

  18. Anthropology, Participation, and the Democratization of Knowledge: Participatory Research Using Video with Youth Living in Extreme Poverty

    ERIC Educational Resources Information Center

    Batallan, Graciela; Dente, Liliana; Ritta, Loreley

    2017-01-01

    This article aims to open up a debate on methodological aspects of ethnographic research, arguing for the legitimacy of the information produced in a research "taller" or workshop using a participatory methodology and video production as a methodological tool. Based on the theoretical foundations and analysis of a "taller"…

  19. The Effect of Video-Based Approach on Prospective Teachers' Ability to Analyze Mathematics Teaching

    ERIC Educational Resources Information Center

    Alsawaie, Othman N.; Alghazo, Iman M.

    2010-01-01

    This is an intervention study that explored the effect of using video lesson analysis methodology (VLAM) on the ability of prospective middle/high school mathematics teachers to analyze mathematics teaching. The sample of the study consisted of 26 female prospective mathematics teachers enrolled in a methods course at the United Arab Emirates…

  20. A Comparison of Laser and Video Techniques for Determining Displacement and Velocity during Running

    ERIC Educational Resources Information Center

    Harrison, Andrew J.; Jensen, Randall L.; Donoghue, Orna

    2005-01-01

    The reliability of a laser system was compared with the reliability of a video-based kinematic analysis in measuring displacement and velocity during running. The validity and reliability of the laser on static measures were also assessed at distances between 10 m and 70 m by evaluating the coefficient of variation and intraclass correlation…

  1. Promoting Collaborative Practice and Reciprocity in Initial Teacher Education: Realising a "Dialogic Space" through Video Capture Analysis

    ERIC Educational Resources Information Center

    Youens, Bernadette; Smethem, Lindsey; Sullivan, Stefanie

    2014-01-01

    This paper explores the potential of video capture to generate a collaborative space for teacher preparation; a space in which traditional hierarchies and boundaries between actors (student teacher, school mentor and university tutor) and knowledge (academic, professional and practical) are disrupted. The study, based in a teacher education…

  2. The Conversational Framework and the ISE "Basketball Shot" Video Analysis Activity

    ERIC Educational Resources Information Center

    English, Vincent; Crotty, Yvonne; Farren, Margaret

    2015-01-01

    Inspiring Science Education (ISE) (http://www.inspiringscience.eu/) is an EU funded initiative that seeks to further the use of inquiry-based science learning (IBSL) through the medium of ICT in the classroom. The Basketball Shot is a scenario (lesson plan) that involves the use of video capture to help the student investigate the concepts of…

  3. Leveraging Analysis of Students' Disciplinary Thinking in a Video Club to Promote Student-Centered Science Instruction

    ERIC Educational Resources Information Center

    Barnhart, Tara; van Es, Elizabeth

    2018-01-01

    Recent policy reports and standards documents advocate for science teachers to adopt more student-centered instructional practices. Four secondary science teachers from one school district participated in a semester-long video club focused on honing attention to students' evidence-based reasoning and creating opportunities to make students'…

  4. Close to real-time robust pedestrian detection and tracking

    NASA Astrophysics Data System (ADS)

    Lipetski, Y.; Loibner, G.; Sidla, O.

    2015-03-01

    Fully automated video-based pedestrian detection and tracking is a challenging task with many practical and important applications. We present our work aimed at robust and simultaneously close to real-time tracking of pedestrians. The presented approach is robust to occlusions and varying lighting conditions, and it generalizes to arbitrary video data. The core tracking approach is built upon the tracking-by-detection principle. We describe our cascaded HOG detector with successive CNN verification in detail. For the tracking and re-identification task, we did an extensive analysis of appearance-based features as well as their combinations. The tracker was tested on many hours of video data for different scenarios; the results are presented and discussed.
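
    The tracking-by-detection principle can be illustrated by its data-association step: each frame, existing tracks are matched to fresh detections. The greedy IoU matcher below is a generic sketch (the box coordinates and the IoU threshold are illustrative assumptions; the paper's tracker additionally uses appearance features and CNN verification):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def associate(tracks, detections, min_iou=0.3):
    """Greedily assign each track the best-overlapping unused detection."""
    matches, used = {}, set()
    for tid, box in tracks.items():
        best, best_j = min_iou, None
        for j, det in enumerate(detections):
            if j not in used and iou(box, det) > best:
                best, best_j = iou(box, det), j
        if best_j is not None:
            matches[tid] = best_j
            used.add(best_j)
    return matches

tracks = {1: (10, 10, 50, 90), 2: (100, 20, 140, 100)}
dets = [(102, 22, 141, 101), (12, 11, 52, 92)]
print(associate(tracks, dets))   # -> {1: 1, 2: 0}
```

    Unmatched tracks would then be coasted or dropped, and unmatched detections spawn new tracks; re-identification after long occlusions is where the appearance features come in.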

  5. [Video-based self-control in surgical teaching. A new tool in a new concept].

    PubMed

    Dahmen, U; Sänger, C; Wurst, C; Arlt, J; Wei, W; Dondorf, F; Richter, B; Settmacher, U; Dirsch, O

    2013-10-01

    Image and video-based results and process control are essential tools of a new teaching concept for conveying surgical skills. The new teaching concept integrates approved teaching principles and new media. Each exercise performance is videotaped and the result photographically recorded. The quality of the process and result thus becomes accessible for analysis by teacher and learner. The learner is instructed to perform a criteria-based self-analysis of the video and image material. The new learning concept has so far been successfully applied in seven rounds within the newly designed modular class "Intensivkurs Chirurgische Techniken" (intensive training of surgical techniques). Result documentation and analysis via digital images were completed by almost every student. The quality of the results was high. Interestingly, the result quality did not correlate with the time needed for the exercise. The training success had a lasting effect. The new and elaborate concept improves the quality of teaching. In the long run, resources for patient care should be saved when training students according to this concept prior to performing tasks in the operating theater. These resources should be allocated to further refining innovative teaching concepts.

  6. Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2016-05-01

    Motion video analysis is a challenging task, particularly if real-time analysis is required. It is therefore an important issue how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms as well as on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine both the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues in analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how to best relaunch in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.

  7. Participant satisfaction with appearance-based versus health-based educational videos promoting sunscreen use: a randomized controlled trial.

    PubMed

    Tuong, William; Armstrong, April W

    2015-02-16

    Increasing participant satisfaction with health interventions can improve compliance with recommended health behaviors and lead to better health outcomes. However, factors that influence participant satisfaction have not been well studied in dermatology-specific behavioral health interventions. We sought to assess participant satisfaction with either an appearance-based educational video or a health-based educational video promoting sunscreen use along the dimensions of usefulness of educational content, message appeal, and presentation quality. In a randomized controlled trial, participants were randomized 1:1 to view an appearance-based video or a health-based video. After six weeks, participant satisfaction with the educational videos was assessed. Fifty high school students were enrolled and completed the study. Participant satisfaction ratings were assessed using a pre-tested 10-point assessment scale. The participants rated the usefulness of the appearance-based video (8.1 ± 1.2) significantly higher than the health-based video (6.4 ± 1.4, p<0.001). The message appeal of the appearance-based video (8.3 ± 1.0) was also significantly higher than the health-based video (6.6 ± 1.6, p<0.001). The presentation quality rating was similar between the appearance-based video (7.8 ± 1.3) and the health-based video (8.1 ± 1.3), p=0.676. Adolescents rated the appearance-based video higher than the health-based video in terms of usefulness of educational content and message appeal.

  8. Integration of Video-Based Demonstrations to Prepare Students for the Organic Chemistry Laboratory

    NASA Astrophysics Data System (ADS)

    Nadelson, Louis S.; Scaggs, Jonathan; Sheffield, Colin; McDougal, Owen M.

    2015-08-01

    Consistent, high-quality introductions to organic chemistry laboratory techniques effectively and efficiently support student learning in the organic chemistry laboratory. In this work, we developed and deployed a series of instructional videos to communicate core laboratory techniques and concepts. Using a quasi-experimental design, we tested the videos in five traditional laboratory experiments by integrating them with the standard pre-laboratory student preparation presentations and instructor demonstrations. We assessed the influence of the videos on student laboratory knowledge and performance, using sections of students who did not view the videos as the control. Our analysis of pre-quizzes revealed the control group had equivalent scores to the treatment group, while the post-quiz results show consistently greater learning gains for the treatment group. Additionally, the students who watched the videos as part of their pre-laboratory instruction completed their experiments in less time.

  9. Estimating contact rates at a mass gathering by using video analysis: a proof-of-concept project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rainey, Jeanette J.; Cheriyadat, Anil; Radke, Richard J.

    Current approaches for estimating social mixing patterns and infectious disease transmission at mass gatherings have been limited by various constraints, including low participation rates for volunteer-based research projects and challenges in quantifying spatially and temporally accurate person-to-person interactions. We developed a proof-of-concept project to assess the use of automated video analysis for estimating contact rates of attendees of the GameFest 2013 event at Rensselaer Polytechnic Institute (RPI) in Troy, New York. Video tracking and analysis algorithms were used to estimate the number and duration of contacts for 5 attendees during a 3-minute clip from the RPI video. Attendees were considered to have a contact event if the distance between them and another person was ≤1 meter. Contact duration was estimated in seconds. We also simulated 50 attendees assuming random mixing using a geospatially accurate representation of the same GameFest location. The 5 attendees had an overall median of 2 contact events during the 3-minute video clip (range: 0–6). Contact events varied from less than 5 seconds to the full duration of the 3-minute clip. The random mixing simulation was visualized and presented as a contrasting example. We were able to estimate the number and duration of contacts for five GameFest attendees from a 3-minute video clip that can be compared to a random mixing simulation model at the same location. In conclusion, the next phase will involve scaling the system for simultaneous analysis of mixing patterns from hours-long videos and comparing our results with other approaches for collecting contact data from mass gathering attendees.
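
    The contact-event definition above (distance ≤1 meter, duration in seconds) lends itself to a short sketch once per-frame positions are available from the video tracker. The trajectory layout and the one-sample-per-second rate below are illustrative assumptions:

```python
import numpy as np

def contact_stats(tracks, focal=0, max_dist=1.0):
    """Contact events for one focal attendee from per-frame positions.

    tracks: array of shape (n_people, n_frames, 2) with positions in
    metres, assumed sampled once per second. A contact event is a
    maximal run of frames in which another person is within max_dist;
    its duration is the run length in seconds.
    """
    durations = []
    for p in range(tracks.shape[0]):
        if p == focal:
            continue
        d = np.linalg.norm(tracks[p] - tracks[focal], axis=1)
        close = np.concatenate(([0], (d <= max_dist).astype(int), [0]))
        edges = np.diff(close)                 # +1 where a run starts, -1 where it ends
        starts = np.flatnonzero(edges == 1)
        ends = np.flatnonzero(edges == -1)
        durations.extend(int(e - s) for s, e in zip(starts, ends))
    return len(durations), durations

# Toy example: the focal person stays at the origin; person 1 passes
# within 1 m for 3 seconds; person 2 never comes close.
tracks = np.zeros((3, 10, 2))
tracks[1, :, 0] = 5.0
tracks[1, 3:6, 0] = 0.5
tracks[2, :, 1] = 10.0
print(contact_stats(tracks))   # -> (1, [3])
```

    Running the same statistic over simulated random-mixing trajectories gives the contrasting baseline described in the abstract.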

  10. Estimating contact rates at a mass gathering by using video analysis: a proof-of-concept project

    DOE PAGES

    Rainey, Jeanette J.; Cheriyadat, Anil; Radke, Richard J.; ...

    2014-10-24

    Current approaches for estimating social mixing patterns and infectious disease transmission at mass gatherings have been limited by various constraints, including low participation rates for volunteer-based research projects and challenges in quantifying spatially and temporally accurate person-to-person interactions. We developed a proof-of-concept project to assess the use of automated video analysis for estimating contact rates of attendees of the GameFest 2013 event at Rensselaer Polytechnic Institute (RPI) in Troy, New York. Video tracking and analysis algorithms were used to estimate the number and duration of contacts for 5 attendees during a 3-minute clip from the RPI video. Attendees were considered to have a contact event if the distance between them and another person was ≤1 meter. Contact duration was estimated in seconds. We also simulated 50 attendees assuming random mixing using a geospatially accurate representation of the same GameFest location. The 5 attendees had an overall median of 2 contact events during the 3-minute video clip (range: 0–6). Contact events varied from less than 5 seconds to the full duration of the 3-minute clip. The random mixing simulation was visualized and presented as a contrasting example. We were able to estimate the number and duration of contacts for five GameFest attendees from a 3-minute video clip that can be compared to a random mixing simulation model at the same location. In conclusion, the next phase will involve scaling the system for simultaneous analysis of mixing patterns from hours-long videos and comparing our results with other approaches for collecting contact data from mass gathering attendees.

  11. Effects of Viewing an Evidence-Based Video Decision Aid on Patients’ Treatment Preferences for Spine Surgery

    PubMed Central

    Lurie, Jon D.; Spratt, Kevin F.; Blood, Emily A.; Tosteson, Tor D.; Tosteson, Anna N. A.; Weinstein, James N.

    2011-01-01

    Study Design: Secondary analysis within a large clinical trial. Objective: To evaluate the changes in treatment preference before and after watching a video decision aid as part of an informed consent process. Summary of Background Data: A randomized trial with a similar decision aid in herniated disc patients had shown a decreased rate of surgery in the video group, but the effect of the video on expressed preferences is not known. Methods: Subjects enrolling in the Spine Patient Outcomes Research Trial (SPORT) with intervertebral disc herniation (IDH), spinal stenosis (SPS), or degenerative spondylolisthesis (DS) at thirteen multidisciplinary spine centers across the US were given an evidence-based videotape decision aid, viewed prior to enrollment as part of informed consent. Results: Of the 2505 patients, 86% (n=2151) watched the video and 14% (n=354) did not. Watchers shifted their preference more often than non-watchers (37.9% vs. 20.8%, p < 0.0001) and more often demonstrated a strengthened preference (26.2% vs. 11.1%, p < 0.0001). Among the 806 patients whose preference shifted after watching the video, 55% shifted toward surgery (p=0.003). Among the 617 who started with no preference, after the video 27% preferred non-operative care, 22% preferred surgery, and 51% remained uncertain. Conclusion: After watching the evidence-based patient decision aid (video) used in SPORT, patients with specific lumbar spine disorders formed and/or strengthened their treatment preferences in a balanced way that did not appear biased toward or away from surgery. PMID:21358485

  12. A Participative Tool for Sharing, Annotating and Archiving Submarine Video Data

    NASA Astrophysics Data System (ADS)

    Marcon, Y.; Kottmann, R.; Ratmeyer, V.; Boetius, A.

    2016-02-01

    Oceans cover more than 70 percent of the Earth's surface and are known to play an essential role in all of the Earth's systems and cycles. However, less than 5 percent of the ocean bottom has been explored and many aspects of the deep-sea world remain poorly understood. Increasing our ocean literacy is a necessity in order for specialists and non-specialists to better grasp the roles of the ocean in the Earth system, its resources, and the impact of human activities on it. Due to technological advances, deep-sea research produces ever-increasing amounts of scientific video data. However, using such data for science communication and public outreach purposes remains difficult as tools for accessing/sharing such scientific data are often lacking. Indeed, there is no common solution for the management and analysis of marine video data, which are often scattered across multiple research institutes or working groups, and it is difficult to get an overview of the whereabouts of those data. The VIDLIB Deep-Sea Video Platform is a web-based tool for sharing/annotating time-coded deep-sea video data. VIDLIB provides a participatory way to share and analyze video data. Scientists can share expert knowledge for video analysis without the need to upload/download large video files. Also, VIDLIB offers streaming capabilities and has potential for participatory science and science communication in that non-specialists can ask questions on what they see and get answers from scientists. Such a tool is highly valuable in terms of scientific public outreach and popular science. Video data are by far the most efficient way to communicate scientific findings to a non-expert public. VIDLIB is being used for studying the impact of deep-sea mining on benthic communities as well as for exploration in polar regions. We will present the structure and workflow of VIDLIB as well as an example of video analysis.
VIDLIB (http://vidlib.marum.de) is funded by the EU EUROFLEET project and the Helmholtz Alliance ROBEX.

  13. Reflective dialogue in clinical supervision: A pilot study involving collaborative review of supervision videos.

    PubMed

    Hill, Hamish R M; Crowe, Trevor P; Gonsalvez, Craig J

    2016-01-01

    To pilot an intervention involving reflective dialogue based on video recordings of clinical supervision. Fourteen participants (seven psychotherapists and their supervisors) completed a reflective practice protocol after viewing a video of their most recent supervision session, then shared their reflections in a second session. Thematic analysis of individual reflections and feedback resulted in the following dominant themes: (1) Increased discussion of supervisee anxiety and the tensions between autonomy and dependence; (2) intentions to alter supervisory roles and practice; (3) identification of and reflection on parallel process (defined as the dynamic transmission of relationship patterns between therapy and supervision); and (4) a range of perceived impacts including improvements in supervisory alliance. The results suggest that reflective dialogue based on supervision videos can play a useful role in psychotherapy supervision, including with relatively inexperienced supervisees. Suggestions are provided for the encouragement of ongoing reflective dialogue in routine supervision practice.

  14. Moderating factors of video-modeling with other as model: a meta-analysis of single-case studies.

    PubMed

    Mason, Rose A; Ganz, Jennifer B; Parker, Richard I; Burke, Mack D; Camargo, Siglia P

    2012-01-01

    Video modeling with other as model (VMO) is a more practical method for implementing video-based modeling techniques, such as video self-modeling, which requires significantly more editing. Despite this, identification of contextual factors such as participant characteristics and targeted outcomes that moderate the effectiveness of VMO has not previously been explored. The purpose of this study was to meta-analytically evaluate the evidence base of VMO with individuals with disabilities to determine if participant characteristics and targeted outcomes moderate the effectiveness of the intervention. Findings indicate that VMO is highly effective for participants with autism spectrum disorder (IRD=.83) and moderately effective for participants with developmental disabilities (IRD=.68). However, differential effects are indicated across levels of moderators for diagnoses and targeted outcomes. Implications for practice and future research are discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. A generic flexible and robust approach for intelligent real-time video-surveillance systems

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit

    2004-05-01

    In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to provide help to operators by detecting events of interest in visual scenes, highlighting alarms, and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE1394) and which can basically compress (MPEG4), store and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The design of the architecture is optimised to play back, display, and process video flows in an efficient way for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and IP network. It relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology and under control of a TCP-based command network (e.g. for bandwidth occupation control). We report here some results and we show the potential use of such a flexible system in third-generation video surveillance systems. We illustrate the interest of the system in a real case study of indoor surveillance.

  16. “It’s Totally Okay to Be Sad, but Never Lose Hope”: Content Analysis of Infertility-Related Videos on YouTube in Relation to Viewer Preferences

    PubMed Central

    Kelly-Hedrick, Margot; Grunberg, Paul H; Brochu, Felicia

    2018-01-01

    Background Infertility patients frequently use the internet to find fertility-related information and support from people in similar circumstances. YouTube is increasingly used as a source of health-related information and may influence health decision making. There have been no studies examining the content of infertility-related videos on YouTube. Objective The purpose of this study was to (1) describe the content of highly viewed videos on YouTube related to infertility and (2) identify video characteristics that relate to viewer preference. Methods Using the search term “infertility,” the 80 top-viewed YouTube videos and their viewing statistics (eg, views, likes, and comments) were collected. Videos that were non-English, unrelated to infertility, or had age restrictions were excluded. Content analysis was used to examine videos, employing a coding rubric that measured the presence or absence of video codes related to purpose, tone, and demographic and fertility characteristics (eg, sex, parity, stage of fertility treatment). Results A total of 59 videos, with a median of 156,103 views, met the inclusion criteria and were categorized into 35 personal videos (35/59, 59%) and 24 informational-educational videos (24/59, 41%). Personal videos did not differ significantly from informational-educational videos on number of views, dislikes, subscriptions driven, or shares. However, personal videos had significantly more likes (P<.001) and comments (P<.001) than informational-educational videos. The purposes of the videos were treatment outcomes (33/59, 56%), sharing information (30/59, 51%), emotional aspects of infertility (20/59, 34%), and advice to others (6/59, 10%). The tones of the videos were positive (26/59, 44%), neutral (25/59, 42%), and mixed (8/59, 14%); there were no videos with negative tone. No videos contained only male posters. 
Videos with a positive tone did not differ from neutral videos in number of views, dislikes, subscriptions driven, or shares; however, positive videos had significantly more likes (P<.001) and comments (P<.001) than neutral videos. A majority (21/35, 60%) of posters of personal videos shared a pregnancy announcement. Conclusions YouTube is a source of both technical and personal experience-based information about infertility. However, videos that include personal experiences may elicit greater viewer engagement. Positive videos and stories of treatment success may provide hope to viewers but could also create and perpetuate unrealistic expectations about the success rates of fertility treatment. PMID:29792296
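    The presence/absence coding rubric described in the Methods can be tallied in a few lines; this sketch (with hypothetical code names) shows how per-code counts and proportions such as 35/59 are derived from coded videos.

```python
from collections import Counter

def code_frequencies(coded_videos):
    """coded_videos: list of sets of rubric codes applied to each video
    (e.g. {'personal', 'positive_tone', 'pregnancy_announcement'}).
    Returns {code: (count, proportion)} across the sample."""
    n = len(coded_videos)
    counts = Counter(code for codes in coded_videos for code in codes)
    return {code: (k, k / n) for code, k in counts.items()}
```

For example, a sample in which 35 of 59 videos carry the `personal` code would yield `(35, 0.59...)` for that code.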

  17. Use of video to facilitate sideline concussion diagnosis and management decision-making.

    PubMed

    Davis, Gavin; Makdissi, Michael

    2016-11-01

    Video analysis can provide critical information to improve diagnostic accuracy and speed of clinical decision-making in potential cases of concussion. The objective of this study was to validate a hierarchical flowchart for the assessment of video signs of concussion, and to determine whether its implementation could improve the process of game day video assessment. Prospective cohort study. All impacts and collisions potentially resulting in a concussion were identified during 2012 and 2013 Australian Football League (AFL) seasons. Consensus definitions were developed for clinical signs associated with concussion. A hierarchical flowchart was developed based on the reliability and validity of the video signs of concussion. Ninety videos were assessed, with 45 incidents of clinically confirmed concussion, and 45 cases where no concussion was sustained. Each video was examined using the hierarchical flowchart, and a single response was given for each video based on the highest-ranking element in the flowchart. No protective action, impact seizure, motor incoordination or blank/vacant look were the highest ranked video signs in almost half of the clinically confirmed concussions, but in only 8.8% of non-concussed individuals. The presence of facial injury, clutching at the head and slow to get up were the highest ranked sign in 77.7% of non-concussed individuals. This study suggests that the implementation of a flowchart model could improve timely assessment of concussion, and it identifies the video signs that should trigger automatic removal from play. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
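    The flowchart logic, assigning each video a single response based on the highest-ranking sign present, can be sketched as a simple priority lookup. The relative ordering of the signs below is an illustrative assumption; the abstract names the signs but not their exact ranks.

```python
# Ranked from highest to lowest priority; the ordering within each
# group is assumed for illustration, following the signs named above.
SIGN_HIERARCHY = [
    "impact seizure",
    "motor incoordination",
    "no protective action",
    "blank/vacant look",
    "facial injury",
    "clutching at the head",
    "slow to get up",
]

def highest_ranked_sign(observed_signs):
    """Return the single highest-ranking sign present in a video,
    or None if no listed sign was observed."""
    observed = set(observed_signs)
    for sign in SIGN_HIERARCHY:
        if sign in observed:
            return sign
    return None
```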

  18. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    PubMed

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relative static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with only a slight increase in encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
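    The block classification and background-difference transform described above can be sketched on flattened pixel blocks. The thresholds and the exact classification rule are illustrative assumptions, not the paper's parameters.

```python
def classify_block(block, background, fg_threshold=20, hybrid_frac=0.5):
    """Label a block (flat list of pixel values) as 'background',
    'hybrid', or 'foreground' by comparing each pixel against the
    modeled background (thresholds are illustrative)."""
    fg = sum(1 for p, b in zip(block, background) if abs(p - b) > fg_threshold)
    frac = fg / len(block)
    if frac == 0:
        return "background"
    return "hybrid" if frac < hybrid_frac else "foreground"

def background_difference(block, background):
    """BDP-style transform: move the block into the background-
    difference domain by subtracting the modeled background pixels."""
    return [p - b for p, b in zip(block, background)]
```

A background block would then be predicted from the modeled background itself (BRP), while a hybrid block is predicted in the difference domain (BDP).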

  19. Extraction and analysis of neuron firing signals from deep cortical video microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerekes, Ryan A; Blundon, Jay

    We introduce a method for extracting and analyzing neuronal activity time signals from video of the cortex of a live animal. The signals correspond to the firing activity of individual cortical neurons. Activity signals are based on the changing fluorescence of calcium indicators in the cells over time. We propose a cell segmentation method that relies on a user-specified center point, from which the signal extraction method proceeds. A stabilization approach is used to reduce tissue motion in the video. The extracted signal is then processed to flatten the baseline and detect action potentials. We show results from applying the method to a cortical video of a live mouse.
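    The baseline-flattening and event-detection steps can be sketched on a 1-D fluorescence trace. A running median as the baseline estimate and a fixed threshold are illustrative choices; the paper's actual filters are not specified in this abstract.

```python
def moving_median(signal, window):
    """Running median, used here as a slowly varying baseline estimate."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        seg = sorted(signal[max(0, i - half): i + half + 1])
        out.append(seg[len(seg) // 2])
    return out

def detect_events(signal, window=5, threshold=3.0):
    """Subtract the baseline, then flag samples exceeding a fixed
    threshold as candidate firing events (threshold is illustrative)."""
    baseline = moving_median(signal, window)
    flat = [s - b for s, b in zip(signal, baseline)]
    return [i for i, v in enumerate(flat) if v > threshold]
```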

  20. Paediatric palliative care by video consultation at home: a cost minimisation analysis.

    PubMed

    Bradford, Natalie K; Armfield, Nigel R; Young, Jeanine; Smith, Anthony C

    2014-07-28

    In the vast state of Queensland, Australia, access to specialist paediatric services is available only in the capital city of Brisbane and is limited in regional and remote locations. During home-based palliative care, it is not always desirable or practical to move a patient to attend appointments, and so access to care may be even further limited. To address these problems, at the Royal Children's Hospital (RCH) in Brisbane, a Home Telehealth Program (HTP) has been successfully established to provide palliative care consultations to families throughout Queensland. A cost minimisation analysis was undertaken to compare the actual costs of the HTP consultations with the estimated potential costs associated with face-to-face consultations delivered either through i) hospital-based consultations in the outpatients department at the RCH, or ii) home visits from the Paediatric Palliative Care Service. The analysis was undertaken from the perspective of the Children's Health Service. The analysis was based on data from 95 home video consultations which occurred over a two-year period, and included projected costs associated with: clinician time and travel; costs reimbursed to families for travel through the Patients Travel Subsidy (PTS) scheme; hospital outpatient clinic costs; and project co-ordination, equipment and infrastructure costs. The mean costs per consultation were calculated for each approach. Air travel (n = 24) significantly affected the results. The mean cost of the HTP intervention was $294 and required no travel. The estimated mean cost per consultation in the hospital outpatient department was $748. The mean cost of home visits per consultation was $1214. Video consultation in the home is the most economical method of providing a consultation. The largest costs avoided by the health service are those associated with clinician time required for travel and the PTS scheme.
While face-to-face consultations are the gold standard of care, for families located at a distance from the hospital, video consultation in the home presents an effective and cost-efficient method to deliver a consultation. Additionally, video consultation in the home ensures equity of access to services and minimum disruption to hospital-based palliative care teams.
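    The cost minimisation arithmetic reduces to comparing mean costs per consultation across delivery models; a sketch of that comparison follows, using the means reported above.

```python
def mean_cost_per_consultation(costs):
    """Mean cost across consultations for one delivery model."""
    return sum(costs) / len(costs)

def cost_avoided(comparator_mean, telehealth_mean, n_consults):
    """Projected cost avoided by substituting video consultations
    for the comparator model over n consultations."""
    return (comparator_mean - telehealth_mean) * n_consults
```

With the study's reported means, `cost_avoided(1214, 294, 95)` projects roughly $87,400 avoided relative to home visits over the 95 consultations; this is an illustrative calculation, not a figure from the paper.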

  1. Lameness detection in dairy cattle: single predictor v. multivariate analysis of image-based posture processing and behaviour and performance sensing.

    PubMed

    Van Hertem, T; Bahr, C; Schlageter Tello, A; Viazzi, S; Steensels, M; Romanini, C E B; Lokhorst, C; Maltz, E; Halachmi, I; Berckmans, D

    2016-09-01

    The objective of this study was to evaluate if a multi-sensor system (milk, activity, body posture) was a better classifier for lameness than the single-sensor-based detection models. Between September 2013 and August 2014, 3629 cow observations were collected on a commercial dairy farm in Belgium. Human locomotion scoring was used as reference for the model development and evaluation. Cow behaviour and performance were measured with existing sensors that were already present at the farm. A prototype three-dimensional video recording system was used to automatically quantify the back posture of each cow. For the single predictor comparisons, a receiver operating characteristics curve was constructed. For the multivariate detection models, logistic regression and generalized linear mixed models (GLMM) were developed. The best lameness classification model was obtained by the multi-sensor analysis (area under the receiver operating characteristics curve (AUC)=0.757±0.029), containing a combination of milk and milking variables, activity and gait and posture variables from videos. Second, the multivariate video-based system (AUC=0.732±0.011) performed better than the multivariate milk sensors (AUC=0.604±0.026) and the multivariate behaviour sensors (AUC=0.633±0.018). The video-based system performed better than the combined behaviour and performance-based detection model (AUC=0.669±0.028), indicating that it is worthwhile to consider a video-based lameness detection system, regardless of the presence of other existing sensors in the farm. The results suggest that Θ2, the feature variable for the back curvature around the hip joints, with an AUC of 0.719 is the best single predictor variable for lameness detection based on locomotion scoring. In general, this study showed that the video-based back posture monitoring system outperforms the behaviour and performance sensing techniques for locomotion scoring-based lameness detection.
A GLMM with seven specific variables (walking speed, back posture measurement, daytime activity, milk yield, lactation stage, milk peak flow rate and milk peak conductivity) is the best combination of variables for lameness classification. The accuracy on four-level lameness classification was 60.3%. The accuracy improved to 79.8% for binary lameness classification. The binary GLMM obtained a sensitivity of 68.5% and a specificity of 87.6%, which both exceed the sensitivity (52.1%±4.7%) and specificity (83.2%±2.3%) of the multi-sensor logistic regression model. This shows that the repeated measures analysis in the GLMM, taking into account the individual history of the animal, outperforms the classification when thresholds based on herd level (a statistical population) are used.
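    The AUC values reported above have the standard rank interpretation: the probability that a randomly chosen lame cow receives a higher predictor score than a randomly chosen non-lame cow, with ties counted as one half. A minimal sketch of that computation (illustrative, not the study's software):

```python
def auc_from_scores(scores_pos, scores_neg):
    """Area under the ROC curve via the rank (Mann-Whitney)
    formulation: fraction of positive/negative pairs in which the
    positive case scores higher, counting ties as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```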

  2. Students' Aesthetic Experiences of Playing Exergames: A Practical Epistemology Analysis of Learning

    ERIC Educational Resources Information Center

    Maivorsdotter, Ninitha; Quennerstedt, Mikael; Öhman, Marie

    2015-01-01

    The aim of this study was to explore Swedish junior high school students' meaning-making of participating in exergaming in school, based on their aesthetic judgments during game play. A transactional approach, drawing on the work of John Dewey, was used in the study, and the data consisted of video and audio recordings of ongoing video gaming. A…

  3. (Re)Counting Meaningful Learning Experiences: Using Student-Created Reflective Videos to Make Invisible Learning Visible during PjBL Experiences

    ERIC Educational Resources Information Center

    Smith, Shaunna

    2016-01-01

    This ethnographic case study investigated how the process of learning during a yearlong after-school, project-based learning (PjBL) experience could be documented by student-created reflective videos. Guided by social constructivism, constant comparative analysis was used to explore the meaningful learning that took place in addition to the…

  4. Using Image Modelling to Teach Newton's Laws with the Ollie Trick

    ERIC Educational Resources Information Center

    Dias, Marco Adriano; Carvalho, Paulo Simeão; Vianna, Deise Miranda

    2016-01-01

    Image modelling is a video-based teaching tool that is a combination of strobe images and video analysis. This tool can enable a qualitative and a quantitative approach to the teaching of physics, in a much more engaging and appealing way than traditional expository practice. In a specific scenario shown in this paper, the Ollie trick, we…

  5. Comparing Real-time Versus Delayed Video Assessments for Evaluating ACGME Sub-competency Milestones in Simulated Patient Care Environments

    PubMed Central

    Stiegler, Marjorie; Hobbs, Gene; Martinelli, Susan M; Zvara, David; Arora, Harendra; Chen, Fei

    2018-01-01

    Background Simulation is an effective method for creating objective summative assessments of resident trainees. Real-time assessment (RTA) in simulated patient care environments is logistically challenging, especially when evaluating a large group of residents in multiple simulation scenarios. To date, there is very little data comparing RTA with delayed (hours, days, or weeks later) video-based assessment (DA) for simulation-based assessments of Accreditation Council for Graduate Medical Education (ACGME) sub-competency milestones. We hypothesized that sub-competency milestone evaluation scores obtained from DA, via audio-video recordings, are equivalent to the scores obtained from RTA. Methods Forty-one anesthesiology residents were evaluated in three separate simulated scenarios, representing different ACGME sub-competency milestones. All scenarios had one faculty member perform RTA and two additional faculty members perform DA. Subsequently, the scores generated by RTA were compared with the average scores generated by DA. Variance component analysis was conducted to assess the amount of variation in scores attributable to residents and raters. Results Paired t-tests showed no significant difference in scores between RTA and averaged DA for all cases. Cases 1, 2, and 3 showed an intraclass correlation coefficient (ICC) of 0.67, 0.85, and 0.50 for agreement between RTA scores and averaged DA scores, respectively. Analysis of variance of the scores assigned by the three raters showed a small proportion of variance attributable to raters (4% to 15%). Conclusions The results demonstrate that video-based delayed assessment is as reliable as real-time assessment, as both assessment methods yielded comparable scores. Based on a department’s needs or logistical constraints, our findings support the use of either real-time or delayed video evaluation for assessing milestones in a simulated patient care environment. PMID:29736352

  6. Mode extraction on wind turbine blades via phase-based video motion estimation

    NASA Astrophysics Data System (ADS)

    Sarrafi, Aral; Poozesh, Peyman; Niezrecki, Christopher; Mao, Zhu

    2017-04-01

    In recent years, image processing techniques have been applied more often for structural dynamics identification, characterization, and structural health monitoring. Although it is a non-contact and full-field measurement method, image processing still has a long way to go to outperform other conventional sensing instruments (e.g. accelerometers, strain gauges, laser vibrometers). However, the technologies associated with image processing are developing rapidly and gaining more attention in a variety of engineering applications, including structural dynamics identification and modal analysis. Among numerous motion estimation and image-processing methods, phase-based video motion estimation is considered one of the most efficient in terms of computational cost and noise robustness. In this paper, phase-based video motion estimation is adopted for structural dynamics characterization on a 2.3-meter-long Skystream wind turbine blade, and the modal parameters (natural frequencies, operating deflection shapes) are extracted. Phase-based video processing adopted in this paper provides reliable full-field 2-D motion information, which is beneficial for manufacturing certification and model updating at the design stage. The phase-based video motion estimation approach is demonstrated through processing data on a full-scale commercial structure (i.e. a wind turbine blade) with complex geometry and properties, and the results obtained have a good correlation with the modal parameters extracted from accelerometer measurements, especially for the first four bending modes, which have significant importance in blade characterization.
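    The core idea of phase-based motion estimation, that a small displacement appears as a phase change of a frequency component, can be illustrated in one dimension with a plain DFT. This is a toy sketch under strong assumptions; the actual method operates on 2-D video frames through complex steerable filter banks.

```python
import cmath
import math

def dft_bin(signal, k):
    """One DFT coefficient of a real-valued signal at frequency bin k."""
    n = len(signal)
    return sum(x * cmath.exp(-2j * math.pi * k * i / n)
               for i, x in enumerate(signal))

def phase_shift_samples(frame_a, frame_b, k=1):
    """Estimate the displacement (in samples) between two 1-D intensity
    profiles from the phase difference of spatial-frequency bin k:
    shifting a signal by d samples rotates the phase of bin k by
    -2*pi*k*d/n, so the phase difference encodes sub-sample motion."""
    n = len(frame_a)
    dphi = cmath.phase(dft_bin(frame_b, k)) - cmath.phase(dft_bin(frame_a, k))
    dphi = (dphi + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    return -dphi * n / (2 * math.pi * k)
```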

  7. Visual Semantic Based 3D Video Retrieval System Using HDFS.

    PubMed

    Kumar, C Ranjith; Suguna, S

    2016-08-01

    This paper presents a new framework for visual-semantic-based 3D video search and retrieval applications. Recent 3D retrieval applications focus on shape analysis such as object matching, classification and retrieval, rather than on video retrieval as a whole. In this context, we explore the concept of 3D content-based video retrieval (3D-CBVR) for the first time. For this purpose, we combine a bag-of-visual-words (BOVW) approach with MapReduce in a 3D framework. Instead of conventional shape-based local descriptors, we combine shape, color and texture for feature extraction, using geometric and topological features for shape and a 3D co-occurrence matrix for color and texture. After extracting the local descriptors, a Threshold-Based Predictive Clustering Tree (TB-PCT) algorithm is used to generate the visual codebook and produce histograms. Matching is then performed using a soft weighting scheme with the L2 distance function. Finally, the retrieved results are ranked by index value and returned to the user as feedback. To handle large amounts of data and enable efficient retrieval, we incorporate HDFS into our design. Using a 3D video dataset, we evaluate the performance of the proposed system and show that it produces accurate results while reducing time complexity.
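    The retrieval stage described, codebook histograms matched with an L2 distance and then ranked, can be sketched once each local descriptor has been assigned to a visual word. The soft weighting scheme is omitted here for brevity, and function names are illustrative.

```python
import math

def bovw_histogram(descriptor_words, vocab_size):
    """Normalized bag-of-visual-words histogram; each local descriptor
    has already been assigned a codebook word index."""
    hist = [0.0] * vocab_size
    for w in descriptor_words:
        hist[w] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def l2_distance(h1, h2):
    """L2 distance between two histograms, used here for ranking."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def rank_videos(query_hist, database):
    """Return (video_id, distance) pairs sorted by increasing distance."""
    return sorted(((vid, l2_distance(query_hist, h))
                   for vid, h in database.items()), key=lambda t: t[1])
```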

  8. Quantification of video-taped images in microcirculation research using inexpensive imaging software (Adobe Photoshop).

    PubMed

    Brunner, J; Krummenauer, F; Lehr, H A

    2000-04-01

    Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that an inexpensive, commercially available computer software (Adobe Photoshop), run on a Macintosh G3 computer with an inbuilt graphic capture board, provides versatile, easy-to-use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density and (iii) of postischemic leakage of FITC-labeled high molecular weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive commercially available imaging programs (i.e., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.

  9. Using video analysis for concussion surveillance in Australian football.

    PubMed

    Makdissi, Michael; Davis, Gavin

    2016-12-01

    The objectives of the study were to assess the relationship between various player and game factors and risk of concussion; and to assess the reliability of video analysis for mechanistic assessment of concussion in Australian football. Prospective cohort study. All impacts and collisions resulting in concussion were identified during the 2011 Australian Football League season. An extensive list of factors for assessment was created based upon previous analysis of concussion in Australian Football League and expert opinions. The authors independently reviewed the video clips and correlation for each factor was examined. A total of 82 concussions were reported in 194 games (rate: 8.7 concussions per 1000 match hours; 95% confidence interval: 6.9-10.5). Player demographics and game variables such as venue, timing of the game (day, night or twilight), quarter, travel status (home or interstate) or score margin did not demonstrate a significant relationship with risk of concussion; although a higher percentage of concussions occurred in the first 5min of game time of the quarter (36.6%), when compared to the last 5min (20.7%). Variables with good inter-rater agreement included position on the ground, circumstances of the injury and cause of the impact. The remainder of the variables assessed had fair-poor inter-rater agreement. Common problems included insufficient or poor quality video and interpretation issues related to the definitions used. Clear definitions and good quality video from multiple camera angles are required to improve the utility of video analysis for concussion surveillance in Australian football. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
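    The reported rate of 8.7 concussions per 1000 match hours (95% CI 6.9-10.5) is consistent with a simple normal-approximation interval around an incidence rate. A sketch of that calculation follows; the exposure figure in the usage example is back-calculated for illustration, and the paper's exact CI method is not stated in the abstract.

```python
import math

def incidence_rate_ci(events, exposure_hours, z=1.96):
    """Incidence rate per 1000 hours with a normal-approximation
    confidence interval, using SE = rate / sqrt(events)."""
    rate = events / exposure_hours * 1000.0
    se = rate / math.sqrt(events)
    return rate, rate - z * se, rate + z * se
```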

  10. Science Teacher Efficacy and Extrinsic Factors Toward Professional Development Using Video Games in a Design-Based Research Model: The Next Generation of STEM Learning

    NASA Astrophysics Data System (ADS)

    Annetta, Leonard A.; Frazier, Wendy M.; Folta, Elizabeth; Holmes, Shawn; Lamb, Richard; Cheng, Meng-Tzu

    2013-02-01

    Design-based research principles guided the study of 51 secondary-science teachers in the second year of a 3-year professional development project. The project entailed the creation of student-centered, inquiry-based science video games. A professional development model appropriate for infusing innovative technologies into standards-based curricula was employed to determine how science teachers' attitudes and efficacy were impacted while designing science-based video games. The study's mixed-method design ascertained teacher efficacy on five factors (General computer use, Science Learning, Inquiry Teaching and Learning, Synchronous chat/text, and Playing Video Games) related to technology and gaming, using a web-based survey. Qualitative data in the form of online blog posts were gathered during the project to assist in the triangulation and assessment of teacher efficacy. Data analyses consisted of an Analysis of Variance and serial coding of teacher reflective responses. Results indicated participants who used computers daily had higher efficacy while using inquiry-based teaching methods and science teaching and learning. Additional emergent findings revealed possible motivating factors for efficacy. This professional development project was focused on inquiry as a pedagogical strategy, standards-based science learning as a means to develop content knowledge, and creating video games as technological knowledge. The project was consistent with the Technological Pedagogical Content Knowledge (TPCK) framework, where overlapping circles of the three components indicate development of an integrated understanding of the suggested relationships. Findings provide suggestions for development of standards-based science education software, its integration into the curriculum, and strategies for implementing technology into teaching practices.

  11. How useful is YouTube in learning heart anatomy?

    PubMed

    Raikos, Athanasios; Waidyasekara, Pasan

    2014-01-01

    Nowadays, more and more modern medical degree programs focus on self-directed and problem-based learning, which requires students to search for high-quality, easily retrievable online resources. YouTube is an emerging platform for learning human anatomy because it is free and easy to access. The purpose of this study was to make a quantitative and qualitative analysis of the human heart anatomy videos available on YouTube. Using the platform's search engine, we searched for relevant videos using various keywords. Videos with irrelevant content, animal tissue, non-English language, no sound, duplicates, or a physiology focus were excluded from further elaboration. The initial search retrieved 55,525 videos, of which only 294 qualified for further analysis. A unique scoring system was used to assess the anatomical quality and detail, general quality, and general data of each video. Our results indicate that the human heart anatomy videos available on YouTube met our anatomical criteria poorly, whereas the general quality scoring was found to be borderline. Students should be selective when searching public video databases, as doing so can prove challenging and time consuming, and the anatomical information may be misleading due to the absence of content review. Anatomists and institutions are encouraged to prepare and endorse good-quality material and make it available online for students. The scoring rubric used in the study provides faculty members with a valuable tool for the quality evaluation of heart anatomy videos available on social media platforms. Copyright © 2013 American Association of Anatomists.

  12. The Development and Validation of the Game User Experience Satisfaction Scale (GUESS).

    PubMed

    Phan, Mikki H; Keebler, Joseph R; Chaparro, Barbara S

    2016-12-01

    The aim of this study was to develop and psychometrically validate a new instrument that comprehensively measures video game satisfaction based on key factors. Playtesting is often conducted in the video game industry to help game developers build better games by providing insight into the players' attitudes and preferences. However, quality feedback is difficult to obtain from playtesting sessions without a quality gaming assessment tool. There is a need for a psychometrically validated and comprehensive gaming scale that is appropriate for playtesting and game evaluation purposes. The process of developing and validating this new scale followed current best practices of scale development and validation. As a result, a mixed-method design that consisted of item pool generation, expert review, questionnaire pilot study, exploratory factor analysis (N = 629), and confirmatory factor analysis (N = 729) was implemented. A new instrument measuring video game satisfaction, called the Game User Experience Satisfaction Scale (GUESS), with nine subscales emerged. The GUESS was demonstrated to have content validity, internal consistency, and convergent and discriminant validity. The GUESS was developed and validated based on the assessments of over 450 unique video game titles across many popular genres. Thus, it can be applied across many types of video games in the industry both as a way to assess what aspects of a game contribute to user satisfaction and as a tool to aid in debriefing users on their gaming experience. The GUESS can be administered to evaluate user satisfaction of different types of video games by a variety of users. © 2016, Human Factors and Ergonomics Society.

  13. Video ethnography during and after caesarean sections: methodological challenges.

    PubMed

    Stevens, Jeni; Schmied, Virginia; Burns, Elaine; Dahlen, Hannah G

    2017-07-01

    To describe the challenges of, and steps taken to, successfully collect video ethnographic data during and after caesarean sections. Video ethnographic research uses real-time video footage to study a cultural group or phenomenon in its natural environment. It allows researchers to discover previously undocumented practices, which in turn provides insight into strengths and weaknesses in practice. This knowledge can be used to translate evidence-based interventions into practice. Video ethnographic design. A video ethnographic approach was used to observe the contact between mothers and babies immediately after elective caesarean sections in a tertiary hospital in Sydney, Australia. Women, their support people and staff participated in the study. Data were collected via video footage and field notes in the operating theatre, recovery and the postnatal ward. Challenges faced whilst conducting video ethnographic research included attaining ethics approval, recruiting large numbers of staff members and 'vulnerable' pregnant women, and endeavouring to be a 'fly on the wall' and a 'complete observer'. There were disadvantages to being an 'insider' whilst conducting the research, because staff members occasionally requested help with clinical tasks during data collection; however, it was also an advantage, as it enabled ease of access to the environment and to the staff members being recruited. Despite the challenges, video ethnographic research enabled the provision of unique data that could not be attained by any other means. Video ethnographic data are beneficial as they provide exceptionally rich material for in-depth analysis of interactions between the environment, equipment and people in the hospital setting. The analysis of this type of data can then be used to inform improvements for future care. © 2016 John Wiley & Sons Ltd.

  14. A Content Analysis of YouTube™ Videos Related to Prostate Cancer.

    PubMed

    Basch, Corey H; Menafro, Anthony; Mongiovi, Jennifer; Hillyer, Grace Clarke; Basch, Charles E

    2016-09-29

    In the United States, prostate cancer is the most common type of cancer in men after skin cancer. There is a paucity of research devoted to the types of prostate cancer information available on social media outlets. YouTube™ is a widely used video-sharing website that is emerging as a common source of health-related information. The purpose of this study was to describe the most widely viewed YouTube™ videos related to prostate cancer. The 100 most widely viewed videos were watched a total of 50,278,770 times. The majority of videos were uploaded by consumers (45.0%) and medical or government professionals (30%). The purpose of most videos (78.0%) was to provide information, followed by discussions of prostate cancer treatment (51%) and prostate-specific antigen testing and routine screening (26%). All videos uploaded by medical and government professionals and 93.8% of videos uploaded by news sources provided information, compared with about two thirds of consumer videos and less than one half of commercial and advertisement videos (p < .001). As society becomes increasingly technology-based, there is a need to help consumers acquire the knowledge and skills to identify credible information to help inform their decisions. © The Author(s) 2016.

  15. A video-based learning activity is effective for preparing physiotherapy students for practical examinations.

    PubMed

    Weeks, Benjamin K; Horan, Sean A

    2013-12-01

    To examine a video-based learning activity for engaging physiotherapy students in preparation for practical examinations and to determine student performance outcomes. Multi-method design employing qualitative and quantitative data collection procedures. Tertiary education facility on the Gold Coast, Queensland, Australia. Physiotherapy students in their first year of a two-year graduate-entry program. Questionnaire-based surveys and focus groups were used to examine student perceptions and satisfaction. Surveys were analysed based on the frequency of responses to closed questions made on a 5-point Likert scale, while a thematic analysis was performed on focus group transcripts. t-Tests were used to compare student-awarded marks with examiner-awarded marks and to evaluate student performance. Sixty-two physiotherapy students participated in the study. The mean response rate for questionnaires was 93%, and eight students (13%) participated in the focus group. Participants found the video resources effective in supporting their learning (98% positive) and rated the video examples an effective learning activity (96% positive). Themes emerging from focus group responses centred on improved understanding, reduced performance anxiety, and enjoyment. Students were, however, critical of the predictable nature of the example performances. Students in the current cohort, supported by the video-based preparation activity, exhibited greater practical examination marks than those from the previous year who were unsupported by the activity (mean 81.6, SD 8.7 vs. mean 78.1, SD 9.0; p = 0.01). A video-based learning activity was effective for preparing physiotherapy students for practical examinations and conferred the benefits of reduced anxiety and improved performance. Copyright © 2013 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.

  16. Visual communications and image processing '92; Proceedings of the Meeting, Boston, MA, Nov. 18-20, 1992

    NASA Astrophysics Data System (ADS)

    Maragos, Petros

    The topics discussed at the conference include hierarchical image coding, motion analysis, feature extraction and image restoration, video coding, and morphological and related nonlinear filtering. Attention is also given to vector quantization, morphological image processing, fractals and wavelets, architectures for image and video processing, image segmentation, biomedical image processing, and model-based analysis. Papers are presented on affine models for motion and shape recovery, filters for directly detecting surface orientation in an image, tracking of unresolved targets in infrared imagery using a projection-based method, adaptive-neighborhood image processing, and regularized multichannel restoration of color images using cross-validation. (For individual items see A93-20945 to A93-20951)

  17. Distracted driving on YouTube: implications for adolescents.

    PubMed

    Basch, Corey H; Mouser, Christina; Clark, Ashley

    2017-05-18

    For the first time in 50 years, traffic fatalities have increased in the United States (US). With the emergence of technology comes the possibility that distracted driving has contributed to a decrease in safe driving practices. The purpose of this study was to describe the content on the popular video-sharing site YouTube and to ascertain the type of content conveyed in widely viewed videos. The 100 most widely viewed English-language videos were included in this sample, with a collective number of views of over 35 million. The majority of videos were television-based or Internet-based. Pairwise comparisons indicated statistically significant differences between the number of views of consumer-generated videos and television-based videos (p = 0.001) and between television-based videos and Internet-based videos (p < 0.001). Compared with consumer-generated videos, television-based videos were 13 times more likely, and Internet-based videos 6.6 times more likely, to discuss cell phone use as a distractor while driving. In addition, compared with consumer-generated videos, television-based videos were 3.67 times more likely, and Internet-based videos 8.5 times more likely, to discuss texting as a distractor while driving. The findings of this study indicate that YouTube videos related to distracted driving are popular and that this medium could prove a successful venue for communicating information about this emergent public health issue.

  18. Video-based heart rate monitoring across a range of skin pigmentations during an acute hypoxic challenge.

    PubMed

    Addison, Paul S; Jacquel, Dominique; Foo, David M H; Borg, Ulf R

    2017-11-09

    The robust monitoring of heart rate from the video-photoplethysmogram (video-PPG) during challenging conditions requires new analysis techniques. The work reported here extends current research in this area by applying a motion-tolerant algorithm to extract high-quality video-PPGs from a cohort of subjects undergoing marked heart rate changes during a hypoxic challenge and exhibiting a full range of skin pigmentation types. High uptimes in reported video-based heart rate (HRvid) were targeted, while retaining high accuracy in the results. Ten healthy volunteers were studied during a double desaturation hypoxic challenge. Video-PPGs were generated from the acquired video image stream and processed to generate heart rate. HRvid was compared to the pulse rate posted by a reference pulse oximeter device (HRp). Agreement between the video-based heart rate and that provided by the pulse oximeter was as follows: bias = -0.21 bpm, RMSD = 2.15 bpm, least-squares fit gradient = 1.00 (Pearson R = 0.99, p < 0.0001), with a 98.78% reporting uptime. The difference between HRvid and HRp exceeded 5 and 10 bpm for 3.59% and 0.35% of the reporting time respectively, and at no point did these differences exceed 25 bpm. Excellent agreement was found between HRvid and HRp in a study covering the whole range of skin pigmentation types (Fitzpatrick scales I-VI), using standard room lighting and with moderate subject motion. Although promising, further work should include a larger cohort with multiple subjects per Fitzpatrick class combined with a more rigorous motion and lighting protocol.
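
    The motion-tolerant algorithm itself is not detailed in the abstract. As a hedged illustration of the basic principle behind video-PPG heart rate estimation (a minimal sketch, not the authors' method), the pulse rate can be read off as the dominant spectral peak of a per-frame intensity trace; the function name and parameters below are illustrative assumptions:

```python
import numpy as np

def estimate_heart_rate(ppg, fs, lo=0.7, hi=4.0):
    """Estimate heart rate (bpm) from a video-PPG trace.

    ppg: per-frame mean pixel intensity (e.g. green channel), 1-D array
    fs:  video frame rate in Hz
    The dominant spectral peak inside the physiological band
    (lo..hi Hz, i.e. 42..240 bpm) is taken as the pulse frequency.
    """
    x = np.asarray(ppg, dtype=float)
    x = (x - x.mean()) * np.hanning(len(x))       # detrend and window
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)          # plausible pulse rates only
    f_peak = freqs[band][np.argmax(spec[band])]
    return 60.0 * f_peak                          # Hz -> beats per minute

# Synthetic check: a 1.2 Hz (72 bpm) pulse sampled at 30 fps for 20 s
t = np.arange(0, 20, 1.0 / 30)
rng = np.random.default_rng(1)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(len(t))
hr = estimate_heart_rate(trace, 30.0)
```

    A real motion-tolerant pipeline would add face tracking and artifact rejection before this spectral step; the sketch covers only the rate-extraction stage.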

  19. Segmentation of the Speaker's Face Region with Audiovisual Correlation

    NASA Astrophysics Data System (ADS)

    Liu, Yuyu; Sato, Yoichi

    The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows, which is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background by expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.

  20. Innovative Uses of Video Analysis

    ERIC Educational Resources Information Center

    Brown, Douglas; Cox, Anne J.

    2009-01-01

    The value of video analysis in physics education is well established, and both commercial and free educational video analysis programs are readily available. The video format is familiar to students, contains a wealth of spatial and temporal data, and provides a bridge between direct observations and abstract representations of physical phenomena.…

  1. A comparison of video modeling, text-based instruction, and no instruction for creating multiple baseline graphs in Microsoft Excel.

    PubMed

    Tyner, Bryan C; Fienup, Daniel M

    2015-09-01

    Graphing is socially significant for behavior analysts; however, graphing can be difficult to learn. Video modeling (VM) may be a useful instructional method but lacks evidence for effective teaching of computer skills. A between-groups design compared the effects of VM, text-based instruction, and no instruction on graphing performance. Participants who used VM constructed graphs significantly faster and with fewer errors than those who used text-based instruction or no instruction. Implications for instruction are discussed. © Society for the Experimental Analysis of Behavior.

  2. VideoHacking: Automated Tracking and Quantification of Locomotor Behavior with Open Source Software and Off-the-Shelf Video Equipment.

    PubMed

    Conklin, Emily E; Lee, Kathyann L; Schlabach, Sadie A; Woods, Ian G

    2015-01-01

    Differences in nervous system function can result in differences in behavioral output. Measurements of animal locomotion enable the quantification of these differences. Automated tracking of animal movement is less labor-intensive and bias-prone than direct observation, and allows for simultaneous analysis of multiple animals, high spatial and temporal resolution, and data collection over extended periods of time. Here, we present a new video-tracking system built on Python-based software that is free, open source, and cross-platform, and that can analyze video input from widely available video capture devices such as smartphone cameras and webcams. We validated this software through four tests on a variety of animal species, including larval and adult zebrafish (Danio rerio), Siberian dwarf hamsters (Phodopus sungorus), and wild birds. These tests highlight the capacity of our software for long-term data acquisition, parallel analysis of multiple animals, and application to animal species of different sizes and movement patterns. We applied the software to an analysis of the effects of ethanol on thigmotaxis (wall-hugging) behavior on adult zebrafish, and found that acute ethanol treatment decreased thigmotaxis behaviors without affecting overall amounts of motion. The open source nature of our software enables flexibility, customization, and scalability in behavioral analyses. Moreover, our system presents a free alternative to commercial video-tracking systems and is thus broadly applicable to a wide variety of educational settings and research programs.
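
    The VideoHacking software is Python-based and open source, but the abstract does not give its algorithm. A minimal sketch of the general idea behind automated locomotor tracking (frame differencing followed by centroid extraction), with illustrative function names not taken from the paper:

```python
import numpy as np

def track_centroids(frames, threshold=25):
    """Minimal frame-differencing tracker: for each consecutive frame
    pair, return the (row, col) centroid of pixels that changed by more
    than `threshold`, or None when no motion is detected."""
    frames = [np.asarray(f, dtype=np.int16) for f in frames]
    centroids = []
    for prev, curr in zip(frames, frames[1:]):
        moving = np.abs(curr - prev) > threshold
        ys, xs = np.nonzero(moving)
        centroids.append((ys.mean(), xs.mean()) if len(ys) else None)
    return centroids

def path_length(centroids):
    """Total distance travelled (in pixels), ignoring motionless frames."""
    pts = [c for c in centroids if c is not None]
    return sum(np.hypot(a[0] - b[0], a[1] - b[1])
               for a, b in zip(pts, pts[1:]))

# Synthetic demo: a 10x10 bright square stepping 10 px right each frame
frames = [np.zeros((100, 100), dtype=np.uint8) for _ in range(3)]
for i, f in enumerate(frames):
    f[40:50, 10 + 10 * i:20 + 10 * i] = 255
cs = track_centroids(frames)
```

    Note that the difference image covers both the vacated and newly occupied pixels, so the centroid lags the object slightly; production trackers typically use background subtraction and per-animal blob labelling instead.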

  3. Eustachian Tube Mucosal Inflammation Scale Validation Based on Digital Video Images.

    PubMed

    Kivekäs, Ilkka; Pöyhönen, Leena; Aarnisalo, Antti; Rautiainen, Markus; Poe, Dennis

    2015-12-01

    The most common cause of Eustachian tube dilatory dysfunction is mucosal inflammation. The aim of this study was to validate a scale for Eustachian tube mucosal inflammation based on digital video clips obtained during diagnostic rigid endoscopy. A previously described four-step scale for grading the degree of inflammation of the mucosa of the Eustachian tube lumen was used for this validation study. A tutorial for use of the scale, including static images and 10-second video clips, was presented to 26 clinicians with various levels of experience. Each clinician then reviewed 35 short digital video samples of Eustachian tubes from patients and rated the degree of inflammation. A subset of the clinicians performed a second rating of the same video clips at a subsequent time. Statistical analysis of the ratings provided inter- and intrarater reliability scores. Twenty-six clinicians with various levels of experience rated a total of 35 videos; thirteen clinicians rated the videos twice. The overall correlation coefficient for the rating of inflammation severity was relatively good (0.74; 95% confidence interval 0.72-0.76). The intraclass correlation coefficient for intrarater reliability was high (0.86). For those who rated videos twice, the intraclass correlation coefficient improved after the first rating (0.73 to 0.76), but the improvement was not statistically significant. The scale used for Eustachian tube mucosal inflammation is reliable, and it can be used with a high level of consistency by clinicians with various levels of experience.

  4. Transana Qualitative Video and Audio Analysis Software as a Tool for Teaching Intellectual Assessment Skills to Graduate Psychology Students

    ERIC Educational Resources Information Center

    Rush, S. Craig

    2014-01-01

    This article draws on the author's experience using qualitative video and audio analysis, most notably through use of the Transana qualitative video and audio analysis software program, as an alternative method for teaching IQ administration skills to students in a graduate psychology program. Qualitative video and audio analysis may be useful for…

  5. Chromatic Image Analysis For Quantitative Thermal Mapping

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M.

    1995-01-01

    Chromatic image analysis system (CIAS) developed for use in noncontact measurements of temperatures on aerothermodynamic models in hypersonic wind tunnels. Based on concept of temperature coupled to shift in color spectrum for optical measurement. Video camera images fluorescence emitted by phosphor-coated model at two wavelengths. Temperature map of model then computed from relative brightnesses in video images of model at those wavelengths. Eliminates need for intrusive, time-consuming, contact temperature measurements by gauges, making it possible to map temperatures on complex surfaces in timely manner and at reduced cost.
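
    The record describes computing a temperature map from the relative brightness of fluorescence at two wavelengths. A hedged sketch of that ratio-to-temperature step, using a hypothetical calibration curve (the values below are invented for illustration, not CIAS data):

```python
import numpy as np

# Hypothetical calibration: phosphor intensity ratio (I_w1 / I_w2) at
# known temperatures; these numbers are invented for illustration only.
CAL_RATIO = np.array([0.2, 0.5, 1.0, 1.8, 3.0])
CAL_TEMP_K = np.array([300.0, 350.0, 400.0, 450.0, 500.0])

def temperature_map(img_w1, img_w2):
    """Per-pixel temperature (K) from the ratio of brightness in two
    wavelength images, interpolated along the calibration curve."""
    w1 = np.asarray(img_w1, dtype=float)
    w2 = np.maximum(np.asarray(img_w2, dtype=float), 1e-9)  # avoid /0
    return np.interp(w1 / w2, CAL_RATIO, CAL_TEMP_K)

# Equal brightness at both wavelengths -> ratio 1.0 -> 400 K on this curve
t_map = temperature_map(np.array([[1.0, 0.35]]), np.array([[1.0, 1.0]]))
```

    The essential design point is that the ratio cancels illumination and coating-thickness variation, leaving a quantity that depends (via calibration) on temperature alone.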

  6. Evaluating YouTube as a Source of Patient Education on the Role of the Hospitalist: A Cross-Sectional Study

    PubMed Central

    Hudali, Tamer; Bhattarai, Mukul; Deckard, Alan; Hingle, Susan

    2017-01-01

    Background: Hospital medicine is a relatively new specialty field, dedicated to the delivery of comprehensive medical care to hospitalized patients. YouTube is one of the most frequently used websites, offering access to a gamut of videos from self-produced to professionally made. Objective: The aim of our study was to determine the adequacy of YouTube as an effective means to define and depict the role of hospitalists. Methods: YouTube was searched on November 17, 2014, using the following search words: “hospitalist,” “hospitalist definition,” “what is the role of a hospitalist,” “define hospitalist,” and “who is a hospitalist.” Videos found only in the first 10 pages of each search were included. Non-English, noneducational, and nonrelevant videos were excluded. A novel 7-point scoring tool was created by the authors based on the definition of a hospitalist adopted by the Society of Hospital Medicine. Three independent reviewers evaluated, scored, and classified the videos into high, intermediate, and low quality based on the average score. Results: A total of 102 videos out of 855 were identified as relevant and included in the analysis. Videos uploaded by academic institutions had the highest mean score. Only 6 videos were classified as high quality, 53 as intermediate quality, and 42 as low quality, with 82.4% (84/102) of the videos scoring an average of 4 or less. Conclusions: Most videos found in the search of a hospitalist definition are inadequate. Leading medical organizations and academic institutions should consider producing and uploading quality videos to YouTube to help patients and their families better understand the roles and definition of the hospitalist. PMID:28073738

  7. Equipment issues regarding the collection of video data for research

    NASA Astrophysics Data System (ADS)

    Kung, Rebecca Lippmann; Kung, Peter; Linder, Cedric

    2005-12-01

    Physics education research increasingly makes use of video data for analysis of student learning and teaching practice. Collection of these data is conceptually simple, but execution is often fraught with costly and time-consuming complications. This pragmatic paper discusses the development of systems to record and permanently archive audio and video data in real time. We focus on a system based upon consumer video DVD recorders, but also give an overview of other technologies and detail issues common to all systems. We detail common yet unexpected complications, particularly with regard to sound quality and compatibility with transcription software. Information specific to fixed and transportable systems, other technology options, and generic and specific equipment recommendations are given in supplemental appendices.

  8. News video story segmentation method using fusion of audio-visual features

    NASA Astrophysics Data System (ADS)

    Wen, Jun; Wu, Ling-da; Zeng, Pu; Luan, Xi-dao; Xie, Yu-xiang

    2007-11-01

    News story segmentation is an important aspect of news video analysis. This paper presents a method for news video story segmentation. Unlike prior works, which are based on visual feature transforms, the proposed technique uses audio features as a baseline and fuses visual features with them to refine the results. First, it selects silence clips as audio feature candidate points, and selects shot boundaries and anchor shots as two kinds of visual feature candidate points. The paper then uses the audio feature candidates as cues and develops different fusion methods, which effectively use the diverse types of visual candidates to refine the audio candidates and obtain story boundaries. Experimental results show that this method has high efficiency and adaptability to different kinds of news video.

  9. Deep visual-semantic for crowded video understanding

    NASA Astrophysics Data System (ADS)

    Deng, Chunhua; Zhang, Junwen

    2018-03-01

    Visual-semantic features play a vital role in crowded video understanding. Convolutional Neural Networks (CNNs) have achieved a significant breakthrough in learning representations from images. However, the learning of visual-semantic features, and how they can be effectively extracted for video analysis, remains a challenging task. In this study, we propose a novel visual-semantic method to capture both appearance and dynamic representations. In particular, we propose a spatial context method based on fractional Fisher vector (FV) encoding of CNN features, which can be regarded as our main contribution. In addition, to capture temporal context information, we also apply the fractional encoding method to dynamic images. Experimental results on the WWW crowd video dataset demonstrate that the proposed method outperforms the state of the art.

  10. Qualitative analysis of Parkinson's disease information on social media: the case of YouTube™.

    PubMed

    Al-Busaidi, Ibrahim Saleh; Anderson, Tim J; Alamri, Yassar

    2017-09-01

    There is a paucity of data pertaining to the usefulness of information presented on social media platforms on chronic neuropsychiatric conditions such as Parkinson's disease (PD). The aim of this study was to examine the quality of YouTube™ videos that deliver general information on PD and the availability and design of instructional videos addressing the caregiving role in PD. YouTube™ was searched using the keyword "Parkinson's disease" for relevant videos. Videos were assessed for usefulness and accuracy based on pre-defined criteria. Data on video characteristics, including total viewership, duration, ratings, and source, were collated. Instructional PD videos that addressed the role of caregivers were examined closely for the design and scope of their instructional content. A total of 100 videos met the inclusion criteria. Just under a third of the videos (28%) were uploaded by trusted academic organisations. Overall, 15% of PD videos were found to be somewhat useful and only 4% were assessed as providing very useful PD information; 3% of surveyed videos were misleading. The mean number of video views (regardless of video source) was not significantly different between the different video ratings (p = 0.86). Although personal videos trended towards being less useful than videos from academic organisations, this association was not statistically significant (p = 0.13). To our knowledge, this is the first study to assess the usefulness of PD information on the largest video-sharing website, YouTube™. In general, the overall quality of information presented in the videos screened was mediocre. Viewership of accurate vs. misleading information was, however, very similar. Therefore, healthcare providers should direct PD patients and their families to resources that provide reliable and accurate information.

  11. Reliability verification of vehicle speed estimate method in forensic videos.

    PubMed

    Kim, Jong-Hyuk; Oh, Won-Taek; Choi, Ji-Hun; Park, Jong-Chan

    2018-06-01

    In various types of traffic accidents, including car-to-car crashes, vehicle-pedestrian collisions, and hit-and-run accidents, driver overspeed is one of the critical issues in traffic accident analysis. Hence, analysis of vehicle speed at the moment of the accident is necessary. The present article proposes a vehicle speed estimate method (VSEM) that applies a virtual plane and a virtual reference line to a forensic video. The reliability of the VSEM was verified by comparing the results obtained by applying it to videos of a test vehicle against speeds from a global positioning system (GPS)-based Vbox. The VSEM verified by these procedures was then applied to real traffic accident examples to evaluate its usability. Copyright © 2018 Elsevier B.V. All rights reserved.
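
    The abstract does not give the VSEM equations. A minimal sketch of the underlying reference-line idea (names and numbers below are illustrative, not from the paper): if the vehicle crosses two virtual reference lines a known real-world distance apart, its average speed follows from the frame count between crossings and the frame rate:

```python
def estimate_speed_kmh(frame_enter, frame_exit, fps, distance_m):
    """Average speed (km/h) of a vehicle crossing two virtual reference
    lines a known real-world distance apart in a forensic video."""
    dt = (frame_exit - frame_enter) / fps      # elapsed time in seconds
    if dt <= 0:
        raise ValueError("exit crossing must come after entry crossing")
    return distance_m / dt * 3.6               # m/s -> km/h

# Illustrative numbers: 10 m between lines covered in 12 frames at 30 fps
speed = estimate_speed_kmh(100, 112, 30.0, 10.0)   # -> 90.0 km/h
```

    The hard part in practice, which the paper's virtual plane addresses, is establishing the real-world spacing of the lines from the camera geometry; the arithmetic above assumes that spacing is already known.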

  12. Video-Based Analyses of Motivation and Interaction in Science Classrooms

    NASA Astrophysics Data System (ADS)

    Moeller Andersen, Hanne; Nielsen, Birgitte Lund

    2013-04-01

    An analytical framework for examining students' motivation was developed and used for analyses of video excerpts from science classrooms. The framework was developed in an iterative process involving theories on motivation and video excerpts from a 'motivational event' where students worked in groups. Subsequently, the framework was used for an analysis of students' motivation in the whole class situation. A cross-case analysis was carried out illustrating characteristics of students' motivation dependent on the context. This research showed that students' motivation to learn science is stimulated by a range of different factors, with autonomy, relatedness and belonging apparently being the main sources of motivation. The teacher's combined use of questions, uptake and high level evaluation was very important for students' learning processes and motivation, especially students' self-efficacy. By coding and analysing video excerpts from science classrooms, we were able to demonstrate that the analytical framework helped us gain new insights into the effect of teachers' communication and other elements on students' motivation.

  13. Tsunami Research driven by Survivor Observations: Sumatra 2004, Tohoku 2011 and the Lituya Bay Landslide (Plinius Medal Lecture)

    NASA Astrophysics Data System (ADS)

    Fritz, Hermann M.

    2014-05-01

    The 10th anniversary of the 2004 Indian Ocean tsunami recalls the advent of tsunami video recordings by eyewitnesses. The tsunami of December 26, 2004 severely affected Banda Aceh along the North tip of Sumatra (Indonesia) at a distance of 250 km from the epicenter of the Magnitude 9.0 earthquake. The tsunami flow velocity analysis focused on two survivor videos recorded within Banda Aceh more than 3km from the open ocean. The exact locations of the tsunami eyewitness video recordings were revisited to record camera calibration ground control points. The motion of the camera during the recordings was determined. The individual video images were rectified with a direct linear transformation (DLT). Finally a cross-correlation based particle image velocimetry (PIV) analysis was applied to the rectified video images to determine instantaneous tsunami flow velocity fields. The measured overland tsunami flow velocities were within the range of 2 to 5 m/s in downtown Banda Aceh, Indonesia. The March 11, 2011, magnitude Mw 9.0 earthquake off the coast of Japan caused catastrophic damage and loss of life. Fortunately many survivors at evacuation sites recorded countless tsunami videos with unprecedented spatial and temporal coverage. Numerous tsunami reconnaissance trips were conducted in Japan. This report focuses on the surveys at selected tsunami eyewitness video recording locations along Japan's Sanriku coast and the subsequent tsunami video image analysis. Locations with high quality survivor videos were visited, eyewitnesses interviewed and detailed site topography scanned with a terrestrial laser scanner (TLS). The analysis of the tsunami videos followed the four step procedure developed for the analysis of 2004 Indian Ocean tsunami videos at Banda Aceh. Tsunami currents up to 11 m/s were measured in Kesennuma Bay making navigation impossible. 
Further, tsunami height and runup hydrographs are derived from the videos to discuss the complex effects of coastal structures on inundation and outflow velocities. Tsunamis generated by landslides and volcanic island collapses account for some of the most catastrophic events. On July 10, 1958, an Mw 8.3 earthquake along the Fairweather fault triggered a major subaerial landslide into Gilbert Inlet at the head of Lituya Bay on the south coast of Alaska. The landslide impacted the water at high speed, generating a giant tsunami and the highest wave runup in recorded history. The event was observed by eyewitnesses on board the sole surviving fishing boat, which managed to ride the tsunami. The mega-tsunami runup to an elevation of 524 m caused total forest destruction and erosion down to bedrock on a spur ridge in direct prolongation of the slide axis. A cross-section of Gilbert Inlet was rebuilt in a two-dimensional physical laboratory model. Particle image velocimetry (PIV) provided instantaneous velocity vector fields of the decisive initial phase, covering landslide impact and wave generation as well as the runup on the headland. Three-dimensional source and runup scenarios based on real-world events are physically modeled in the NEES tsunami wave basin (TWB) at Oregon State University (OSU). The measured landslide and tsunami data serve to validate and advance numerical landslide tsunami models. This lecture encompasses multi-hazard aspects and implications of recent tsunami and cyclonic events around the world, such as the November 2013 Typhoon Haiyan (Yolanda) in the Philippines.
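The velocity measurement described above reduces, at its core, to cross-correlating interrogation windows between rectified frames. A minimal sketch of that step, assuming rectified grayscale frames given as 2D lists and purely illustrative metres-per-pixel and frame-interval values (not the actual survey calibration):

```python
# Minimal sketch of the cross-correlation step of PIV. The window
# contents, pixel scale and frame interval are illustrative only.

def cross_correlate_shift(win_a, win_b, max_shift=2):
    """Find the integer (dy, dx) shift of win_b relative to win_a
    that maximizes the correlation over the overlapping region."""
    best, best_shift = None, (0, 0)
    h, w = len(win_a), len(win_a[0])
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            s = 0.0
            for y in range(h):
                for x in range(w):
                    y2, x2 = y + dy, x + dx
                    if 0 <= y2 < h and 0 <= x2 < w:
                        s += win_a[y][x] * win_b[y2][x2]
            if best is None or s > best:
                best, best_shift = s, (dy, dx)
    return best_shift

def flow_velocity(shift, metres_per_pixel, dt):
    """Convert a pixel displacement per frame into a speed in m/s."""
    dy, dx = shift
    return ((dy ** 2 + dx ** 2) ** 0.5) * metres_per_pixel / dt
```

Real PIV implementations use FFT-based correlation and sub-pixel peak fitting; the brute-force search here only illustrates the principle.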

  14. "It's Totally Okay to Be Sad, but Never Lose Hope": Content Analysis of Infertility-Related Videos on YouTube in Relation to Viewer Preferences.

    PubMed

    Kelly-Hedrick, Margot; Grunberg, Paul H; Brochu, Felicia; Zelkowitz, Phyllis

    2018-05-23

    Infertility patients frequently use the internet to find fertility-related information and support from people in similar circumstances. YouTube is increasingly used as a source of health-related information and may influence health decision making. There have been no studies examining the content of infertility-related videos on YouTube. The purpose of this study was to (1) describe the content of highly viewed videos on YouTube related to infertility and (2) identify video characteristics that relate to viewer preference. Using the search term "infertility," the 80 top-viewed YouTube videos and their viewing statistics (eg, views, likes, and comments) were collected. Videos that were non-English, unrelated to infertility, or had age restrictions were excluded. Content analysis was used to examine videos, employing a coding rubric that measured the presence or absence of video codes related to purpose, tone, and demographic and fertility characteristics (eg, sex, parity, stage of fertility treatment). A total of 59 videos, with a median of 156,103 views, met the inclusion criteria and were categorized into 35 personal videos (35/59, 59%) and 24 informational-educational videos (24/59, 41%). Personal videos did not differ significantly from informational-educational videos on number of views, dislikes, subscriptions driven, or shares. However, personal videos had significantly more likes (P<.001) and comments (P<.001) than informational-educational videos. The purposes of the videos were treatment outcomes (33/59, 56%), sharing information (30/59, 51%), emotional aspects of infertility (20/59, 34%), and advice to others (6/59, 10%). The tones of the videos were positive (26/59, 44%), neutral (25/59, 42%), and mixed (8/59, 14%); there were no videos with negative tone. No videos contained only male posters. 
Videos with a positive tone did not differ from neutral videos in number of views, dislikes, subscriptions driven, or shares; however, positive videos had significantly more likes (P<.001) and comments (P<.001) than neutral videos. A majority (21/35, 60%) of posters of personal videos shared a pregnancy announcement. YouTube is a source of both technical and personal experience-based information about infertility. However, videos that include personal experiences may elicit greater viewer engagement. Positive videos and stories of treatment success may provide hope to viewers but could also create and perpetuate unrealistic expectations about the success rates of fertility treatment. ©Margot Kelly-Hedrick, Paul H Grunberg, Felicia Brochu, Phyllis Zelkowitz. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 23.05.2018.

  15. Layer-based buffer aware rate adaptation design for SHVC video streaming

    NASA Astrophysics Data System (ADS)

    Gudumasu, Srinivas; Hamza, Ahmed; Asbun, Eduardo; He, Yong; Ye, Yan

    2016-09-01

This paper proposes a layer-based, buffer-aware rate adaptation design that avoids abrupt video quality fluctuation, reduces re-buffering latency, and improves bandwidth utilization compared to a conventional simulcast-based adaptive streaming system. The proposed design schedules DASH segment requests based on the estimated bandwidth, the dependencies among video layers, and layer buffer fullness. Scalable HEVC (SHVC) is the latest state-of-the-art video coding technique and can alleviate various issues caused by simulcast-based adaptive video streaming. With scalable coded video streams, the video is encoded once into a number of layers representing different qualities and/or resolutions: a base layer (BL) and one or more enhancement layers (EL), each incrementally enhancing the quality of the lower layers. Such a layer-based coding structure allows fine-granularity rate adaptation for video streaming applications. Two video streaming use cases are presented in this paper. The first streams HD SHVC video over a wireless network with varying available bandwidth and compares the performance of the proposed layer-based streaming approach with the conventional simulcast approach. The second streams 4K/UHD SHVC video over a hybrid access network consisting of a 5G millimeter-wave high-speed wireless link and a conventional wired or WiFi network. The simulation results verify that the proposed layer-based rate adaptation approach utilizes the bandwidth more efficiently. As a result, a more consistent viewing experience with higher-quality video content and minimal quality fluctuations can be presented to the user.
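As a hedged illustration of what buffer-aware, layer-based request scheduling can look like, consider the following sketch. The cumulative layer bitrates, the buffer threshold, and the greedy policy are invented for illustration and are not the algorithm evaluated in the paper:

```python
# Illustrative layer-selection rule for scalable (BL + EL) streaming.
# All numeric thresholds here are assumptions, not the paper's values.

def select_layers(layer_bitrates_kbps, est_bandwidth_kbps, buffer_fullness):
    """Return the number of layers (BL plus ELs) to request next.

    layer_bitrates_kbps: cumulative bitrate needed to stream layers 0..i
    buffer_fullness: fraction of the base-layer buffer filled (0..1)
    """
    # Always fetch the base layer so playback never stalls outright.
    n = 1
    # Only spend bandwidth on enhancement layers once the BL buffer is safe.
    if buffer_fullness < 0.3:
        return n
    budget = est_bandwidth_kbps * 0.9  # safety margin for estimation error
    for i in range(1, len(layer_bitrates_kbps)):
        if layer_bitrates_kbps[i] <= budget:
            n = i + 1
        else:
            break  # higher layers depend on lower ones, so stop here
    return n
```

The `break` reflects the layer dependency noted in the abstract: an enhancement layer is useless without all the layers beneath it.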

  16. Assessment of Information on Concussion Available to Adolescents on Social Media.

    PubMed

    Kollia, Betty; Basch, Corey H; Mouser, Christina; Deleon, Aurea J

    2018-01-01

Considering how many people obtain information about their health online, the aim of this study was to describe the content of the most widely viewed YouTube videos related to concussions and to test the hypothesis that consumer videos would be anecdotal, while videos from other sources would be more informational. The term "concussion" was used to search for videos with 100,000 or more views on YouTube that were posted in English or Spanish. Descriptive information about each video was recorded, as was information on whether certain content was conveyed during the video. The main outcome measures were the source of upload and the content of the videos. Consumer videos accounted for 48% of the videos, television-based videos for 50%, and internet-based videos for only 2%. None of the videos viewed fell into the professional category. Television-based videos were viewed significantly more than consumer or internet-based videos. Consumer and television-based videos were equally anecdotal. Many of the videos focused on adolescents and were related to sports injuries. The majority of the videos (70.4%) addressed causes of concussion, with 48% citing sports. Few videos discussed symptoms of concussion or prevention. The potential for widespread misinformation necessitates caution when obtaining information on concussion from a freely accessible and editable medium such as YouTube.

  17. Resource optimized TTSH-URA for multimedia stream authentication in swallowable-capsule-based wireless body sensor networks.

    PubMed

    Wang, Wei; Wang, Chunqiu; Zhao, Min

    2014-03-01

    To ease the burdens on the hospitalization capacity, an emerging swallowable-capsule technology has evolved to serve as a remote gastrointestinal (GI) disease examination technique with the aid of the wireless body sensor network (WBSN). Secure multimedia transmission in such a swallowable-capsule-based WBSN faces critical challenges including energy efficiency and content quality guarantee. In this paper, we propose a joint resource allocation and stream authentication scheme to maintain the best possible video quality while ensuring security and energy efficiency in GI-WBSNs. The contribution of this research is twofold. First, we establish a unique signature-hash (S-H) diversity approach in the authentication domain to optimize video authentication robustness and the authentication bit rate overhead over a wireless channel. Based on the full exploration of S-H authentication diversity, we propose a new two-tier signature-hash (TTSH) stream authentication scheme to improve the video quality by reducing authentication dependence overhead while protecting its integrity. Second, we propose to combine this authentication scheme with a unique S-H oriented unequal resource allocation (URA) scheme to improve the energy-distortion-authentication performance of wireless video delivery in GI-WBSN. Our analysis and simulation results demonstrate that the proposed TTSH with URA scheme achieves considerable gain in both authenticated video quality and energy efficiency.
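The signature-hash idea can be illustrated with a toy hash chain: each packet carries the hash of its successor, and only the head of the chain is signed, so one signature is amortized over the whole group. This is the generic S-H construction for intuition only, not the paper's two-tier TTSH scheme or its unequal resource allocation:

```python
# Toy signature-hash chain for stream authentication (illustrative,
# not the TTSH construction from the paper).
import hashlib

def build_auth_chain(payloads):
    """Return (anchor_hash, packets). Each packet is (payload, next_hash);
    the anchor hash, covering the first packet, is what would be signed."""
    aug = []
    next_hash = b""
    for payload in reversed(payloads):
        aug.append((payload, next_hash))
        next_hash = hashlib.sha256(payload + next_hash).digest()
    aug.reverse()
    return next_hash, aug

def verify_chain(anchor_hash, packets):
    """Walk the chain forward, checking each packet against the hash
    carried (and already verified) in its predecessor."""
    expected = anchor_hash
    for payload, nh in packets:
        if hashlib.sha256(payload + nh).digest() != expected:
            return False
        expected = nh
    return True
```

In a real deployment the anchor hash would be covered by a digital signature, and loss resilience over a wireless channel would require redundant hash links rather than a single chain.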

  18. Interaction Support for Information Finding and Comparative Analysis in Online Video

    ERIC Educational Resources Information Center

    Xia, Jinyue

    2017-01-01

    Current online video interaction is typically designed with a focus on straightforward distribution and passive consumption of individual videos. This "click play, sit back and watch" context is typical of videos for entertainment. However, there are many task scenarios that require active engagement and analysis of video content as a…

  19. Real-time video analysis for retail stores

    NASA Astrophysics Data System (ADS)

    Hassan, Ehtesham; Maurya, Avinash K.

    2015-03-01

With the advancement of video processing technologies, we can capture subtle human responses in a retail store environment that play a decisive role in store management. In this paper, we present a novel surveillance-video-based analytic system for retail stores targeting localized and global traffic estimates. Developing an intelligent system for human traffic estimation in real life poses a challenging problem because of the variation and noise involved. In this direction, we begin with a novel human tracking system based on an intelligent combination of motion-based and image-level object detection. We demonstrate an initial evaluation of this approach on an available standard dataset, yielding promising results. Exact traffic estimation in a retail store requires correct separation of customers from service providers. We present a role-based human classification framework using a Gaussian mixture model for this task, with a novel feature descriptor named the graded colour histogram defined for object representation. Using our role-based human classification and tracking system, we define a novel, computationally efficient framework for generating two types of analytics: region-specific people count and dwell-time estimation. The system has been extensively evaluated and tested on four hours of real-life video captured from a retail store.
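The two analytics named above (region-specific people count and dwell time) can be sketched from per-frame tracker output; the track format, rectangular region, and frame rate below are assumptions for illustration, not the paper's actual data structures:

```python
# Illustrative region analytics from tracker output. Track format:
# {person_id: [(frame, x, y), ...]}; the region is an assumed
# axis-aligned rectangle in pixel coordinates.

def region_analytics(tracks, region, fps):
    """Return (count of distinct people seen in the region,
    {person_id: dwell time in seconds})."""
    x0, y0, x1, y1 = region
    dwell = {}
    for pid, points in tracks.items():
        # Count the frames this person spends inside the region.
        inside = sum(1 for _, x, y in points
                     if x0 <= x <= x1 and y0 <= y <= y1)
        if inside:
            dwell[pid] = inside / fps
    return len(dwell), dwell
```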

  20. An effective and robust method for tracking multiple fish in video image based on fish head detection.

    PubMed

    Qian, Zhi-Ming; Wang, Shuo Hong; Cheng, Xi En; Chen, Yan Qiu

    2016-06-23

    Fish tracking is an important step for video based analysis of fish behavior. Due to severe body deformation and mutual occlusion of multiple swimming fish, accurate and robust fish tracking from video image sequence is a highly challenging problem. The current tracking methods based on motion information are not accurate and robust enough to track the waving body and handle occlusion. In order to better overcome these problems, we propose a multiple fish tracking method based on fish head detection. The shape and gray scale characteristics of the fish image are employed to locate the fish head position. For each detected fish head, we utilize the gray distribution of the head region to estimate the fish head direction. Both the position and direction information from fish detection are then combined to build a cost function of fish swimming. Based on the cost function, global optimization method can be applied to associate the target between consecutive frames. Results show that our method can accurately detect the position and direction information of fish head, and has a good tracking performance for dozens of fish. The proposed method can successfully obtain the motion trajectories for dozens of fish so as to provide more precise data to accommodate systematic analysis of fish behavior.
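The association step described above can be sketched as follows: combine positional distance and heading difference into a cost, then assign detections between consecutive frames. The weights are invented, each detection is assumed to be an (x, y, heading-in-radians) tuple, and a greedy matcher stands in for the global optimization used in the paper:

```python
# Illustrative frame-to-frame association of fish-head detections.
import math

def match_cost(prev, curr, w_pos=1.0, w_dir=5.0):
    """Cost combining positional distance and heading change."""
    dx, dy = curr[0] - prev[0], curr[1] - prev[1]
    # Smallest absolute angle difference between the two headings.
    dtheta = abs((curr[2] - prev[2] + math.pi) % (2 * math.pi) - math.pi)
    return w_pos * math.hypot(dx, dy) + w_dir * dtheta

def associate(prev_dets, curr_dets):
    """Greedy one-to-one assignment by increasing cost (a stand-in
    for a global assignment solver)."""
    pairs = sorted(
        (match_cost(p, c), i, j)
        for i, p in enumerate(prev_dets)
        for j, c in enumerate(curr_dets)
    )
    used_p, used_c, out = set(), set(), {}
    for cost, i, j in pairs:
        if i not in used_p and j not in used_c:
            out[i] = j
            used_p.add(i)
            used_c.add(j)
    return out
```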

  1. Network Analysis of an Emergent Massively Collaborative Creation on Video Sharing Website

    NASA Astrophysics Data System (ADS)

    Hamasaki, Masahiro; Takeda, Hideaki; Nishimura, Takuichi

Web technology enables numerous people to collaborate in creation; we designate this massively collaborative creation via the Web. As an example, we examine video development on Nico Nico Douga, a video sharing website popular in Japan. We specifically examine videos on Hatsune Miku, a singing-synthesizer application that has inspired not only song creation but also songwriting, illustration, and video editing. Creators interact to produce new content through their social network. In this paper, we analyze the process of developing thousands of videos based on creators' social networks and investigate the relationships between creation activity and social networks. The social network reveals interesting features: creators generate large, sparse social networks containing some centralized communities, and the members of such centralized communities share special tags. Different categories of creators play different roles in evolving the network; for example, songwriters gather more links than other categories, implying that they are triggers of network evolution.

  2. Automatic movie skimming with general tempo analysis

    NASA Astrophysics Data System (ADS)

    Lee, Shih-Hung; Yeh, Chia-Hung; Kuo, C. C. J.

    2003-11-01

In this research, story units are extracted by general tempo analysis, covering the tempos of both audio and visual information. Although many schemes have been proposed to successfully segment video data into shots using basic low-level features, how to group shots into meaningful units called story units remains a challenging problem. By focusing on a certain type of video, such as sports or news, we can explore models with specific application domain knowledge. For movie content, many heuristic rules based on audiovisual clues have been proposed with limited success. We propose a method to extract story units using general tempo analysis. Experimental results are given to demonstrate the feasibility and efficiency of the proposed technique.

  3. Video-tracker trajectory analysis: who meets whom, when and where

    NASA Astrophysics Data System (ADS)

    Jäger, U.; Willersinn, D.

    2010-04-01

Unveiling unusual or hostile events by observing manifold moving persons in a crowd is a challenging task for human operators, especially when sitting in front of monitor walls for hours. Typically, hostile events are rare, so due to tiredness and negligence the operator may miss important events. In such situations, an automatic alarming system can support the human operator. The system incorporates a processing chain consisting of (1) people tracking, (2) event detection, (3) data retrieval, and (4) display of the relevant video sequence overlaid with highlighted regions of interest. In this paper we focus on the event detection stage of this processing chain. In our case, the selected event of interest is the encounter of people. Although based on a rather simple trajectory analysis, this kind of event has great practical importance because it paves the way to answering the question "who meets whom, when and where". This, in turn, forms the basis for detecting potential situations where, e.g., money, weapons, or drugs are handed over from one person to another in crowded environments like railway stations, airports, or busy streets and squares. The input to the trajectory analysis comes from a multi-object video-based tracking system developed at IOSB which is able to track multiple individuals within a crowd in real time [1]. From this we calculate the inter-distances between all persons on a frame-to-frame basis. We use a sequence of simple rules based on the individuals' kinematics to detect the event mentioned above and output the frame number, the persons' IDs from the tracker, and the pixel coordinates of the meeting position. Using this information, a data retrieval system can extract the corresponding part of the recorded video image sequence and replay the selected clip with a highlighted region of interest to attract the operator's attention for further visual inspection.
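The inter-distance rule described above (two people staying within a small distance for several consecutive frames) can be sketched as follows, with the threshold, minimum duration, and track format chosen purely for illustration:

```python
# Illustrative encounter detection from per-frame trajectories.
# tracks: {person_id: {frame: (x, y)}}; thresholds are assumptions.
import math

def detect_encounters(tracks, dist_thresh=1.5, min_frames=3):
    """Return (pid_a, pid_b, start_frame) for pairs that stay within
    dist_thresh for at least min_frames frames in a row."""
    pids = sorted(tracks)
    events = []
    for a_i in range(len(pids)):
        for b_i in range(a_i + 1, len(pids)):
            a, b = pids[a_i], pids[b_i]
            common = sorted(set(tracks[a]) & set(tracks[b]))
            run_start, run_len = None, 0
            for f in common:
                ax, ay = tracks[a][f]
                bx, by = tracks[b][f]
                if math.hypot(ax - bx, ay - by) <= dist_thresh:
                    if run_start is None:
                        run_start = f
                    run_len += 1
                    if run_len == min_frames:  # report each run once
                        events.append((a, b, run_start))
                else:
                    run_start, run_len = None, 0
    return events
```

A production system would also handle gaps from missed detections and use the individuals' kinematics, as the paper's rule set does.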

  4. Parallel Key Frame Extraction for Surveillance Video Service in a Smart City.

    PubMed

    Zheng, Ran; Yao, Chuanwei; Jin, Hai; Zhu, Lei; Zhang, Qin; Deng, Wei

    2015-01-01

Surveillance video service (SVS) is one of the most important services provided in a smart city, and its effective utilization depends on efficient surveillance video analysis techniques. Key frame extraction is a simple yet effective technique to achieve this goal. In surveillance video applications, key frames are typically used to summarize important video content, so it is essential to extract them accurately and efficiently. A novel GPU-based (graphics processing unit) approach is proposed to extract key frames from traffic surveillance videos with high efficiency and accuracy. For the determination of key frames, motion is a particularly salient feature in presenting actions or events, especially in surveillance videos. The motion feature is extracted on the GPU to reduce running time and is smoothed to reduce noise; the frames at local maxima of the motion information are selected as the final key frames. The experimental results show that this approach extracts key frames more accurately and efficiently than several other methods.
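The selection rule (smooth the motion signal, then keep its local maxima) can be sketched in a few lines; the moving-average window and the per-frame motion magnitudes are illustrative assumptions:

```python
# Illustrative key-frame selection from a per-frame motion signal.

def key_frames(motion, window=3):
    """motion: per-frame motion magnitude. Smooth with a moving
    average, then select interior frames that are local maxima."""
    n = len(motion)
    half = window // 2
    smooth = [
        sum(motion[max(0, i - half):min(n, i + half + 1)])
        / (min(n, i + half + 1) - max(0, i - half))
        for i in range(n)
    ]
    return [i for i in range(1, n - 1)
            if smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1]]
```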

  5. Analysis of swimming performance: perceptions and practices of US-based swimming coaches.

    PubMed

    Mooney, Robert; Corley, Gavin; Godfrey, Alan; Osborough, Conor; Newell, John; Quinlan, Leo Richard; ÓLaighin, Gearóid

    2016-01-01

    In elite swimming, a broad range of methods are used to assess performance, inform coaching practices and monitor athletic progression. The aim of this paper was to examine the performance analysis practices of swimming coaches and to explore the reasons behind the decisions that coaches take when analysing performance. Survey data were analysed from 298 Level 3 competitive swimming coaches (245 male, 53 female) based in the United States. Results were compiled to provide a generalised picture of practices and perceptions and to examine key emerging themes. It was found that a disparity exists between the importance swim coaches place on biomechanical analysis of swimming performance and the types of analyses that are actually conducted. Video-based methods are most frequently employed, with over 70% of coaches using these methods at least monthly, with analyses being mainly qualitative in nature rather than quantitative. Barriers to the more widespread use of quantitative biomechanical analysis in elite swimming environments were explored. Constraints include time, cost and availability of resources, but other factors such as sources of information on swimming performance and analysis and control over service provision are also discussed, with particular emphasis on video-based methods and emerging sensor-based technologies.

  6. Innovative Solution to Video Enhancement

    NASA Technical Reports Server (NTRS)

    2001-01-01

Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  7. A microcomputer interface for a digital audio processor-based data recording system.

    PubMed

    Croxton, T L; Stump, S J; Armstrong, W M

    1987-10-01

    An inexpensive interface is described that performs direct transfer of digitized data from the digital audio processor and video cassette recorder based data acquisition system designed by Bezanilla (1985, Biophys. J., 47:437-441) to an IBM PC/XT microcomputer. The FORTRAN callable software that drives this interface is capable of controlling the video cassette recorder and starting data collection immediately after recognition of a segment of previously collected data. This permits piecewise analysis of long intervals of data that would otherwise exceed the memory capability of the microcomputer.

  8. A microcomputer interface for a digital audio processor-based data recording system.

    PubMed Central

    Croxton, T L; Stump, S J; Armstrong, W M

    1987-01-01

    An inexpensive interface is described that performs direct transfer of digitized data from the digital audio processor and video cassette recorder based data acquisition system designed by Bezanilla (1985, Biophys. J., 47:437-441) to an IBM PC/XT microcomputer. The FORTRAN callable software that drives this interface is capable of controlling the video cassette recorder and starting data collection immediately after recognition of a segment of previously collected data. This permits piecewise analysis of long intervals of data that would otherwise exceed the memory capability of the microcomputer. PMID:3676444

  9. Joint modality fusion and temporal context exploitation for semantic video analysis

    NASA Astrophysics Data System (ADS)

    Papadopoulos, Georgios Th; Mezaris, Vasileios; Kompatsiaris, Ioannis; Strintzis, Michael G.

    2011-12-01

    In this paper, a multi-modal context-aware approach to semantic video analysis is presented. Overall, the examined video sequence is initially segmented into shots and for every resulting shot appropriate color, motion and audio features are extracted. Then, Hidden Markov Models (HMMs) are employed for performing an initial association of each shot with the semantic classes that are of interest separately for each modality. Subsequently, a graphical modeling-based approach is proposed for jointly performing modality fusion and temporal context exploitation. Novelties of this work include the combined use of contextual information and multi-modal fusion, and the development of a new representation for providing motion distribution information to HMMs. Specifically, an integrated Bayesian Network is introduced for simultaneously performing information fusion of the individual modality analysis results and exploitation of temporal context, contrary to the usual practice of performing each task separately. Contextual information is in the form of temporal relations among the supported classes. Additionally, a new computationally efficient method for providing motion energy distribution-related information to HMMs, which supports the incorporation of motion characteristics from previous frames to the currently examined one, is presented. The final outcome of this overall video analysis framework is the association of a semantic class with every shot. Experimental results as well as comparative evaluation from the application of the proposed approach to four datasets belonging to the domains of tennis, news and volleyball broadcast video are presented.
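A much-simplified stand-in for the fusion step (not the paper's integrated Bayesian Network) combines per-modality class scores with a temporal transition prior marginalized over the previous shot's class distribution; all class names and probabilities below are invented:

```python
# Illustrative fusion of per-modality class scores with temporal context.

def fuse(modality_scores, prev_probs, transition):
    """modality_scores: list of {class: likelihood}, one per modality.
    prev_probs: {class: probability} for the previous shot.
    transition: {(prev_class, class): probability}."""
    classes = set(prev_probs)
    fused = {}
    for c in classes:
        # Temporal context: marginalize the transition over the
        # previous shot's class distribution.
        prior = sum(prev_probs[p] * transition[(p, c)] for p in classes)
        like = 1.0
        for scores in modality_scores:  # naive independence assumption
            like *= scores[c]
        fused[c] = prior * like
    z = sum(fused.values())
    return {c: v / z for c, v in fused.items()}
```

The graphical model in the paper avoids the naive independence assumption made here by learning the dependencies between modalities jointly.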

  10. Knowledge-based understanding of aerial surveillance video

    NASA Astrophysics Data System (ADS)

    Cheng, Hui; Butler, Darren

    2006-05-01

    Aerial surveillance has long been used by the military to locate, monitor and track the enemy. Recently, its scope has expanded to include law enforcement activities, disaster management and commercial applications. With the ever-growing amount of aerial surveillance video acquired daily, there is an urgent need for extracting actionable intelligence in a timely manner. Furthermore, to support high-level video understanding, this analysis needs to go beyond current approaches and consider the relationships, motivations and intentions of the objects in the scene. In this paper we propose a system for interpreting aerial surveillance videos that automatically generates a succinct but meaningful description of the observed regions, objects and events. For a given video, the semantics of important regions and objects, and the relationships between them, are summarised into a semantic concept graph. From this, a textual description is derived that provides new search and indexing options for aerial video and enables the fusion of aerial video with other information modalities, such as human intelligence, reports and signal intelligence. Using a Mixture-of-Experts video segmentation algorithm an aerial video is first decomposed into regions and objects with predefined semantic meanings. The objects are then tracked and coerced into a semantic concept graph and the graph is summarized spatially, temporally and semantically using ontology guided sub-graph matching and re-writing. The system exploits domain specific knowledge and uses a reasoning engine to verify and correct the classes, identities and semantic relationships between the objects. This approach is advantageous because misclassifications lead to knowledge contradictions and hence they can be easily detected and intelligently corrected. In addition, the graph representation highlights events and anomalies that a low-level analysis would overlook.

  11. Common and Innovative Visuals: A sparsity modeling framework for video.

    PubMed

    Abdolhosseini Moghadam, Abdolreza; Kumar, Mrityunjay; Radha, Hayder

    2014-05-02

    Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (i) a common frame, which describes the visual information common to all the frames in the scene/segment, and (ii) a set of innovative frames, which depicts the dynamic behaviour of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework by CIV (Common and Innovative Visuals). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting) and scene change detection are presented to demonstrate the efficiency and the performance of the proposed model.
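As a simplified illustration of the common/innovative split (using a pixel-wise median and thresholded residuals rather than the compressed-sensing estimator that CIV actually solves):

```python
# Illustrative common/innovative decomposition of a video segment.
from statistics import median

def decompose(frames, thresh=10):
    """frames: list of equally sized 2D lists of pixel intensities.
    Returns the common frame and, per frame, a sparse residual that
    keeps only deviations larger than thresh."""
    h, w = len(frames[0]), len(frames[0][0])
    common = [[median(f[y][x] for f in frames) for x in range(w)]
              for y in range(h)]
    innovations = [
        [[f[y][x] - common[y][x]
          if abs(f[y][x] - common[y][x]) > thresh else 0
          for x in range(w)] for y in range(h)]
        for f in frames
    ]
    return common, innovations
```

The median captures what is "common" to the segment only when the background dominates; CIV instead estimates both components jointly as the sparsest consistent solution.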

  12. Video registration of trauma team performance in the emergency department: the results of a 2-year analysis in a Level 1 trauma center.

    PubMed

    Lubbert, Pieter H W; Kaasschieter, Edgar G; Hoorntje, Lidewij E; Leenen, Loek P H

    2009-12-01

Trauma teams responsible for the first response to patients with multiple injuries upon arrival in a hospital consist of medical specialists or resident physicians. We hypothesized that 24-hour video registration in the trauma room would allow for precise evaluation of team functioning and of deviations from Advanced Trauma Life Support (ATLS) protocols. We analyzed all video registrations of trauma patients who visited the emergency room of a Level I trauma center in the Netherlands between September 1, 2000, and September 1, 2002. Analysis was performed with a score list based on ATLS protocols. From a total of 1,256 trauma room presentations, we found 387 video registrations suitable for analysis. The majority of patients had an injury severity score lower than 17 (264 patients), whereas 123 patients were classified as having multiple injuries (injury severity score ≥17). Errors in team organization (omission of the prehospital report, no evident leadership, unorganized resuscitation, not working according to protocol, and no continued supervision of the patient) led to significantly more deviations in treatment than when team organization was uncomplicated. Video registration of diagnostic and therapeutic procedures by a multidisciplinary trauma team facilitates an accurate analysis of possible deviations from protocol. In addition to identifying technical errors, the role of the team leader can clearly be analyzed and related to team actions. Registration strongly depends on the availability of video tapes, timely started registration, and hardware functioning. The results from this study were used to develop a training program for trauma teams in our hospital that specifically focuses on the team leader's functioning.

  13. Videos for Science Communication and Nature Interpretation: The TIB|AV-Portal as Resource.

    NASA Astrophysics Data System (ADS)

    Marín Arraiza, Paloma; Plank, Margret; Löwe, Peter

    2016-04-01

    Scientific audiovisual media such as videos of research, interactive displays or computer animations have become an important part of scientific communication and education. Dynamic phenomena can be described better by audiovisual media than by words and pictures. For this reason, scientific videos help us to understand and discuss environmental phenomena more efficiently. Moreover, the creation of scientific videos is easier than ever, thanks to mobile devices and open-source editing software. Video clips, webinars and even the interactive part of a PICO are formats of scientific audiovisual media used in the Geosciences. This type of media translates location-referenced Science Communication, such as environmental interpretation, into computer-based Science Communication. A new form of Science Communication is video abstracting. A video abstract is a three- to five-minute video statement that provides background information about a research paper. It also gives authors the opportunity to present their research activities to a wider audience. Since this kind of media has become an important part of scientific communication, there is a need for reliable infrastructures capable of managing the digital assets researchers generate. Using the use case of video abstracts as a reference, this paper gives an overview of the activities of the German National Library of Science and Technology (TIB) regarding publishing and linking audiovisual media in a scientifically sound way. The TIB, in cooperation with the Hasso Plattner Institute (HPI), developed a web-based portal (av.tib.eu) that optimises access to scientific videos in the fields of science and technology. Videos from these realms can easily be uploaded onto the TIB|AV Portal. Within a short period of time the videos are assigned a digital object identifier (DOI). This enables them to be referenced, cited, and linked (e.g. to the relevant article or further supplementary materials). By using media fragment identifiers, not only the whole video but also individual parts of it can be cited. Users are thus also likely to find high-quality related content (for instance, a video abstract and the corresponding article, or an expedition documentary and its field notebook). Based on automatic analysis of speech, images and text within the videos, a large amount of metadata associated with the segments of the video is generated automatically. These metadata enhance the searchability of the video and make it easier to retrieve and interlink meaningful parts of it. This new and reliable library-driven infrastructure allows all types of data to be discoverable, accessible, citable, freely reusable, and interlinked, thereby simplifying Science Communication.
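
    The temporal clause of the W3C Media Fragments URI syntax (#t=start,end) is what makes citing individual parts of a video possible. A minimal sketch of how such a citation could be built; the DOI below is hypothetical, not a real portal entry:

```python
def media_fragment_url(doi_url, start_s, end_s):
    """Append a W3C Media Fragments temporal clause (#t=start,end) to a
    video's DOI URL so that one segment can be cited directly."""
    return f"{doi_url}#t={start_s},{end_s}"

# Citing seconds 90-120 of a video registered under a hypothetical DOI:
url = media_fragment_url("https://doi.org/10.5446/12345", 90, 120)
print(url)  # https://doi.org/10.5446/12345#t=90,120
```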

  14. Video segmentation for post-production

    NASA Astrophysics Data System (ADS)

    Wills, Ciaran

    2001-12-01

    Specialist post-production is an industry that has much to gain from the application of content-based video analysis techniques. However, the types of material handled in specialist post-production, such as television commercials, pop music videos and special effects, are quite different in nature from the typical broadcast material that many video analysis techniques are designed to work with: shots are short and highly dynamic, and the transitions are often novel or ambiguous. We address the problem of scene change detection and develop a new algorithm that tackles some of the common aspects of post-production material that cause difficulties for earlier algorithms, such as illumination changes and jump cuts. Operating in the compressed domain on Motion JPEG video, our algorithm detects cuts and fades by analyzing each JPEG macroblock in the context of its temporal and spatial neighbors. By analyzing the DCT coefficients directly, we can extract the mean color of a block and an approximate detail level, and can also perform an approximate cross-correlation between two blocks. The algorithm is part of a set of tools being developed to work with an automated asset management system designed specifically for use in post-production facilities.
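
    The reason the mean color of a block can be read without decoding is that the DC term of an orthonormal 8x8 DCT is proportional to the block mean. A minimal numpy sketch of this relationship (an illustration of the principle, not the authors' implementation):

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix C, so that X = C @ B @ C.T
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def block_mean_from_dct(block):
    # For an 8x8 orthonormal 2-D DCT, the DC coefficient X[0,0]
    # equals sum(block) / 8, so the block mean is X[0,0] / 8.
    C = dct_matrix(8)
    X = C @ block @ C.T
    return X[0, 0] / 8.0

block = np.full((8, 8), 130.0)     # a flat gray block
print(block_mean_from_dct(block))  # -> 130.0 (up to floating-point error)
```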

  15. Links between Characteristics of Collaborative Peer Video Analysis Events and Literacy Teachers' Outcomes

    ERIC Educational Resources Information Center

    Arya, Poonam; Christ, Tanya; Chiu, Ming

    2015-01-01

    This study examined how characteristics of Collaborative Peer Video Analysis (CPVA) events are related to teachers' pedagogical outcomes. Data included 39 transcribed literacy video events, in which 14 in-service teachers engaged in discussions of their video clips. Emergent coding and Statistical Discourse Analysis were used to analyze the data.…

  16. A randomized controlled study to evaluate the role of video-based coaching in training laparoscopic skills.

    PubMed

    Singh, Pritam; Aggarwal, Rajesh; Tahir, Muaaz; Pucher, Philip H; Darzi, Ara

    2015-05-01

    This study evaluates whether video-based coaching can enhance laparoscopic surgical skills performance. Many professions utilize coaching to improve performance, and the sports industry employs video analysis to maximize improvement from every performance. Laparoscopic novices were baseline tested and then trained on a validated virtual reality (VR) laparoscopic cholecystectomy (LC) curriculum. After reaching competence, subjects were randomized in a 1:1 ratio and each performed 5 VRLCs. After each LC, intervention group subjects received video-based coaching by a surgeon, utilizing an adaptation of the GROW (Goals, Reality, Options, Wrap-up) coaching model. Control subjects viewed online surgical lectures. All subjects then performed 2 porcine LCs. Performance was assessed by blinded video review using validated global rating scales. Twenty subjects were recruited. No significant differences were observed between groups at baseline or in VRLC1. For each subsequent repetition, intervention subjects significantly outperformed controls on all global rating scales. Intervention subjects outperformed controls in porcine LC1 [Global Operative Assessment of Laparoscopic Skills: (20.5 vs 15.5; P = 0.011), Objective Structured Assessment of Technical Skills: (21.5 vs 14.5; P = 0.001), and Operative Performance Rating System: (26 vs 19.5; P = 0.001)] and porcine LC2 [Global Operative Assessment of Laparoscopic Skills: (28 vs 17.5; P = 0.005), Objective Structured Assessment of Technical Skills: (30 vs 16.5; P < 0.001), and Operative Performance Rating System: (36 vs 21; P = 0.004)]. Intervention subjects took significantly longer than controls in porcine LC1 (2920 vs 2004 seconds; P = 0.009) and LC2 (2297 vs 1683; P = 0.003). Despite equivalent exposure to practical laparoscopic skills training, video-based coaching enhanced the quality of laparoscopic surgical performance on both VR and porcine LCs, albeit at the expense of increased time. 
Video-based coaching is a feasible method of maximizing performance enhancement from every clinical exposure.

  17. Automatic Online Lecture Highlighting Based on Multimedia Analysis

    ERIC Educational Resources Information Center

    Che, Xiaoyin; Yang, Haojin; Meinel, Christoph

    2018-01-01

    Textbook highlighting is widely considered to be beneficial for students. In this paper, we propose a comprehensive solution to highlight the online lecture videos in both sentence- and segment-level, just as is done with paper books. The solution is based on automatic analysis of multimedia lecture materials, such as speeches, transcripts, and…

  18. Phytoplankton Imaging and Analysis System: Instrumentation for Field and Laboratory Acquisition, Analysis and WWW/LAN-Based Sharing of Marine Phytoplankton Data (DURIP)

    DTIC Science & Technology

    2000-09-30

    networks (LAN), (3) quantifying size, shape, and other parameters of plankton cells and colonies via image analysis and image reconstruction, and (4) creating educational materials (e.g. lectures, videos etc.).

  19. Using Video-Based Modeling to Promote Acquisition of Fundamental Motor Skills

    ERIC Educational Resources Information Center

    Obrusnikova, Iva; Rattigan, Peter J.

    2016-01-01

    Video-based modeling is becoming increasingly popular for teaching fundamental motor skills to children in physical education. Two frequently used video-based instructional strategies that incorporate modeling are video prompting (VP) and video modeling (VM). Both strategies have been used across multiple disciplines and populations to teach a…

  20. YouTube as a potential training method for laparoscopic cholecystectomy.

    PubMed

    Lee, Jun Suh; Seo, Ho Seok; Hong, Tae Ho

    2015-08-01

    The purpose of this study was to analyze the educational quality of laparoscopic cholecystectomy (LC) videos accessible on YouTube, one of the most important sources of internet-based medical information. The keyword 'laparoscopic cholecystectomy' was used to search YouTube, and the first 100 videos were analyzed. Among them, 27 videos were excluded and 73 were included in the study. An arbitrary scoring system for video quality, devised from existing LC guidelines, was used to evaluate the quality of the videos. Video demographics were analyzed by video quality and source, and correlation analysis was performed. When analyzed by video quality, 11 (15.1%) were evaluated as 'good', 40 (54.8%) as 'moderate', and 22 (30.1%) as 'poor', with no differences in length, views per day, or number of likes, dislikes, and comments. When analyzed by source, 27 (37.0%) were uploaded by primary centers, 20 (27.4%) by secondary centers, 15 (20.5%) by tertiary centers, 5 (6.8%) by academic institutions, and 6 (8.2%) by commercial institutions. The mean score of the tertiary center group (6.0 ± 2.0) was significantly higher than that of the secondary center group (3.9 ± 1.4, P = 0.001). The video score had no correlation with views per day or number of likes. Many LC videos of varying quality are accessible on YouTube. Videos uploaded by tertiary centers showed the highest educational value, and this discrepancy in quality was not recognized by viewers. More videos of higher quality need to be uploaded, and an active filtering process is necessary.

  1. Experience of parents of children with autism on YouTube: are there educationally useful videos?

    PubMed

    Azer, Samy A; Bokhari, Raghad A; AlSaleh, Ghadah S; Alabdulaaly, May M; Ateeq, Khawlah I; Guerrero, Anthony P S; Azer, Sarah

    2018-09-01

    The aims of this study were to determine the following: first, are there educationally useful videos of parents of children with autism sharing their experiences? Second, do any of the data related to the videos help in identifying useful ones? And third, what do posted comments tell us? YouTube was searched for videos of parents sharing their experiences. The following parameters were collected: title, creator, URL, duration, number of viewers, likes, dislikes, comments, days on YouTube, and country. Based on agreed-upon criteria, videos were divided independently into educationally useful and non-useful categories. A critical thematic analysis of comments was conducted. A total of 180 videos were identified, of which 106 (59%) provided useful information, scoring 15.3 ± 0.7 (mean ± SD); 74 (41%) were determined to be not educationally useful, scoring 8.6 ± 2.1. The difference in scores was significant (p < 0.001), but there were no significant differences between the useful and non-useful groups in terms of video parameters, and no correlation was found between scores and any of the videos' parameters. In conclusion, there are videos that can be used as educational resources, but the videos' parameters did not differentiate useful from non-useful ones. Useful videos were mostly created by professional societies and by parents. The study reflects the emerging role of YouTube in sharing experiences.

  2. A habituation based approach for detection of visual changes in surveillance camera

    NASA Astrophysics Data System (ADS)

    Sha'abani, M. N. A. H.; Adan, N. F.; Sabani, M. S. M.; Abdullah, F.; Nadira, J. H. S.; Yasin, M. S. M.

    2017-09-01

    This paper investigates a habituation-based approach to detecting visual changes using video surveillance systems in a passive environment. Various techniques have been introduced for dynamic environments, such as motion detection, object classification and behaviour analysis. However, in a passive environment, most of the scenes recorded by the surveillance system are normal, so running a complex analysis all the time is computationally expensive, especially at high video resolutions. Thus, a mechanism of attention is required, whereby the system only responds to abnormal events. This paper proposes a novelty detection mechanism for detecting visual changes and a habituation-based approach to measuring the level of novelty. The objective of the paper is to investigate the feasibility of the habituation-based approach in detecting visual changes. Experimental results show that the approach is able to accurately detect the presence of novelty as deviations from the learned knowledge.
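
    The abstract does not specify its habituation model; a common choice in novelty-detection work is Stanley's habituation equation, in which synaptic efficacy decays under repeated stimulation and recovers in its absence. A minimal sketch under that assumption (parameter values are illustrative):

```python
def habituate(stimuli, y=1.0, y0=1.0, tau=5.0, alpha=1.0, dt=1.0):
    """Stanley-style habituation (an illustrative model, not the paper's):
        tau * dy/dt = alpha * (y0 - y) - S(t)
    Efficacy y drops under repeated stimulation S(t) and recovers toward
    y0 without it; a low y for the current scene means 'familiar, not novel'."""
    trace = []
    for s in stimuli:
        y += dt * (alpha * (y0 - y) - s) / tau
        trace.append(y)
    return trace

familiar = habituate([1.0] * 20)                  # repeated stimulus: y decays
recovery = habituate([0.0] * 20, y=familiar[-1])  # stimulus removed: y recovers
```

    With the parameters above, each repeated presentation shrinks the response geometrically, so familiar scenes quickly stop drawing attention while a novel stimulus (high y) would trigger the more expensive analysis.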

  3. Applying Aspects of the Expert Performance Approach to Better Understand the Structure of Skill and Mechanisms of Skill Acquisition in Video Games.

    PubMed

    Boot, Walter R; Sumner, Anna; Towne, Tyler J; Rodriguez, Paola; Anders Ericsson, K

    2017-04-01

    Video games are ideal platforms for the study of skill acquisition for a variety of reasons. However, our understanding of the development of skill and of the cognitive representations that support skilled performance can be limited by a focus on game scores. We present an alternative approach to the study of skill acquisition in video games based on the tools of the Expert Performance Approach. Our investigation was motivated by a detailed analysis of the behaviors responsible for the superior performance of one of the highest scoring players of the video game Space Fortress (Towne, Boot, & Ericsson, ). This analysis revealed how certain behaviors contributed to his exceptional performance. In this study, we recruited a participant for a similar training regimen, but collected concurrent and retrospective verbal protocol data throughout training. Protocol analysis revealed insights into strategies, errors, mental representations, and shifting game priorities. We argue that these insights into the developing representations that guided skilled performance could not easily have been derived without the tools of the Expert Performance Approach. We propose that the described approach could be applied to understand performance and skill acquisition in many different video games (and other short- to medium-term skill acquisition paradigms) and could help reveal mechanisms of transfer from gameplay to other measures of laboratory and real-world performance. Copyright © 2016 Cognitive Science Society, Inc.

  4. Identification and annotation of erotic film based on content analysis

    NASA Astrophysics Data System (ADS)

    Wang, Donghui; Zhu, Miaoliang; Yuan, Xin; Qian, Hui

    2005-02-01

    The paper presents a new method for identifying and annotating erotic films based on content analysis. First, the film is decomposed into a video and an audio stream. Then, the video stream is segmented into shots, and key frames are extracted from each shot. We filter the shots that may include erotic content by finding nude human bodies in key frames. A Gaussian model in YCbCr color space for detecting skin regions is presented. An external polygon covering the skin regions is used as an approximation of the human body. Last, we grade the degree of nudity by calculating the ratio of skin area to whole-body area with weighted parameters. The experimental results show the effectiveness of our method.
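
    A single-Gaussian skin classifier in the Cb-Cr plane can be sketched as below. The mean and covariance are illustrative placeholders, not the paper's fitted values:

```python
import numpy as np

# Illustrative (not the paper's) Gaussian skin model in the Cb-Cr plane:
MU = np.array([110.0, 150.0])           # assumed mean of skin (Cb, Cr)
COV = np.array([[80.0, 20.0],
                [20.0, 60.0]])          # assumed covariance
COV_INV = np.linalg.inv(COV)

def skin_likelihood(cb, cr):
    """Unnormalized Gaussian likelihood that a pixel's (Cb, Cr) is skin."""
    d = np.array([cb, cr]) - MU
    return float(np.exp(-0.5 * d @ COV_INV @ d))

def is_skin(cb, cr, thresh=0.1):
    return skin_likelihood(cb, cr) >= thresh

print(is_skin(112, 148))  # near the skin cluster -> True
print(is_skin(60, 90))    # far from it -> False
```

    Thresholding this likelihood per pixel yields the skin mask from which the covering polygon and the skin-to-body area ratio would then be computed.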

  5. Syntax-directed content analysis of videotext: application to a map detection recognition system

    NASA Astrophysics Data System (ADS)

    Aradhye, Hrishikesh; Herson, James A.; Myers, Gregory

    2003-01-01

    Video is an increasingly important and ever-growing source of information to the intelligence and homeland defense analyst. A capability to automatically identify the contents of video imagery would enable the analyst to index relevant foreign and domestic news videos in a convenient and meaningful way. To this end, the proposed system aims to help determine the geographic focus of a news story directly from video imagery by detecting and geographically localizing political maps from news broadcasts, using the results of videotext recognition in lieu of a computationally expensive, scale-independent shape recognizer. Our novel method for the geographic localization of a map is based on the premise that the relative placement of text superimposed on a map roughly corresponds to the geographic coordinates of the locations the text represents. Our scheme extracts and recognizes videotext, and iteratively identifies the geographic area, while allowing for OCR errors and artistic freedom. The fast and reliable recognition of such maps by our system may provide valuable context and supporting evidence for other sources, such as speech recognition transcripts. The concepts of syntax-directed content analysis of videotext presented here can be extended to other content analysis systems.
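
    The premise that the relative placement of superimposed text corresponds to geographic coordinates suggests fitting a map between label positions and known place coordinates. A least-squares sketch under that assumption, with entirely made-up label positions and coordinates:

```python
import numpy as np

def fit_affine(pix, geo):
    """Least-squares affine map [lon, lat] = [x, y, 1] @ A from pixel
    positions of recognized place names to their known coordinates."""
    X = np.hstack([pix, np.ones((len(pix), 1))])      # (n, 3)
    A, *_ = np.linalg.lstsq(X, geo, rcond=None)       # (3, 2)
    return A

def to_geo(x, y, A):
    return np.array([x, y, 1.0]) @ A

# Hypothetical labels recognized on a map image (pixel -> lon/lat):
pix = np.array([[100, 200], [400, 180], [250, 60]], float)
geo = np.array([[-122.4, 37.8], [-118.2, 37.5], [-120.5, 40.0]], float)
A = fit_affine(pix, geo)
print(to_geo(100, 200, A))  # recovers approximately [-122.4, 37.8]
```

    With more labels than unknowns, the residual of this fit gives a natural score for rejecting OCR errors, in the spirit of the iterative identification the abstract describes.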

  6. Analysis of Soot Propensity in Combustion Processes Using Optical Sensors and Video Magnification.

    PubMed

    Garcés, Hugo O; Fuentes, Andrés; Reszka, Pedro; Carvajal, Gonzalo

    2018-05-11

    Industrial combustion processes are an important source of particulate matter, causing significant pollution problems that affect human health, and are a major contributor to global warming. The most common method for analyzing the soot emission propensity in flames is the Smoke Point Height (SPH) analysis, which relates the fuel flow rate to a critical flame height at which soot particles begin to leave the reactive zone through the tip of the flame. The SPH is marked by morphological changes at the flame tip. SPH analysis is normally done through flame observation with the naked eye, leading to high bias. Other techniques are more accurate but impractical in industrial settings, such as Line Of Sight Attenuation (LOSA), which obtains soot volume fractions within the flame from the attenuation of a laser beam. We propose the use of video magnification techniques to detect the flame's morphological changes and thus determine the SPH while minimizing observation bias. We have applied, for the first time, Eulerian Video Magnification (EVM) and Phase-based Video Magnification (PVM) to an ethylene laminar diffusion flame. The results were compared with LOSA measurements and indicate that EVM is the most accurate method for SPH determination.
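
    Eulerian magnification amplifies a temporal frequency band of each pixel's intensity signal. A 1-D sketch of the idea on a single pixel's time series (not the authors' implementation; the flicker frequency and amplification factor are illustrative):

```python
import numpy as np

def magnify_band(signal, fps, lo, hi, alpha):
    """Eulerian-style magnification of one pixel's temporal signal:
    select a frequency band with an FFT mask, scale it by (1 + alpha),
    and reconstruct."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)
    spec[band] *= (1.0 + alpha)
    return np.fft.irfft(spec, n=len(signal))

fps = 30.0
t = np.arange(300) / fps
# A subtle 2 Hz oscillation (e.g. a flickering flame tip) on a bright mean:
signal = 100.0 + 0.1 * np.sin(2 * np.pi * 2.0 * t)
out = magnify_band(signal, fps, lo=1.0, hi=3.0, alpha=20.0)
# The 2 Hz component is now 21x larger; the mean level is untouched.
```

    Applied per pixel to a flame video, such amplification makes the subtle tip oscillations that mark the SPH visible without relying on the naked eye.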

  7. Low-cost Tools for Aerial Video Geolocation and Air Traffic Analysis for Delay Reduction Using Google Earth

    NASA Astrophysics Data System (ADS)

    Zetterlind, V.; Pledgie, S.

    2009-12-01

    Low-cost, low-latency, robust geolocation and display of aerial video is a common need for a wide range of earth observing as well as emergency response and security applications. While hardware costs for aerial video collection systems, GPS, and inertial sensors have been decreasing, software costs for geolocation algorithms and reference imagery/DTED remain expensive and highly proprietary. As part of a Federal Small Business Innovative Research project, MosaicATM and EarthNC, Inc have developed a simple geolocation system based on the Google Earth API and Google's 'built-in' DTED and reference imagery libraries. This system geolocates aerial video based on platform and camera position, attitude, and field-of-view metadata using geometric photogrammetric principles of ray-intersection with DTED. Geolocated video can be directly rectified and viewed in the Google Earth API during processing. Work is underway to extend our geolocation code to NASA World Wind for additional flexibility and a fully open-source platform. In addition to our airborne remote sensing work, MosaicATM has developed the Surface Operations Data Analysis and Adaptation (SODAA) tool, funded by NASA Ames, which supports analysis of airport surface operations to optimize aircraft movements and reduce fuel burn and delays. As part of SODAA, MosaicATM and EarthNC, Inc have developed powerful tools to display national airspace data and time-animated 3D flight tracks in Google Earth for 4D analysis. The SODAA tool can convert raw format flight track data, FAA National Flight Data (NFD), and FAA 'Adaptation' airport surface data to a spatial database representation and then to Google Earth KML. The SODAA client provides users with a simple graphical interface through which to generate queries with a wide range of predefined and custom filters, plot results, and export for playback in Google Earth in conjunction with NFD and Adaptation overlays.
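
    The geometric core of the geolocation step, intersecting a camera ray with terrain elevation data, can be sketched by marching the ray over a height function that stands in for a DTED lookup. A simplified illustration, not MosaicATM's actual code:

```python
import numpy as np

def geolocate_pixel(cam_pos, ray_dir, terrain_height, step=1.0, max_range=10000.0):
    """March a camera ray (derived from platform position/attitude and
    field-of-view metadata) until it drops below the terrain surface;
    return the first sample at or below the terrain, or None."""
    p = np.asarray(cam_pos, float)
    d = np.asarray(ray_dir, float)
    d = d / np.linalg.norm(d)
    for _ in range(int(max_range / step)):
        p_next = p + step * d
        if p_next[2] <= terrain_height(p_next[0], p_next[1]):
            return p_next
        p = p_next
    return None

flat = lambda x, y: 0.0  # flat terrain at sea level stands in for DTED
# Camera at 1000 m altitude, looking 45 degrees down-range along +x:
hit = geolocate_pixel([0.0, 0.0, 1000.0], [1.0, 0.0, -1.0], flat)
# Ground intersection near x = 1000 m (within one step of the true point).
```

    A real implementation would replace the fixed step with a DTED-resolution-aware search and handle rays that graze ridgelines, but the ray-versus-heightfield test is the same.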

  8. Typology of delivery quality: latent profile analysis of teacher engagement and delivery techniques in a school-based prevention intervention, keepin’ it REAL curriculum

    PubMed Central

    Shin, YoungJu; Miller-Day, Michelle; Pettigrew, Jonathan; Hecht, Michael L.; Krieger, Janice L.

    2014-01-01

    Enhancing the delivery quality of school-based, evidence-based prevention programs is one key to ensuring uniform program effects on student outcomes. Program evaluations often focus on content dosage when implementing prevention curricula; however, less is known about the implementation quality of prevention content, especially among teachers who may or may not have a prevention background. The goal of the current study is to add to the scholarly literature on implementation quality for a school-based substance use prevention intervention. Twenty-five schools in Ohio and Pennsylvania implemented the original keepin’ it REAL (kiR) substance use prevention curriculum. Each of the ten 40–45-minute lessons of the kiR curriculum was video recorded. Coders observed and rated a random sample of 276 videos reflecting 78 classes taught by 31 teachers. Codes included teachers’ delivery techniques (e.g. lecture, discussion, demonstration and role play) and engagement with students (e.g. attentiveness, enthusiasm and positivity). Based on the video ratings, a latent profile analysis was run to identify a typology of delivery quality. Five profiles were identified: holistic approach, attentive teacher-oriented approach, enthusiastic lecture approach, engaged interactive learning approach and skill practice-only approach. This study provides a descriptive typology of delivery quality while implementing a school-based substance use prevention intervention. PMID:25274721

  9. ESTABLISHING VERBAL REPERTOIRES IN CHILDREN WITH AUTISM USING FUNCTION-BASED VIDEO MODELING

    PubMed Central

    Plavnick, Joshua B; Ferreri, Summer J

    2011-01-01

    Previous research suggests that language-training procedures for children with autism might be enhanced following an assessment of conditions that evoke emerging verbal behavior. The present investigation examined a methodology to teach recognizable mands based on environmental variables known to evoke participants' idiosyncratic communicative responses in the natural environment. An alternating treatments design was used during Experiment 1 to identify the variables that were functionally related to gestures emitted by 4 children with autism. Results showed that gestures functioned as requests for attention for 1 participant and as requests for assistance to obtain a preferred item or event for 3 participants. Video modeling was used during Experiment 2 to compare mand acquisition when video sequences were either related or unrelated to the results of the functional analysis. An alternating treatments within multiple probe design showed that participants repeatedly acquired mands during the function-based condition but not during the nonfunction-based condition. In addition, generalization of the response was observed during the former but not the latter condition. PMID:22219527

  10. Establishing verbal repertoires in children with autism using function-based video modeling.

    PubMed

    Plavnick, Joshua B; Ferreri, Summer J

    2011-01-01

    Previous research suggests that language-training procedures for children with autism might be enhanced following an assessment of conditions that evoke emerging verbal behavior. The present investigation examined a methodology to teach recognizable mands based on environmental variables known to evoke participants' idiosyncratic communicative responses in the natural environment. An alternating treatments design was used during Experiment 1 to identify the variables that were functionally related to gestures emitted by 4 children with autism. Results showed that gestures functioned as requests for attention for 1 participant and as requests for assistance to obtain a preferred item or event for 3 participants. Video modeling was used during Experiment 2 to compare mand acquisition when video sequences were either related or unrelated to the results of the functional analysis. An alternating treatments within multiple probe design showed that participants repeatedly acquired mands during the function-based condition but not during the nonfunction-based condition. In addition, generalization of the response was observed during the former but not the latter condition.

  11. Video coding for next-generation surveillance systems

    NASA Astrophysics Data System (ADS)

    Klasen, Lena M.; Fahlander, Olov

    1997-02-01

    Video is used as a recording medium in surveillance systems and, increasingly, by the Swedish Police Force. Methods for analyzing video using an image processing system have recently been introduced at the Swedish National Laboratory of Forensic Science, and new methods are the focus of a research project at Linkoping University's Image Coding Group. The accuracy of the results of those forensic investigations often depends on the quality of the video recordings, and one of the major problems when analyzing videos from crime scenes is the poor quality of the recordings. Enhancing poor image quality might add manipulative or subjective effects and does not seem to be the right way of obtaining reliable analysis results. The surveillance systems in use today are mainly based on video techniques, VHS or S-VHS, and the weakest link is the video cassette recorder (VCR). Multiplexers that select one of many camera outputs for recording are another problem, as they often filter the video signal, and recording is limited to only one of the cameras connected to the VCR. A way to get around the problem of poor recording is to record all camera outputs digitally and simultaneously. It is also very important to build such a system bearing in mind that image processing methods are becoming more important as a complement to the human eye. Using one or more cameras produces a large amount of data, and the need for data compression is more than obvious. Crime scenes often involve persons or moving objects, and the available coding techniques are more or less useful. Our goal is to propose a possible system that is the best compromise with respect to what needs to be recorded, movements in the recorded scene, loss of information, resolution, etc., to secure efficient recording of the crime and enable forensic analysis. The preventive effect of having a well-functioning surveillance system and well-established image analysis methods is not to be neglected. 
Aspects of this next generation of digital surveillance systems are discussed in this paper.

  12. Phase-based motion magnification video for monitoring of vital signals using the Hermite transform

    NASA Astrophysics Data System (ADS)

    Brieva, Jorge; Moya-Albor, Ernesto

    2017-11-01

    In this paper we present a new Eulerian phase-based motion magnification technique using the Hermite Transform (HT) decomposition, which is inspired by the Human Visual System (HVS). We test our method on a sequence of a newborn baby breathing and on a video sequence showing the heartbeat at the wrist. We detect and magnify the heart pulse by applying our technique. Our motion magnification approach is compared to the Laplacian phase-based approach by means of quantitative metrics (based on the RMS error and the Fourier transform) that measure the quality of both reconstruction and magnification. In addition, a noise robustness analysis is performed for the two methods.
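
    Phase-based magnification amplifies local-phase changes rather than pixel intensities. A 1-D sketch of the principle using the analytic signal; the paper uses a Hermite-transform decomposition instead, so this is only an illustration of the phase-amplification step:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (zero the negative frequencies)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spec * h)

def phase_magnify(frame, ref, alpha):
    """Amplify the local-phase difference between a 1-D 'frame' and a
    reference frame, i.e. magnify sub-pixel motion, not intensity."""
    a, r = analytic_signal(frame), analytic_signal(ref)
    dphi = np.angle(a) - np.angle(r)
    dphi = np.angle(np.exp(1j * dphi))  # wrap to (-pi, pi]
    return np.real(np.abs(a) * np.exp(1j * (np.angle(r) + alpha * dphi)))

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
ref = np.cos(4 * x)              # a pattern with one spatial frequency
moved = np.cos(4 * (x - 0.01))   # the same pattern shifted by 0.01
out = phase_magnify(moved, ref, alpha=10.0)
# out is (numerically) cos(4 * (x - 0.1)): the 0.01 shift magnified 10x.
```

    Because the displacement is encoded in phase, the magnification does not amplify intensity noise the way Eulerian intensity-based methods do, which is the usual motivation for phase-based variants.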

  13. Localizing wushu players on a platform based on a video recording

    NASA Astrophysics Data System (ADS)

    Peczek, Piotr M.; Zabołotny, Wojciech M.

    2017-08-01

    This article describes the development of a method to localize an athlete performing on a platform, based on a static video recording. The sport considered for this method is wushu, a martial art, although any other discipline could be used. Requirements are specified, and two image-processing algorithms are described. The next part presents an experiment based on recordings from the Pan American Wushu Championship; using those recordings, the steps of the algorithm are shown. Results are evaluated manually. The last part of the article concludes whether the algorithm is applicable and what improvements would have to be implemented before it could be used during sports competitions as well as for offline analysis.
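
    With a static camera, a simple localization baseline is frame differencing followed by a centroid. The abstract does not name its two algorithms, so the following is only an illustrative sketch of that baseline:

```python
import numpy as np

def locate_athlete(prev_frame, frame, thresh=25):
    """Locate a moving subject in a static-camera recording by frame
    differencing: threshold the absolute difference and return the
    centroid (row, col) of changed pixels, or None if the scene is still."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    ys, xs = np.nonzero(diff > thresh)
    if len(ys) == 0:
        return None
    return ys.mean(), xs.mean()

# A toy pair of frames: a 5x5 bright blob appears on a static background.
prev_frame = np.zeros((100, 100), np.uint8)
frame = np.zeros((100, 100), np.uint8)
frame[40:45, 60:65] = 200
cy, cx = locate_athlete(prev_frame, frame)
print(cy, cx)  # 42.0 62.0
```

    A real system would add background modeling and reject shadows and spectators, but the centroid of the changed region already gives a per-frame position on the platform.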

  14. Evaluation of educational content of YouTube videos relating to neurogenic bladder and intermittent catheterization.

    PubMed

    Ho, Matthew; Stothers, Lynn; Lazare, Darren; Tsang, Brian; Macnab, Andrew

    2015-01-01

    Many patients conduct internet searches to manage their own health problems, to decide if they need professional help, and to corroborate information given in a clinical encounter. Good information can improve patients' understanding of their condition and their self-efficacy. Patients with spinal cord injury (SCI) featuring neurogenic bladder (NB) require knowledge and skills related to their condition and the need for intermittent catheterization (IC). Information quality was evaluated in videos accessed via YouTube relating to NB and IC using the search terms "neurogenic bladder intermittent catheter" and "spinal cord injury intermittent catheter." Video content was independently rated by 3 investigators using criteria based on European Urological Association (EAU) guidelines and established clinical practice. In total, 71 videos met the inclusion criteria. Of these, 12 (17%) addressed IC and 50 (70%) contained information on NB; the remainder met the inclusion criteria but did not contain information relevant to either topic. Analysis indicated poor overall quality of information, with some videos containing information that contradicted EAU guidelines for IC. High-quality videos were randomly distributed by YouTube. IC videos featuring a healthcare narrator scored significantly higher than patient-narrated videos, but not higher than videos with a merchant narrator. About half of the videos contained commercial content. Some good-quality educational videos about NB and IC are available on YouTube, but most are poor, and the videos deemed good quality were not prominently ranked by the YouTube search algorithm, so user access is less likely. Study limitations include the limit of 50 videos per category and the use of a de novo rating tool. Information quality in videos with healthcare narrators was not higher than in those featuring merchant narrators. Better material is required to improve patients' understanding of their condition.

  15. Integrated microfluidic technology for sub-lethal and behavioral marine ecotoxicity biotests

    NASA Astrophysics Data System (ADS)

    Huang, Yushi; Reyes Aldasoro, Constantino Carlos; Persoone, Guido; Wlodkowic, Donald

    2015-06-01

    Changes in behavioral traits exhibited by small aquatic invertebrates are increasingly postulated as ethically acceptable and more sensitive endpoints for the detection of water-borne ecotoxicity than conventional mortality assays. Despite the importance of such behavioral biotests, their implementation is profoundly limited by the lack of appropriate biocompatible automation, integrated optoelectronic sensors, and the associated electronics and analysis algorithms. This work outlines the development of a proof-of-concept miniaturized Lab-on-a-Chip (LOC) platform for rapid water toxicity tests based on changes in the swimming patterns exhibited by Artemia franciscana (Artoxkit M™) nauplii. In contrast to conventionally performed end-point analysis based on counting the numbers of dead/immobile specimens, we performed a time-resolved video data analysis to dynamically assess the impact of a reference toxicant on the swimming pattern of A. franciscana. Our system design combined: (i) an innovative microfluidic device keeping free-swimming Artemia sp. nauplii under continuous microperfusion as a means of toxin delivery; (ii) a mechatronic interface for user-friendly fluidic actuation of the chip; and (iii) miniaturized video acquisition for movement analysis of the test specimens. The system was capable of performing fully programmable time-lapse and video-microscopy of multiple samples for rapid ecotoxicity analysis. It enabled the development of a user-friendly and inexpensive test protocol to dynamically detect sub-lethal behavioral end-points such as changes in speed of movement or distance traveled by each animal.
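
    Sub-lethal endpoints such as distance traveled and speed follow directly from per-frame centroid tracks of each specimen. A minimal sketch of that computation (the track coordinates, units and frame rate are hypothetical):

```python
import numpy as np

def track_stats(track, fps):
    """Distance traveled and mean speed of one nauplius from its
    per-frame centroid track, given as (x, y) positions in mm."""
    track = np.asarray(track, float)
    steps = np.linalg.norm(np.diff(track, axis=0), axis=1)  # per-frame moves
    distance = steps.sum()                    # total path length (mm)
    duration = (len(track) - 1) / fps         # elapsed time (s)
    return distance, distance / duration      # (mm, mm/s)

# A specimen swimming along a 3-4-5 right angle at 10 frames per second:
dist, speed = track_stats([(0, 0), (3, 0), (3, 4)], fps=10.0)
print(dist, speed)  # -> 7.0 35.0
```

    Comparing these statistics between control and toxicant-exposed wells over time gives the dynamic, sub-lethal readout the abstract describes, without waiting for immobilization or death.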

  16. Problem-based learning using patient-simulated videos showing daily life for a comprehensive clinical approach

    PubMed Central

    Ohira, Yoshiyuki; Uehara, Takanori; Noda, Kazutaka; Suzuki, Shingo; Shikino, Kiyoshi; Kajiwara, Hideki; Kondo, Takeshi; Hirota, Yusuke; Ikusaka, Masatomi

    2017-01-01

    Objectives: We examined whether problem-based learning tutorials using patient-simulated videos showing daily life are more practical for clinical learning, compared with traditional paper-based problem-based learning, for the consideration rate of psychosocial issues and the recall rate for experienced learning. Methods: Twenty-two groups with 120 fifth-year students were each assigned paper-based problem-based learning and video-based problem-based learning using patient-simulated videos. We compared target achievement rates in questionnaires using the Wilcoxon signed-rank test and discussion contents diversity using the Mann-Whitney U test. A follow-up survey used a chi-square test to measure students’ recall of cases in three categories: video, paper, and non-experienced. Results: Video-based problem-based learning displayed significantly higher achievement rates for imagining authentic patients (p=0.001), incorporating a comprehensive approach including psychosocial aspects (p<0.001), and satisfaction with sessions (p=0.001). No significant differences existed in the discussion contents diversity regarding the International Classification of Primary Care Second Edition codes and chapter types or in the rate of psychological codes. In a follow-up survey comparing video and paper groups to non-experienced groups, the rates were higher for video (χ2=24.319, p<0.001) and paper (χ2=11.134, p=0.001). Although the video rate tended to be higher than the paper rate, no significant difference was found between the two. Conclusions: Patient-simulated videos showing daily life facilitate imagining true patients and support a comprehensive approach that fosters better memory. The clinical patient-simulated video method is more practical and clinical problem-based tutorials can be implemented if we create patient-simulated videos for each symptom as teaching materials.  PMID:28245193

  17. Problem-based learning using patient-simulated videos showing daily life for a comprehensive clinical approach.

    PubMed

    Ikegami, Akiko; Ohira, Yoshiyuki; Uehara, Takanori; Noda, Kazutaka; Suzuki, Shingo; Shikino, Kiyoshi; Kajiwara, Hideki; Kondo, Takeshi; Hirota, Yusuke; Ikusaka, Masatomi

    2017-02-27

    We examined whether problem-based learning tutorials using patient-simulated videos showing daily life are more practical for clinical learning, compared with traditional paper-based problem-based learning, for the consideration rate of psychosocial issues and the recall rate for experienced learning. Twenty-two groups with 120 fifth-year students were each assigned paper-based problem-based learning and video-based problem-based learning using patient-simulated videos. We compared target achievement rates in questionnaires using the Wilcoxon signed-rank test and discussion contents diversity using the Mann-Whitney U test. A follow-up survey used a chi-square test to measure students' recall of cases in three categories: video, paper, and non-experienced. Video-based problem-based learning displayed significantly higher achievement rates for imagining authentic patients (p=0.001), incorporating a comprehensive approach including psychosocial aspects (p<0.001), and satisfaction with sessions (p=0.001). No significant differences existed in the discussion contents diversity regarding the International Classification of Primary Care Second Edition codes and chapter types or in the rate of psychological codes. In a follow-up survey comparing video and paper groups to non-experienced groups, the rates were higher for video (χ2=24.319, p<0.001) and paper (χ2=11.134, p=0.001). Although the video rate tended to be higher than the paper rate, no significant difference was found between the two. Patient-simulated videos showing daily life facilitate imagining true patients and support a comprehensive approach that fosters better memory. The clinical patient-simulated video method is more practical and clinical problem-based tutorials can be implemented if we create patient-simulated videos for each symptom as teaching materials.
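
    As a rough illustration of the chi-square comparison used in the follow-up recall survey, the Pearson statistic for a 2x2 contingency table can be computed directly. The counts below are hypothetical examples, not the study's data:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    (e.g., rows: video vs. non-experienced; cols: recalled vs. not)."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Hypothetical recall counts, not the study's data
chi2 = chi_square_2x2([[80, 20], [50, 50]])
```

    With one degree of freedom, the statistic is compared against the chi-square distribution for a p-value; `scipy.stats.chi2_contingency` (with `correction=False`) computes the same statistic along with the p-value.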

  18. Procedures and compliance of a video modeling applied behavior analysis intervention for Brazilian parents of children with autism spectrum disorders.

    PubMed

    Bagaiolo, Leila F; Mari, Jair de J; Bordini, Daniela; Ribeiro, Tatiane C; Martone, Maria Carolina C; Caetano, Sheila C; Brunoni, Decio; Brentani, Helena; Paula, Cristiane S

    2017-07-01

    Video modeling using applied behavior analysis techniques is one of the most promising and cost-effective ways to help parents improve the social skills of their children with autism spectrum disorder. The main objectives were: (1) to elaborate/describe videos to improve eye contact and joint attention, and to decrease disruptive behaviors of children with autism spectrum disorder; (2) to describe a low-cost parental training intervention; and (3) to assess participants' compliance. This is a descriptive study of a clinical trial for children with autism spectrum disorder. The parental training intervention was delivered over 22 weeks based on video modeling. Parents with at least 8 years of schooling who had a child with autism spectrum disorder aged 3 to 6 years with an IQ lower than 70 were invited to participate. A total of 67 parents fulfilled the study criteria and were randomized into two groups: 34 to the intervention group and 33 to the control group. In all, 14 videos were recorded covering management of disruptive behaviors, prompting hierarchy, preference assessment, and acquisition of better eye contact and joint attention. Compliance varied as follows: good, 32.4%; reasonable, 38.2%; low, 5.9%; and no compliance, 23.5%. Video modeling parental training seems a promising, feasible, and low-cost way to deliver care for children with autism spectrum disorder, particularly for populations with scarce treatment resources.

  19. Using high speed smartphone cameras and video analysis techniques to teach mechanical wave physics

    NASA Astrophysics Data System (ADS)

    Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano

    2017-07-01

    We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allow one to measure, in a simple yet rigorous way, the speed of pulses along a spring and the period of transverse standing waves generated in the same spring. These experiments can be helpful in addressing several relevant concepts about the physics of mechanical waves and in overcoming some typical student misconceptions in this field.
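
    The measurement arithmetic behind this kind of video analysis is straightforward: a pulse's speed follows from the distance it covers and the frame count between two events in the slow-motion clip. A minimal sketch, assuming a hypothetical 240 fps recording and made-up measurements:

```python
def pulse_speed(distance_m, start_frame, end_frame, fps=240):
    """Speed of a pulse along a spring, estimated from slow-motion video:
    the pulse covers `distance_m` metres between two frame indices
    captured at `fps` frames per second."""
    dt = (end_frame - start_frame) / fps  # elapsed time in seconds
    return distance_m / dt

# Hypothetical measurement: pulse travels 1.8 m in 36 frames at 240 fps
v = pulse_speed(1.8, 100, 136)
```

    The same frame-counting approach gives the period of a standing wave by timing several oscillation cycles and dividing by the cycle count.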

  20. C U L8ter: YouTube distracted driving PSAs use of behavior change theory.

    PubMed

    Steadman, Mindy; Chao, Melanie S; Strong, Jessica T; Maxwell, Martha; West, Joshua H

    2014-01-01

    To examine the inclusion of health behavior theory in distracted driving PSAs on YouTube.com. Two hundred fifty PSAs were assessed using constructs from 4 prominent health behavior theories. A total theory score was calculated for each video. Multiple regression analysis was used to identify factors associated with higher theory scores. PSAs were generally lacking in theoretical content. Video length, use of rates/statistics, driving scenario depiction, and presence of a celebrity were positively associated with theory inclusion. Collaboration between health experts and PSA creators could be fostered to produce more theory-based distracted driving videos on YouTube.com.

  1. Automated Video-Based Traffic Count Analysis.

    DOT National Transportation Integrated Search

    2016-01-01

    The goal of this effort has been to develop techniques that could be applied to the : detection and tracking of vehicles in overhead footage of intersections. To that end we : have developed and published techniques for vehicle tracking based on dete...

  2. Embedded DCT and wavelet methods for fine granular scalable video: analysis and comparison

    NASA Astrophysics Data System (ADS)

    van der Schaar-Mitrea, Mihaela; Chen, Yingwei; Radha, Hayder

    2000-04-01

    Video transmission over bandwidth-varying networks is becoming increasingly important due to emerging applications such as streaming of video over the Internet. The fundamental obstacle in designing such systems resides in the varying characteristics of the Internet (i.e. bandwidth variations and packet-loss patterns). In MPEG-4, a new SNR scalability scheme, called Fine-Granular-Scalability (FGS), is currently under standardization, which is able to adapt in real-time (i.e. at transmission time) to Internet bandwidth variations. The FGS framework consists of a non-scalable motion-predicted base-layer and an intra-coded fine-granular scalable enhancement layer. For example, the base layer can be coded using a DCT-based, MPEG-4 compliant, highly efficient video compression scheme. Subsequently, the difference between the original and decoded base-layer is computed, and the resulting FGS-residual signal is intra-frame coded with an embedded scalable coder. In order to achieve high coding efficiency when compressing the FGS enhancement layer, it is crucial to analyze the nature and characteristics of the residual signals common to the SNR scalability framework (including FGS). In this paper, we present a thorough analysis of SNR residual signals by evaluating their statistical properties, compaction efficiency, and frequency characteristics. The signal analysis revealed that the energy compaction of the DCT and wavelet transforms is limited and that the frequency characteristics of SNR residual signals decay rather slowly. Moreover, the blockiness artifacts of the low bit-rate coded base-layer result in artificial high frequencies in the residual signal. Subsequently, a variety of wavelet and embedded DCT coding techniques applicable to the FGS framework are evaluated and their results are interpreted based on the identified signal properties.
    As expected from the theoretical signal analysis, the rate-distortion performances of the embedded wavelet and DCT-based coders are very similar. However, improved results can be obtained for the wavelet coder by deblocking the base-layer prior to the FGS residual computation. Based on the theoretical analysis and our measurements, we can conclude that for an optimal complexity versus coding-efficiency trade-off, only a limited wavelet decomposition (e.g. 2 stages) needs to be performed for the FGS-residual signal. Also, it was observed that the good rate-distortion performance of a coding technique for a certain image type (e.g. natural still-images) does not necessarily translate into similarly good performance for signals with different visual characteristics and statistical properties.
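
    The energy-compaction property at the center of this analysis can be sketched in a few lines: a 1-D DCT applied to a smooth, image-like signal concentrates energy in a few low-frequency coefficients, whereas a noise-like residual spreads its energy broadly. The signals below are illustrative, not the paper's data:

```python
import math
import random

def dct2_1d(x):
    # Orthonormal 1-D DCT-II (naive O(n^2) implementation for clarity)
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def compaction(coeffs, k):
    # Fraction of total signal energy captured by the k lowest-frequency coefficients
    energy = [c * c for c in coeffs]
    return sum(energy[:k]) / sum(energy)

random.seed(0)
smooth = [math.cos(math.pi * (i + 0.5) * 2 / 64) for i in range(64)]  # low-frequency, image-like
residual = [random.gauss(0, 1) for _ in range(64)]                    # noise-like FGS residual
```

    For the smooth signal nearly all energy lands in the first few coefficients, while for the noise-like residual the first 8 of 64 coefficients capture only about an eighth of the energy, mirroring the limited compaction the paper reports for SNR residuals.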

  3. Relationships among video gaming proficiency and spatial orientation, laparoscopic, and traditional surgical skills of third-year veterinary students.

    PubMed

    Millard, Heather A Towle; Millard, Ralph P; Constable, Peter D; Freeman, Lyn J

    2014-02-01

    To determine the relationships among traditional and laparoscopic surgical skills, spatial analysis skills, and video gaming proficiency of third-year veterinary students. Prospective, randomized, controlled study. A convenience sample of 29 third-year veterinary students. The students had completed basic surgical skills training with inanimate objects but had no experience with soft tissue, orthopedic, or laparoscopic surgery; the spatial analysis test; or the video games that were used in the study. Scores for traditional surgical, laparoscopic, spatial analysis, and video gaming skills were determined, and associations among these were analyzed by means of Spearman's rank order correlation coefficient (rs). A significant positive association (rs = 0.40) was detected between summary scores for video game performance and laparoscopic skills, but not between video game performance and traditional surgical skills scores. Spatial analysis scores were positively (rs = 0.30) associated with video game performance scores; however, that result was not significant. Spatial analysis scores were not significantly associated with laparoscopic surgical skills scores. Traditional surgical skills scores were not significantly associated with laparoscopic skills or spatial analysis scores. Results of this study indicated video game performance of third-year veterinary students was predictive of laparoscopic but not traditional surgical skills, suggesting that laparoscopic performance may be improved with video gaming experience. Additional studies would be required to identify methods for improvement of traditional surgical skills.
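
    The Spearman rank-order correlation used in this study is a Pearson correlation computed on ranks. A minimal, self-contained sketch with invented score pairs (not the study's data):

```python
def ranks(xs):
    # Assign average ranks (1-based), handling ties
    sorted_idx = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[sorted_idx[j + 1]] == xs[sorted_idx[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[sorted_idx[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Pearson correlation of the rank vectors
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical scores: video-game performance vs. laparoscopic skill
game = [55, 72, 60, 81, 66, 90, 48, 75]
lap = [60, 70, 58, 85, 72, 88, 50, 68]
rs = spearman(game, lap)
```

    With no ties this matches the textbook formula rs = 1 - 6*sum(d^2)/(n(n^2-1)); `scipy.stats.spearmanr` additionally reports a p-value.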

  4. Students' Learning Experiences from Didactic Teaching Sessions Including Patient Case Examples as Either Text or Video: A Qualitative Study.

    PubMed

    Pedersen, Kamilla; Moeller, Martin Holdgaard; Paltved, Charlotte; Mors, Ole; Ringsted, Charlotte; Morcke, Anne Mette

    2017-10-06

    The aim of this study was to explore medical students' learning experiences from didactic teaching formats using either text-based patient cases or video-based patient cases with similar content. The authors explored how the two different patient case formats influenced students' perceptions of psychiatric patients and students' reflections on meeting and communicating with psychiatric patients. The authors conducted group interviews with 30 medical students who volunteered to participate, and applied inductive thematic content analysis to the transcribed interviews. Students taught with text-based patient cases emphasized the excitement and drama of the personal clinical narratives presented by the teachers during the course, but never referred to the patient cases. Authority and boundary setting were regarded as important in managing patients. Students taught with video-based patient cases, in contrast, often referred to the patient cases when highlighting new insights, including the importance of patient perspectives when communicating with patients. The format of patient cases included in teaching may have a substantial impact on students' patient-centeredness. Video-based patient cases are probably more effective than text-based patient cases in fostering patient-centered perspectives in medical students. Teachers sharing stories from their own clinical experiences stimulates both engagement and excitement, but may also provoke unintended stigma and influence an authoritative approach in medical students towards managing patients in clinical psychiatry.

  5. The LivePhoto Physics videos and video analysis site

    NASA Astrophysics Data System (ADS)

    Abbott, David

    2009-09-01

    The LivePhoto site is similar to an archive of short films for video analysis. Some videos have Flash tools for analyzing the video embedded in the movie. Most of the videos address mechanics topics with titles like Rolling Pencil (check this one out for pedagogy and content knowledge—nicely done!), Juggler, Yo-yo, Puck and Bar (this one is an inelastic collision with rotation), but there are a few titles in other areas (E&M, waves, thermo, etc.).

  6. Digital Game-Based Learning for K-12 Mathematics Education: A Meta-Analysis

    ERIC Educational Resources Information Center

    Byun, JaeHwan; Joung, Eunmi

    2018-01-01

    Digital games (e.g., video games or computer games) have been reported as an effective educational method that can improve students' motivation and performance in mathematics education. This meta-analysis study (a) investigates the current trend of digital game-based learning (DGBL) by reviewing the research studies on the use of DGBL for…

  7. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    PubMed Central

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder, for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because HS images differ from traditional videos in their spectral and spatial characteristics. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed for HS images. An HS image presents a wealth of data where every pixel is considered a vector over the spectral bands. By quantitative comparison and analysis of the pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors across bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediately previous band, when we apply the HEVC. Every spectral band of an HS image is treated as if it were an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experiments cover three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of the rate-distortion performance of HS image compression. PMID:27695102

  8. Facilitation and Teacher Behaviors: An Analysis of Literacy Teachers' Video-Case Discussions

    ERIC Educational Resources Information Center

    Arya, Poonam; Christ, Tanya; Chiu, Ming Ming

    2014-01-01

    This study explored how peer and professor facilitations are related to teachers' behaviors during video-case discussions. Fourteen inservice teachers produced 1,787 turns of conversation during 12 video-case discussions that were video-recorded, transcribed, coded, and analyzed with statistical discourse analysis. Professor facilitations (sharing…

  9. Recognising safety critical events: can automatic video processing improve naturalistic data analyses?

    PubMed

    Dozza, Marco; González, Nieves Pañeda

    2013-11-01

    New trends in research on traffic accidents include Naturalistic Driving Studies (NDS). NDS are based on large-scale collection of driver, vehicle, and environment data in the real world. NDS data sets have proven to be extremely valuable for the analysis of safety critical events such as crashes and near crashes. However, finding safety critical events in NDS data is often difficult and time consuming. Safety critical events are currently identified using kinematic triggers, for instance searching for deceleration below a certain threshold signifying harsh braking. Due to the low sensitivity and specificity of this filtering procedure, manual review of video data is currently necessary to decide whether the events identified by the triggers are actually safety critical. Such a reviewing procedure is based on subjective decisions, is expensive and time consuming, and is often tedious for the analysts. Furthermore, since NDS data sets are growing exponentially over time, this reviewing procedure may no longer be viable in the very near future. This study tested the hypothesis that automatic processing of driver video information could increase the correct classification of safety critical events from kinematic triggers in naturalistic driving data. Review of about 400 video sequences recorded from the events, collected by 100 Volvo cars in the euroFOT project, suggested that drivers' individual reactions may be the key to recognizing safety critical events. In fact, whether an event is safety critical or not often depends on the individual driver. A few algorithms able to automatically classify driver reaction from video data have been compared. The results presented in this paper show that the state-of-the-art subjective review procedures used to identify safety critical events from NDS can benefit from automated objective video processing.
    In addition, this paper discusses the major challenges in making such video analysis viable for future NDS and new potential applications for NDS video processing. As new NDS such as SHRP2 are now providing the equivalent of five years of one-vehicle data each day, the development of new methods, such as the one proposed in this paper, seems necessary to guarantee that these data can actually be analysed.
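
    The kinematic triggering described above reduces to flagging samples where longitudinal deceleration exceeds a threshold and grouping nearby flags into candidate events for video review. A sketch with hypothetical threshold, sampling rate, and trace (the study's actual trigger definitions are not reproduced here):

```python
def harsh_braking_events(accel, fs=10.0, threshold=-4.0, min_gap_s=1.0):
    """Flag samples where longitudinal acceleration (m/s^2) drops below
    `threshold`, then merge flags closer than `min_gap_s` into one event.
    Returns a list of (start_time, end_time) tuples in seconds."""
    flagged = [i for i, a in enumerate(accel) if a < threshold]
    min_gap = int(min_gap_s * fs)
    events = []
    for i in flagged:
        if events and i - events[-1][1] <= min_gap:
            events[-1][1] = i  # extend the current event
        else:
            events.append([i, i])  # start a new event
    return [(s / fs, e / fs) for s, e in events]

# Synthetic 10 Hz trace: normal driving with two brief harsh-braking episodes
trace = [0.0] * 20 + [-5.0] * 3 + [0.0] * 30 + [-4.5] * 2 + [0.0] * 10
events = harsh_braking_events(trace)
```

    Each returned interval would then be queued for (manual or, as the paper argues, automated) video review to decide whether it is actually safety critical.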

  10. Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements

    NASA Astrophysics Data System (ADS)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Nagarajaiah, Satish; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-03-01

    Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration sensors such as accelerometers have high-frequency sampling capacity; however, they are discrete point-wise sensors that provide only sparse, low-spatial-resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in mass-loading effects and modification of the structure's surface. Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal sensing resolution; however, they make measurements sequentially, which requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost and agile, and provide simultaneous measurements with high spatial resolution. Combined with vision-based algorithms (e.g., image correlation or template matching, optical flow, etc.), video camera based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30-60 Hz, while high-speed cameras for higher frequency vibration measurements are extremely costly.
    This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than that required by the Shannon-Nyquist sampling theorem for output-only modal analysis. In particular, the spatio-temporal uncoupling property of the modal expansion of structural vibration responses enables a direct modal decoupling of the temporally-aliased vibration measurements by existing output-only modal analysis methods, yielding (full-field) mode shape estimates directly. The signal aliasing properties in modal analysis are then exploited to estimate the modal frequencies and damping ratios. The proposed method is validated by laboratory experiments in which output-only modal identification is conducted on temporally-aliased acceleration responses and, in particular, on temporally-aliased video measurements of bench-scale structures, including a three-story building structure and a cantilever beam.
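
    The aliasing relation this method exploits can be sketched for a single tone under uniform sampling: a true modal frequency above the Nyquist limit folds down to a predictable apparent frequency, and conversely each observed alias corresponds to a discrete set of candidate true frequencies. The paper's full modal-identification pipeline is more involved; this only illustrates the folding arithmetic:

```python
def aliased_frequency(f_true, fs):
    # Apparent frequency (Hz) of a tone at f_true Hz sampled at fs Hz,
    # valid even when f_true exceeds the Nyquist limit fs/2
    f = f_true % fs
    return min(f, fs - f)

def candidate_true_frequencies(f_alias, fs, f_max):
    # All true frequencies up to f_max consistent with an observed alias
    cands = []
    k = 0
    while k * fs <= f_max + fs:
        for f in (k * fs + f_alias, k * fs - f_alias):
            if 0 < f <= f_max:
                cands.append(f)
        k += 1
    return sorted(set(cands))
```

    For example, a 42 Hz mode sampled by a 30 fps camera appears at 12 Hz, and an observed 12 Hz alias is consistent with true frequencies {12, 18, 42, 48, ...} Hz, which is why additional structure (here, the modal expansion) is needed to resolve the ambiguity.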

  11. Video tracking analysis of behavioral patterns during estrus in goats

    PubMed Central

    ENDO, Natsumi; RAHAYU, Larasati Puji; ARAKAWA, Toshiya; TANAKA, Tomomi

    2015-01-01

    Here, we report a new method for measuring behavioral patterns during estrus in goats based on video tracking analysis. Data were collected from cycling goats that were in estrus (n = 8) or not in estrus (n = 8). An observation pen (2.5 m × 2.5 m) was set up in the corner of the female paddock with one side adjacent to a male paddock. The positions and movements of the goats were tracked every 0.5 sec for 10 min using video tracking software, and the trajectory data were used for the analysis. There were no significant differences in the durations of standing and walking or in the total length of movement. However, the number of approaches to a male and the duration of staying near the male were higher in goats in estrus than in goats not in estrus. The proposed evaluation method may be suitable for detailed monitoring of behavioral changes during estrus in goats. PMID:26560676
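
    Metrics like total movement length and time spent near the male follow directly from the sampled (x, y) trajectory. A minimal sketch, where the 0.5 s sampling interval matches the abstract but the near-male zone boundary and coordinates are hypothetical:

```python
import math

def trajectory_metrics(points, near_zone_x=2.0, dt=0.5):
    """Total distance moved (m) and time (s) spent near the male-side fence.
    `points` are (x, y) positions in metres sampled every `dt` seconds;
    the male paddock is assumed (hypothetically) to lie beyond x = near_zone_x."""
    total = sum(math.dist(p, q) for p, q in zip(points, points[1:]))
    near_time = sum(dt for x, _ in points if x >= near_zone_x)
    return total, near_time

# Toy trajectory: the goat walks toward the male-side fence and lingers there
path = [(0.5, 0.5), (1.0, 0.5), (1.5, 0.5), (2.1, 0.5), (2.3, 0.5), (2.3, 0.5)]
total, near = trajectory_metrics(path)
```

    Counting "approaches" would additionally require detecting entries into the near zone (transitions from x < near_zone_x to x >= near_zone_x) rather than just summing dwell time.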

  12. A Benchmark and Comparative Study of Video-Based Face Recognition on COX Face Database.

    PubMed

    Huang, Zhiwu; Shan, Shiguang; Wang, Ruiping; Zhang, Haihong; Lao, Shihong; Kuerban, Alifu; Chen, Xilin

    2015-12-01

    Face recognition with still face images has been widely studied, while research on video-based face recognition remains relatively inadequate, especially in terms of benchmark datasets and comparisons. Real-world video-based face recognition applications require techniques for three distinct scenarios: 1) Video-to-Still (V2S); 2) Still-to-Video (S2V); and 3) Video-to-Video (V2V), respectively taking video or still images as query or target. To the best of our knowledge, few datasets and evaluation protocols have been established to benchmark all three scenarios. In order to facilitate the study of this specific topic, this paper contributes a benchmarking and comparative study based on a newly collected still/video face database, named COX Face DB. Specifically, we make three contributions. First, we collect and release a large-scale still/video face database to simulate video surveillance with three different video-based face recognition scenarios (i.e., V2S, S2V, and V2V). Second, for benchmarking the three scenarios designed on our database, we review and experimentally compare a number of existing set-based methods. Third, we further propose a novel Point-to-Set Correlation Learning (PSCL) method, and experimentally show that it can be used as a promising baseline method for V2S/S2V face recognition on COX Face DB. Extensive experimental results clearly demonstrate that video-based face recognition needs more effort, and that our COX Face DB is a good benchmark database for evaluation.

  13. YouTube as a potential training method for laparoscopic cholecystectomy

    PubMed Central

    Lee, Jun Suh; Seo, Ho Seok

    2015-01-01

    Purpose: The purpose of this study was to analyze the educational quality of laparoscopic cholecystectomy (LC) videos accessible on YouTube, one of the most important sources of internet-based medical information. Methods: The keyword 'laparoscopic cholecystectomy' was used to search on YouTube and the first 100 videos were analyzed. Among them, 27 videos were excluded and 73 videos were included in the study. An arbitrary scoring system for video quality, devised from existing LC guidelines, was used to evaluate the quality of the videos. Video demographics were analyzed by the quality and source of the video. Correlation analysis was performed. Results: When analyzed by video quality, 11 (15.1%) were evaluated as 'good', 40 (54.8%) as 'moderate', and 22 (30.1%) as 'poor', and there were no differences in length, views per day, or number of likes, dislikes, and comments. When analyzed by source, 27 (37.0%) were uploaded by primary centers, 20 (27.4%) by secondary centers, 15 (20.5%) by tertiary centers, 5 (6.8%) by academic institutions, and 6 (8.2%) by commercial institutions. The mean score of the tertiary center group (6.0 ± 2.0) was significantly higher than that of the secondary center group (3.9 ± 1.4, P = 0.001). The video score had no correlation with views per day or number of likes. Conclusion: Many LC videos are accessible on YouTube, with varying quality. Videos uploaded by tertiary centers showed the highest educational value. This discrepancy in video quality was not recognized by viewers. More videos of higher quality need to be uploaded, and an active filtering process is necessary. PMID:26236699

  14. Facial Video-Based Photoplethysmography to Detect HRV at Rest.

    PubMed

    Moreno, J; Ramos-Castro, J; Movellan, J; Parrado, E; Rodas, G; Capdevila, L

    2015-06-01

    Our aim is to demonstrate the usefulness of photoplethysmography (PPG) for analyzing heart rate variability (HRV) using a standard 5-min test at rest with paced breathing, comparing the results with real RR intervals and testing supine and sitting positions. Simultaneous recordings of R-R intervals were conducted with a Polar system and a non-contact PPG based on facial video recording in 20 individuals. Data analysis and editing were performed with software individually designated for each instrument. Agreement on HRV parameters was assessed with concordance correlations, effect sizes from ANOVA, and Bland-Altman plots. For the supine position, differences between the video and Polar systems showed a small effect size in most HRV parameters. For the sitting position, these differences showed a moderate effect size in most HRV parameters. A new procedure, based on the pixels that contained the most heart beat information, is proposed for improving the signal-to-noise ratio in the PPG video signal. Results were acceptable in both positions but better in the supine position. Our approach could be relevant for applications that require monitoring of stress or cardio-respiratory health, such as effort/recuperation states in sports.
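
    Once RR intervals have been extracted from either the Polar trace or the video PPG, standard time-domain HRV parameters are simple to compute. A minimal sketch of two common ones, SDNN and RMSSD, on hypothetical RR data (not the study's recordings):

```python
import math

def hrv_time_domain(rr_ms):
    """SDNN and RMSSD (both in ms) from a series of RR intervals in milliseconds.
    SDNN: sample standard deviation of the intervals.
    RMSSD: root mean square of successive interval differences."""
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    sdnn = math.sqrt(sum((r - mean) ** 2 for r in rr_ms) / (n - 1))
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return sdnn, rmssd

# Hypothetical RR series at rest (ms)
rr = [800, 810, 790, 820, 805, 795, 815]
sdnn, rmssd = hrv_time_domain(rr)
```

    Agreement between instruments is then assessed on parameters like these, e.g. via Bland-Altman plots of the per-subject differences.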

  15. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system, together with the adaptive priority controller, are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A video tape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the database.

  16. Collaborative video caching scheme over OFDM-based long-reach passive optical networks

    NASA Astrophysics Data System (ADS)

    Li, Yan; Dai, Shifang; Chang, Xiangmao

    2018-07-01

    Long-reach passive optical networks (LR-PONs) are now considered a desirable access solution for cost-efficiently delivering broadband services by integrating the metro network with the access network, among which orthogonal frequency division multiplexing (OFDM)-based LR-PONs have attracted greater research interest due to their good robustness and high spectrum efficiency. In such OFDM-based LR-PONs, however, it is still challenging to effectively provide video service, one of the most popular and profitable broadband services, to end users. Because many video requesters (i.e., end users) far from the optical line terminal (OLT) are served in OFDM-based LR-PONs, the traditional video delivery model, which relies on the OLT to transmit videos to requesters, is inefficient: it incurs both larger video playback delay and higher downstream bandwidth consumption. In this paper, we propose a novel video caching scheme that collaboratively caches videos on distributed optical network units (ONUs), which are closer to end users, and thus provides videos to requesters in a timely and cost-efficient way over OFDM-based LR-PONs. We first construct an OFDM-based LR-PON architecture that enables cooperation among ONUs while caching videos. Given the limited storage capacity of each ONU, we then propose collaborative approaches to cache videos on ONUs with the aim of maximizing the local video hit ratio (LVHR), i.e., the proportion of video requests that can be satisfied directly by ONUs, under diverse resource requirements and request distributions. Simulations are finally conducted to evaluate the efficiency of the proposed scheme.
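    Maximizing the LVHR under a per-ONU storage budget is a knapsack-style selection problem. A minimal density-ordered greedy sketch for a single ONU (an illustrative heuristic, not the paper's collaborative algorithm; the video data are invented):

    ```python
    def greedy_cache(videos, capacity):
        """Greedily fill one ONU's cache to raise the local video hit ratio.

        videos: dict name -> (size_in_storage_units, request_count)
        Returns (cached_names, lvhr). Videos are taken in decreasing order of
        requests-per-unit-size; a greedy heuristic, not guaranteed optimal.
        """
        total_requests = sum(r for _, r in videos.values())
        order = sorted(videos, key=lambda v: videos[v][1] / videos[v][0], reverse=True)
        cached, used, hits = set(), 0, 0
        for name in order:
            size, requests = videos[name]
            if used + size <= capacity:  # skip videos that no longer fit
                cached.add(name)
                used += size
                hits += requests
        return cached, hits / total_requests

    # Hypothetical catalog: (size, request_count) per video, ONU capacity of 4 units.
    catalog = {"a": (2, 100), "b": (3, 90), "c": (1, 50), "d": (4, 40)}
    cached, lvhr = greedy_cache(catalog, capacity=4)
    ```

    Requests that miss the local cache would then fall back to the OLT, which is exactly the costly path the scheme tries to minimize.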

  17. An investigation into online videos as a source of safety hazard reports.

    PubMed

    Nasri, Leila; Baghersad, Milad; Gruss, Richard; Marucchi, Nico Sung Won; Abrahams, Alan S; Ehsani, Johnathon P

    2018-06-01

    Despite the advantages of video-based product reviews relative to text-based reviews in detecting possible safety hazard issues, video-based product reviews have received no attention in prior literature. This study focuses on online video-based product reviews as possible sources to detect safety hazards. We use two common text mining methods - sentiment and smoke words - to detect safety issues mentioned in videos on the world's most popular video sharing platform, YouTube. 15,402 product review videos from YouTube were identified as containing either negative sentiment or smoke words, and were carefully manually viewed to verify whether hazards were indeed mentioned. 496 true safety issues (3.2%) were found. Out of 9,453 videos that contained smoke words, 322 (3.4%) mentioned safety issues, vs. only 174 (2.9%) of the 5,949 videos with negative sentiment words. Only 1% of randomly-selected videos mentioned safety hazards. Comparing the number of videos with true safety issues that contain sentiment words vs. smoke words in their title or description, we show that smoke words are a more accurate predictor of safety hazards in video-based product reviews than sentiment words. This research also discovers words that are indicative of true hazards versus false positives in online video-based product reviews. Practical applications: The smoke words lists and word sub-groups generated in this paper can be used by manufacturers and consumer product safety organizations to more efficiently identify product safety issues from online videos. This project also provides realistic baselines for resource estimates for future projects that aim to discover safety issues from online videos or reviews. Copyright © 2018 National Safety Council and Elsevier Ltd. All rights reserved.
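    The smoke-word approach above amounts to flagging review text that contains hazard-indicative keywords and then checking how many flags are true positives. A bare sketch (the word list and reviews here are invented for illustration; the study's actual smoke-word lists are not reproduced):

    ```python
    # Illustrative hazard-indicative "smoke words" -- not the paper's lists.
    SMOKE_WORDS = {"fire", "shock", "broke", "burn", "hazard", "recall"}

    def flag_reviews(reviews, vocabulary):
        """Return indices of reviews whose text contains any vocabulary word."""
        flagged = []
        for i, text in enumerate(reviews):
            tokens = set(text.lower().split())
            if tokens & vocabulary:
                flagged.append(i)
        return flagged

    def precision(flagged, true_hazard_indices):
        """Fraction of flagged reviews that are verified safety issues."""
        hits = sum(1 for i in flagged if i in true_hazard_indices)
        return hits / len(flagged) if flagged else 0.0

    reviews = [
        "this charger started to burn my desk",
        "great toy, kids love it",
        "handle broke on first use",
        "color is nice",
    ]
    flagged = flag_reviews(reviews, SMOKE_WORDS)
    ```

    The study's comparison (3.4% for smoke words vs. 2.9% for negative sentiment) is exactly this kind of precision computed over manually verified videos.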

  18. Consumer-based technology for distribution of surgical videos for objective evaluation.

    PubMed

    Gonzalez, Ray; Martinez, Jose M; Lo Menzo, Emanuele; Iglesias, Alberto R; Ro, Charles Y; Madan, Atul K

    2012-08-01

    The Global Operative Assessment of Laparoscopic Skill (GOALS) is one validated metric used to grade laparoscopic skills and has been used to score recorded operative videos. To facilitate viewing of these recorded videos, we are developing novel techniques for distributing them to surgeons. The objective of this study was to determine the feasibility of utilizing widespread consumer-based technology to assist in distributing appropriate videos for objective evaluation. Videos from residents were recorded via a direct connection from the camera processor's S-video output, through a cable and hub, to a standard laptop computer's universal serial bus (USB) port. A standard consumer-based video editing program was used to capture the video and record it in an appropriate format. We used the mp4 format and, depending on file size, the videos were scaled down (compressed), converted to another format (using a standard video editing program), or sliced into multiple videos. Standard consumer-based programs were used to convert the video into a format suitable for handheld personal digital assistants. In addition, the videos were uploaded to a social networking website and video sharing websites. Recorded cases of laparoscopic cholecystectomy in a porcine model were used. Compression was required for all formats. All formats were accessed from home computers, work computers, and iPhones without difficulty. Qualitative analyses by four surgeons demonstrated quality appropriate for grading in all of these formats. Our preliminary results show promise that, utilizing consumer-based technology, videos can be easily distributed to surgeons for GOALS grading via various methods. Easy accessibility may make evaluation of resident videos less complicated and cumbersome.

  19. An analysis of automatic human detection and tracking

    NASA Astrophysics Data System (ADS)

    Demuth, Philipe R.; Cosmo, Daniel L.; Ciarelli, Patrick M.

    2015-12-01

    This paper presents an automatic method to detect and follow people in video streams. The method uses two techniques to determine the initial position of the person at the beginning of the video: one based on optical flow and the other based on Histograms of Oriented Gradients (HOG). After defining the initial bounding box, tracking is done using four different trackers: the Median Flow tracker, the TLD tracker, the Mean Shift tracker, and a modified version of the Mean Shift tracker using the HSV color space. The results of the four trackers are then compared.
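    The core of a HOG-based detector is a histogram of gradient orientations weighted by gradient magnitude. A bare sketch of that building block (descriptor idea only; a full HOG detector adds cells, block normalization, and a trained classifier):

    ```python
    import math

    def orientation_histogram(img, bins=9):
        """Magnitude-weighted histogram of unsigned gradient orientations.

        img: 2-D list of grayscale values. Gradients use central differences
        on interior pixels; orientations are folded into [0, 180) degrees,
        as in the usual HOG formulation.
        """
        hist = [0.0] * bins
        bin_width = 180.0 / bins
        for r in range(1, len(img) - 1):
            for c in range(1, len(img[0]) - 1):
                gx = (img[r][c + 1] - img[r][c - 1]) / 2.0
                gy = (img[r + 1][c] - img[r - 1][c]) / 2.0
                mag = math.hypot(gx, gy)
                angle = math.degrees(math.atan2(gy, gx)) % 180.0
                hist[min(int(angle / bin_width), bins - 1)] += mag
        return hist

    # A vertical ramp image: all gradient energy falls in the 90-degree bin.
    ramp = [[10 * r for _ in range(5)] for r in range(5)]
    hist = orientation_histogram(ramp)
    ```

    Person detection then slides a window over the image and classifies each window's concatenated histograms.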

  20. Efficient region-based approach for blotch detection in archived video using texture information

    NASA Astrophysics Data System (ADS)

    Yous, Hamza; Serir, Amina

    2017-03-01

    We propose a method for blotch detection in archived videos by modeling their spatiotemporal properties. We introduce an adaptive spatiotemporal segmentation to extract candidate regions that can be classified as blotches. Then, the similarity between the preselected regions and their corresponding motion-compensated regions in the adjacent frames is assessed by means of motion trajectory estimation and textural information analysis. Perceived ground truth based on just noticeable contrast is employed for the evaluation of our approach against the state-of-the-art, and the reported results show a better performance for our approach.

  1. Research and Technology Development for Construction of 3d Video Scenes

    NASA Astrophysics Data System (ADS)

    Khlebnikova, Tatyana A.

    2016-06-01

    For the last two decades, surface information in the form of conventional digital and analogue topographic maps has increasingly been supplemented by new digital geospatial products, also known as 3D models of real objects. It is shown that there are currently no defined standards for 3D scene construction technologies that could be used by Russian surveying and cartographic enterprises. The requirements for source data, their capture, and their transfer for creating 3D scenes have not yet been defined. Accuracy issues for 3D video scenes used for measurement purposes are rarely addressed in publications. The practicability of developing, researching, and implementing a technology for constructing 3D video scenes is substantiated by the capability of 3D video scenes to broaden data analysis applications in environmental monitoring, urban planning, and managerial decision-making. A technology for constructing 3D video scenes meeting specified metric requirements is offered. A technique and methodological background are recommended for applying this technology to construct 3D video scenes based on DTMs created from satellite and aerial survey data. The results of accuracy estimation of the 3D video scenes are presented.

  2. A low delay transmission method of multi-channel video based on FPGA

    NASA Astrophysics Data System (ADS)

    Fu, Weijian; Wei, Baozhi; Li, Xiaobin; Wang, Quan; Hu, Xiaofei

    2018-03-01

    In order to guarantee the fluency of multi-channel video transmission in video monitoring scenarios, we designed an FPGA-based video format conversion method and a DMA scheduling scheme for video data that reduce the overall video transmission delay. To save time in the conversion process, the parallelism of the FPGA is exploited for video format conversion. To improve the direct memory access (DMA) write transmission rate of the PCIe bus, a DMA scheduling method based on an asynchronous command buffer is proposed. The experimental results show that the proposed low-delay transmission method increases the DMA write transmission rate by 34% compared with the existing method, reducing the overall video delay to 23.6 ms.

  3. Direct ophthalmoscopy on YouTube: analysis of instructional YouTube videos' content and approach to visualization.

    PubMed

    Borgersen, Nanna Jo; Henriksen, Mikael Johannes Vuokko; Konge, Lars; Sørensen, Torben Lykke; Thomsen, Ann Sofia Skou; Subhi, Yousif

    2016-01-01

    Direct ophthalmoscopy is well-suited for video-based instruction, particularly if the videos enable the student to see what the examiner sees when performing direct ophthalmoscopy. We evaluated the pedagogical effectiveness of instructional YouTube videos on direct ophthalmoscopy by evaluating their content and approach to visualization. In order to synthesize main themes and points for direct ophthalmoscopy, we formed a broad panel consisting of a medical student, junior and senior physicians, and took into consideration book chapters targeting medical students and physicians in general. We then systematically searched YouTube. Two authors reviewed eligible videos to assess eligibility and extract data on video statistics, content, and approach to visualization. Correlations between video statistics and contents were investigated using two-tailed Spearman's correlation. We screened 7,640 videos, of which 27 were found eligible for this study. Overall, a median of 12 out of 18 points (interquartile range: 8-14 key points) were covered; no videos covered all of the 18 points assessed. We found the most difficulties in the approach to visualization of how to approach the patient and how to examine the fundus. Time spent on fundus examination correlated with the number of views per week (Spearman's ρ=0.53; P=0.029). Videos may help overcome the pedagogical issues in teaching direct ophthalmoscopy; however, the few available videos on YouTube fail to address this particular issue adequately. There is a need for high-quality videos that include relevant points, provide realistic visualization of the examiner's view, and give particular emphasis on fundus examination.
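    The reported correlation (Spearman's ρ=0.53 between time spent on fundus examination and views per week) is a Pearson correlation computed on ranks. A minimal sketch with average ranks for ties:

    ```python
    def _ranks(values):
        """Average 1-based ranks, assigning tied values their mean rank."""
        order = sorted(range(len(values)), key=lambda i: values[i])
        ranks = [0.0] * len(values)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # mean of positions i..j, 1-based
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        return ranks

    def spearman_rho(x, y):
        """Spearman's rank correlation: Pearson correlation of the ranks."""
        rx, ry = _ranks(x), _ranks(y)
        n = len(x)
        mx, my = sum(rx) / n, sum(ry) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
        vx = sum((a - mx) ** 2 for a in rx)
        vy = sum((b - my) ** 2 for b in ry)
        return cov / (vx * vy) ** 0.5
    ```

    Applied to (fundus-examination time, views per week) pairs across the 27 videos, this yields the study's ρ; the P-value would come from the usual significance test on ρ, omitted here.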

  4. The good, the bad and the ugly: a meta-analytic review of positive and negative effects of violent video games.

    PubMed

    Ferguson, Christopher John

    2007-12-01

    Video game violence has become a highly politicized issue for scientists and the general public. There is continuing concern that playing violent video games may increase the risk of aggression in players. Less often discussed is the possibility that playing violent video games may promote certain positive developments, particularly related to visuospatial cognition. The objective of the current article was to conduct a meta-analytic review of studies that examine the impact of violent video games on both aggressive behavior and visuospatial cognition in order to understand the full impact of such games. A detailed literature search was used to identify peer-reviewed articles addressing violent video game effects. Effect sizes r (a common measure of effect size based on the correlation coefficient) were calculated for all included studies. Effect sizes were adjusted for observed publication bias. Results indicated that publication bias was a problem for studies of both aggressive behavior and visuospatial cognition. Once corrected for publication bias, studies of video game violence provided no support for the hypothesis that violent video game playing is associated with higher aggression. However, playing violent video games remained related to higher visuospatial cognition (r(x) = 0.36). Results from the current analysis did not support the conclusion that violent video game playing leads to aggressive behavior. However, violent video game playing was associated with higher visuospatial cognition. It may be advisable to reframe the violent video game debate in reference to potential costs and benefits of this medium.
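    Pooling correlation effect sizes across studies is typically done via Fisher's z transform, weighting each study by n − 3. A minimal sketch of that pooling step (the study data are invented; this omits the publication-bias correction the review applies):

    ```python
    import math

    def pooled_effect_size(studies):
        """Fixed-effect pooled correlation via Fisher's z transform.

        studies: list of (r, n) pairs, each a study's correlation and sample
        size. Each study's z = atanh(r) is weighted by n - 3 (the inverse of
        the variance of z), then transformed back with tanh.
        """
        num = sum((n - 3) * math.atanh(r) for r, n in studies)
        den = sum(n - 3 for _, n in studies)
        return math.tanh(num / den)

    # Hypothetical studies: (r, n).
    pooled = pooled_effect_size([(0.2, 53), (0.4, 53)])
    ```

    Publication-bias adjustment (e.g. trim-and-fill) would then operate on the funnel of these z values before pooling.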

  5. Embedded security system for multi-modal surveillance in a railway carriage

    NASA Astrophysics Data System (ADS)

    Zouaoui, Rhalem; Audigier, Romaric; Ambellouis, Sébastien; Capman, François; Benhadda, Hamid; Joudrier, Stéphanie; Sodoyer, David; Lamarque, Thierry

    2015-10-01

    Public transport security is one of the main priorities of the public authorities when fighting against crime and terrorism. In this context, there is a great demand for autonomous systems able to detect abnormal events such as violent acts aboard passenger cars and intrusions when the train is parked at the depot. To this end, we present an innovative approach which aims at providing efficient automatic event detection by fusing video and audio analytics and reducing the false alarm rate compared to classical stand-alone video detection. The multi-modal system is composed of two microphones and one camera and integrates onboard video and audio analytics and fusion capabilities. On the one hand, for detecting intrusion, the system relies on the fusion of "unusual" audio events detection with intrusion detections from video processing. The audio analysis consists in modeling the normal ambience and detecting deviation from the trained models during testing. This unsupervised approach is based on clustering of automatically extracted segments of acoustic features and statistical Gaussian Mixture Model (GMM) modeling of each cluster. The intrusion detection is based on the three-dimensional (3D) detection and tracking of individuals in the videos. On the other hand, for violent events detection, the system fuses unsupervised and supervised audio algorithms with video event detection. The supervised audio technique detects specific events such as shouts. A GMM is used to catch the formant structure of a shout signal. Video analytics use an original approach for detecting aggressive motion by focusing on erratic motion patterns specific to violent events. As data with violent events is not easily available, a normality model with structured motions from non-violent videos is learned for one-class classification. A fusion algorithm based on Dempster-Shafer's theory analyses the asynchronous detection outputs and computes the degree of belief of each probable event.
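    The fusion step cited above uses Dempster-Shafer theory to merge belief from the asynchronous audio and video detectors. A minimal sketch of Dempster's rule of combination (the hypothesis names and mass values are illustrative, not the system's actual outputs):

    ```python
    def combine(m1, m2):
        """Dempster's rule of combination for two mass functions.

        m1, m2: dicts mapping frozenset hypotheses -> mass (each summing to 1).
        Mass on intersecting hypotheses is multiplied and accumulated; mass on
        disjoint hypotheses becomes conflict, which is normalized away.
        """
        combined, conflict = {}, 0.0
        for s1, v1 in m1.items():
            for s2, v2 in m2.items():
                inter = s1 & s2
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + v1 * v2
                else:
                    conflict += v1 * v2
        if conflict >= 1.0:
            raise ValueError("total conflict: sources fully disagree")
        return {s: v / (1.0 - conflict) for s, v in combined.items()}

    # Illustrative masses: each analytic assigns belief to "violence" vs. the
    # full frame of discernment (i.e., "don't know").
    theta = frozenset({"violence", "normal"})
    violence = frozenset({"violence"})
    audio = {violence: 0.6, theta: 0.4}
    video = {violence: 0.7, theta: 0.3}
    fused = combine(audio, video)
    ```

    Two weakly confident detectors reinforce each other here, which is how fusing audio and video analytics can cut the false alarm rate of either stream alone.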

  6. Remote video assessment for missile launch facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, G.G.; Stewart, W.A.

    1995-07-01

    The widely dispersed, unmanned launch facilities (LFs) for land-based ICBMs (intercontinental ballistic missiles) currently do not have visual assessment capability for existing intrusion alarms. The security response force currently must assess each alarm on-site. Remote assessment will enhance manpower, safety, and security efforts. Sandia National Laboratories was tasked by the USAF Electronic Systems Center to research, recommend, and demonstrate a cost-effective remote video assessment capability at missile LFs. The project's charter was to provide: system concepts; market survey analysis; technology search recommendations; and operational hardware demonstrations for remote video assessment from a missile LF to a remote security center via a cost-effective transmission medium and without using visible, on-site lighting. The technical challenges of this project were to: analyze various video transmission media, emphasizing use of the existing missile system copper line, which can be as long as 30 miles; accentuate an extremely low-cost system because of the many sites requiring system installation; integrate the video assessment system with the current LF alarm system; and provide video assessment at the remote sites with non-visible lighting.

  7. Post Game Analysis: Using Video-Based Coaching for Continuous Professional Development

    PubMed Central

    Hu, Yue-Yung; Peyre, Sarah E.; Arriaga, Alexander F.; Osteen, Robert T.; Corso, Katherine A.; Weiser, Thomas G.; Swanson, Richard S.; Ashley, Stanley W.; Raut, Chandrajit P.; Zinner, Michael J.; Gawande, Atul A.; Greenberg, Caprice C.

    2011-01-01

    Background The surgical learning curve persists for years after training, yet existing CME efforts targeting this are limited. We describe a pilot study of a scalable video-based intervention, providing individualized feedback on intra-operative performance. Study Design Four complex operations performed by surgeons of varying experience – a chief resident accompanied by the operating senior surgeon, a surgeon with <10 years in practice, another with 20–30 years, and a surgeon with >30 years of experience – were video-recorded. Video playback formed the basis of 1-hour coaching sessions with a peer-judged surgical expert. These sessions were audio-recorded, transcribed, and thematically coded. Results The sessions focused on operative technique, both technical aspects and decision-making. With increasing seniority, more discussion was devoted to the optimization of teaching and facilitation of the resident’s technical performance. Coaching sessions with senior surgeons were peer-to-peer interactions, with each discussing his preferred approach. The coach alternated between directing the session (asking probing questions) and responding to specific questions brought by the surgeons, depending on learning style. At all experience levels, video review proved valuable in identifying episodes of failure-to-progress and troubleshooting alternative approaches. All agreed this tool is a powerful one. Inclusion of trainees seems most appropriate when coaching senior surgeons; it may restrict the dialogue of more junior attendings. Conclusions Video-based coaching is an educational modality that targets intra-operative judgment, technique, and teaching. Surgeons of all levels found it highly instructive. This may provide a practical, much needed approach for continuous professional development. PMID:22192924

  8. Design Effectiveness Analysis of a Media Literacy Intervention to Reduce Violent Video Games Consumption Among Adolescents: The Relevance of Lifestyles Segmentation.

    PubMed

    Rivera, Reynaldo; Santos, David; Brändle, Gaspar; Cárdaba, Miguel Ángel M

    2016-04-01

    Exposure to media violence might have detrimental effects on psychological adjustment and is associated with aggression-related attitudes and behaviors. As a result, many media literacy programs were implemented to tackle that major public health issue. However, there is little evidence about their effectiveness. Evaluating design effectiveness, particularly regarding targeting process, would prevent adverse effects and improve the evaluation of evidence-based media literacy programs. The present research examined whether or not different relational lifestyles may explain the different effects of an antiviolence intervention program. Based on relational and lifestyles theory, the authors designed a randomized controlled trial and applied an analysis of variance 2 (treatment: experimental vs. control) × 4 (lifestyle classes emerged from data using latent class analysis: communicative vs. autonomous vs. meta-reflexive vs. fractured). Seven hundred and thirty-five Italian students distributed in 47 classes participated anonymously in the research (51.3% females). Participants completed a lifestyle questionnaire as well as their attitudes and behavioral intentions as the dependent measures. The results indicated that the program was effective in changing adolescents' attitudes toward violence. However, behavioral intentions toward consumption of violent video games were moderated by lifestyles. Those with communicative relational lifestyles showed fewer intentions to consume violent video games, while a boomerang effect was found among participants with problematic lifestyles. Adolescents' lifestyles played an important role in influencing the effectiveness of an intervention aimed at changing behavioral intentions toward the consumption of violent video games. For that reason, audience lifestyle segmentation analysis should be considered an essential technique for designing, evaluating, and improving media literacy programs. © The Author(s) 2016.

  9. Evaluating YouTube as a Source of Patient Education on the Role of the Hospitalist: A Cross-Sectional Study.

    PubMed

    Hudali, Tamer; Papireddy, Muralidhar; Bhattarai, Mukul; Deckard, Alan; Hingle, Susan

    2017-01-10

    Hospital medicine is a relatively new specialty field, dedicated to the delivery of comprehensive medical care to hospitalized patients. YouTube is one of the most frequently used websites, offering access to a gamut of videos from self-produced to professionally made. The aim of our study was to determine the adequacy of YouTube as an effective means to define and depict the role of hospitalists. YouTube was searched on November 17, 2014, using the following search words: "hospitalist," "hospitalist definition," "what is the role of a hospitalist," "define hospitalist," and "who is a hospitalist." Videos found only in the first 10 pages of each search were included. Non-English, noneducational, and nonrelevant videos were excluded. A novel 7-point scoring tool was created by the authors based on the definition of a hospitalist adopted by the Society of Hospital Medicine. Three independent reviewers evaluated, scored, and classified the videos into high, intermediate, and low quality based on the average score. A total of 102 videos out of 855 were identified as relevant and included in the analysis. Videos uploaded by academic institutions had the highest mean score. Only 6 videos were classified as high quality, 53 as intermediate quality, and 42 as low quality, with 82.4% (84/102) of the videos scoring an average of 4 or less. Most videos found in the search of a hospitalist definition are inadequate. Leading medical organizations and academic institutions should consider producing and uploading quality videos to YouTube to help patients and their families better understand the roles and definition of the hospitalist. ©Tamer Hudali, Muralidhar Papireddy, Mukul Bhattarai, Alan Deckard, Susan Hingle. Originally published in the Interactive Journal of Medical Research (http://www.i-jmr.org/), 10.01.2017.
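    The scoring workflow above averages three independent reviewers' scores on a 7-point tool and bins each video by quality. A minimal sketch (the high/low cutoffs here are assumptions for illustration; the abstract does not state the study's exact thresholds):

    ```python
    def classify_video(reviewer_scores, high_cutoff=5.0, low_cutoff=3.0):
        """Average independent reviewer scores and bin the video's quality.

        reviewer_scores: scores on a 7-point tool, one per reviewer.
        Cutoffs are illustrative assumptions, not the study's published ones.
        """
        avg = sum(reviewer_scores) / len(reviewer_scores)
        if avg > high_cutoff:
            quality = "high"
        elif avg >= low_cutoff:
            quality = "intermediate"
        else:
            quality = "low"
        return avg, quality

    avg, quality = classify_video([6, 6, 7])  # three hypothetical reviewers
    ```

    The abstract's headline figure (82.4% of videos averaging 4 or less) is then a simple count over these per-video averages.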

  10. Noncontact measurement of heart rate using facial video illuminated under natural light and signal weighted analysis.

    PubMed

    Yan, Yonggang; Ma, Xiang; Yao, Lifeng; Ouyang, Jianfei

    2015-01-01

    Non-contact, remote measurement of vital physical signals is important for reliable and comfortable physiological self-assessment. We present a novel optical imaging-based method to measure vital physical signals. Using a digital camera and ambient light, cardiovascular pulse waves were correctly extracted from color video of human faces, and vital physiological parameters such as heart rate were measured using a proposed signal-weighted analysis method. The measured HRs were consistent with those measured simultaneously with reference technologies (r=0.94, p<0.001 for HR). The results show that the imaging-based method is suitable for measuring physiological parameters and provides a reliable and comfortable measurement mode. The study lays a physical foundation for noninvasively measuring multiple physiological parameters in humans.
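    Once a pulse waveform has been extracted from the facial video, heart rate follows from the waveform's periodicity. A bare sketch using peak counting on a synthetic pulse signal (real pipelines use filtering and spectral analysis; the signal here is a clean sinusoid for illustration):

    ```python
    import math

    def heart_rate_bpm(signal, fps):
        """Estimate heart rate by counting local maxima of a pulse signal.

        signal: samples of the extracted pulse waveform.
        fps: video frame rate in frames per second.
        """
        peaks = sum(
            1
            for i in range(1, len(signal) - 1)
            if signal[i - 1] < signal[i] > signal[i + 1]
        )
        duration_s = len(signal) / fps
        return 60.0 * peaks / duration_s

    # Synthetic 1.2 Hz pulse (i.e., 72 bpm) sampled at 30 frames/s for 10 s.
    fps, f = 30, 1.2
    signal = [math.sin(2 * math.pi * f * i / fps) for i in range(fps * 10)]
    ```

    On noisy camera data the same idea is applied after bandpass filtering to the plausible heart-rate band (roughly 0.7-4 Hz).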

  11. A method of mobile video transmission based on J2EE

    NASA Astrophysics Data System (ADS)

    Guo, Jian-xin; Zhao, Ji-chun; Gong, Jing; Chun, Yang

    2013-03-01

    As 3G (3rd-generation) networks evolve worldwide, the rising demand for mobile video services and the enormous growth of video on the internet are creating major new revenue opportunities for mobile network operators and application developers. This paper introduces a method of mobile video transmission based on J2ME, presenting the video compression method, the video compression standard, and the software design. The proposed mobile video method is a typical mobile multimedia application with high availability and a wide range of applications. Users can view the video through terminal devices such as mobile phones.

  12. Inferring consistent functional interaction patterns from natural stimulus FMRI data

    PubMed Central

    Sun, Jiehuan; Hu, Xintao; Huang, Xiu; Liu, Yang; Li, Kaiming; Li, Xiang; Han, Junwei; Guo, Lei

    2014-01-01

    There has been increasing interest in how the human brain responds to natural stimulus such as video watching in the neuroimaging field. Along this direction, this paper presents our effort in inferring consistent and reproducible functional interaction patterns under natural stimulus of video watching among known functional brain regions identified by task-based fMRI. Then, we applied and compared four statistical approaches, including Bayesian network modeling with searching algorithms: greedy equivalence search (GES), Peter and Clark (PC) analysis, independent multiple greedy equivalence search (IMaGES), and the commonly used Granger causality analysis (GCA), to infer consistent and reproducible functional interaction patterns among these brain regions. It is interesting that a number of reliable and consistent functional interaction patterns were identified by the GES, PC and IMaGES algorithms in different participating subjects when they watched multiple video shots of the same semantic category. These interaction patterns are meaningful given current neuroscience knowledge and are reasonably reproducible across different brains and video shots. In particular, these consistent functional interaction patterns are supported by structural connections derived from diffusion tensor imaging (DTI) data, suggesting the structural underpinnings of consistent functional interactions. Our work demonstrates that specific consistent patterns of functional interactions among relevant brain regions might reflect the brain's fundamental mechanisms of online processing and comprehension of video messages. PMID:22440644

  13. Prevalence of video game use, cigarette smoking, and acceptability of a video game-based smoking cessation intervention among online adults.

    PubMed

    Raiff, Bethany R; Jarvis, Brantley P; Rapoza, Darion

    2012-12-01

    Video games may serve as an ideal platform for developing and implementing technology-based contingency management (CM) interventions for smoking cessation as they can be used to address a number of barriers to the utilization of CM (e.g., replacing monetary rewards with virtual game-based rewards). However, little is known about the relationship between video game playing and cigarette smoking. The current study determined the prevalence of video game use, video game practices, and the acceptability of a video game-based CM intervention for smoking cessation among adult smokers and nonsmokers, including health care professionals. In an online survey, participants (N = 499) answered questions regarding their cigarette smoking and video game playing practices. Participants also reported if they believed a video game-based CM intervention could motivate smokers to quit and if they would recommend such an intervention. Nearly half of the participants surveyed reported smoking cigarettes, and among smokers, 74.5% reported playing video games. Video game playing was more prevalent in smokers than nonsmokers, and smokers reported playing more recently, for longer durations each week, and were more likely to play social games than nonsmokers. Most participants (63.7%), including those who worked as health care professionals, believed that a video game-based CM intervention would motivate smokers to quit and would recommend such an intervention to someone trying to quit (67.9%). Our findings suggest that delivering technology-based smoking cessation interventions via video games has the potential to reach substantial numbers of smokers and that most smokers, nonsmokers, and health care professionals endorsed this approach.

  14. Automated tracking of whiskers in videos of head fixed rodents.

    PubMed

    Clack, Nathan G; O'Connor, Daniel H; Huber, Daniel; Petreanu, Leopoldo; Hires, Andrew; Peron, Simon; Svoboda, Karel; Myers, Eugene W

    2012-01-01

    We have developed software for fully automated tracking of vibrissae (whiskers) in high-speed videos (>500 Hz) of head-fixed, behaving rodents trimmed to a single row of whiskers. Performance was assessed against a manually curated dataset consisting of 1.32 million video frames comprising 4.5 million whisker traces. The current implementation detects whiskers with a recall of 99.998% and identifies individual whiskers with 99.997% accuracy. The average processing rate for these images was 8 Mpx/s/cpu (2.6 GHz Intel Core2, 2 GB RAM). This translates to 35 processed frames per second for a 640 px×352 px video of 4 whiskers. The speed and accuracy achieved enables quantitative behavioral studies where the analysis of millions of video frames is required. We used the software to analyze the evolving whisking strategies as mice learned a whisker-based detection task over the course of 6 days (8148 trials, 25 million frames) and measure the forces at the sensory follicle that most underlie haptic perception.
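    The 35 frames-per-second figure follows directly from the stated 8 Mpx/s/cpu throughput and the 640 px × 352 px frame size; as a quick check:

    ```python
    def frames_per_second(px_rate, width, height):
        """Frames processed per second at a given pixel throughput."""
        return px_rate / (width * height)

    # The abstract's stated rate (8 Mpx/s/cpu) and frame dimensions.
    fps = frames_per_second(8_000_000, 640, 352)  # about 35.5 frames/s
    ```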

  15. Automated Tracking of Whiskers in Videos of Head Fixed Rodents

    PubMed Central

    Clack, Nathan G.; O'Connor, Daniel H.; Huber, Daniel; Petreanu, Leopoldo; Hires, Andrew; Peron, Simon; Svoboda, Karel; Myers, Eugene W.

    2012-01-01

    We have developed software for fully automated tracking of vibrissae (whiskers) in high-speed videos (>500 Hz) of head-fixed, behaving rodents trimmed to a single row of whiskers. Performance was assessed against a manually curated dataset consisting of 1.32 million video frames comprising 4.5 million whisker traces. The current implementation detects whiskers with a recall of 99.998% and identifies individual whiskers with 99.997% accuracy. The average processing rate for these images was 8 Mpx/s/cpu (2.6 GHz Intel Core2, 2 GB RAM). This translates to 35 processed frames per second for a 640 px×352 px video of 4 whiskers. The speed and accuracy achieved enables quantitative behavioral studies where the analysis of millions of video frames is required. We used the software to analyze the evolving whisking strategies as mice learned a whisker-based detection task over the course of 6 days (8148 trials, 25 million frames) and measure the forces at the sensory follicle that most underlie haptic perception. PMID:22792058

  16. Analysis of YouTube as a source of information for peripheral neuropathy.

    PubMed

    Gupta, Harsh V; Lee, Ricky W; Raina, Sunil K; Behrle, Brian L; Hinduja, Archana; Mittal, Manoj K

    2016-01-01

    YouTube is an important resource for patients. No study has evaluated the information on peripheral neuropathy disseminated by YouTube videos. In this study, our aim was to perform a systematic review of information on YouTube regarding peripheral neuropathy. The Web site (www.youtube.com) was searched between September 19 and 21, 2014, for the terms "neuropathy," "peripheral neuropathy," "diabetic neuropathy," "neuropathy causes," and "neuropathy treatment." Two hundred videos met the inclusion criteria. Healthcare professionals accounted for almost half of the treatment videos (41 of 92; 44.6%), and most came from chiropractors (18 of 41; 43.9%). Alternative medicine was cited most frequently among the treatment discussions (54 of 145, 37.2%), followed by devices (38 of 145, 26.2%), and pharmacological treatments (23 of 145, 15.9%). Approximately half of the treatment options discussed in the videos were not evidence-based. Caution should be exercised when YouTube videos are used as a patient resource. © 2015 Wiley Periodicals, Inc.

  17. Scene Analysis: Non-Linear Spatial Filtering for Automatic Target Detection.

    DTIC Science & Technology

    1982-12-01

    In this thesis, a method for two-dimensional pattern recognition was developed and tested. The method included a global search scheme for candidate...purpose was to develop a base of image processing software for the AFIT Digital Signal Processing Laboratory NOVA-ECLIPSE minicomputer system, for...

  18. Using video-based observation research methods in primary care health encounters to evaluate complex interactions.

    PubMed

    Asan, Onur; Montague, Enid

    2014-01-01

    The purpose of this paper is to describe the use of video-based observation research methods in the primary care environment, to highlight important methodological considerations, and to provide practical guidance for primary care and human factors researchers conducting video studies to understand patient-clinician interaction in primary care settings. We reviewed studies in the literature that used video methods in health care research, and we also drew on our own experience from video studies we conducted in primary care settings. This paper highlights the benefits of video techniques, such as multi-channel recording and video coding, and compares "unmanned" video recording with the traditional observation method in primary care research. We propose a step-by-step list that can be followed to conduct an effective video study in a primary care setting for a given problem. The paper also describes obstacles researchers should anticipate when using video recording methods in future studies. With new technological improvements, video-based observation research is becoming a promising method in primary care and HFE research. Video recording has been under-utilized as a data collection tool because of confidentiality and privacy issues. However, it has many benefits compared with traditional observation, and recent studies using video recording methods have introduced new research areas and approaches.

  19. Heterogeneity image patch index and its application to consumer video summarization.

    PubMed

    Dang, Chinh T; Radha, Hayder

    2014-06-01

    Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min-max based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity.
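The HIP index itself is defined in the paper; purely as a loose illustration of the underlying idea (an entropy-based measure of patch heterogeneity), one can score a grayscale frame by the Shannon entropy of its patch-mean histogram. The patch size, bin count, and scoring rule below are illustrative choices, not the authors' definition:

```python
import math

def patch_heterogeneity(frame, patch=4, bins=16):
    """Entropy of the histogram of patch means -- a crude heterogeneity proxy,
    not the HIP index from the paper."""
    h, w = len(frame), len(frame[0])
    means = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            vals = [frame[y + dy][x + dx] for dy in range(patch) for dx in range(patch)]
            means.append(sum(vals) / len(vals))
    hist = [0] * bins
    for m in means:
        hist[min(int(m / 256 * bins), bins - 1)] += 1
    n = len(means)
    return -sum(c / n * math.log2(c / n) for c in hist if c)

flat = [[128] * 16 for _ in range(16)]                               # uniform frame
noisy = [[(7 * (x + y * 16)) % 256 for x in range(16)] for y in range(16)]
print(patch_heterogeneity(flat) == 0, patch_heterogeneity(noisy) > 0)  # True True
```

A homogeneous frame collapses into one histogram bin (zero entropy), while a varied frame spreads across bins, mimicking how a heterogeneity curve separates eventful from static frames.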

  20. Having mentors and campus social networks moderates the impact of worries and video gaming on depressive symptoms: a moderated mediation analysis.

    PubMed

    Lee, Jong-Sun; Jeong, Bumseok

    2014-05-05

    Easy access to the internet has spawned a wealth of research investigating the effects of its use on depression. However, one limitation of many previous studies is that they disregard the interactive mechanisms of risk and protective factors. The aim of the present study was to investigate a resilience model of the relationships between worry, daily internet video game playing, daily sleep duration, mentors, social networks and depression, using a moderated mediation analysis. 6068 Korean undergraduate and graduate students participated in this study. The participants completed a web-based mental health screening questionnaire including the Beck Depression Inventory (BDI) and information about number of worries, number of mentors, number of campus social networks, daily sleep duration, daily amount of internet video game playing and daily amount of internet searching on computer or smartphone. A moderated mediation analysis was carried out using the PROCESS macro, which allows the inclusion of mediators and a moderator in the same model. The results showed that the daily amount of internet video game playing and daily sleep duration partially mediated the association between the number of worries and the severity of depression. In addition, the mediating effect of the daily amount of internet video game playing was moderated by both the number of mentors and the number of campus social networks. The current findings indicate that the negative impact of worry on depression through internet video game playing can be buffered when students maintain a number of mentors and campus social networks. Interventions should therefore target individuals who have a higher number of worries but only a few mentors or campus social networks. Social support via campus mentorship and social networks ameliorates the severity of depression in university students.
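The PROCESS macro is an SPSS/SAS tool. Purely to illustrate what a simple (unmoderated) mediation decomposition looks like, the sketch below estimates the a path (worry → gaming) and b path (gaming → depression, controlling for worry) with ordinary least squares on synthetic data; all variable names and coefficients are invented for illustration:

```python
# Synthetic mediation example: worry -> gaming -> depression (illustrative only).
n = 100
worry = [float(i) for i in range(n)]
noise = [1.0 if i % 2 == 0 else -1.0 for i in range(n)]       # roughly orthogonal to worry
gaming = [0.5 * w + e for w, e in zip(worry, noise)]          # true a-path ~ 0.5
depress = [0.3 * w + 0.8 * g for w, g in zip(worry, gaming)]  # direct 0.3, true b-path 0.8

def centered(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

W, G, D = centered(worry), centered(gaming), centered(depress)
Sww = sum(w * w for w in W); Swg = sum(w * g for w, g in zip(W, G))
Sgg = sum(g * g for g in G); SwD = sum(w * d for w, d in zip(W, D))
SgD = sum(g * d for g, d in zip(G, D))

a = Swg / Sww                               # simple OLS: worry -> gaming
det = Sww * Sgg - Swg * Swg                 # two-predictor OLS normal equations
b = (Sww * SgD - Swg * SwD) / det           # gaming -> depression, controlling worry
direct = (Sgg * SwD - Swg * SgD) / det      # worry -> depression, controlling gaming
indirect = a * b                            # mediated (indirect) effect
print(round(a, 2), round(b, 2), round(direct, 2), round(indirect, 2))
```

The indirect effect a·b is what a mediation analysis tests; moderated mediation additionally lets a or b vary with a moderator (here, mentors or social networks).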

  1. Adherent Raindrop Modeling, Detection and Removal in Video.

    PubMed

    You, Shaodi; Tan, Robby T; Kawakami, Rei; Mukaigawa, Yasuhiro; Ikeuchi, Katsushi

    2016-09-01

    Raindrops adhered to a windscreen or window glass can significantly degrade the visibility of a scene. Modeling, detecting and removing raindrops will, therefore, benefit many computer vision applications, particularly outdoor surveillance systems and intelligent vehicle systems. In this paper, a method that automatically detects and removes adherent raindrops is introduced. The core idea is to exploit the local spatio-temporal derivatives of raindrops. To accomplish this, we first model adherent raindrops using the laws of physics, and detect raindrops based on these models in combination with the motion and intensity temporal derivatives of the input video. Having detected the raindrops, we remove them and restore the images based on the observation that some areas of a raindrop completely occlude the scene, while other areas occlude it only partially. We restore partially occluding areas by retrieving as much information of the scene as possible, namely by solving a blending function on the detected partially occluding areas using the temporal intensity derivative. We recover completely occluding areas by using a video completion technique. Experimental results on various real videos show the effectiveness of our method.
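The paper's detection models are more elaborate, but the core cue (adherent raindrops blur and attenuate temporal intensity changes relative to the moving background) can be caricatured with a per-pixel temporal-derivative statistic. The clip data and threshold below are invented for illustration:

```python
# Per-pixel mean absolute temporal derivative over a short clip.
# Pixels whose intensity barely changes while the scene moves are
# candidate adherent-raindrop regions (a crude stand-in for the
# paper's model-based detection).
clip = [
    [[10, 200, 30], [100, 100, 100]],   # frame 0: top row moves, bottom row is static
    [[200, 30, 10], [101, 100, 99]],    # frame 1
    [[30, 10, 200], [100, 101, 100]],   # frame 2
]

h, w = len(clip[0]), len(clip[0][0])
activity = [[0.0] * w for _ in range(h)]
for t in range(1, len(clip)):
    for y in range(h):
        for x in range(w):
            activity[y][x] += abs(clip[t][y][x] - clip[t - 1][y][x]) / (len(clip) - 1)

THRESHOLD = 5.0                        # illustrative value
raindrop_mask = [[a < THRESHOLD for a in row] for row in activity]
print(raindrop_mask)                   # the nearly static bottom row is flagged
```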

  2. Video-Based Modeling: Differential Effects due to Treatment Protocol

    ERIC Educational Resources Information Center

    Mason, Rose A.; Ganz, Jennifer B.; Parker, Richard I.; Boles, Margot B.; Davis, Heather S.; Rispoli, Mandy J.

    2013-01-01

    Identifying evidence-based practices for individuals with disabilities requires specification of procedural implementation. Video-based modeling (VBM), consisting of both video self-modeling and video modeling with others as model (VMO), is one class of interventions that has frequently been explored in the literature. However, current information…

  3. Real-Time Cameraless Measurement System Based on Bioelectrical Ventilatory Signals to Evaluate Fear and Anxiety.

    PubMed

    Soh, Zu; Matsuno, Motoki; Yoshida, Masayuki; Tsuji, Toshio

    2018-04-01

    Fear and anxiety in fish are generally evaluated by video-based behavioral analysis. However, it is difficult to distinguish the psychological state of fish exclusively through video analysis, particularly whether the fish are freezing, which represents typical fear behavior, or merely resting. We propose a system that can measure bioelectrical signals called ventilatory signals and simultaneously analyze swimming behavior in real time. Experimental results comparing the behavioral analysis of the proposed system and the camera system showed a low error level with an average absolute position error of 9.75 ± 3.12 mm (about one-third of the body length) and a correlation between swimming speeds of r = 0.93 ± 0.07 (p < 0.01). We also exposed the fish to zebrafish skin extracts containing alarm substances that induce fear and anxiety responses to evaluate their emotional changes. The results confirmed that this solution significantly changed all behavioral and ventilatory signal indices obtained by the proposed system (p < 0.01). By combining the behavioral and ventilatory signal indices, we could detect fear and anxiety with a discrimination rate of 83.3% ± 16.7%. Furthermore, we found that the decreasing fear and anxiety over time could be detected according to the peak frequency of the ventilatory signals, which cannot be measured through video analysis.

  4. Video Clips for Youtube: Collaborative Video Creation as an Educational Concept for Knowledge Acquisition and Attitude Change Related to Obesity Stigmatization

    ERIC Educational Resources Information Center

    Zahn, Carmen; Schaeffeler, Norbert; Giel, Katrin Elisabeth; Wessel, Daniel; Thiel, Ansgar; Zipfel, Stephan; Hesse, Friedrich W.

    2014-01-01

    Mobile phones and advanced web-based video tools have pushed forward new paradigms for using video in education: Today, students can readily create and broadcast their own digital videos for others and create entirely new patterns of video-based information structures for modern online-communities and multimedia environments. This paradigm shift…

  5. Concept of Video Bookmark (Videomark) and Its Application to the Collaborative Indexing of Lecture Video in Video-Based Distance Education

    ERIC Educational Resources Information Center

    Haga, Hirohide

    2004-01-01

    This article describes the development of the video bookmark, hereinafter referred to as the videomark, and its application to the collaborative indexing of the lecture video in video-based distance education system. The combination of the videomark system with the bulletin board system (BBS), which is another network tool used for discussion, is…

  6. A video-based transdiagnostic REBT universal prevention program for internalizing problems in adolescents: study protocol of a cluster randomized controlled trial.

    PubMed

    Păsărelu, Costina Ruxandra; Dobrean, Anca

    2018-04-13

    Internalizing problems are the most prevalent mental health problems in adolescents. Transdiagnostic programs are a promising means of treating multiple problems within the same protocol; however, there is limited research regarding the efficacy of such programs delivered as universal prevention programs in school settings. Therefore, the present study aims to investigate the efficacy of a video-based transdiagnostic rational emotive behavioral therapy (REBT) universal prevention program for internalizing problems. The second objective of the present paper is to investigate the underlying mechanisms of change, namely maladaptive cognitions. A two-arm parallel randomized controlled trial will be conducted with two groups: a video-based transdiagnostic REBT universal prevention program and a wait-list control. Power analysis indicated that the study will involve 338 participants. Adolescents aged between 12 and 17 years, from several middle schools and high schools, will be invited to participate. Assessments will be conducted at four time points: baseline (T1), post-intervention (T2), 3-month follow-up (T3) and 12-month follow-up (T4). Intent-to-treat analysis will be used to investigate significant differences between the two groups in both primary and secondary outcomes. This is the first randomized controlled trial that aims to investigate the efficacy and mechanisms of change of a video-based transdiagnostic REBT universal prevention program delivered in a school context. The present study has important implications for developing efficient, interactive prevention programs that target both anxiety and depressive symptoms within the same protocol. ClinicalTrials.gov: NCT02756507 . Registered on 25 April 2016.

  7. Video Analysis of Rolling Cylinders

    ERIC Educational Resources Information Center

    Phommarach, S.; Wattanakasiwich, P.; Johnston, I.

    2012-01-01

    In this work, we studied the rolling motion of solid and hollow cylinders down an inclined plane at different angles. The motions were captured on video at 300 frames s[superscript -1], and the videos were analyzed frame by frame using video analysis software. Data from the real motion were compared with the theory of rolling down an inclined…
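The comparison theory is standard: a cylinder rolling without slipping down an incline of angle θ accelerates at a = g sin θ / (1 + I/mr²), where I/mr² = 1/2 for a solid cylinder and ≈ 1 for a thin-walled hollow one. A quick sketch of the predicted values:

```python
import math

g = 9.81  # m/s^2

def rolling_acceleration(theta_deg, inertia_ratio):
    """a = g*sin(theta) / (1 + I/(m r^2)) for rolling without slipping."""
    return g * math.sin(math.radians(theta_deg)) / (1 + inertia_ratio)

for theta in (10, 20, 30):
    solid = rolling_acceleration(theta, 0.5)   # I = (1/2) m r^2
    hollow = rolling_acceleration(theta, 1.0)  # I = m r^2 (thin shell)
    print(f"{theta:2d} deg: solid {solid:.2f} m/s^2, hollow {hollow:.2f} m/s^2")
```

The solid cylinder always wins the race down the incline, which is exactly what frame-by-frame video analysis lets students verify quantitatively.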

  8. High-Speed Video Analysis of Damped Harmonic Motion

    ERIC Educational Resources Information Center

    Poonyawatpornkul, J.; Wattanakasiwich, P.

    2013-01-01

    In this paper, we acquire and analyse high-speed videos of a spring-mass system oscillating in glycerin at different temperatures. Three cases of damped harmonic oscillation are investigated and analysed by using high-speed video at a rate of 120 frames s[superscript -1] and Tracker Video Analysis (Tracker) software. We present empirical data for…
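Underlying this kind of analysis is the underdamped model x(t) = A e^(−γt) cos(ωt). Sampling the track at successive periods t_k = 2πk/ω gives values that decay exactly as e^(−γt), so γ can be read off from a log-ratio of consecutive samples. A synthetic sketch (parameter values invented):

```python
import math

A, gamma, omega = 0.05, 0.8, 12.0   # amplitude (m), decay rate (1/s), angular freq (rad/s)

def x(t):
    """Underdamped oscillation: x(t) = A * exp(-gamma*t) * cos(omega*t)."""
    return A * math.exp(-gamma * t) * math.cos(omega * t)

period = 2 * math.pi / omega
samples = [x(k * period) for k in range(5)]          # cos(omega*t) = 1 at each sample
gamma_est = math.log(samples[0] / samples[1]) / period
print(round(gamma_est, 6))                           # recovers gamma = 0.8
```

With real tracked data one would fit the log of the peak amplitudes against time; the temperature dependence in the study enters through the viscosity-dependent γ.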

  9. Analysis of Soot Propensity in Combustion Processes Using Optical Sensors and Video Magnification

    PubMed Central

    Fuentes, Andrés; Reszka, Pedro; Carvajal, Gonzalo

    2018-01-01

    Industrial combustion processes are an important source of particulate matter, causing significant pollution problems that affect human health, and are a major contributor to global warming. The most common method for analyzing the soot emission propensity of flames is Smoke Point Height (SPH) analysis, which relates the fuel flow rate to a critical flame height at which soot particles begin to leave the reactive zone through the tip of the flame. The SPH is marked by morphological changes at the flame tip. SPH analysis is normally done through flame observations with the naked eye, leading to high bias. Other techniques are more accurate but are not practical to implement in industrial settings, such as Line Of Sight Attenuation (LOSA), which obtains soot volume fractions within the flame from the attenuation of a laser beam. We propose the use of video magnification techniques to detect the flame morphological changes and thus determine the SPH while minimizing observation bias. We have applied, for the first time, Eulerian Video Magnification (EVM) and Phase-based Video Magnification (PVM) to an ethylene laminar diffusion flame. The results were compared with LOSA measurements and indicate that EVM is the most accurate method for SPH determination. PMID:29751625
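At its core, Eulerian magnification adds back an amplified, temporally band-passed copy of each pixel's intensity signal, y(t) = x(t) + α·bandpass(x)(t), making subtle motions such as flame-tip flicker visible. The toy version below amplifies deviations from a causal moving average on a single pixel trace; the gain and window are illustrative, not the EVM parameters:

```python
# Toy Eulerian-style magnification on one pixel's intensity over time:
# amplify the deviation of each sample from a short moving average.
trace = [100, 100, 101, 100, 99, 100, 101, 100, 100]   # subtle flicker
alpha, win = 10.0, 3                                    # gain and window (illustrative)

magnified = []
for i, v in enumerate(trace):
    lo = max(0, i - win + 1)
    baseline = sum(trace[lo:i + 1]) / (i + 1 - lo)       # causal moving average (low-pass)
    magnified.append(baseline + alpha * (v - baseline))  # boost the high-pass residue

print([round(m, 1) for m in magnified])   # flicker of +/-1 becomes clearly visible
```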

  10. Teaching Astronomy Using Tracker

    ERIC Educational Resources Information Center

    Belloni, Mario; Christian, Wolfgang; Brown, Douglas

    2013-01-01

    A recent paper in this journal presented a set of innovative uses of video analysis for introductory physics using Tracker. In addition, numerous other papers have described how video analysis can be a meaningful part of introductory courses. Yet despite this, there are few resources for using video analysis in introductory astronomy classes. In…

  11. Automated Visual Event Detection, Tracking, and Data Management System for Cabled- Observatory Video

    NASA Astrophysics Data System (ADS)

    Edgington, D. R.; Cline, D. E.; Schlining, B.; Raymond, E.

    2008-12-01

    Ocean observatories and underwater video surveys have the potential to unlock important discoveries with new and existing camera systems. Yet the burden of video management and analysis often requires reducing the amount of video recorded through time-lapse video or similar methods. It's unknown how many digitized video data sets exist in the oceanographic community, but we suspect that many remain under analyzed due to lack of good tools or human resources to analyze the video. To help address this problem, the Automated Visual Event Detection (AVED) software and The Video Annotation and Reference System (VARS) have been under development at MBARI. For detecting interesting events in the video, the AVED software has been developed over the last 5 years. AVED is based on a neuromorphic-selective attention algorithm, modeled on the human vision system. Frames are decomposed into specific feature maps that are combined into a unique saliency map. This saliency map is then scanned to determine the most salient locations. The candidate salient locations are then segmented from the scene using algorithms suitable for the low, non-uniform light and marine snow typical of deep underwater video. For managing the AVED descriptions of the video, the VARS system provides an interface and database for describing, viewing, and cataloging the video. VARS was developed by the MBARI for annotating deep-sea video data and is currently being used to describe over 3000 dives by our remotely operated vehicles (ROV), making it well suited to this deepwater observatory application with only a few modifications. To meet the compute and data intensive job of video processing, a distributed heterogeneous network of computers is managed using the Condor workload management system. This system manages data storage, video transcoding, and AVED processing. 
Looking to the future, we see high-speed networks and Grid technology as an important element in addressing the problem of processing and accessing large video data sets.
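AVED's saliency model follows an Itti-style architecture: feature maps are combined into a saliency map whose maxima are scanned. As a rough single-feature caricature, a center-surround intensity-contrast map and its argmax can be computed as follows (the grid and neighborhood size are invented):

```python
# Single-feature "saliency": center-surround contrast on an intensity grid,
# then pick the most salient location (a caricature of the AVED pipeline).
grid = [
    [10, 10, 10, 10, 10],
    [10, 10, 10, 10, 10],
    [10, 10, 90, 10, 10],   # a bright blob: should win
    [10, 10, 10, 10, 10],
]

h, w = len(grid), len(grid[0])
saliency = [[0.0] * w for _ in range(h)]
for y in range(h):
    for x in range(w):
        neigh = [grid[y + dy][x + dx]
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if (dy or dx) and 0 <= y + dy < h and 0 <= x + dx < w]
        surround = sum(neigh) / len(neigh)
        saliency[y][x] = abs(grid[y][x] - surround)     # center vs. surround contrast

best = max(((y, x) for y in range(h) for x in range(w)),
           key=lambda p: saliency[p[0]][p[1]])
print(best)   # (2, 2): the bright blob
```

The real system uses multiple feature channels and segmentation tuned for low, non-uniform light and marine snow; this sketch only shows the center-surround principle.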

  12. Usability of aerial video footage for 3-D scene reconstruction and structural damage assessment

    NASA Astrophysics Data System (ADS)

    Cusicanqui, Johnny; Kerle, Norman; Nex, Francesco

    2018-06-01

    Remote sensing has evolved into the most efficient approach to assess post-disaster structural damage in extensively affected areas through the use of spaceborne data. For smaller and, in particular, complex urban disaster scenes, multi-perspective aerial imagery obtained with unmanned aerial vehicles and derived dense color 3-D models are increasingly being used. These types of data allow the direct and automated recognition of damage-related features, supporting an effective post-disaster structural damage assessment. However, the rapid collection and sharing of multi-perspective aerial imagery is still limited due to tight or lacking regulations and legal frameworks. A potential alternative is aerial video footage, which is typically acquired and shared by civil protection institutions or news media and which tends to be the first type of airborne data available. Nevertheless, inherent artifacts and the lack of suitable processing means have long limited its potential use in structural damage assessment and other post-disaster activities. In this research the usability of modern aerial video data was evaluated based on a comparative quality and application analysis of video data and multi-perspective imagery (photos), and their derivative 3-D point clouds created using current photogrammetric techniques. Additionally, the effects of external factors, such as topography and the presence of smoke and moving objects, were determined by analyzing two different earthquake-affected sites: Tainan (Taiwan) and Pescara del Tronto (Italy). Results demonstrated similar usability for video and photos, as shown by the difference of only 2 cm between the accuracies of video- and photo-based 3-D point clouds. Despite the low video resolution, the usability of these data was compensated for by a small ground sampling distance. Where low quality or limited applicability did occur, it resulted not from video characteristics but from non-data-related factors, such as changes in the scene, lack of texture, or moving objects. We conclude that not only are current video data more rapidly available than photos, but they also have a comparable ability to assist in image-based structural damage assessment and other post-disaster activities.

  13. Student perceptions of a simulation-based flipped classroom for the surgery clerkship: A mixed-methods study.

    PubMed

    Liebert, Cara A; Mazer, Laura; Bereknyei Merrell, Sylvia; Lin, Dana T; Lau, James N

    2016-09-01

    The flipped classroom, a blended learning paradigm that uses pre-session online videos reinforced with interactive sessions, has been proposed as an alternative to traditional lectures. This article investigates medical students' perceptions of a simulation-based, flipped classroom for the surgery clerkship and suggests best practices for implementation in this setting. A prospective cohort of students (n = 89), who were enrolled in the surgery clerkship during a 1-year period, was taught via a simulation-based, flipped classroom approach. Students completed an anonymous, end-of-clerkship survey regarding their perceptions of the curriculum. Quantitative analysis of Likert responses and qualitative analysis of narrative responses were performed. Students' perceptions of the curriculum were positive, with 90% rating it excellent or outstanding. The majority reported the curriculum should be continued (95%) and applied to other clerkships (84%). The component received most favorably by the students was the simulation-based skill sessions. Students rated the effectiveness of the Khan Academy-style videos the highest compared with other video formats (P < .001). Qualitative analysis identified 21 subthemes in 4 domains: general positive feedback, educational content, learning environment, and specific benefits to medical students. The students reported that the learning environment fostered accountability and self-directed learning. Specific perceived benefits included preparation for the clinical rotation and the National Board of Medical Examiners shelf exam, decreased class time, socialization with peers, and faculty interaction. Medical students' perceptions of a simulation-based, flipped classroom in the surgery clerkship were overwhelmingly positive. The flipped classroom approach can be applied successfully in a surgery clerkship setting and may offer additional benefits compared with traditional lecture-based curricula. Copyright © 2016 Elsevier Inc. 
All rights reserved.

  14. Terminal Performance of Lead Free Pistol Bullets in Ballistic Gelatin Using Retarding Force Analysis from High Speed Video

    DTIC Science & Technology

    2016-04-04

    Terminal Performance of Lead-Free Pistol Bullets in Ballistic Gelatin Using Retarding Force Analysis from High Speed Video ELIJAH COURTNEY, AMY...quantified using high speed video. The temporary stretch cavities and permanent wound cavities are also characterized. Two factors tend to reduce the...Performance of Lead-Free Pistol Bullets in Ballistic Gelatin Using Retarding Force Analysis from High Speed Video cavity. In addition, stretching can also

  15. A Benchmark Dataset and Saliency-guided Stacked Autoencoders for Video-based Salient Object Detection.

    PubMed

    Li, Jia; Xia, Changqun; Chen, Xiaowu

    2017-10-12

    Image-based salient object detection (SOD) has been extensively studied in past decades. However, video-based SOD is much less explored due to the lack of large-scale video datasets within which salient objects are unambiguously defined and annotated. Toward this end, this paper proposes a video-based SOD dataset that consists of 200 videos. In constructing the dataset, we manually annotate all objects and regions over 7,650 uniformly sampled keyframes and collect the eye-tracking data of 23 subjects who free-view all videos. From the user data, we find that salient objects in a video can be defined as objects that consistently pop out throughout the video, and objects with such attributes can be unambiguously annotated by combining manually annotated object/region masks with eye-tracking data of multiple subjects. To the best of our knowledge, it is currently the largest dataset for video-based salient object detection. Based on this dataset, this paper proposes an unsupervised baseline approach for video-based SOD using saliency-guided stacked autoencoders. In the proposed approach, multiple spatiotemporal saliency cues are first extracted at the pixel, superpixel and object levels. With these saliency cues, stacked autoencoders are constructed in an unsupervised manner that automatically infers a saliency score for each pixel by progressively encoding the high-dimensional saliency cues gathered from the pixel and its spatiotemporal neighbors. In experiments, the proposed unsupervised approach is compared with 31 state-of-the-art models on the proposed dataset and outperforms 30 of them, including 19 image-based classic (unsupervised or non-deep-learning) models, six image-based deep learning models, and five video-based unsupervised models. Moreover, benchmarking results show that the proposed dataset is very challenging and has the potential to boost the development of video-based SOD.

  16. The right frame of reference makes it simple: an example of introductory mechanics supported by video analysis of motion

    NASA Astrophysics Data System (ADS)

    Klein, P.; Gröber, S.; Kuhn, J.; Fleischhauer, A.; Müller, A.

    2015-01-01

    The selection and application of coordinate systems is an important issue in physics. However, considering different frames of reference in a given problem sometimes seems unintuitive and is difficult for students. We present a concrete problem of projectile motion which vividly demonstrates the value of considering different frames of reference. We use this example to explore the effectiveness of video-based motion analysis (VBMA) as an instructional technique at university level in enhancing students' understanding of the abstract concept of coordinate systems. A pilot study with 47 undergraduate students indicates that VBMA instruction improves conceptual understanding of this issue.

  17. A Sensor-Aided H.264/AVC Video Encoder for Aerial Video Sequences with In-the-Loop Metadata Correction

    NASA Astrophysics Data System (ADS)

    Cicala, L.; Angelino, C. V.; Ruatta, G.; Baccaglini, E.; Raimondo, N.

    2015-08-01

    Unmanned Aerial Vehicles (UAVs) are often employed to collect high resolution images in order to perform image mosaicking and/or 3D reconstruction. Images are usually stored on board and then processed with on-ground desktop software. In such a way the computational load, and hence the power consumption, is moved on ground, leaving on board only the task of storing data. Such an approach is important in the case of small multi-rotorcraft UAVs because of their low endurance due to short battery life. Images can be stored on board with either still image or video data compression. Still image systems are preferred when low frame rates are involved, because video coding systems are based on motion estimation and compensation algorithms, which fail when the motion vectors are significantly long and when the overlap between subsequent frames is very small. In this scenario, UAV attitude and position metadata from the Inertial Navigation System (INS) can be employed to estimate global motion parameters without video analysis. A low complexity image analysis can still be performed in order to refine the motion field estimated using only the metadata. In this work, we propose to use this refinement step to improve the position and attitude estimates produced by the navigation system, in order to maximize the encoder performance. Experiments are performed on both simulated and real world video sequences.
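The idea of estimating global motion from INS metadata can be illustrated with the small-angle relation between camera rotation and image translation: a yaw rate ψ̇ over a frame interval Δt shifts the image by roughly f·ψ̇·Δt pixels, for focal length f expressed in pixels. The numbers below are invented:

```python
# Small-angle approximation: pixel shift ~ focal_px * rotation (rad).
focal_px = 1200.0        # focal length in pixels (hypothetical camera)
yaw_rate = 0.05          # rad/s from the INS (hypothetical)
pitch_rate = -0.02       # rad/s (hypothetical)
dt = 1 / 30              # frame interval at 30 fps

dx = focal_px * yaw_rate * dt     # horizontal global motion, pixels/frame
dy = focal_px * pitch_rate * dt   # vertical global motion, pixels/frame
print(round(dx, 2), round(dy, 2))  # seed motion vector for the encoder's search
```

Such a metadata-derived vector gives the encoder a starting point for motion search; the paper's refinement step then corrects it (and feeds corrections back to the navigation estimates).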

  18. Research of Pedestrian Crossing Safety Facilities Based on the Video Detection

    NASA Astrophysics Data System (ADS)

    Li, Sheng-Zhen; Xie, Quan-Long; Zang, Xiao-Dong; Tang, Guo-Jun

    Because pedestrian crossing facilities are at present imperfect, pedestrian crossings are chaotic: pedestrians from opposite directions conflict and congest with each other, which severely reduces pedestrian traffic efficiency, obstructs vehicles and creates potential safety problems. To address these problems, a pedestrian crossing guidance system based on video identification was researched and designed. It uses a camera to monitor pedestrians in real time and counts the number of pedestrians through a video detection program; an array of pedestrian guidance lamps is installed at intervals along the crosswalk, and the array adjusts its color display according to the proportion of pedestrians on each side so that pedestrians from the two opposite directions proceed separately. Simulation analysis with a cellular automaton model shows that the system reduces pedestrian crossing conflict, shortens pedestrian crossing time and improves the safety of pedestrians crossing.
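The guidance rule described (lamp colors split according to the proportion of pedestrians waiting on each side) reduces to simple proportional allocation. A sketch with invented counts and lamp-array size:

```python
def split_lamp_array(n_lamps, count_left, count_right):
    """Allocate guidance lamps to each walking direction in proportion
    to the pedestrians waiting on each side (illustrative rule only)."""
    total = count_left + count_right
    if total == 0:
        left = n_lamps // 2          # idle default: split the array evenly
    else:
        left = round(n_lamps * count_left / total)
    return ["L"] * left + ["R"] * (n_lamps - left)

print(split_lamp_array(10, 15, 5))   # 3:1 ratio -> most lamps serve the left stream
```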

  19. Automatic textual annotation of video news based on semantic visual object extraction

    NASA Astrophysics Data System (ADS)

    Boujemaa, Nozha; Fleuret, Francois; Gouet, Valerie; Sahbi, Hichem

    2003-12-01

    In this paper, we present our work on the automatic generation of textual metadata based on visual content analysis of video news. We present two methods for semantic object detection and recognition from a cross-modal image-text thesaurus. These thesauri represent a supervised association between models and semantic labels. This paper is concerned with two semantic objects: faces and TV logos. In the first part, we present our work on efficient face detection and recognition with automatic name generation. This method also allows us to suggest textual annotation of shots through close-up estimation. In addition, we were interested in automatically detecting and recognizing the different TV logos present in incoming news from different TV channels. This work was done jointly with the French TV channel TF1 within the "MediaWorks" project, which consists of a hybrid text-image indexing and retrieval platform for video news.

  20. Realization and optimization of AES algorithm on the TMS320DM6446 based on DaVinci technology

    NASA Astrophysics Data System (ADS)

    Jia, Wen-bin; Xiao, Fu-hai

    2013-03-01

    The application of the AES algorithm in a digital cinema system protects video data from illegal theft and malicious tampering, solving its security problems. To meet the requirements of real-time, on-the-fly and transparent encryption of high-speed audio and video data streams in the information security field, and through an in-depth analysis of the AES algorithm's principles, this paper proposes specific methods of realizing the AES algorithm in a digital video system on the TMS320DM6446 hardware platform within the DaVinci software framework, together with optimization solutions. The test results show that digital movies encrypted with AES-128 cannot be played normally, which ensures the security of the digital movies. Comparing the performance of the AES-128 algorithm before and after optimization verifies the correctness and validity of the improved algorithm.

  1. Acute vestibular syndrome: clinical head impulse test versus video head impulse test.

    PubMed

    Celebisoy, Nese

    2018-03-05

    The HINTS battery, comprising the head impulse test (HIT), nystagmus and the test of skew, is the critical bedside examination for differentiating acute unilateral peripheral vestibulopathy from posterior circulation stroke (PCS) in acute vestibular syndrome (AVS). The horizontal HIT has been reported as the most sensitive component of the battery, whereas skew deviation is the most specific but insensitive sign for PCS. Video-oculography-based HIT (vHIT) may have additional power in making the differentiation. If vHIT is undertaken, both gain and gain asymmetry should be taken into account, as anterior inferior cerebellar artery (AICA) strokes are at risk of being misclassified on the basis of VOR gain alone. Further refinement of video technology, increased operator proficiency and incorporation of saccade analysis will increase the sensitivity of vHIT for PCS diagnosis. For the time being, clinical examination seems adequate for frontline diagnostic evaluation of AVS.
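
    The gain and asymmetry quantities mentioned above can be sketched under one common convention (gain as the eye-to-head velocity ratio and a Jongkees-style percentage asymmetry); the exact formulas and clinical cut-offs vary between devices and studies, so this is an illustrative assumption, not the abstract's definition:

```python
def vor_gain(eye_velocity, head_velocity):
    """VOR gain: ratio of eye velocity to head velocity during the impulse."""
    return eye_velocity / head_velocity

def gain_asymmetry(gain_left, gain_right):
    """Percentage asymmetry between leftward and rightward impulses."""
    return 100.0 * abs(gain_left - gain_right) / (gain_left + gain_right)
```

    A unilateral loss reducing one side's gain from 0.9 to 0.6, for instance, yields a 20% asymmetry even though both gains alone might look acceptable.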

  2. An Intervention Based on Video Feedback and Questioning to Improve Tactical Knowledge in Expert Female Volleyball Players.

    PubMed

    Moreno, M Perla; Moreno, Alberto; García-González, Luis; Ureña, Aurelio; Hernández, César; Del Villar, Fernando

    2016-06-01

    This study applied an intervention program, based on video feedback and questioning, to expert female volleyball players to improve their tactical knowledge. The sample consisted of eight female attackers (26 ± 2.6 years old) from the Spanish National Volleyball Team, who were divided into an experimental group (n = 4) and a control group (n = 4). The video feedback and questioning program applied in the study was developed over eight reflective sessions and consisted of three phases: viewing of the selected actions, self-analysis and reflection by the attacker, and joint player-coach analysis. The attackers were videotaped in an actual game and four clips (situations) of each of the attackers were chosen for each reflective session. Two of the clips showed a correct action by the attacker, and two showed an incorrect decision. Tactical knowledge was measured by problem representation with a verbal protocol. The members of the experimental group showed adaptations in long-term memory, significantly improving their tactical knowledge. With respect to conceptual content, there was an increase in the total number of conditions verbalized by the players; with respect to conceptual sophistication, there was an increase in the indication of appropriate conditions with two or more details; and finally, with respect to conceptual structure, there was an increase in the use of double or triple conceptual structures. The intervention program, based on video feedback and questioning, in addition to on-court training sessions of expert volleyball players, appears to improve the athletes' tactical knowledge. © The Author(s) 2016.

  3. Flight State Information Inference with Application to Helicopter Cockpit Video Data Analysis Using Data Mining Techniques

    NASA Astrophysics Data System (ADS)

    Shin, Sanghyun

    The National Transportation Safety Board (NTSB) has recently emphasized the importance of analyzing flight data as one of the most effective methods to improve the efficiency and safety of helicopter operations. By analyzing flight data with Flight Data Monitoring (FDM) programs, the safety and performance of helicopter operations can be evaluated and improved. In spite of the NTSB's effort, the safety of helicopter operations has not improved at the same rate as the safety of worldwide airlines, and the accident rate of helicopters continues to be much higher than that of fixed-wing aircraft. One of the main reasons is that the participation rates of the rotorcraft industry in FDM programs are low, due to the high cost of the Flight Data Recorder (FDR), the need for a special readout device to decode the FDR, fear of punitive action, etc. Since a video camera is easily installed, accessible, and inexpensively maintained, cockpit video data could complement an installed FDR or possibly take over its role when no FDR is present. Cockpit video data is composed of image and audio data: image data contains outside views through the cockpit windows and activities on the flight instrument panels, whereas audio data contains the sounds of alarms within the cockpit. The goal of this research is to develop, test, and demonstrate a cockpit video data analysis algorithm, based on data mining and signal processing techniques, that can help better understand situations in the cockpit and the state of a helicopter by efficiently and accurately inferring useful flight information from cockpit video data. Image processing algorithms based on data mining techniques are proposed to estimate a helicopter's attitude (bank and pitch angles), identify indicators on a flight instrument panel, and read the gauges and numbers on analogue gauge indicators and digital displays from cockpit image data.
In addition, an audio processing algorithm based on signal processing and abrupt change detection techniques is proposed to identify types of warning alarms and to detect the occurrence times of individual alarms from cockpit audio data. Those proposed algorithms are then successfully applied to simulated and real helicopter cockpit video data to demonstrate and validate their performance.
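
    The audio-side abrupt-change detection is not detailed in the abstract; a minimal, hypothetical stand-in (frame size and threshold ratio are assumptions) is to flag frames whose short-time energy jumps sharply relative to the previous frame:

```python
def detect_onsets(samples, frame=100, ratio=5.0):
    """Return indices of frames whose short-time energy exceeds `ratio`
    times the energy of the preceding frame (a crude onset detector)."""
    # short-time energy per non-overlapping frame
    energies = [sum(x * x for x in samples[i:i + frame]) / frame
                for i in range(0, len(samples) - frame + 1, frame)]
    # flag abrupt upward jumps; a real system would also classify the alarm type
    return [j for j in range(1, len(energies))
            if energies[j] > ratio * energies[j - 1] > 0]
```

    On a quiet recording that suddenly contains a loud tone, only the frame where the tone begins is flagged, since subsequent loud frames show no further jump.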

  4. Image sequence analysis workstation for multipoint motion analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-08-01

    This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing and display techniques. In addition to automating and increasing the throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for locating and tracking more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, and aircraft in flight. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie-loop playback, freeze-frame display, and digital image enhancement; 3) multiple leading-edge tracking, in addition to object centroids, at up to 60 fields per second from either live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.
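
    The centroid-tracking step can be illustrated with grey-level thresholding followed by a centroid computation; this is a simplified sketch, not the workstation's actual algorithm:

```python
def foreground(image, level):
    """Coordinates (x, y) of pixels at or above a grey-level threshold.
    `image` is a row-major list of rows of grey values."""
    return [(x, y) for y, row in enumerate(image)
                   for x, v in enumerate(row) if v >= level]

def centroid(pixels):
    """Centroid (x, y) of a set of foreground pixel coordinates."""
    n = len(pixels)
    return (sum(p[0] for p in pixels) / n, sum(p[1] for p in pixels) / n)
```

    Tracking then amounts to repeating this per frame and linking the resulting centroids over time.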

  5. Magnetic Braking: A Video Analysis

    NASA Astrophysics Data System (ADS)

    Molina-Bolívar, J. A.; Abella-Palacios, A. J.

    2012-10-01

    This paper presents a laboratory exercise that introduces students to the use of video analysis software through a demonstration of Lenz's law. Digital techniques have proved to be very useful for the understanding of physical concepts. In particular, the availability of affordable digital video offers students the opportunity to actively engage in kinematics in introductory-level physics.1,2 By using digital video's frame-advance features and "marking" the position of a moving object in each frame, students are able to determine the position of an object more precisely, at much smaller time increments than would be possible with common timing devices. Once the student collects data consisting of positions and times, these values may be manipulated to determine velocity and acceleration. There are a variety of commercial and free applications that can be used for video analysis. Because the relevant technology has become inexpensive, video analysis has become a prevalent tool in introductory physics courses.
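
    The mark-positions-then-differentiate workflow described above can be sketched with central differences, a standard numerical choice (the paper does not prescribe a particular formula):

```python
def central_differences(t, x):
    """Velocity estimates at the interior samples of a marked trajectory,
    using v_i = (x[i+1] - x[i-1]) / (t[i+1] - t[i-1])."""
    return [(x[i + 1] - x[i - 1]) / (t[i + 1] - t[i - 1])
            for i in range(1, len(x) - 1)]
```

    Applying the same function to the velocity estimates yields accelerations; for uniformly accelerated motion marked at, say, 30 frames per second, the central difference recovers the instantaneous velocity exactly, since the formula is exact for quadratic position curves.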

  6. Video Observations Encompassing the 2002 Leonid Storm: First Results and a Revised Photometric Procedure for Video Meteor Analysis

    NASA Technical Reports Server (NTRS)

    Cooke, William J.; Suggs, Robert; Swift, Wesley; Gural, Peter S.; Brown, Peter; Ellis, Jim (Technical Monitor)

    2002-01-01

    During the 2001 Leonid storm, Marshall Space Flight Center, with the cooperation of the University of Western Ontario and the United States Air Force, deployed 6 teams of observers equipped with intensified video systems to sites located in North America, the Pacific, and Mongolia. The campaign was extremely successful, with the entire period of enhanced Leonid activity (over 16 hours) captured on video tape in a consistent manner. We present the first results from the analysis of this unique, 2 terabyte data set and discuss the problems involved in reducing large amounts of video meteor data. In particular, the question of how to determine meteor masses through photometric analysis will be re-examined, and new techniques will be proposed that eliminate some of the deficiencies suffered by the techniques currently employed in video meteor analysis.

  7. Video Game Based Learning in English Grammar

    ERIC Educational Resources Information Center

    Singaravelu, G.

    2008-01-01

    The study examines the effectiveness of video-game-based learning in English grammar at standard VI. A video game package was prepared, consisting of self-learning activities presented in a playful manner that attracted the minds of the young learners. Chief objective: to find out the effectiveness of video-game-based learning in English grammar.…

  8. Health-risk correlates of video-game playing among adults.

    PubMed

    Weaver, James B; Mays, Darren; Sargent Weaver, Stephanie; Kannenberg, Wendi; Hopkins, Gary L; Eroğlu, Doğan; Bernhardt, Jay M

    2009-10-01

    Although considerable research suggests that health-risk factors vary as a function of video-game playing among young people, direct evidence of such linkages among adults is lacking. The goal of this study was to distinguish adult video-game players from nonplayers on the basis of personal and environmental factors. It was hypothesized that adults who play video games, compared to nonplayers, would evidence poorer perceptions of their health, greater reliance on Internet-facilitated social support, more extensive media use, and higher BMI. It was further hypothesized that different patterns of linkages between video-game playing and health-risk factors would emerge by gender. A cross-sectional, Internet-based survey was conducted in 2006 with a sample of adults from the Seattle-Tacoma area (n=562), examining health risks; media use behaviors and perceptions, including those related to video-game playing; and demographics. Statistical analyses conducted in 2008 to compare video-game players and nonplayers included bivariate descriptive statistics, stepwise discriminant analysis, and ANOVA. A total of 45.1% of respondents reported playing video games. Female video-game players reported greater depression (M=1.57) and poorer health status (M=3.90) than female nonplayers (depression, M=1.13; health status, M=3.57). Male video-game players reported higher BMI (M=5.31) and more Internet use time (M=2.55) than male nonplayers (BMI, M=5.19; Internet use, M=2.36). The only determinant common to female and male video-game players was greater reliance on the Internet for social support. A number of determinants distinguished video-game players from nonplayers, and these factors differed substantially between men and women. The data illustrate the need for further research among adults to clarify how to use digital opportunities more effectively to promote health and prevent disease.

  9. The impacts of observing flawed and flawless demonstrations on clinical skill learning.

    PubMed

    Domuracki, Kurt; Wong, Arthur; Olivieri, Lori; Grierson, Lawrence E M

    2015-02-01

    Clinical skills expertise can be advanced through accessible and cost-effective video-based observational practice activities. Previous findings suggest that observing performances of skills that include flaws can be beneficial to trainees. Observing the scope of variability within a skilled movement allows learners to develop strategies to manage the potential for, and consequences associated with, errors. This study tests this observational learning approach on the development of the skills of central line insertion (CLI). Medical trainees with no CLI experience (n = 39) were randomised to three observational practice groups: a group which viewed and assessed videos of an expert performing a CLI without any errors (F); a group which viewed and assessed videos that contained a mix of flawless and errorful performances (E); and a group which viewed the same videos as the E group but was also given information concerning the correctness of its assessments (FA). All participants interacted with their observational videos each day for 4 days. Following this period, participants returned to the laboratory and performed a simulation-based insertion, which was assessed using a standard checklist and a global rating scale for the skill. These ratings served as the dependent measures for analysis. The checklist analysis revealed no differences between observational learning groups (grand mean ± standard error: [20.3 ± 0.7]/25). However, the global rating analysis revealed a main effect of group (F(2,36) = 4.51, p = 0.018), reflecting better CLI performance in the FA group than in the F and E groups. Observational practice that includes errors improves the global performance aspects of clinical skill learning as long as learners are given confirmation that what they are observing is errorful. 
These findings provide a refined perspective on the optimal organisation of skill education programmes that combine physical and observational practice activities. © 2015 John Wiley & Sons Ltd.

  10. Using video-based observation research methods in primary care health encounters to evaluate complex interactions

    PubMed Central

    Asan, Onur; Montague, Enid

    2015-01-01

    Objective The purpose of this paper is to describe the use of video-based observation research methods in the primary care environment, highlight important methodological considerations, and provide practical guidance for primary care and human factors researchers conducting video studies to understand patient-clinician interaction in primary care settings. Methods We reviewed studies in the literature that used video methods in health care research, and we also drew on our own experience from the video studies we conducted in primary care settings. Results This paper highlights the benefits of video techniques such as multi-channel recording and video coding, and compares “unmanned” video recording with the traditional observation method in primary care research. We propose a list which can be followed step by step to conduct an effective video study in a primary care setting for a given problem. The paper also describes obstacles researchers should anticipate when using video recording methods in future studies. Conclusion With new technological improvements, video-based observation research is becoming a promising method in primary care and HFE research. Video recording has been under-utilized as a data collection tool because of confidentiality and privacy issues. However, it has many benefits compared with traditional observation, and recent studies using video recording methods have introduced new research areas and approaches. PMID:25479346

  11. Can "YouTube" help students in learning surface anatomy?

    PubMed

    Azer, Samy A

    2012-07-01

    In a problem-based learning curriculum, most medical students research the Internet for information on their "learning issues." Internet sites such as "YouTube" have become a useful resource for information. This study aimed at assessing YouTube videos covering surface anatomy. A search of YouTube was conducted from November 8 to 30, 2010 using the search terms "surface anatomy," "anatomy body painting," "living anatomy," "bone landmarks," and "dermatomes" for surface anatomy-related videos. Only relevant video clips in the English language were identified and their URLs recorded. For each video the following information was collected: title, authors, duration, number of viewers, posted comments, and total number of days on YouTube. The data were statistically analyzed and videos were grouped into educationally useful and non-useful videos on the basis of major and minor criteria covering technical, content, authority, and pedagogy parameters. A total of 235 YouTube videos were screened and 57 were found to have information relevant to surface anatomy. Analysis revealed that 15 (27%) of the videos provided useful information on surface anatomy. These videos scored 14.0 ± 0.7 (mean ± SD) and mainly covered surface anatomy of the shoulder, knee, muscles of the back, leg, and ankle, carotid artery, dermatomes, and anatomical positions. The other 42 (73%) videos were not educationally useful, scoring 7.4 ± 1.8 (mean ± SD). The videos drew 1,058,634 viewers in total, of whom 497,925 (47%) viewed the useful videos. The total viewership per day was 750 for useful videos and 652 for non-useful videos. No video clips covering surface anatomy of the head and neck, blood vessels and nerves of the upper and lower limbs, or chest and abdominal organs/structures were found. Currently, YouTube is an inadequate source of information for learning surface anatomy. 
More work is needed from medical schools and educators to add useful videos on YouTube covering this area.

  12. Evaluation of educational content of YouTube videos relating to neurogenic bladder and intermittent catheterization

    PubMed Central

    Ho, Matthew; Stothers, Lynn; Lazare, Darren; Tsang, Brian; Macnab, Andrew

    2015-01-01

    Introduction: Many patients conduct internet searches to manage their own health problems, to decide if they need professional help, and to corroborate information given in a clinical encounter. Good information can improve patients’ understanding of their condition and their self-efficacy. Patients with spinal cord injury (SCI) featuring neurogenic bladder (NB) require knowledge and skills related to their condition and need for intermittent catheterization (IC). Methods: Information quality was evaluated in videos accessed via YouTube relating to NB and IC using search terms “neurogenic bladder intermittent catheter” and “spinal cord injury intermittent catheter.” Video content was independently rated by 3 investigators using criteria based on European Urological Association (EAU) guidelines and established clinical practice. Results: In total, 71 videos met the inclusion criteria. Of these, 12 (17%) addressed IC and 50 (70%) contained information on NB. The remaining videos met inclusion criteria, but did not contain information relevant to either IC or NB. Analysis indicated poor overall quality of information, with some videos with information contradictory to EAU guidelines for IC. High-quality videos were randomly distributed by YouTube. IC videos featuring a healthcare narrator scored significantly higher than patient-narrated videos, but not higher than videos with a merchant narrator. About half of the videos contained commercial content. Conclusions: Some good-quality educational videos about NB and IC are available on YouTube, but most are poor. The videos deemed good quality were not prominently ranked by the YouTube search algorithm, consequently user access is less likely. Study limitations include the limit of 50 videos per category and the use of a de novo rating tool. Information quality in videos with healthcare narrators was not higher than in those featuring merchant narrators. 
Better material is required to improve patients’ understanding of their condition. PMID:26644803

  13. A meta-analysis of active video games on health outcomes among children and adolescents.

    PubMed

    Gao, Z; Chen, S; Pasco, D; Pope, Z

    2015-09-01

    This meta-analysis synthesizes the current literature concerning the effects of active video games (AVGs) on children's and adolescents' health-related outcomes. A total of 512 published studies on AVGs were located, and 35 articles were included based on the following criteria: (i) data-based research articles published in English between 1985 and 2015; (ii) studied some type of AVG and related outcomes among children/adolescents and (iii) had at least one comparison within each study. Data were extracted to conduct comparisons for outcome measures in three separate categories: AVGs and sedentary behaviours, AVGs and laboratory-based exercise, and AVGs and field-based physical activity. The effect size for each entry was calculated with the Comprehensive Meta-Analysis software in 2015. The mean effect size (Hedges' g) and standard deviation were calculated for each comparison. Compared with sedentary behaviours, AVGs had a large effect on health outcomes. The effect sizes for physiological outcomes were marginal when comparing AVGs with laboratory-based exercise. The comparison between AVGs and field-based physical activity had null to moderate effect sizes. AVGs could yield health benefits for children/adolescents equivalent to those of laboratory-based exercise or field-based physical activity. Therefore, AVGs can be a good alternative to sedentary behaviour and a good addition to traditional physical activity and sports for children/adolescents. © 2015 World Obesity.
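
    The Hedges' g effect size named above has a standard form: Cohen's d computed with a pooled standard deviation, multiplied by the small-sample correction J = 1 - 3/(4·df - 1). A sketch of that calculation:

```python
from math import sqrt

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Hedges' g for two groups with means m, standard deviations s, sizes n."""
    df = n1 + n2 - 2
    pooled_sd = sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / df)
    d = (m1 - m2) / pooled_sd        # Cohen's d
    return d * (1 - 3 / (4 * df - 1))  # small-sample correction J
```

    The correction shrinks d slightly; for two groups of 21 it multiplies d by about 0.981, and the factor approaches 1 as the samples grow.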

  14. Investigating Students' Use and Adoption of "With-Video Assignments": Lessons Learnt for Video-Based Open Educational Resources

    ERIC Educational Resources Information Center

    Pappas, Ilias O.; Giannakos, Michail N.; Mikalef, Patrick

    2017-01-01

    The use of video-based open educational resources is widespread, and includes multiple approaches to implementation. In this paper, the term "with-video assignments" is introduced to portray video learning resources enhanced with assignments. The goal of this study is to examine the factors that influence students' intention to adopt…

  15. An ASIC-chip for stereoscopic depth analysis in video-real-time based on visual cortical cell behavior.

    PubMed

    Wörgötter, F

    1999-10-01

    In a stereoscopic system both eyes or cameras have a slightly different view. As a consequence, small variations between the projected images exist ("disparities") which are spatially evaluated in order to retrieve depth information. We show that two related algorithmic versions can be designed which recover disparity. Both approaches are based on the comparison of filter outputs from filtering the left and the right image. The difference of the phase components between left and right filter responses encodes the disparity. One approach uses regular Gabor filters and computes the spatial phase differences in a conventional way, as described already in 1988 by Sanger. Novel to this approach, however, is that we formulate it in a way which is fully compatible with neural operations in the visual cortex. The second approach uses the apparently paradoxical similarity between the analysis of visual disparities and the determination of the azimuth of a sound source. Animals determine the direction of a sound from the temporal delay between the left and right ear signals. Similarly, in our second approach we transpose the spatially defined problem of disparity analysis into the temporal domain and utilize two resonators implemented in the form of causal (electronic) filters to determine the disparity as local temporal phase differences between the left and right filter responses. This approach permits video-real-time analysis of stereo image sequences (see movies at http://www.neurop.ruhr-uni-bochum.de/Real-Time-Stereo), and an FPGA-based PC-board has been developed which performs stereo analysis at full PAL resolution in video real-time. An ASIC chip will be available in March 2000.
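
    The Sanger-style phase-difference scheme can be sketched in one dimension (a simplified illustration, not the paper's cortical model or its temporal electronic-filter variant): the phase of a complex Gabor response is measured at the same location in the left and right signals, and the wrapped phase difference divided by the filter's center frequency gives the local disparity.

```python
import cmath
import math

def gabor_phase(signal, center, freq, sigma=4.0):
    """Phase of a complex Gabor filter response at position `center`."""
    resp = 0j
    for i, v in enumerate(signal):
        window = math.exp(-((i - center) ** 2) / (2 * sigma ** 2))
        resp += v * window * cmath.exp(-1j * freq * (i - center))
    return cmath.phase(resp)

def disparity(left, right, center, freq):
    """Disparity estimate from the left/right phase difference."""
    dphi = gabor_phase(left, center, freq) - gabor_phase(right, center, freq)
    dphi = math.atan2(math.sin(dphi), math.cos(dphi))  # wrap to (-pi, pi]
    return dphi / freq
```

    The estimate is only valid while the true shift keeps the phase difference inside one period of the filter, which is why such schemes are typically run over several filter scales.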

  16. Use of video-based education and tele-health home monitoring after liver transplantation: Results of a novel pilot study.

    PubMed

    Ertel, Audrey E; Kaiser, Tiffany E; Abbott, Daniel E; Shah, Shimul A

    2016-10-01

    In this observational study, we analyzed the feasibility and early results of a perioperative, video-based educational program and tele-health home monitoring model on postoperative care management and readmissions for patients undergoing liver transplantation. Twenty consecutive liver transplantation recipients were provided with tele-health home monitoring and an educational video program during the perioperative period. Vital statistics were tracked and monitored daily with emphasis placed on readings outside of the normal range (threshold violations). Additionally, responses to effectiveness questionnaires were collected retrospectively for analysis. In the study, 19 of the 20 patients responded to the effectiveness questionnaire, with 95% reporting having watched all 10 videos, 68% watching some more than once, and 100% finding them effective in improving their preparedness for understanding their postoperative care. Among these 20 patients, there was an observed 19% threshold violation rate for systolic blood pressure, 6% threshold violation rate for mean blood glucose concentrations, and 8% threshold violation rate for mean weights. This subset of patients had a 90-day readmission rate of 30%. This observational study demonstrates that tele-health home monitoring and video-based educational programs are feasible in liver transplantation recipients and seem to be effective in enhancing the monitoring of vital statistics postoperatively. These data suggest that smart technology is effective in creating a greater awareness and understanding of how to manage postoperative care after liver transplantation. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Multimodal Speaker Diarization.

    PubMed

    Noulas, A; Englebienne, G; Krose, B J A

    2012-01-01

    We present a novel probabilistic framework that fuses information coming from the audio and video modality to perform speaker diarization. The proposed framework is a Dynamic Bayesian Network (DBN) that is an extension of a factorial Hidden Markov Model (fHMM) and models the people appearing in an audiovisual recording as multimodal entities that generate observations in the audio stream, the video stream, and the joint audiovisual space. The framework is very robust to different contexts, makes no assumptions about the location of the recording equipment, and does not require labeled training data as it acquires the model parameters using the Expectation Maximization (EM) algorithm. We apply the proposed model to two meeting videos and a news broadcast video, all of which come from publicly available data sets. The results acquired in speaker diarization are in favor of the proposed multimodal framework, which outperforms the single modality analysis results and improves over the state-of-the-art audio-based speaker diarization.

  18. Identification and analysis of unsatisfactory psychosocial work situations: a participatory approach employing video-computer interaction.

    PubMed

    Hanse, J J; Forsman, M

    2001-02-01

    A method for psychosocial evaluation of potentially stressful or unsatisfactory situations in manual work was developed. It focuses on subjective responses regarding specific situations and is based on interactive worker assessment when viewing video recordings of oneself. The worker is first video-recorded during work. The video is then displayed on the computer terminal, and the filmed worker clicks on virtual controls on the screen whenever an unsatisfactory psychosocial situation appears; a window of questions regarding psychological demands, mental strain and job control is then opened. A library with pictorial information and comments on the selected situations is formed in the computer. The evaluation system, called PSIDAR, was applied in two case studies, one of manual materials handling in an automotive workshop and one of a group of workers producing and testing instrument panels. The findings indicate that PSIDAR can provide data that are useful in a participatory ergonomic process of change.

  19. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Agueh, Max; Diouris, Jean-François; Diop, Magaye; Devaux, François-Olivier; De Vleeschouwer, Christophe; Macq, Benoit

    2008-12-01

    Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANETs. The proposed packet-based scheme has low complexity and is compliant with JPWL, Part 11 of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.
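
    The rate-allocation details are not given in the abstract; a generic sketch of the underlying idea is to choose the smallest erasure-code redundancy that meets a target decode probability for an estimated packet-loss rate. The helper names and the i.i.d. loss assumption are ours for illustration, whereas the paper allocates rates from real MANET traces:

```python
from math import comb

def decode_prob(n, k, p):
    """P(decode) for an (n, k) erasure code under i.i.d. packet loss p:
    decoding succeeds iff at most n - k of the n packets are lost."""
    return sum(comb(n, m) * p ** m * (1 - p) ** (n - m)
               for m in range(n - k + 1))

def min_redundancy(k, p, target=0.99, max_extra=64):
    """Smallest number of repair packets r such that a (k + r, k) code
    reaches the target decode probability, or None if unreachable."""
    for r in range(max_extra + 1):
        if decode_prob(k + r, k, p) >= target:
            return r
    return None
```

    Under this model, protecting 10 source packets against 10% loss at a 99% decode target requires 4 repair packets.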

  20. Impact of different cloud deployments on real-time video applications for mobile video cloud users

    NASA Astrophysics Data System (ADS)

    Khan, Kashif A.; Wang, Qi; Luo, Chunbo; Wang, Xinheng; Grecos, Christos

    2015-02-01

    The latest trend of accessing mobile cloud services through wireless network connectivity has grown globally among both entrepreneurs and home end users. Although existing public cloud service vendors such as Google and Microsoft Azure provide on-demand cloud services at affordable cost for mobile users, a number of challenges remain in achieving high-quality mobile cloud-based video applications, especially due to the bandwidth-constrained and error-prone mobile network connectivity, which is the communication bottleneck for end-to-end video delivery. In addition, existing accessible cloud networking architectures differ in terms of their implementation, services, resources, storage, pricing, support and so on, and these differences have varied impacts on the performance of cloud-based real-time video applications. Nevertheless, these challenges and impacts have not been thoroughly investigated in the literature. In our previous work, we implemented a mobile cloud network model that integrates localized and decentralized cloudlets (mini-clouds) and wireless mesh networks. In this paper, we deploy a real-time framework consisting of various existing Internet cloud networking architectures (Google Cloud, Microsoft Azure and Eucalyptus Cloud) and a cloudlet based on Ubuntu Enterprise Cloud over wireless mesh networking technology for mobile cloud end users. Accessing real-time video streaming over HTTP/HTTPS is gaining popularity among both the research and industrial communities, as it leverages existing web services and the HTTP infrastructure in the Internet. To study performance under deployments using different public and private cloud service providers, we employ real-time video streaming over the HTTP/HTTPS standard, and conduct an experimental evaluation and in-depth comparative analysis of the impact of different deployments on the quality of service for mobile video cloud users. 
    Empirical results are presented and discussed to quantify and explain the different impacts resulting from various cloud deployments, video applications, wireless/mobile network settings, and user mobility. Additionally, this paper analyses the advantages, disadvantages, limitations and optimization techniques of various cloud networking deployments, in particular the cloudlet approach compared with the Internet cloud approach, and highlights recommendations for optimized deployments. Finally, federated clouds and inter-cloud collaboration challenges and opportunities are discussed in the context of supporting real-time video applications for mobile users.

  1. Real-time moment-to-moment emotional responses to narrative and informational breast cancer videos in African American women

    PubMed Central

    Bollinger, Sarah; Kreuter, Matthew W.

    2012-01-01

    In a randomized experiment using moment-to-moment audience analysis methods, we compared women’s emotional responses with a narrative versus informational breast cancer video. Both videos communicated three key messages about breast cancer: (i) understand your breast cancer risk, (ii) talk openly about breast cancer and (iii) get regular mammograms. A community-based convenience sample of African American women (n = 59) used a hand-held audience response device to report the intensity of their emotional reaction while watching one of the two videos. Strong emotions were more likely to correspond to contextual information about characters in the video and less likely to correspond to health content among women who watched the narrative video compared with those who watched the informational video (P < 0.05). Women who watched the narrative video were more likely to report feeling attentive (41 versus 28%, respectively), inspired (54 versus 34%) and proud (30 versus 18%) and less likely to feel upset (8 versus 16%) (all P < 0.05). Women in the narrative group were more likely to mention women’s personal stories than health information in open-ended recall questions, but this did not detract from obtaining health information. Findings suggest that stories can be used to communicate health information without distracting from core health content. PMID:22498923
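    The moment-to-moment method above pairs each sampled dial rating with the video content on screen at that moment. A minimal sketch of that aggregation step, with invented data, codes, and function names (this is not the study's analysis code):

```python
# Hypothetical illustration of moment-to-moment audience response analysis:
# per-second dial ratings are paired with a content code for that second,
# and mean emotional intensity is compared across content types.

def mean_intensity_by_code(ratings, codes):
    """Average dial rating for each content code (e.g. story vs. health)."""
    totals, counts = {}, {}
    for r, c in zip(ratings, codes):
        totals[c] = totals.get(c, 0) + r
        counts[c] = counts.get(c, 0) + 1
    return {c: totals[c] / counts[c] for c in totals}

# Six seconds of hypothetical dial data on a 1-7 intensity scale
ratings = [3, 5, 4, 2, 5, 1]
codes = ["story", "story", "health", "health", "story", "health"]
print(mean_intensity_by_code(ratings, codes))
```

    Comparing the per-category means across the two video conditions is then a standard between-groups test, as in the study's P < 0.05 comparisons.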

  2. A video wireless capsule endoscopy system powered wirelessly: design, analysis and experiment

    NASA Astrophysics Data System (ADS)

    Pan, Guobing; Xin, Wenhui; Yan, Guozheng; Chen, Jiaoliao

    2011-06-01

    Wireless capsule endoscopy (WCE), as a relatively new technology, has brought about a revolution in the diagnosis of gastrointestinal (GI) tract diseases. However, existing WCE systems are not widely applied in clinical practice because of their low frame rate and low image resolution. A video WCE system based on a wireless power supply is developed in this paper. This WCE system consists of a video capsule endoscope (CE), a wireless power transmission device, a receiving box and an image processing station. Powered wirelessly, the video CE can image the GI tract and transmit the images wirelessly at a frame rate of 30 frames per second (f/s). A mathematical prototype was built to analyze the power transmission system, and experiments were performed to test its energy transfer capability. The results showed that the wireless electric power supply system was able to transfer more than 136 mW of power, sufficient for the operation of a video CE. In in vitro experiments, the video CE produced clear images of the small intestine of a pig at a resolution of 320 × 240, and transmitted NTSC-format video outside the body. The wireless power supply makes a video WCE system with a high frame rate and high resolution feasible, and provides a novel solution for clinical diagnosis of the GI tract.

  3. Expert-novice differences in brain function of field hockey players.

    PubMed

    Wimshurst, Z L; Sowden, P T; Wright, M

    2016-02-19

    The aims of this study were to use functional magnetic resonance imaging to examine the neural bases for perceptual-cognitive superiority in a hockey anticipation task. Thirty participants (15 hockey players, 15 non-hockey players) lay in an MRI scanner while performing a video-based task in which they predicted the direction of an oncoming shot in either a hockey or a badminton scenario. Video clips were temporally occluded either 160 ms before the shot was made or 60 ms after the ball/shuttle left the stick/racquet. Behavioral data showed a significant hockey expertise × video-type interaction in which hockey experts were superior to novices with hockey clips but there were no significant differences with badminton clips. The imaging data, on the other hand, showed a significant main effect of hockey expertise and of video type (hockey vs. badminton), but the expertise × video-type interaction did not survive either a whole-brain or a small-volume correction for multiple comparisons. Further analysis of the expertise main effect revealed that when watching hockey clips, experts showed greater activation than novices in the rostral inferior parietal lobule, which has been associated with an action observation network, and greater activation in Brodmann areas 17 and 18 and the middle frontal gyrus when watching badminton videos. The results provide partial support for both domain-specific and domain-general expertise effects in an action anticipation task. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. TRECVID: the utility of a content-based video retrieval evaluation

    NASA Astrophysics Data System (ADS)

    Hauptmann, Alexander G.

    2006-01-01

    TRECVID, an annual retrieval evaluation benchmark organized by NIST, encourages research in information retrieval from digital video. TRECVID benchmarking covers both interactive and manual searching by end users, as well as the benchmarking of some supporting technologies, including shot boundary detection, extraction of semantic features, and the automatic segmentation of TV news broadcasts. Evaluations done in the context of the TRECVID benchmarks show that, generally, speech transcripts and annotations provide the single most important clue for successful retrieval. However, automatically finding the individual images is still a tremendous and unsolved challenge. The evaluations repeatedly found that none of the multimedia analysis and retrieval techniques provide a significant benefit over retrieval using only textual information such as automatic speech recognition transcripts or closed captions. In interactive systems, we do find significant differences among the top systems, indicating that interfaces can make a huge difference for effective video/image search. For interactive tasks, efficient interfaces require few key clicks but display large numbers of images for visual inspection by the user. Text search generally finds the right context region in the video, but selecting specific relevant images requires good interfaces for easily browsing the storyboard pictures. In general, TRECVID has motivated the video retrieval community to be honest about what we don't know how to do well (sometimes through painful failures), and has focused the community on the actual task of video retrieval, as opposed to flashy demos based on technological capabilities.

  5. A simple video-based timing system for on-ice team testing in ice hockey: a technical report.

    PubMed

    Larson, David P; Noonan, Benjamin C

    2014-09-01

    The purpose of this study was to describe and evaluate a newly developed on-ice timing system for team evaluation in the sport of ice hockey. We hypothesized that this new, simple, inexpensive, timing system would prove to be highly accurate and reliable. Six adult subjects (age 30.4 ± 6.2 years) performed on ice tests of acceleration and conditioning. The performance times of the subjects were recorded using a handheld stopwatch, photocell, and high-speed (240 frames per second) video. These results were then compared to allow for accuracy calculations of the stopwatch and video as compared with filtered photocell timing that was used as the "gold standard." Accuracy was evaluated using maximal differences, typical error/coefficient of variation (CV), and intraclass correlation coefficients (ICCs) between the timing methods. The reliability of the video method was evaluated using the same variables in a test-retest analysis both within and between evaluators. The video timing method proved to be both highly accurate (ICC: 0.96-0.99 and CV: 0.1-0.6% as compared with the photocell method) and reliable (ICC and CV within and between evaluators: 0.99 and 0.08%, respectively). This video-based timing method provides a very rapid means of collecting a high volume of very accurate and reliable on-ice measures of skating speed and conditioning, and can easily be adapted to other testing surfaces and parameters.
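    The core of a video timing system like the one described is simple: split times come from frame counts at a known frame rate, and agreement with the photocell reference is summarized as a typical error expressed as a CV. A sketch under assumed conventions (one common definition of typical error; all names and data hypothetical, not the authors' code):

```python
# Illustrative sketch: derive split times from frame indices of 240 f/s video,
# then summarize agreement between two timing methods as a typical-error CV.
import math

FPS = 240  # high-speed camera frame rate used in the study

def frames_to_seconds(start_frame, end_frame, fps=FPS):
    """Elapsed time between two frame indices."""
    return (end_frame - start_frame) / fps

def typical_error_cv(times_a, times_b):
    """Typical error (sd of pairwise differences / sqrt(2)), expressed as a
    percentage of the grand mean of both methods (one common definition)."""
    diffs = [a - b for a, b in zip(times_a, times_b)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    grand_mean = (sum(times_a) + sum(times_b)) / (2 * n)
    return 100 * (sd / math.sqrt(2)) / grand_mean

# Hypothetical sprint: skater crosses start at frame 0, finish at frame 1052
print(frames_to_seconds(0, 1052))
```

    At 240 f/s each frame is about 4 ms, which is why sub-0.1% CVs against the photocell are plausible.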

  6. Developing Cognitive Task Analysis-based Educational Videos for Basic Surgical Skills in Plastic Surgery.

    PubMed

    Yeung, Celine; McMillan, Catherine; Saun, Tomas J; Sun, Kimberly; D'hondt, Veerle; von Schroeder, Herbert P; Martou, Glykeria; Lee, Matthew; Liao, Elizabeth; Binhammer, Paul

    To describe the development of cognitive task analysis (CTA)-based multimedia educational videos for surgical trainees in plastic surgery. A needs assessment survey was used to identify 5 plastic surgery skills on which to focus the educational videos. Three plastic surgeons were video-recorded performing each skill while describing the procedure, and were interviewed with probing questions. Three medical student reviewers coded transcripts, categorized each step as an "action," "decision," or "assessment," and created a cognitive demands table (CDT) for each skill. The CDTs were combined into 1 table that was reviewed by the surgeons performing each skill to ensure accuracy. The final CDTs were compared against each surgeon's original transcripts. The total number of steps identified, the percentage of steps shared, and the average percentage of steps omitted were calculated. Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada, an urban tertiary care teaching center. Canadian junior plastic surgery residents (n = 78) were sent a needs assessment survey. Four plastic surgeons and 1 orthopedic surgeon performed the skills. Twenty-eight residents responded to the survey (36%). Subcuticular suturing, horizontal and vertical mattress suturing, hand splinting, digital nerve block, and excisional biopsy were most frequently ranked by residents (>80%) as skills that students should be able to perform before entering residency. The number of steps identified through CTA ranged from 12 to 29. The percentage of steps shared by all 3 surgeons for each skill ranged from 30% to 48%, while the average percentage of steps omitted by each surgeon ranged from 27% to 40%. Instructional videos for basic surgical skills may be generated using CTA to help experts provide comprehensive descriptions of a procedure.
A CTA-based educational tool may give trainees access to a broader, objective body of knowledge, allowing them to learn decision-making processes before entering the operating room. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  7. Study protocol for a framework analysis using video review to identify latent safety threats: trauma resuscitation using in situ simulation team training (TRUST).

    PubMed

    Fan, Mark; Petrosoniak, Andrew; Pinkney, Sonia; Hicks, Christopher; White, Kari; Almeida, Ana Paula Siquiera Silva; Campbell, Douglas; McGowan, Melissa; Gray, Alice; Trbovich, Patricia

    2016-11-07

    Errors in trauma resuscitation are common and have been attributed to breakdowns in the coordination of system elements (eg, tools/technology, physical environment and layout, individual skills/knowledge, team interaction). These breakdowns are triggered by unique circumstances and may go unrecognised by trauma team members or hospital administrators; they can be described as latent safety threats (LSTs). Retrospective approaches to identifying LSTs (ie, after they occur) are likely to be incomplete and prone to bias. To date, prospective studies have not used video review as the primary mechanism to identify any and all LSTs in trauma resuscitation. A series of 12 unannounced in situ simulations (ISS) will be conducted to prospectively identify LSTs at a level 1 Canadian trauma centre (over 800 dedicated trauma team activations annually). 4 scenarios have already been designed as part of this protocol based on 5 recurring themes found in the hospital's mortality and morbidity process. The actual trauma team will be activated to participate in the study. Each simulation will be audio/video recorded from 4 different camera angles and transcribed to conduct a framework analysis. Video reviewers will code the videos deductively based on a priori themes of LSTs identified from the literature, and/or inductively based on the events occurring in the simulation. LSTs will be prioritised to target interventions in future work. Institutional research ethics approval has been acquired (SMH REB #15-046). Results will be published in peer-reviewed journals and presented at relevant conferences. Findings will also be presented to key institutional stakeholders to inform mitigation strategies for improved patient safety. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  8. Race and Emotion in Computer-Based HIV Prevention Videos for Emergency Department Patients

    ERIC Educational Resources Information Center

    Aronson, Ian David; Bania, Theodore C.

    2011-01-01

    Computer-based video provides a valuable tool for HIV prevention in hospital emergency departments. However, the type of video content and protocol that will be most effective remain underexplored and the subject of debate. This study employs a new and highly replicable methodology that enables comparisons of multiple video segments, each based on…

  9. Meaningful Learning from Practice: Web-Based Video in Professional Preparation Programmes in University

    ERIC Educational Resources Information Center

    Admiraal, Wilfried

    2014-01-01

    Web-based video is one of the technologies which can support meaningful learning from practice--in addition to practical benefits such as accessibility of practices, flexibility in updating information, and incorporating video into multimedia resources. A multiple case study was set up on the use of a web-based video learning environment in two…

  10. Teaching with Web-Based Videos: Helping Students Grasp the Science in Popular Online Resources

    ERIC Educational Resources Information Center

    Pace, Barbara G.; Jones, Linda Cronin

    2009-01-01

    Today, the use of web-based videos in science classrooms is becoming more and more commonplace. However, these videos are often fast-paced and information rich--science concepts can be fragmented and embedded within larger cultural issues. This article addresses the cognitive difficulties posed by many web-based science videos. Drawing on concepts…

  11. Adventure Racing and Organizational Behavior: Using Eco Challenge Video Clips to Stimulate Learning

    ERIC Educational Resources Information Center

    Kenworthy-U'Ren, Amy; Erickson, Anthony

    2009-01-01

    In this article, the Eco Challenge race video is presented as a teaching tool for facilitating theory-based discussion and application in organizational behavior (OB) courses. Before discussing the intricacies of the video series itself, the authors present a pedagogically based rationale for using reality TV-based video segments in a classroom…

  12. Video diaries on social media: Creating online communities for geoscience research and education

    NASA Astrophysics Data System (ADS)

    Tong, V.

    2013-12-01

    Making video clips is an engaging way to learn and teach geoscience. As smartphones become increasingly common, it is relatively straightforward for students to produce 'video diaries' by recording their research and learning experience over the course of a science module. Instead of keeping the video diaries for themselves, students may use social media such as Facebook for sharing their experience and thoughts. There are some potential benefits to linking video diaries and social media in pedagogical contexts. For example, online comments on video clips offer useful feedback and learning materials to the students. Students also have the opportunity to engage in geoscience outreach by producing authentic scientific content at the same time. A video diary project was conducted to test the pedagogical potential of using video diaries on social media in the context of geoscience outreach, undergraduate research and teaching. This project formed part of a problem-based learning module in field geophysics at an archaeological site in the UK. The project involved i) the students posting video clips about their research and problem-based learning in the field on a daily basis; and ii) the lecturer building an online outreach community with partner institutions. In this contribution, I will discuss the implementation of the project and critically evaluate the pedagogical potential of video diaries on social media. My discussion will focus on the following: 1) Effectiveness of video diaries on social media; 2) Student-centered approaches to producing geoscience video diaries as part of research and problem-based learning; 3) Learning, teaching and assessment based on video clips and related commentaries posted on Facebook; and 4) Challenges in creating and promoting online communities for geoscience outreach through the use of video diaries. I will compare the outcomes from this study with those from other pedagogical projects with video clips on geoscience, and evaluate the concept of 'networked public engagement' based on online video diaries.

  13. Digital video clips for improved pedagogy and illustration of scientific research — with illustrative video clips on atomic spectrometry

    NASA Astrophysics Data System (ADS)

    Michel, Robert G.; Cavallari, Jennifer M.; Znamenskaia, Elena; Yang, Karl X.; Sun, Tao; Bent, Gary

    1999-12-01

    This article is an electronic publication in Spectrochimica Acta Electronica (SAE), a section of Spectrochimica Acta Part B (SAB). The hardcopy text is accompanied by an electronic archive, stored on the CD-ROM accompanying this issue. The archive contains video clips. The main article discusses the scientific aspects of the subject and explains the purpose of the video files. Short, 15-30 s, digital video clips are easily controllable at the computer keyboard, which gives a speaker the ability to show fine details through the use of slow motion. Also, they are easily accessed from the computer hard drive for rapid extemporaneous presentation. In addition, they are easily transferred to the Internet for dissemination. From a pedagogical point of view, the act of making a video clip by a student allows for development of powers of observation, while the availability of the technology to make digital video clips gives a teacher the flexibility to demonstrate scientific concepts that would otherwise have to be done as 'live' demonstrations, with all the likely attendant misadventures. Our experience with digital video clips has been through their use in computer-based presentations by undergraduate and graduate students in analytical chemistry classes, and by high school and middle school teachers and their students in a variety of science and non-science classes. In physics teaching laboratories, we have used the hardware to capture digital video clips of dynamic processes, such as projectiles and pendulums, for later mathematical analysis.

  14. Real-time synchronization of kinematic and video data for the comprehensive assessment of surgical skills.

    PubMed

    Dosis, Aristotelis; Bello, Fernando; Moorthy, Krishna; Munz, Yaron; Gillies, Duncan; Darzi, Ara

    2004-01-01

    Surgical dexterity in operating theatres has traditionally been assessed subjectively. Electromagnetic (EM) motion tracking systems such as the Imperial College Surgical Assessment Device (ICSAD) have been shown to produce valid and accurate objective measures of surgical skill. To allow for video integration we have modified the data acquisition and built it within the ROVIMAS analysis software. We then used ActiveX 9.0 DirectShow video capturing and the system clock as a time stamp for the synchronized concurrent acquisition of kinematic data and video frames. Interactive video/motion data browsing was implemented to allow the user to concentrate on frames exhibiting certain kinematic properties that could result in operative errors. We exploited video-data synchronization to calculate the camera visual hull by identifying all 3D vertices using the ICSAD electromagnetic sensors. We also concentrated on high velocity peaks as a means of identifying potential erroneous movements to be confirmed by studying the corresponding video frames. The outcome of the study clearly shows that the kinematic data are precisely synchronized with the video frames and that the velocity peaks correspond to large and sudden excursions of the instrument tip. We validated the camera visual hull by both video and geometrical kinematic analysis and we observed that graphs containing fewer sudden velocity peaks are less likely to have erroneous movements. This work presented further developments to the well-established ICSAD dexterity analysis system. Synchronized real-time motion and video acquisition provides a comprehensive assessment solution by combining quantitative motion analysis tools and qualitative targeted video scoring.
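    The velocity-peak idea above reduces to a small pipeline: estimate instrument-tip speed from timestamped tracker samples, flag moments where it exceeds a threshold, and map those timestamps to the synchronized video frames. A hypothetical sketch of that pipeline (not the ROVIMAS implementation; names, thresholds, and the frame rate are invented):

```python
# Hypothetical sketch: flag high-velocity peaks in 3D kinematic data and map
# their timestamps to video frames recorded against the same clock.

def speeds(samples):
    """samples: list of (t, x, y, z); returns (t, speed) via finite differences."""
    out = []
    for (t0, x0, y0, z0), (t1, x1, y1, z1) in zip(samples, samples[1:]):
        dt = t1 - t0
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
        out.append((t1, dist / dt))
    return out

def peak_times(samples, threshold):
    """Timestamps where speed exceeds the threshold: candidate frames to review."""
    return [t for t, v in speeds(samples) if v > threshold]

def frame_index(t, fps=25):
    """Map a kinematic timestamp to the nearest video frame (shared clock)."""
    return round(t * fps)

# Four tracker samples (seconds, millimetres); one sudden excursion
track = [(0.0, 0, 0, 0), (0.04, 1, 0, 0), (0.08, 10, 0, 0), (0.12, 11, 0, 0)]
print([frame_index(t) for t in peak_times(track, 100)])
```

    The reviewer can then jump straight to the flagged frames instead of scrubbing the whole recording, which is the workflow the abstract describes.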

  15. Pre-trained D-CNN models for detecting complex events in unconstrained videos

    NASA Astrophysics Data System (ADS)

    Robinson, Joseph P.; Fu, Yun

    2016-05-01

    Rapid event detection faces an emergent need to process large video collections; whether for surveillance videos or unconstrained web videos, the ability to automatically recognize high-level, complex events is a challenging task. Motivated by pre-existing methods being complex, computationally demanding, and often non-replicable, we designed a simple system that is quick, effective and carries minimal overhead in terms of memory and storage. Our system is clearly described, modular in nature, replicable on any desktop, and demonstrated with extensive experiments, backed by insightful analysis of different Convolutional Neural Networks (CNNs), both stand-alone and fused with others. With a large corpus of unconstrained, real-world video data, we examine the usefulness of different CNN models as feature extractors for modeling high-level events, i.e., pre-trained CNNs that differ in architectures, training data, and number of outputs. For each CNN, we use frames sampled at 1 fps from all training exemplars to train one-vs-rest SVMs for each event. To represent videos, frame-level features were fused using a variety of techniques, the best being to max-pool between predetermined shot boundaries and then average-pool to form the final video-level descriptor. Through extensive analysis, several insights were found on using pre-trained CNNs as off-the-shelf feature extractors for the task of event detection. Fusing SVMs of different CNNs revealed some interesting facts, with some combinations proving complementary. It was concluded that no single CNN works best for all events, as some events are more object-driven while others are more scene-based. Our top performance resulted from learning event-dependent weights for different CNNs.
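    The pooling scheme the abstract names (max-pool within each shot, then average-pool across shots) can be sketched in a few lines. This is an assumed minimal illustration, not the authors' code; a real system would operate on numpy arrays of CNN features rather than Python lists:

```python
# Sketch of the best-performing fusion described above: per-frame feature
# vectors are max-pooled within each shot, and the per-shot vectors are then
# average-pooled into a single video-level descriptor.

def max_pool(vectors):
    """Element-wise maximum across a list of equal-length vectors."""
    return [max(col) for col in zip(*vectors)]

def mean_pool(vectors):
    """Element-wise mean across a list of equal-length vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def video_descriptor(frame_features, shot_boundaries):
    """frame_features: one feature vector per sampled frame.
    shot_boundaries: frame indices that split the sequence into shots."""
    shots, start = [], 0
    for end in list(shot_boundaries) + [len(frame_features)]:
        if end > start:
            shots.append(max_pool(frame_features[start:end]))
        start = end
    return mean_pool(shots)

# Two shots of two frames each, with 3-D features (hypothetical)
frames = [[1, 0, 2], [3, 1, 0], [0, 5, 1], [2, 2, 2]]
print(video_descriptor(frames, [2]))
```

    Max-pooling within a shot keeps the strongest evidence for each feature dimension; averaging across shots keeps the descriptor length fixed regardless of video duration.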

  16. Application of the Coastal and Marine Ecological Classification Standard to ROV Video Data for Enhanced Analysis of Deep-Sea Habitats in the Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Ruby, C.; Skarke, A. D.; Mesick, S.

    2016-02-01

    The Coastal and Marine Ecological Classification Standard (CMECS) is a network of common nomenclature that provides a comprehensive framework for organizing physical, biological, and chemical information about marine ecosystems. It was developed by the National Oceanic and Atmospheric Administration (NOAA) Coastal Services Center, in collaboration with other federal agencies and academic institutions, as a means for scientists to more easily access, compare, and integrate marine environmental data from a wide range of sources and time frames. CMECS has been endorsed by the Federal Geographic Data Committee (FGDC) as a national metadata standard. The research presented here is focused on the application of CMECS to deep-sea video and environmental data collected by the NOAA ROV Deep Discoverer and the NOAA Ship Okeanos Explorer in the Gulf of Mexico in 2011-2014. Specifically, a spatiotemporal index of the physical, chemical, biological, and geological features observed in ROV video records was developed in order to allow scientists, otherwise unfamiliar with the specific content of existing video data, to rapidly determine the abundance and distribution of features of interest, and thus evaluate the applicability of those video data to their research. CMECS units (setting, component, or modifier) for seafloor images extracted from high-definition ROV video data were established based upon visual assessment as well as analysis of coincident environmental sensor (temperature, conductivity), navigation (ROV position, depth, attitude), and log (narrative dive summary) data. The resulting classification units were integrated into easily searchable textual and geo-databases as well as an interactive web map. The spatial distribution and associations of deep-sea habitats as indicated by CMECS classifications are described, and optimized methodological approaches for applying CMECS to deep-sea video and environmental data are presented.
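    A spatiotemporal index of this kind can be pictured as a table of annotated observations, each carrying a timestamp, position, depth, and classification label, that is queryable by feature. A hypothetical sketch (dive IDs, field names, and the labels are invented for illustration and are not actual CMECS units):

```python
# Hypothetical sketch of a searchable spatiotemporal index of ROV video
# annotations: each record stores when/where a classified feature was seen,
# so a researcher can locate all sightings without watching the video.
from collections import namedtuple

Observation = namedtuple("Observation", "dive time lat lon depth_m unit")

def find_unit(observations, unit):
    """All sightings of one classification label, ordered by dive and time."""
    hits = [o for o in observations if o.unit == unit]
    return sorted(hits, key=lambda o: (o.dive, o.time))

obs = [
    Observation("EX1402_D03", "14:02:11", 27.1, -91.5, 1430, "coral garden"),
    Observation("EX1402_D03", "13:10:05", 27.1, -91.6, 1395, "brine pool"),
    Observation("EX1402_D05", "15:40:33", 26.8, -92.0, 1610, "coral garden"),
]
print([o.dive for o in find_unit(obs, "coral garden")])
```

    The timestamps double as pointers back into the video record, which is what lets an unfamiliar scientist jump straight to the relevant footage.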

  17. Machinima and Video-Based Soft-Skills Training for Frontline Healthcare Workers.

    PubMed

    Conkey, Curtis A; Bowers, Clint; Cannon-Bowers, Janis; Sanchez, Alicia

    2013-02-01

    Multimedia training methods have traditionally relied heavily on video-based technologies, and significant research has shown these to be very effective training tools. However, production of video is time and resource intensive. Machinima is based on videogame technology: gameplay is manipulated into unique scenarios for entertainment or for training and practice applications, and those scenarios are converted into video vignettes that tell a story. These vignettes can be interconnected with branching points in much the same way that educational videos are interconnected as vignettes between decision points. This study addressed the effectiveness of machinima-based soft-skills education using avatar actors versus the traditional video teaching application using human actors in the training of frontline healthcare workers. This research also investigated the difference in presence reactions when using avatar actor-produced video vignettes as compared with human actor-produced video vignettes. Results indicated that the difference in training and/or practice effectiveness is statistically insignificant for presence, interactivity, quality, and the skill of assertiveness. The skill of active listening presented a mixed result, indicating the need for careful attention to detail in situations where body language and facial expressions are critical to communication. This study demonstrates that a significant opportunity exists for the use of avatar actors in video-based instruction.

  18. Walsh-Hadamard transform kernel-based feature vector for shot boundary detection.

    PubMed

    Lakshmi, Priya G G; Domnic, S

    2014-12-01

    Video shot boundary detection (SBD) is the first step of video analysis, summarization, indexing, and retrieval. In the SBD process, videos are segmented into basic units called shots. In this paper, a new SBD method is proposed using color, edge, texture, and motion strength as a vector of features (feature vector). Features are extracted by projecting the frames on selected basis vectors of the Walsh-Hadamard transform (WHT) kernel and WHT matrix. After the features are extracted, weights are calculated based on the significance of each feature. The weighted features are combined to form a single continuity signal, used as input for the Procedure Based shot transition Identification (PBI) process. Using this procedure, shot transitions are classified into abrupt and gradual transitions. Experimental results are examined using the large-scale test sets provided by TRECVID 2007, which evaluated hard cut and gradual transition detection. To evaluate the robustness of the proposed method, a system evaluation is performed. The proposed method yields an F1-score of 97.4% for cuts, 78% for gradual transitions, and 96.1% for overall transitions. We have also evaluated the proposed feature vector with a support vector machine classifier. The results show that WHT-based features perform better than other existing methods. In addition, a few more video sequences were taken from the Open Video Project, and the performance of the proposed method was compared with a recent existing SBD method.
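    The WHT-projection idea can be illustrated in miniature: flatten each frame to a length-2^k vector, project it onto a few Hadamard basis rows, and difference consecutive feature vectors to form a continuity signal whose large jumps suggest abrupt transitions. This is an assumed sketch only; the paper's basis selection, feature weighting, and PBI thresholds are not reproduced here:

```python
# Illustrative sketch of WHT-based frame features for shot boundary detection.
# Frames are toy 1-D vectors whose length must be a power of two.

def hadamard(n):
    """n x n Hadamard matrix, n a power of two (Sylvester construction)."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def wht_features(frame, n_basis=4):
    """Project a length-2^k frame vector onto the first few WHT basis rows."""
    H = hadamard(len(frame))
    return [sum(h * f for h, f in zip(H[i], frame)) for i in range(n_basis)]

def continuity_signal(frames, n_basis=4):
    """L1 distance between WHT features of consecutive frames; a large jump
    marks a candidate abrupt shot transition."""
    feats = [wht_features(f, n_basis) for f in frames]
    return [sum(abs(a - b) for a, b in zip(u, v))
            for u, v in zip(feats, feats[1:])]

# Two identical frames, then a very different one: the signal jumps at the cut
print(continuity_signal([[1] * 8, [1] * 8, [9] * 8]))
```

    In the actual method this signal combines several weighted feature channels (color, edge, texture, motion) rather than a single raw projection.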

  19. An Interactive Assessment Framework for Visual Engagement: Statistical Analysis of a TEDx Video

    ERIC Educational Resources Information Center

    Farhan, Muhammad; Aslam, Muhammad

    2017-01-01

    This study aims to assess the visual engagement of the video lectures. This analysis can be useful for the presenter and student to find out the overall visual attention of the videos. For this purpose, a new algorithm and data collection module are developed. Videos can be transformed into a dataset with the help of data collection module. The…

  20. Video Analysis of Anterior Cruciate Ligament (ACL) Injuries

    PubMed Central

    Carlson, Victor R.; Sheehan, Frances T.; Boden, Barry P.

    2016-01-01

    Background: As the most viable method for investigating in vivo anterior cruciate ligament (ACL) rupture, video analysis is critical for understanding ACL injury mechanisms and advancing preventative training programs. Despite the limited number of published studies involving video analysis, much has been gained through evaluating actual injury scenarios. Methods: Studies meeting criteria for this systematic review were collected by performing a broad search of the ACL literature with use of variations and combinations of video recordings and ACL injuries. Both descriptive and analytical studies were included. Results: Descriptive studies have identified specific conditions that increase the likelihood of an ACL injury. These conditions include close proximity to opposing players or other perturbations, high shoe-surface friction, and landing on the heel or the flat portion of the foot. Analytical studies have identified high-risk joint angles on landing, such as a combination of decreased ankle plantar flexion, decreased knee flexion, and increased hip flexion. Conclusions: The high-risk landing position appears to influence the likelihood of ACL injury to a much greater extent than inherent risk factors. As such, on the basis of the results of video analysis, preventative training should be applied broadly. Kinematic data from video analysis have provided insights into the dominant forces that are responsible for the injury (i.e., axial compression with potential contributions from quadriceps contraction and valgus loading). With the advances in video technology currently underway, video analysis will likely lead to enhanced understanding of non-contact ACL injury. PMID:27922985

  1. Medical information on the Internet: Quality assessment of lumbar puncture and neuroaxial block techniques on YouTube.

    PubMed

    Rössler, Bernhard; Lahner, Daniel; Schebesta, Karl; Chiari, Astrid; Plöchl, Walter

    2012-07-01

The Internet has become the largest, most up-to-date source for medical information. Besides enhancing patients' knowledge, the freely accessible audio-visual files have an impact on medical education. However, little is known about their characteristics. In this manuscript the quality of lumbar puncture (LP) and spinal anaesthesia (SA) videos available on YouTube is assessed. This retrospective analysis was based on a search for LP and SA on YouTube. Videos were evaluated using essential key points (5 in SA, 4 in LP) and 3 safety indicators. Furthermore, violations of sterile working technique were recorded, and each video was rated as to whether it had to be regarded as dangerously misleading. From 2321 hits matching the keywords, 38 videos were eligible for evaluation. In LP videos, 14% contained information on all, 4.5% on 3 and 4.5% on 2 key points, 59% on 1 and 18% on no key point. Regarding SA, no video contained information on all 5 key points, 56% on 2-4 and 25% on 1 key point, 19% did not contain any essential information. A sterility violation occurred in 11%, and 13% were classified as dangerously misleading. Even though high quality videos are available, the quality of video clips is generally low. The fraction of videos that were not performed in an aseptic manner is low, but these pose a substantial risk to patients. Consequently, more high-quality, institutional medical learning videos must be made available in light of the increased utilization of the Internet.

  2. An Analysis of Widely Viewed YouTube Videos on Anal Cancer.

    PubMed

    Basch, Corey H; Kecojevic, Aleksandar; Berdnik, Alyssa; Cadorett, Valerie; Basch, Charles E

    2017-01-01

Rates of anal squamous cell carcinoma have increased over recent decades. The aim of this study was to describe characteristics of widely viewed YouTube videos about anal cancer. A total of 57 videos were identified and reviewed. Videos were assessed and coded with respect to date uploaded, upload source, gender of presenter, number of views, length in minutes, number of likes and dislikes, and selected aspects of content. Each video was assessed to determine whether its sole purpose was to provide information regarding anal cancer or whether it served another purpose. Content related to anal cancer was categorized. The mean number of views was 23,548 (range 1014-440,078), and the average length of videos was 8:14 min. The upload source of the 57 videos was 19 (33.3%) by consumers, 12 (21.1%) by professionals, and 26 (45.6%) by news-based sources. More than half (n = 30; 52.6%) had the sole purpose of providing information. The most frequently mentioned topics were treatment (n = 25, 43.9%), symptoms (n = 15, 26.3%), and encouraging screening, human papillomavirus, and pain, respectively (n = 14, 24.6% for each); only 6 of the 57 videos (10.5%) specifically mentioned prevention. None of the 57 most widely viewed videos were uploaded by any agency of the U.S. Public Health Service or by any other U.S. governmental agency. It is important for health practitioners to be aware of the type of information available for their patients on the YouTube platform.

  3. Comparison of the Effect of Toothbrushing Education Via Video, Lecture and Pamphlet on the Dental Plaque Index of 12-Year-Old Children.

    PubMed

    Ramezaninia, Javad; Naghibi Sistani, Mohammad Mehdi; Ahangari, Zohreh; Gholinia, Hemmat; Jahanian, Iman; Gharekhani, Samaneh

    2018-04-11

    The aim of this study was to compare the effect of different modes of toothbrushing education (lecture, video and pamphlet) on the dental plaque index (PI) of adolescents. The cluster randomized intervention was performed on 128 participants aged 12 years, who were allocated into four groups based on the type of intervention. Group 1: no intervention; and groups 2, 3, 4: education via lecture, video, and pamphlet, respectively (n = 32). Their plaque index was measured at the baseline, 24 h and two months later. Data were analyzed by repeated measures analysis of variance (ANOVA), one-way ANOVA, independent and paired t-test. The plaque indices of groups 2, 3, 4 at 24 h (p values < 0.001) and two months (p values < 0.001) showed a significant reduction when compared to the baseline. The lowest PI score was observed in the pamphlet, video and lecture groups at 24 h, respectively. After 2 months, the lowest score of PI was measured in lecture, video and pamphlet groups, respectively; however, these differences were non-significant. Therefore, toothbrushing education via lecture, video and pamphlet reduced the dental plaque index with the same effectiveness.

  4. Evidence-Based Scripted Videos on Handling Student Misbehavior: The Development and Evaluation of Video Cases for Teacher Education

    ERIC Educational Resources Information Center

    Piwowar, Valentina; Barth, Victoria L.; Ophardt, Diemut; Thiel, Felicitas

    2018-01-01

    Scripted videos are based on a screenplay and are a viable and widely used tool for learning. Yet, reservations exist due to limited authenticity and high production costs. The present paper comprehensively describes a video production process for scripted videos on the topic of student misbehavior in the classroom. In a three step…

  5. The Development of Mathematical Knowledge for Teaching of Mathematics Teachers in Lesson Analysis Process

    ERIC Educational Resources Information Center

    Baki, Mujgan

    2015-01-01

    This study aims to explore the role of lesson analysis in the development of mathematical knowledge for teaching. For this purpose, a graduate course based on lesson analysis was designed for novice mathematics teachers. Throughout the course the teachers watched videos of group-mates and discussed the issues they identified in terms of…

  6. High-Speed Video Analysis in a Conceptual Physics Class

    ERIC Educational Resources Information Center

    Desbien, Dwain M.

    2011-01-01

The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software. Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting…

  7. Visualizing and Writing Video Programs.

    ERIC Educational Resources Information Center

    Floyd, Steve

    1979-01-01

    Reviews 10 steps which serve as guidelines to simplify the creative process of producing a video training program: (1) audience analysis, (2) task analysis, (3) definition of objective, (4) conceptualization, (5) visualization, (6) storyboard, (7) video storyboard, (8) evaluation, (9) revision, and (10) production. (LRA)

  8. Geographic Video 3d Data Model And Retrieval

    NASA Astrophysics Data System (ADS)

    Han, Z.; Cui, C.; Kong, Y.; Wu, H.

    2014-04-01

Geographic video includes both spatial and temporal geographic features acquired through ground-based or non-ground-based cameras. With the popularity of video capture devices such as smartphones, the volume of user-generated geographic video clips has grown significantly, and this growth is accelerating. Such a massive and increasing volume poses a major challenge to efficient video management and query. Most of today's video management and query techniques are based on signal-level content extraction and are not able to fully utilize the geographic information of the videos. This paper introduces a geographic video 3D data model based on spatial information. The main idea of the model is to utilize the location, trajectory, and azimuth information acquired by sensors such as GPS receivers and 3D electronic compasses in conjunction with video content. The raw spatial information is synthesized into point, line, polygon, and solid geometries according to camcorder parameters such as focal length and angle of view. For video segments and video frames, we define three categories of geometry objects using the geometry model of the OGC Simple Features Specification for SQL. Video can then be queried by computing the spatial relation between query objects and these geometry objects, such as VFLocation, VSTrajectory, VSFOView, and VFFovCone. We design the query methods in detail using the structured query language (SQL). The experiments indicate that the model is a multi-objective, integrated, loosely coupled, flexible, and extensible data model for the management of geographic stereo video.
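
    The spatial-query idea in this abstract can be illustrated in a few lines: a camera's 2-D field-of-view cone (derived from position, azimuth, and angle of view) becomes a polygon, and a query point is tested against it. This is a minimal pure-Python sketch of the general technique, not the paper's SQL implementation; the function names and coordinates are hypothetical.

```python
import math

def fov_cone(cam_x, cam_y, azimuth_deg, aov_deg, depth):
    """Approximate a camera's 2-D field-of-view cone as a triangle:
    the camera position plus the two far corners of the view.
    Azimuth 0 degrees points along +y (north); x is east."""
    half = math.radians(aov_deg) / 2.0
    az = math.radians(azimuth_deg)
    left = (cam_x + depth * math.sin(az - half), cam_y + depth * math.cos(az - half))
    right = (cam_x + depth * math.sin(az + half), cam_y + depth * math.cos(az + half))
    return [(cam_x, cam_y), left, right]

def point_in_polygon(pt, poly):
    """Ray-casting test: count how many polygon edges a horizontal
    ray from pt crosses; an odd count means the point is inside."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Camera at the origin facing north with a 60-degree angle of view.
cone = fov_cone(0.0, 0.0, azimuth_deg=0.0, aov_deg=60.0, depth=100.0)
print(point_in_polygon((0.0, 50.0), cone))   # point straight ahead → True
print(point_in_polygon((80.0, 10.0), cone))  # point off to the side → False
```

    In a spatial database, the same test would be expressed declaratively with an OGC predicate such as ST_Contains rather than computed by hand.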

  9. Wavelet-based audio embedding and audio/video compression

    NASA Astrophysics Data System (ADS)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.
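
    The embedding step described above can be sketched with a toy one-level Haar transform and quantization-index modulation of the detail coefficients. This is a hedged illustration of the general wavelet-embedding idea, not the authors' algorithm; the step size, pixel row, and payload bits are invented.

```python
def haar_1d(x):
    """One level of the Haar wavelet transform: pairwise averages
    (approximation) and pairwise half-differences (detail)."""
    approx = [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    return approx, detail

def inv_haar_1d(approx, detail):
    """Exact inverse of haar_1d."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def embed_bits(detail, bits, step=1.0):
    """Quantization-index modulation: move each detail coefficient onto
    an even multiple of step/2 for bit 0, an odd multiple for bit 1."""
    marked = []
    for d, b in zip(detail, bits):
        q = round(d / step) * step
        marked.append(q + (step / 2.0 if b else 0.0))
    return marked + detail[len(bits):]

def extract_bits(detail, n, step=1.0):
    """Recover bits from the parity of detail / (step/2)."""
    bits = []
    for d in detail[:n]:
        parity = abs(d / (step / 2.0)) % 2.0
        bits.append(1 if abs(parity - 1.0) < 0.5 else 0)
    return bits

video_row = [52.0, 55.0, 61.0, 59.0, 70.0, 61.0, 76.0, 61.0]  # toy pixel row
audio_bits = [1, 0, 1, 1]                                     # toy audio payload
approx, detail = haar_1d(video_row)
marked = embed_bits(detail, audio_bits)
watermarked_row = inv_haar_1d(approx, marked)                 # carrier with payload
print(extract_bits(haar_1d(watermarked_row)[1], len(audio_bits)))  # → [1, 0, 1, 1]
```

    A real system would operate on full frames with a multi-level transform and feed the marked coefficients into the bit-plane and entropy coding stages described in the abstract.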

  10. What arguments on vaccinations run through YouTube videos in Italy? A content analysis.

    PubMed

    Covolo, Loredana; Ceretti, Elisabetta; Passeri, Chiara; Boletti, Michela; Gelatti, Umberto

    2017-07-03

    The suspension of compulsory scheduling of some pediatric vaccines has been discussed for a long time by health authorities in Italy but the current decrease of vaccination rates is a matter of concern. YouTube is the most popular video-based social media website. Considering the demonstrated impact of internet on vaccination decision-making and the increasing use of social media to share and disseminate health information, the aim of this study was to explore the message available on YouTube videos about vaccination. An observational study was conducted searching for YouTube videos in September 2015 and updated in January 2016, by using the keyword "vaccinations." We included recently posted videos in Italian on child vaccination (2014-2015). Videos were classified according to the message tone. A total of 123 videos were selected. Pro-vaccination videos were 62 (50%), anti-vaccination 28 (23%), neutral or without a clear position in favor or against vaccination 33 (27%). Focusing on the first 2 groups, pro-vaccination videos had a higher number of views compared with those unfavorable (1602 ± 6544 vs 1482 ± 2735) (p < 0.001). However, anti-vaccination videos were liked more by viewers (17.8 ± 31.3) than positive ones (13.2 ± 44.7) (p < 0.001) in addition to being more shared (23 ± 22.6 vs 3.8 ± 5.5, p < 0.001). Most of the videos were positive in tone, but those that disapproved of immunization were the most liked and shared. Considering the growing number of viewers, it is important to monitor the web to understand audience characteristics and what influences public opinions to use communication strategies more effectively.

  11. A brief report on the relationship between self-control, video game addiction and academic achievement in normal and ADHD students.

    PubMed

    Haghbin, Maryam; Shaterian, Fatemeh; Hosseinzadeh, Davood; Griffiths, Mark D

    2013-12-01

Over the last two decades, research into video game addiction has grown considerably. The present research aimed to examine the relationship between video game addiction, self-control, and academic achievement of normal and ADHD high school students. Based on previous research it was hypothesized that (i) there would be a relationship between video game addiction, self-control and academic achievement, (ii) video game addiction, self-control and academic achievement would differ between male and female students, and (iii) the relationship between video game addiction, self-control and academic achievement would differ between normal students and ADHD students. The research population comprised first grade high school students of Khomeini-Shahr (a city in the central part of Iran). From this population, a sample group of 339 students participated in the study. The survey included the Game Addiction Scale (Lemmens, Valkenburg & Peter, 2009), the Self-Control Scale (Tangney, Baumeister & Boone, 2004) and the ADHD Diagnostic checklist (Kessler et al., 2007). In addition to questions relating to basic demographic information, students' Grade Point Average (GPA) for two terms was used for measuring their academic achievement. These hypotheses were examined using a regression analysis. Among Iranian students, the relationship between video game addiction, self-control, and academic achievement differed between male and female students. However, the relationship between video game addiction, self-control, academic achievement, and type of student was not statistically significant. Although the results cannot demonstrate a causal relationship between video game use, video game addiction, and academic achievement, they suggest that high involvement in playing video games leaves less time for engaging in academic work.

  12. What arguments on vaccinations run through YouTube videos in Italy? A content analysis

    PubMed Central

    Covolo, Loredana; Ceretti, Elisabetta; Passeri, Chiara; Boletti, Michela; Gelatti, Umberto

    2017-01-01

    ABSTRACT Background: The suspension of compulsory scheduling of some pediatric vaccines has been discussed for a long time by health authorities in Italy but the current decrease of vaccination rates is a matter of concern. YouTube is the most popular video-based social media website. Considering the demonstrated impact of internet on vaccination decision-making and the increasing use of social media to share and disseminate health information, the aim of this study was to explore the message available on YouTube videos about vaccination. Methods: An observational study was conducted searching for YouTube videos in September 2015 and updated in January 2016, by using the keyword “vaccinations.” We included recently posted videos in Italian on child vaccination (2014–2015). Videos were classified according to the message tone. Results: A total of 123 videos were selected. Pro-vaccination videos were 62 (50%), anti-vaccination 28 (23%), neutral or without a clear position in favor or against vaccination 33 (27%). Focusing on the first 2 groups, pro-vaccination videos had a higher number of views compared with those unfavorable (1602 ± 6544 vs 1482 ± 2735) (p < 0.001). However, anti-vaccination videos were liked more by viewers (17.8 ± 31.3) than positive ones (13.2 ± 44.7) (p < 0.001) in addition to being more shared (23 ± 22.6 vs 3.8 ± 5.5, p < 0.001). Conclusions: Most of the videos were positive in tone, but those that disapproved of immunization were the most liked and shared. Considering the growing number of viewers, it is important to monitor the web to understand audience characteristics and what influences public opinions to use communication strategies more effectively. PMID:28362544

  13. Direct ophthalmoscopy on YouTube: analysis of instructional YouTube videos’ content and approach to visualization

    PubMed Central

    Borgersen, Nanna Jo; Henriksen, Mikael Johannes Vuokko; Konge, Lars; Sørensen, Torben Lykke; Thomsen, Ann Sofia Skou; Subhi, Yousif

    2016-01-01

    Background Direct ophthalmoscopy is well-suited for video-based instruction, particularly if the videos enable the student to see what the examiner sees when performing direct ophthalmoscopy. We evaluated the pedagogical effectiveness of instructional YouTube videos on direct ophthalmoscopy by evaluating their content and approach to visualization. Methods In order to synthesize main themes and points for direct ophthalmoscopy, we formed a broad panel consisting of a medical student, junior and senior physicians, and took into consideration book chapters targeting medical students and physicians in general. We then systematically searched YouTube. Two authors reviewed eligible videos to assess eligibility and extract data on video statistics, content, and approach to visualization. Correlations between video statistics and contents were investigated using two-tailed Spearman’s correlation. Results We screened 7,640 videos, of which 27 were found eligible for this study. Overall, a median of 12 out of 18 points (interquartile range: 8–14 key points) were covered; no videos covered all of the 18 points assessed. We found the most difficulties in the approach to visualization of how to approach the patient and how to examine the fundus. Time spent on fundus examination correlated with the number of views per week (Spearman’s ρ=0.53; P=0.029). Conclusion Videos may help overcome the pedagogical issues in teaching direct ophthalmoscopy; however, the few available videos on YouTube fail to address this particular issue adequately. There is a need for high-quality videos that include relevant points, provide realistic visualization of the examiner’s view, and give particular emphasis on fundus examination. PMID:27574393
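
    The correlation reported above (Spearman's ρ between time spent on fundus examination and views per week) amounts to a Pearson correlation computed on ranks. The small routine below shows the method; the data are invented for illustration and do not reproduce the study's values.

```python
def rankdata(values):
    """1-based ranks; tied values share the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: seconds spent on fundus examination vs. views per week.
fundus_seconds = [10, 45, 30, 60, 5, 90]
views_per_week = [12, 80, 40, 95, 20, 150]
print(round(spearman_rho(fundus_seconds, views_per_week), 3))  # → 0.943
```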

  14. Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler

Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration measurement sensors such as accelerometers have high-frequency sampling capacity; however, they are discrete point-wise sensors only providing sparse, low spatial sensing resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in the mass-loading effect and modification of the structure's surface. Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially, which requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high-spatial-resolution, simultaneous measurements. Combined with vision-based algorithms (e.g., image correlation or template matching, optical flow, etc.), video camera based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30–60 Hz, while high-speed cameras for higher frequency vibration measurements are extremely costly.
This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than what is required by the Shannon-Nyquist sampling theorem for output-only modal analysis. In particular, the spatio-temporal uncoupling property of the modal expansion of structural vibration responses enables a direct modal decoupling of the temporally-aliased vibration measurements by existing output-only modal analysis methods, yielding (full-field) mode shape estimates directly. The signal aliasing properties in modal analysis are then exploited to estimate the modal frequencies and damping ratios. Furthermore, the proposed method is validated by laboratory experiments where output-only modal identification is conducted on temporally-aliased acceleration responses and, in particular, the temporally-aliased video measurements of bench-scale structures, including a three-story building structure and a cantilever beam.
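
    The aliasing relation this approach exploits is the classic frequency-folding formula: a true frequency above the Nyquist limit reappears folded into the baseband [0, fs/2]. A minimal sketch (a generic textbook relation, not the authors' code):

```python
def aliased_frequency(f_true, fs):
    """Fold a true frequency into the baseband [0, fs/2] observed
    after uniform sampling at rate fs."""
    f = f_true % fs
    return fs - f if f > fs / 2.0 else f

# A 42 Hz structural mode recorded by a 30 Hz camera appears at 12 Hz,
# and a 25 Hz mode appears at 5 Hz.
print(aliased_frequency(42.0, 30.0))  # → 12.0
print(aliased_frequency(25.0, 30.0))  # → 5.0
```

    Recovering the true modal frequency from the aliased one requires extra information (e.g., a coarse model or multiple sampling rates), which is why the method estimates mode shapes first and treats frequency recovery separately.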

  15. Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements

    DOE PAGES

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; ...

    2016-12-05

Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration measurement sensors such as accelerometers have high-frequency sampling capacity; however, they are discrete point-wise sensors only providing sparse, low spatial sensing resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in the mass-loading effect and modification of the structure's surface. Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially, which requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high-spatial-resolution, simultaneous measurements. Combined with vision-based algorithms (e.g., image correlation or template matching, optical flow, etc.), video camera based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30–60 Hz, while high-speed cameras for higher frequency vibration measurements are extremely costly.
This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than what is required by the Shannon-Nyquist sampling theorem for output-only modal analysis. In particular, the spatio-temporal uncoupling property of the modal expansion of structural vibration responses enables a direct modal decoupling of the temporally-aliased vibration measurements by existing output-only modal analysis methods, yielding (full-field) mode shape estimates directly. The signal aliasing properties in modal analysis are then exploited to estimate the modal frequencies and damping ratios. Furthermore, the proposed method is validated by laboratory experiments where output-only modal identification is conducted on temporally-aliased acceleration responses and, in particular, the temporally-aliased video measurements of bench-scale structures, including a three-story building structure and a cantilever beam.

  16. Full-motion video analysis for improved gender classification

    NASA Astrophysics Data System (ADS)

    Flora, Jeffrey B.; Lochtefeld, Darrell F.; Iftekharuddin, Khan M.

    2014-06-01

The ability of computer systems to perform gender classification using the dynamic motion of the human subject has important applications in medicine, human factors, and human-computer interface systems. Previous works in motion analysis have used data from sensors (including gyroscopes, accelerometers, and force plates), radar signatures, and video. However, full-motion video motion capture provides a dataset with higher temporal and spatial resolution for the analysis of dynamic motion. Works using motion capture data have been limited by small datasets collected in a controlled environment. In this paper, we apply machine learning techniques to a new dataset that has a larger number of subjects. Additionally, these subjects move unrestricted through a capture volume, representing a more realistic, less controlled environment. We conclude that existing linear classification methods are insufficient for gender classification on the larger dataset captured in a relatively uncontrolled environment. A method based on a nonlinear support vector machine classifier is proposed to obtain gender classification for the larger dataset. In experimental testing with a dataset consisting of 98 trials (49 subjects, 2 trials per subject), classification rates using leave-one-out cross-validation are improved from 73% using linear discriminant analysis to 88% using the nonlinear support vector machine classifier.
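
    The leave-one-out cross-validation protocol used above is easy to sketch. Here a simple nearest-centroid rule stands in for the paper's LDA and SVM classifiers, and the gait features and labels are invented; only the evaluation protocol matches the abstract.

```python
def nearest_centroid_predict(train_X, train_y, x):
    """Stand-in classifier: assign x to the class whose mean feature
    vector (centroid) is closest in squared Euclidean distance."""
    centroids = {}
    for label in set(train_y):
        rows = [v for v, lab in zip(train_X, train_y) if lab == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]

    def dist2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))

    return min(centroids, key=lambda lab: dist2(centroids[lab], x))

def leave_one_out_accuracy(X, y):
    """Hold out each trial once, train on the rest, score the held-out trial."""
    hits = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        hits += nearest_centroid_predict(train_X, train_y, X[i]) == y[i]
    return hits / len(X)

# Toy motion features (e.g., stride length, cadence); labels 0/1 for gender.
X = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9], [3.0, 4.0], [3.1, 4.1], [2.9, 3.9]]
y = [0, 0, 0, 1, 1, 1]
print(leave_one_out_accuracy(X, y))  # → 1.0
```

    With correlated trials from the same subject, a stricter leave-one-subject-out split avoids optimistic accuracy estimates.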

  17. Audio-guided audiovisual data segmentation, indexing, and retrieval

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1998-12-01

While current approaches for video segmentation and indexing are mostly focused on visual information, audio signals may actually play a primary role in video content parsing. In this paper, we present an approach for automatic segmentation, indexing, and retrieval of audiovisual data, based on audio content analysis. The accompanying audio signal of audiovisual data is first segmented and classified into basic types, i.e., speech, music, environmental sound, and silence. This coarse-level segmentation and indexing step is based upon morphological and statistical analysis of several short-term features of the audio signals. Then, environmental sounds are classified into finer classes, such as applause, explosions, bird sounds, etc. This fine-level classification and indexing step is based upon time-frequency analysis of audio signals and the use of the hidden Markov model as the classifier. On top of this archiving scheme, an audiovisual data retrieval system is proposed. Experimental results show that the proposed approach has an accuracy rate higher than 90 percent for the coarse-level classification, and higher than 85 percent for the fine-level classification. Examples of audiovisual data segmentation and retrieval are also provided.
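
    Two classic short-term features of the kind the coarse-level step relies on are frame energy and zero-crossing rate. The sketch below computes both on a synthetic signal; it is a generic illustration, not the authors' exact feature set or thresholds.

```python
import math

def short_term_features(samples, frame_len=256):
    """Per-frame energy and zero-crossing rate, two classic short-term
    features used for coarse audio classification (speech/music/silence)."""
    feats = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        zcr = sum(
            1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0)
        ) / (frame_len - 1)
        feats.append((energy, zcr))
    return feats

# Synthetic test signal: a 440 Hz tone followed by silence (8 kHz rate).
sr = 8000
tone = [math.sin(2 * math.pi * 440 * n / sr) for n in range(1024)]
silence = [0.0] * 1024
feats = short_term_features(tone + silence)
loud = [e for e, _ in feats[:4]]   # frames covering the tone
quiet = [e for e, _ in feats[4:]]  # frames covering the silence
print(min(loud) > max(quiet))  # energy cleanly separates the two → True
```

    A silence detector then reduces to an energy threshold, while the speech/music split typically combines the zero-crossing rate with statistics over longer windows.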

  18. Mesoscale and severe storms (Mass) data management and analysis system

    NASA Technical Reports Server (NTRS)

    Hickey, J. S.; Karitani, S.; Dickerson, M.

    1984-01-01

Progress on the Mesoscale and Severe Storms (MASS) data management and analysis system is described. An interactive atmospheric database management software package to convert four types of data (Sounding, Single Level, Grid, Image) into standard random access formats is implemented and integrated with the MASS AVE80 Series general purpose plotting and graphics display data analysis software package. An interactive analysis and display graphics software package (AVE80) to analyze large volumes of conventional and satellite derived meteorological data is enhanced to provide imaging/color graphics display utilizing color video hardware integrated into the MASS computer system. Local and remote smart-terminal capability is provided by installing APPLE III computer systems within individual scientists' offices and integrating them with the MASS system, thus providing color video display, graphics, and character display of the four data types.

  19. Getting the Bigger Picture With Digital Surveillance

    NASA Technical Reports Server (NTRS)

    2002-01-01

Through a Space Act Agreement, Diebold, Inc., acquired the exclusive rights to Glenn Research Center's patented video observation technology, originally designed to accelerate video image analysis for various ongoing and future space applications. Diebold implemented the technology into its AccuTrack digital, color video recorder, a state-of-the-art surveillance product that uses motion detection for around-the-clock monitoring. AccuTrack captures digitally signed images and transaction data in real-time. This process replaces the onerous tasks involved in operating a VCR-based surveillance system, and subsequently eliminates the need for central viewing and tape archiving locations altogether. AccuTrack can monitor an entire bank facility, including four automated teller machines, multiple teller lines, and new account areas, all from one central location.

  20. Surgical instrument similarity metrics and tray analysis for multi-sensor instrument identification

    NASA Astrophysics Data System (ADS)

    Glaser, Bernhard; Schellenberg, Tobias; Franke, Stefan; Dänzer, Stefan; Neumuth, Thomas

    2015-03-01

A robust identification of the instrument currently used by the surgeon is crucial for the automatic modeling and analysis of surgical procedures. Various approaches for intra-operative surgical instrument identification have been presented, mostly based on radio-frequency identification (RFID) or endoscopic video analysis. A novel approach is to identify the instruments on the instrument table of the scrub nurse with a combination of video and weight information. In a previous article, we successfully followed this approach and applied it to multiple instances of an ear, nose and throat (ENT) procedure and the surgical tray used therein. In this article, we present a metric for the suitability of the instruments of a surgical tray for identification by video and weight analysis and apply it to twelve trays of four different surgical domains (abdominal surgery, neurosurgery, orthopedics and urology). The trays used were digitized at the central sterile services department of the hospital. The results illustrate that surgical trays differ in their suitability for the approach. In general, additional weight information can significantly contribute to the successful identification of surgical instruments. Additionally, for ten different surgical instruments, ten exemplars of each instrument were tested for their weight differences. The samples indicate high weight variability in instruments with identical brand and model number. The results present a new metric for approaches aiming towards intra-operative surgical instrument detection and imply consequences for algorithms exploiting video and weight information for identification purposes.
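
    The weight side of such an identification scheme can be sketched as a tolerance match against a tray catalog; an ambiguous (multi-hit) result marks exactly the case where the video channel is needed to disambiguate. The instrument names, weights, and tolerance below are hypothetical, not from the paper.

```python
def identify_by_weight(measured_g, catalog, tolerance_g=2.0):
    """Return all catalog instruments whose nominal weight lies within
    tolerance of the scale reading; more than one hit means the weight
    alone cannot identify the instrument."""
    return [name for name, w in catalog.items()
            if abs(w - measured_g) <= tolerance_g]

catalog = {
    "scalpel": 21.0,        # nominal weights in grams (invented)
    "forceps": 38.5,
    "needle holder": 40.0,
    "retractor": 95.0,
}
print(identify_by_weight(39.2, catalog))  # → ['forceps', 'needle holder']
print(identify_by_weight(95.4, catalog))  # → ['retractor']
```

    The paper's suitability metric can be read as asking how often a tray produces the first, ambiguous kind of result, which the reported per-exemplar weight variability makes more likely.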

  1. Video game-based coordinative training improves ataxia in children with degenerative ataxia.

    PubMed

    Ilg, Winfried; Schatton, Cornelia; Schicks, Julia; Giese, Martin A; Schöls, Ludger; Synofzik, Matthis

    2012-11-13

Degenerative ataxias in children present a rare condition for which effective treatments are lacking. Intensive coordinative training based on physiotherapeutic exercises improves degenerative ataxia in adults, but such exercises have drawbacks for children, often including a lack of motivation for high-frequency physiotherapy. Recently developed whole-body controlled video game technology might present a novel treatment strategy for highly interactive and motivational coordinative training for children with degenerative ataxias. We examined the effectiveness of an 8-week coordinative training for 10 children with progressive spinocerebellar ataxia. Training was based on 3 Microsoft Xbox Kinect video games particularly suitable for exercising whole-body coordination and dynamic balance. Training started with a laboratory-based 2-week training phase, followed by 6 weeks of training in the children's home environment. Rater-blinded assessments were performed 2 weeks before laboratory-based training, immediately prior to and after the laboratory-based training period, as well as after home training. These assessments allowed for an intraindividual control design, in which performance changes with and without training were compared. Ataxia symptoms were significantly reduced (decrease in Scale for the Assessment and Rating of Ataxia score, p = 0.0078) and balance capacities improved (dynamic gait index, p = 0.04) after intervention. Despite progressive cerebellar degeneration, children are able to improve motor performance through intensive coordination training.
Directed training of whole-body controlled video games might present a highly motivational, cost-efficient, and home-based rehabilitation strategy to train dynamic balance and interaction with dynamic environments in a large variety of young-onset neurologic conditions. This study provides Class III evidence that directed training with Xbox Kinect video games can improve several signs of ataxia in adolescents with progressive ataxia as measured by SARA score, Dynamic Gait Index, and Activity-specific Balance Confidence Scale at 8 weeks of training.

  2. Coding visual features extracted from video sequences.

    PubMed

    Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2014-05-01

    Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.
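
    The rate-distortion-optimized mode decision mentioned above amounts to minimizing a Lagrangian cost J = D + λR over the available coding modes. The following is a minimal sketch under assumed per-mode rate and distortion figures (the mode names and numbers are hypothetical, not taken from the paper):

```python
# Minimal sketch of a rate-distortion-optimized coding mode decision.
# Mode names and rate/distortion figures are hypothetical.

def choose_mode(modes, lam):
    """Pick the coding mode minimizing the Lagrangian cost J = D + lam * R."""
    return min(modes, key=lambda m: m["distortion"] + lam * m["rate"])

modes = [
    {"name": "intra", "rate": 120.0, "distortion": 4.0},
    {"name": "inter", "rate": 35.0, "distortion": 6.5},
]
# At a moderate lambda the cheaper inter mode wins despite higher distortion;
# as lambda -> 0 the decision is driven by distortion alone.
print(choose_mode(modes, lam=0.1)["name"])  # inter
```

    Interframe modes exploit the temporal redundancy of features extracted from consecutive frames, which is why they typically offer a much lower rate at a modest distortion penalty.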

  3. Digital chalk-talk videos improve knowledge and satisfaction in renal physiology.

    PubMed

    Roberts, John K; Chudgar, Saumil M; Engle, Deborah; McClain, Elizabeth K; Jakoi, Emma; Berkoben, Michael; Lehrich, Ruediger W

    2018-03-01

    The authors began a curriculum reform project to improve the experience in a Renal Physiology course for first-year medical students. Taking into account both the variety of learning preferences among students and the benefits of student autonomy, the authors hypothesized that adding digital chalk-talk videos to lecture notes and live lectures would improve student knowledge, course satisfaction, and engagement. The authors measured performance on the renal physiology exam before (the traditional curriculum) and for 2 yr after implementation of the new curriculum. During the traditional and subsequent years, students took a Q-sort survey before and after the Renal Physiology course. Satisfaction was assessed based on ranked statements in the Q sort, as well as through qualitative analysis of student commentary. Compared with the traditional curriculum, mean scores on the renal physiology final exam were higher after implementation of the new curriculum: 65.3 vs. 74.4 (P < 0.001) in year 1 and 65.3 vs. 79.4 (P < 0.001) in year 2. After the new curriculum, students were more likely to agree with the statement, "I wish other courses were taught like this one." Qualitative analysis revealed how the video-based curriculum improved student engagement and satisfaction. Adding digital chalk-talk videos to a traditional Renal Physiology course that included active learning led to improved exam performance and high levels of student satisfaction. Other preclinical courses in medical school may benefit from such an intervention.

  4. Algorithm for Video Summarization of Bronchoscopy Procedures

    PubMed Central

    2011-01-01

    Background The duration of bronchoscopy examinations varies considerably depending on the diagnostic and therapeutic procedures used. It can last more than 20 minutes if a complex diagnostic work-up is included. With wide access to videobronchoscopy, the whole procedure can be recorded as a video sequence. Common practice relies on an active attitude of the bronchoscopist, who initiates the recording process and usually chooses to archive only selected views and sequences. However, it may be important to record the full bronchoscopy procedure as documentation when liability issues are at stake. Furthermore, an automatic recording of the whole procedure enables the bronchoscopist to focus solely on the performed procedures. Video recordings registered during bronchoscopies include a considerable number of frames of poor quality due to blurry or unfocused images. Such frames seem unavoidable due to the relatively tight endobronchial space, rapid movements of the respiratory tract caused by breathing or coughing, and secretions, which occur commonly in the bronchi, especially in patients suffering from pulmonary disorders. Methods The use of recorded bronchoscopy video sequences for diagnostic, reference and educational purposes could be considerably extended with efficient, flexible summarization algorithms. Thus, the authors developed a prototype system to create shortcuts (called summaries or abstracts) of bronchoscopy video recordings. Such a system, based on models described in previously published papers, employs image analysis methods to exclude frames or sequences of limited diagnostic or educational value. Results The algorithm for the selection or exclusion of specific frames or shots from video sequences recorded during bronchoscopy procedures is based on several criteria, including automatic detection of "non-informative" frames, frames showing the branching of the airways, and frames including pathological lesions. 
Conclusions The paper focuses on the challenge of generating summaries of bronchoscopy video recordings. PMID:22185344
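
    Flagging blurred, "non-informative" frames, as described above, is often done with a simple focus measure. The following is an illustrative sketch only; the variance-of-Laplacian measure and the threshold are assumptions for demonstration, not the authors' published criteria:

```python
import numpy as np

def sharpness(frame):
    """Focus measure: variance of a discrete 5-point Laplacian.
    Blurred or unfocused frames yield low values."""
    f = frame.astype(float)
    lap = (-4 * f[1:-1, 1:-1] + f[:-2, 1:-1] + f[2:, 1:-1]
           + f[1:-1, :-2] + f[1:-1, 2:])
    return float(lap.var())

def keep_frame(frame, min_sharpness=50.0):
    """Frames below the sharpness threshold are excluded from the summary."""
    return sharpness(frame) >= min_sharpness

# A high-contrast pattern scores far above a featureless (defocused) frame.
checker = np.indices((16, 16)).sum(axis=0) % 2 * 255
flat = np.full((16, 16), 128)
print(sharpness(flat), keep_frame(checker))
```

    In practice the threshold would be tuned on labeled bronchoscopy frames, since secretions and motion blur produce a continuum of quality rather than a clean split.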

  5. Video Extrapolation Method Based on Time-Varying Energy Optimization and CIP.

    PubMed

    Sakaino, Hidetomo

    2016-09-01

    Video extrapolation/prediction methods are often used to synthesize new videos from images. For fluid-like images and dynamic textures as well as moving rigid objects, most state-of-the-art video extrapolation methods use non-physics-based models that learn orthogonal bases from a number of images, but at high computation cost. Unfortunately, data truncation can cause image degradation, i.e., blur, artifacts, and insufficient motion changes. To extrapolate videos that more strictly follow physical rules, this paper proposes a physics-based method that needs only a few images and is truncation-free. We utilize physics-based equations with image intensity and velocity: the optical flow, Navier-Stokes, continuity, and advection equations. These allow us to use partial differential equations to deal with local image feature changes. Image degradation during extrapolation is minimized by updating model parameters with a novel time-varying energy-balancer model that uses energy-based image features, i.e., texture, velocity, and edge. Moreover, the advection equation is discretized by a high-order constrained interpolation profile (CIP) scheme for lower quantization error than can be achieved by the previous finite difference method in long-term videos. Experiments show that the proposed energy-based video extrapolation method outperforms the state-of-the-art video extrapolation methods in terms of image quality and computation cost.

  6. A Usability Survey of a Contents-Based Video Retrieval System by Combining Digital Video and an Electronic Bulletin Board

    ERIC Educational Resources Information Center

    Haga, Hirohide; Kaneda, Shigeo

    2005-01-01

    This article describes the survey of the usability of a novel content-based video retrieval system. This system combines video streaming and an electronic bulletin board system (BBS). Comments submitted to the BBS are used to index video data. Following the development of the prototype system an experimental survey with ten subjects was performed.…

  7. Identification of Mobile Phone and Analysis of Original Version of Videos through a Delay Time Analysis of Sound Signals from Mobile Phone Videos.

    PubMed

    Hwang, Min Gu; Har, Dong Hwan

    2017-11-01

    This study designs a method of identifying the camera model used to take videos that are distributed through mobile phones and of determining the original version of a mobile phone video for use as legal evidence. For this analysis, an experiment was conducted to find the unique characteristics of each mobile phone. The videos recorded by mobile phones were analyzed to establish the delay time of sound signals, and the differences between the delay times for different mobile phones were traced by classifying their characteristics. Furthermore, the sound input signals of mobile phone videos used as legal evidence were analyzed to ascertain whether they have the unique characteristics of the original version. The objective of this study was to find a method for validating the use of mobile phone videos as legal evidence through differences in the delay times of sound input signals. © 2017 American Academy of Forensic Sciences.
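
    Measuring the delay between two audio tracks, the core of the analysis above, is commonly done by locating the peak of their cross-correlation. The following is a minimal sketch of such a peak-picking estimator (an assumption for illustration, not the authors' exact forensic procedure):

```python
import numpy as np

def delay_samples(reference, recorded):
    """Estimate how many samples `recorded` lags behind `reference`
    from the peak of their full cross-correlation."""
    corr = np.correlate(recorded, reference, mode="full")
    return int(np.argmax(corr)) - (len(reference) - 1)

rng = np.random.default_rng(0)
ref = rng.standard_normal(256)
rec = np.concatenate([np.zeros(7), ref])  # same signal, delayed 7 samples
print(delay_samples(ref, rec))  # 7
```

    Dividing the estimated lag by the sampling rate converts it to a delay time in seconds, which can then be compared across phone models.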

  8. Adaptive compressed sensing of multi-view videos based on the sparsity estimation

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Li, Xilong; Chong, Xin

    2017-11-01

    Conventional compressed sensing for videos is based on non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the reconstruction quality of the videos suffers. This paper first describes block-based compressed sensing (BCS) with the conventional selection of compressive measurements. It then proposes an estimation method for the sparsity of multi-view videos based on the two-dimensional discrete wavelet transform (2D DWT). Given an energy threshold, the DWT coefficients are energy-normalized and sorted in descending order, and the sparsity of the multi-view video is obtained from the proportion of dominant coefficients. Simulation results show that the method estimates the sparsity of video frames effectively and provides a practical basis for selecting the number of compressive observations. They also show that, since this selection is based on the sparsity estimated under the given energy threshold, the proposed method can ensure the reconstruction quality of multi-view videos.
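
    The sparsity estimate described above (energy-normalize the DWT coefficients, sort them in descending order, count the dominant ones) can be sketched as follows. The one-level Haar transform and the 0.99 energy threshold are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def haar2d(x):
    """One-level 2-D orthonormal Haar transform (even-sized input)."""
    def step(a):
        lo = (a[0::2] + a[1::2]) / np.sqrt(2)
        hi = (a[0::2] - a[1::2]) / np.sqrt(2)
        return np.concatenate([lo, hi], axis=0)
    return step(step(x).T).T

def sparsity(frame, energy_threshold=0.99):
    """Fraction of DWT coefficients needed to capture the given share of
    the frame's energy; small values indicate a sparse, compressible frame."""
    e = np.sort(haar2d(frame.astype(float)).ravel() ** 2)[::-1]
    e = e / e.sum()
    k = int(np.searchsorted(np.cumsum(e), energy_threshold)) + 1
    return k / e.size

smooth = np.ones((16, 16))             # highly compressible frame
rng = np.random.default_rng(1)
noisy = rng.standard_normal((16, 16))  # energy spread over all coefficients
print(sparsity(smooth) < sparsity(noisy))  # True
```

    A sparser frame tolerates fewer compressive measurements, which is the link the paper exploits when choosing the number of observations per block.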

  9. Deep Sea Gazing: Making Ship-Based Research Aboard RV Falkor Relevant and Accessible

    NASA Astrophysics Data System (ADS)

    Wiener, C.; Zykov, V.; Miller, A.; Pace, L. J.; Ferrini, V. L.; Friedman, A.

    2016-02-01

    Schmidt Ocean Institute (SOI) is a private, non-profit operating foundation established to advance the understanding of the world's oceans through technological advancement, intelligent observation, and open sharing of information. Our research vessel Falkor provides ship time to selected scientists and supports a wide range of scientific functions, including ROV operations with live streaming capabilities. Since 2013, SOI has live streamed 55 ROV dives in high definition and recorded them onto YouTube. This has totaled over 327 hours of video, which received 1,450,461 views in 2014. SOI is one of the only research programs that makes its entire dive series available online, creating a rich collection of video data sets. In doing this, we provide an opportunity for scientists to make new discoveries in the video data that may have been missed earlier. These data sets are also available to students, allowing them to engage with real data in the classroom. SOI's video collection is also being used in a newly developed video management system, Ocean Video Lab. Telepresence-enabled research is an important component of Falkor cruises, as exemplified by several conducted in 2015. This presentation will share a few case studies, including an image tagging citizen science project conducted through the Squidle interface in partnership with the Australian Center for Field Robotics. Using real-time image data collected in the Timor Sea, numerous shore-based citizens created seafloor image tags that could be used by machine learning algorithms on Falkor's high-performance computer (HPC) to accomplish habitat characterization. With the use of the HPC system, real-time robot tracking, image tagging, and other outreach connections were made possible, allowing scientists on board to engage with the public and build their knowledge base. 
The above-mentioned examples will be used to demonstrate the benefits of remote data analysis and participatory engagement in science-based telepresence.

  10. Detection of Abnormal Events via Optical Flow Feature Analysis

    PubMed Central

    Wang, Tian; Snoussi, Hichem

    2015-01-01

    In this paper, a novel algorithm is proposed to detect abnormal events in video streams. The algorithm is based on the histogram of optical flow orientation descriptor and a classification method. The details of the histogram of optical flow orientation descriptor are illustrated for describing the movement information of the global video frame or the foreground frame. By combining one-class support vector machine and kernel principal component analysis methods, the abnormal events in the current frame can be detected after a learning period characterizing normal behaviors. The different abnormality detection results are analyzed and explained. The proposed detection method is tested on benchmark datasets, and the experimental results show the effectiveness of the algorithm. PMID:25811227
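
    The histogram-of-optical-flow-orientation descriptor can be sketched in a few lines. The bin count, magnitude threshold, and magnitude weighting below are illustrative assumptions; the flow field itself would come from an optical-flow estimator, and the one-class SVM/KPCA classification stage is omitted:

```python
import numpy as np

def flow_orientation_histogram(u, v, bins=8, min_mag=0.5):
    """Magnitude-weighted histogram of flow orientations over a frame,
    ignoring near-static pixels; normalized to sum to 1."""
    mag = np.hypot(u, v)
    ang = np.arctan2(v, u)          # orientation in [-pi, pi]
    moving = mag >= min_mag
    hist, _ = np.histogram(ang[moving], bins=bins, range=(-np.pi, np.pi),
                           weights=mag[moving])
    total = hist.sum()
    return hist / total if total > 0 else hist

# A frame whose moving pixels all drift rightward puts all its mass in the
# bin covering orientation 0.
u = np.ones((4, 4))
v = np.zeros((4, 4))
print(flow_orientation_histogram(u, v))
```

    During the learning period, descriptors from normal frames train the one-class model; at test time, frames whose descriptors fall outside the learned region are flagged as abnormal.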

  11. Characteristics of "Music Education" Videos Posted on Youtube

    ERIC Educational Resources Information Center

    Whitaker, Jennifer A.; Orman, Evelyn K.; Yarbrough, Cornelia

    2014-01-01

    This content analysis sought to determine information related to users uploading, general content, and specific characteristics of music education videos on YouTube. A total of 1,761 videos from a keyword search of "music education" were viewed and categorized. Results for relevant videos indicated users posted videos under 698 different…

  12. Bridging the Field Trip Gap: Integrating Web-Based Video as a Teaching and Learning Partner in Interior Design Education

    ERIC Educational Resources Information Center

    Roehl, Amy

    2013-01-01

    This study utilizes web-based video as a strategy to transfer knowledge about the interior design industry in a format that interests the current generation of students. The model of instruction developed is based upon online video as an engaging, economical, and time-saving alternative to a field trip, guest speaker, or video teleconference.…

  13. "SmartMonitor"--an intelligent security system for the protection of individuals and small properties with the possibility of home automation.

    PubMed

    Frejlichowski, Dariusz; Gościewska, Katarzyna; Forczmański, Paweł; Hofman, Radosław

    2014-06-05

    "SmartMonitor" is an intelligent security system based on image analysis that combines the advantages of alarm, video surveillance and home automation systems. The system is a complete solution that automatically reacts to every learned situation in a pre-specified way and has various applications, e.g., home and surrounding protection against unauthorized intrusion, crime detection or supervision over ill persons. The software is based on well-known and proven methods and algorithms for visual content analysis (VCA) that were appropriately modified and adapted to fit specific needs, creating a video processing model that consists of foreground region detection and localization, candidate object extraction, object classification and tracking. In this paper, the "SmartMonitor" system is presented along with its architecture, employed methods and algorithms, and object analysis approach. Some experimental results on system operation are also provided. The focus is put on one of the aforementioned functionalities of the system, namely supervision over ill persons.
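
    The first stage of the video processing model described above, foreground region detection, is commonly built on background subtraction. A minimal sketch follows, assuming a fixed background model and a hypothetical threshold (the actual system uses more elaborate VCA methods):

```python
import numpy as np

def foreground_mask(frame, background, threshold=25):
    """Mark pixels differing from the background model by more than the
    threshold as candidate foreground (object) pixels."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > threshold

background = np.zeros((6, 6), dtype=np.uint8)
frame = background.copy()
frame[2:4, 2:4] = 200              # an object enters the scene
mask = foreground_mask(frame, background)
print(int(mask.sum()))  # 4
```

    The connected foreground region would then feed the later stages (candidate object extraction, classification and tracking) of the pipeline.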

  14. Detection of Upscale-Crop and Partial Manipulation in Surveillance Video Based on Sensor Pattern Noise

    PubMed Central

    Hyun, Dai-Kyung; Ryu, Seung-Jin; Lee, Hae-Yeoun; Lee, Heung-Kyu

    2013-01-01

    In many court cases, surveillance videos are used as significant court evidence. As these surveillance videos can easily be forged, it may cause serious social issues, such as convicting an innocent person. Nevertheless, there is little research being done on forgery of surveillance videos. This paper proposes a forensic technique to detect forgeries of surveillance video based on sensor pattern noise (SPN). We exploit the scaling invariance of the minimum average correlation energy Mellin radial harmonic (MACE-MRH) correlation filter to reliably unveil traces of upscaling in videos. By excluding the high-frequency components of the investigated video and adaptively choosing the size of the local search window, the proposed method effectively localizes partially manipulated regions. Empirical evidence from a large database of test videos, including RGB (Red, Green, Blue)/infrared video, dynamic-/static-scene video and compressed video, indicates the superior performance of the proposed method. PMID:24051524
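
    The sensor-pattern-noise idea above rests on extracting a noise residual from each frame and correlating it with a reference pattern. Below is a crude sketch in which a 3x3 box filter stands in for the wavelet denoiser normally used in SPN work; the MACE-MRH correlation filter and the scaling-invariance machinery are well beyond this illustration:

```python
import numpy as np

def noise_residual(frame):
    """Noise residual proxy: frame minus a 3x3 box-filtered version
    (a stand-in for the wavelet denoising used in SPN extraction)."""
    f = frame.astype(float)
    p = np.pad(f, 1, mode="edge")
    smooth = sum(p[i:i + f.shape[0], j:j + f.shape[1]]
                 for i in range(3) for j in range(3)) / 9.0
    return f - smooth

def ncc(a, b):
    """Normalized cross-correlation between two residual patterns."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(2)
cam_a = noise_residual(rng.integers(0, 256, (32, 32)))
cam_b = noise_residual(rng.integers(0, 256, (32, 32)))
# A residual matches itself far better than a residual from another source.
print(ncc(cam_a, cam_a) > 0.99, abs(ncc(cam_a, cam_b)) < 0.5)
```

    A manipulated region no longer carries the camera's pattern noise, so a local correlation map drops there, which is the basis of the localized detection described in the abstract.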

  15. Portable color multimedia training systems based on monochrome laptop computers (CBT-in-a-briefcase), with spinoff implications for video uplink and downlink in spaceflight operations

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1994-01-01

    This report describes efforts to use digital motion video compression technology to develop a highly portable device that would convert 1990-91 era IBM-compatible and/or Macintosh notebook computers into full-color, motion-video capable multimedia training systems. An architecture was conceived that would permit direct conversion of existing laser-disk-based multimedia courses with little or no reauthoring. The project did not physically demonstrate certain critical video keying techniques, but their implementation should be feasible. This investigation of digital motion video has spawned two significant spaceflight projects at MSFC: one to downlink multiple high-quality video signals from Spacelab, and the other to uplink videoconference-quality video in realtime and high-quality video off-line, plus investigate interactive, multimedia-based techniques for enhancing onboard science operations. Other airborne or spaceborne spinoffs are possible.

  16. Shot boundary detection and label propagation for spatio-temporal video segmentation

    NASA Astrophysics Data System (ADS)

    Piramanayagam, Sankaranaryanan; Saber, Eli; Cahill, Nathan D.; Messinger, David

    2015-02-01

    This paper proposes a two-stage algorithm for streaming video segmentation. In the first stage, shot boundaries are detected within a window of frames by comparing the dissimilarity between 2-D segmentations of each frame. In the second stage, the 2-D segments are propagated across the window of frames in both the spatial and temporal directions. The window is moved across the video to find all shot transitions and obtain spatio-temporal segments simultaneously. As opposed to techniques that operate on the entire video, the proposed approach consumes significantly less memory and enables segmentation of lengthy videos. We tested our segmentation-based shot detection method on the TRECVID 2007 video dataset and compared it with a block-based technique. Cut detection results on the TRECVID 2007 dataset indicate that our algorithm is comparable to the best of the block-based methods. The streaming video segmentation routine also achieves promising results on a challenging video segmentation benchmark database.
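
    The cut-detection principle above, declaring a shot boundary where the dissimilarity between consecutive frames spikes, can be sketched with a simple per-frame descriptor. Note the paper compares 2-D segmentations; here a color-histogram dissimilarity stands in as an assumed, simpler descriptor, and the threshold is hypothetical:

```python
import numpy as np

def chi2_distance(h1, h2):
    """Chi-square distance between two normalized frame descriptors."""
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    d = h1 + h2
    m = d > 0
    return 0.5 * float(np.sum((h1[m] - h2[m]) ** 2 / d[m]))

def detect_cuts(frame_hists, threshold=0.25):
    """Declare a cut between frames i-1 and i when dissimilarity spikes."""
    return [i for i in range(1, len(frame_hists))
            if chi2_distance(frame_hists[i - 1], frame_hists[i]) > threshold]

hists = [np.array([10.0, 0.0, 0.0]),
         np.array([9.0, 1.0, 0.0]),   # gradual change within a shot
         np.array([0.0, 0.0, 10.0])]  # abrupt change: shot boundary
print(detect_cuts(hists))  # [2]
```

    Sliding this test over a window of frames, as in the paper, keeps memory bounded regardless of the video's length.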

  17. Which technology to investigate visual perception in sport: video vs. virtual reality.

    PubMed

    Vignais, Nicolas; Kulpa, Richard; Brault, Sébastien; Presse, Damien; Bideau, Benoit

    2015-02-01

    Visual information uptake is a fundamental element of sports involving interceptive tasks. Several methodologies, like video and methods based on virtual environments, are currently employed to analyze visual perception during sport situations. Both techniques have advantages and drawbacks. The goal of this study is to determine which of these technologies may be preferentially used to analyze visual information uptake during a sport situation. To this aim, we compared a handball goalkeeper's performance using two standardized methodologies: video clip and virtual environment. We examined this performance for two response tasks: an uncoupled task (goalkeepers show where the ball ends) and a coupled task (goalkeepers try to intercept the virtual ball). Variables investigated in this study were percentage of correct zones, percentage of correct responses, radial error and response time. The results showed that handball goalkeepers were more effective, more accurate and started to intercept earlier when facing a virtual handball thrower than when facing the video clip. These findings suggested that the analysis of visual information uptake for handball goalkeepers was better performed by using a 'virtual reality'-based methodology. Technical and methodological aspects of these findings are discussed further. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. PC-based high-speed video-oculography for measuring rapid eye movements in mice.

    PubMed

    Sakatani, Tomoya; Isa, Tadashi

    2004-05-01

    We developed a new infrared video-oculographic system for on-line tracking of eye position in awake, head-fixed mice, with high temporal resolution (240 Hz). The system consists of a commercially available high-speed CCD camera and image processing software written in LabVIEW, running on an IBM PC with a plug-in video grabber board. The software calculates the center and area of the pupil by fitting a circular function to the pupil boundary, and allows robust and stable tracking of eye position in small animals like mice. On-line calculation yields a reasonable circular fit of the pupil boundary even if part of the pupil is covered by shadows or occluded by eyelids or corneal reflections. The pupil position in the 2-D video plane is converted to the rotation angle of the eyeball by estimating its rotation center based on an anatomical eyeball model. With this recording system, it is possible to perform quantitative analysis of rapid eye movements such as saccades in mice. This will provide a powerful tool for analyzing the molecular basis of oculomotor and cognitive functions using various lines of mutant mice.
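
    The circular fit at the heart of the tracker can be illustrated with an algebraic least-squares (Kåsa-style) circle fit. This is a sketch, not the authors' LabVIEW implementation, but it shows why a partial boundary, as when eyelids occlude part of the pupil, still constrains the circle:

```python
import numpy as np

def fit_circle(x, y):
    """Least-squares circle fit: solve x^2 + y^2 + D*x + E*y + F = 0
    for (D, E, F), then recover the center and radius."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, r

# Fit from a partial arc only, as when eyelids occlude part of the pupil.
theta = np.linspace(0.2, 2.0, 30)     # under a third of the full circle
x = 3.0 + 5.0 * np.cos(theta)
y = -2.0 + 5.0 * np.sin(theta)
cx, cy, r = fit_circle(x, y)
print(round(cx, 6), round(cy, 6), round(r, 6))  # 3.0 -2.0 5.0
```

    Because the fit is linear in (D, E, F), it is fast enough to run per frame at 240 Hz, consistent with the on-line tracking described above.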

  19. Comparison of compression efficiency between HEVC/H.265 and VP9 based on subjective assessments

    NASA Astrophysics Data System (ADS)

    Řeřábek, Martin; Ebrahimi, Touradj

    2014-09-01

    The current increasing effort of broadcast providers to transmit UHD (Ultra High Definition) content is likely to increase demand for ultra high definition televisions (UHDTVs). To compress UHDTV content, several alternative encoding mechanisms exist. In addition to internationally recognized standards, open-access proprietary options, such as the VP9 video encoding scheme, have recently appeared and are gaining popularity. One of the main goals of these encoders is to efficiently compress video sequences beyond HDTV resolution for various scenarios, such as broadcasting or internet streaming. In this paper, a broadcast-scenario rate-distortion performance analysis and mutual comparison of one of the latest video coding standards, H.265/HEVC, with the recently released proprietary video coding scheme VP9 is presented. Additionally, one of the currently most popular and widely used encoders, H.264/AVC, has been included in the evaluation to serve as a comparison baseline. The comparison is performed by means of subjective evaluations showing actual differences between encoding algorithms in terms of perceived quality. The results indicate a general dominance of the HEVC-based encoding algorithm in comparison to the other alternatives, with VP9 and AVC showing similar performance.

  20. Novel dynamic caching for hierarchically distributed video-on-demand systems

    NASA Astrophysics Data System (ADS)

    Ogo, Kenta; Matsuda, Chikashi; Nishimura, Kazutoshi

    1998-02-01

    It is difficult to simultaneously serve the millions of video streams that will be needed in the age of 'Mega-Media' networks by using only one high-performance server. To distribute the service load, caching servers should be located near users. However, in previously proposed caching mechanisms, the grade of service depends on whether the data is already cached at a caching server. To make the caching servers transparent to the users, the ability to randomly access the large volume of data stored in the central server should be supported, and the operational functions of the provided service should not be narrowly restricted. We propose a mechanism for constructing a video-stream-caching server that is transparent to the users and that will always support all special playback functions for all available content with a latency of only 1 or 2 seconds. This mechanism uses a variable-sized-quantum-segment caching technique derived from an analysis of the historical usage log data generated by a line-on-demand-type service experiment and based on the basic techniques used by a time-slot-based multiple-stream video-on-demand server.

  1. Video error concealment using block matching and frequency selective extrapolation algorithms

    NASA Astrophysics Data System (ADS)

    P. K., Rajani; Khaparde, Arti

    2017-06-01

    Error concealment (EC) is a technique applied at the decoder side to hide transmission errors. It works by analyzing the spatial or temporal information available in the received video frames. Recovering distorted video is important because video is used in applications such as video telephony, video conferencing, TV, DVD, internet video streaming, and video games. Retransmission-based and resilience-based methods are also used for error removal, but they add delay and redundant data, so error concealment is the preferred option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both are evaluated on video frames with manually introduced errors. The parameters used for objective quality measurement were PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index). The original video frames, along with the error frames, were processed with both error concealment algorithms. According to the simulation results, Frequency Selective Extrapolation shows better quality measures, with 48% higher PSNR and 94% higher SSIM than the Block Matching algorithm.
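
    PSNR, one of the two quality measures used above, is straightforward to compute. A minimal sketch for 8-bit frames follows (SSIM is considerably more involved and is omitted; the example frames are invented for illustration):

```python
import numpy as np

def psnr(reference, degraded, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized frames."""
    mse = np.mean((reference.astype(float) - degraded.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 100, dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 110                   # a single imperfectly concealed pixel
print(round(psnr(ref, noisy), 2))
```

    A perfect reconstruction gives infinite PSNR; the reported 48% PSNR improvement means the concealed frames sit that much closer, in dB, to the originals.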

  2. YouTube™ as a Source of Instructional Videos on Bowel Preparation: a Content Analysis.

    PubMed

    Ajumobi, Adewale B; Malakouti, Mazyar; Bullen, Alexander; Ahaneku, Hycienth; Lunsford, Tisha N

    2016-12-01

    Instructional videos on bowel preparation have been shown to improve bowel preparation scores during colonoscopy. YouTube™ is one of the most frequently visited websites on the internet and contains videos on bowel preparation. In an era where patients are increasingly turning to social media for guidance on their health, the content of these videos merits further investigation. We assessed the content of bowel preparation videos available on YouTube™ to determine the proportion of YouTube™ videos on bowel preparation that are high-content videos and the characteristics of these videos. YouTube™ videos were assessed for the following content: (1) definition of bowel preparation, (2) importance of bowel preparation, (3) instructions on home medications, (4) name of the bowel cleansing agent (BCA), (5) instructions on when to start taking the BCA, (6) instructions on the volume and frequency of BCA intake, (7) diet instructions, (8) instructions on fluid intake, (9) adverse events associated with the BCA, and (10) rectal effluent. Each content parameter was given 1 point, for a total of 10 points. Videos with ≥5 points were considered by our group to be high-content videos; videos with ≤4 points were considered low-content videos. Forty-nine (59%) videos were low-content videos, while 34 (41%) were high-content videos. There was no association between the number of views, number of comments, thumbs up, thumbs down, or engagement score and videos deemed high-content. Multiple regression analysis revealed bowel preparation videos on YouTube™ with length >4 minutes and non-patient authorship to be associated with high-content videos.
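
    The 10-point scoring scheme above maps directly onto a checklist. A small sketch follows, using the paper's ten content parameters; the sample video's coverage is invented for illustration:

```python
# The ten content parameters from the study, one point each.
CONTENT_ITEMS = [
    "definition", "importance", "home_medications", "bca_name",
    "bca_start_time", "bca_volume_frequency", "diet", "fluid_intake",
    "adverse_events", "rectal_effluent",
]

def content_score(covered):
    """One point per checklist item the video covers (maximum 10)."""
    return sum(1 for item in CONTENT_ITEMS if item in covered)

def is_high_content(covered):
    """Videos scoring >= 5 points were classified as high-content."""
    return content_score(covered) >= 5

# Hypothetical video covering six of the ten parameters.
sample_video = {"definition", "importance", "bca_name",
                "bca_start_time", "diet", "fluid_intake"}
print(content_score(sample_video), is_high_content(sample_video))  # 6 True
```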

  3. Video copy protection and detection framework (VPD) for e-learning systems

    NASA Astrophysics Data System (ADS)

    ZandI, Babak; Doustarmoghaddam, Danial; Pour, Mahsa R.

    2013-03-01

    This article reviews and compares the copyright issues related to digital video files, for which copy detection can be categorized as content-based or digital-watermarking-based. We then describe how to protect a digital video by using a special video data hiding method and algorithm. We also discuss how to detect the copyright status of a file. Based on a review of video copy detection technology, combined with our own research results, we put forward a new video protection and copy detection approach addressing plagiarism in e-learning systems by means of video data hiding technology. Finally, we introduce a framework for video protection and detection in e-learning systems (the VPD framework).

  4. An innovative experimental sequence on electromagnetic induction and eddy currents based on video analysis and cheap data acquisition

    NASA Astrophysics Data System (ADS)

    Bonanno, A.; Bozzo, G.; Sapia, P.

    2017-11-01

    In this work, we present a coherent sequence of experiments on electromagnetic (EM) induction and eddy currents, appropriate for university undergraduate students, based on a magnet falling through a drilled aluminum disk. The sequence, leveraging the didactical interplay between the EM and mechanical aspects of the experiments, allows us to exploit the students’ awareness of mechanics to elicit their comprehension of EM phenomena. The proposed experiments feature two kinds of measurements: (i) kinematic measurements (performed by means of high-speed video analysis) give information on the system’s kinematics and, via appropriate numerical data processing, allow us to obtain dynamic information, in particular on energy dissipation; (ii) induced electromotive force (EMF) measurements (using a homemade multi-coil sensor connected to a cheap data acquisition system) allow us to quantitatively determine the inductive effects of the moving magnet on its neighborhood. The comparison between experimental results and the predictions of an appropriate theoretical model (of the dissipative coupling between the moving magnet and the conducting disk) offers many educational hints on relevant topics related to EM induction, such as Maxwell’s displacement current, magnetic field flux variation, and the conceptual link between induced EMF and induced currents. Moreover, the didactical activity gives students the opportunity to be trained in video analysis, data acquisition and numerical data processing.
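
    Extracting dynamic information from tracked positions, as described above, reduces to numerical differentiation. A sketch using central differences on a synthetic free-fall track follows; in the real experiment the (t, y) samples would come from the high-speed video analysis, and drag from the eddy currents would make the acceleration fall below g:

```python
import numpy as np

def kinematics(t, y):
    """Velocity and acceleration from sampled positions via central
    finite differences (np.gradient)."""
    v = np.gradient(y, t)
    a = np.gradient(v, t)
    return v, a

g = 9.81
t = np.linspace(0.0, 1.0, 101)      # hypothetical ~100 fps tracking
y = 0.5 * g * t ** 2                # ideal free fall, no eddy-current drag
v, a = kinematics(t, y)
print(round(v[50], 3), round(a[50], 3))  # mid-trajectory speed, acceleration
```

    Comparing the recovered acceleration with g quantifies the dissipative coupling; multiplying the braking force by the velocity gives the dissipated power the sequence is designed to discuss.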

  5. An innovative experiment on superconductivity, based on video analysis and non-expensive data acquisition

    NASA Astrophysics Data System (ADS)

    Bonanno, A.; Bozzo, G.; Camarca, M.; Sapia, P.

    2015-07-01

    In this paper we present a new experiment on superconductivity, designed for university undergraduate students, based on the high-speed video analysis of a magnet falling through a ceramic superconducting cylinder (Tc = 110 K). The use of an Atwood’s machine allows us to vary the magnet’s speed and acceleration during its interaction with the superconductor. In this way, we highlight the existence of two interaction regimes: for low crossing energy, the magnet is levitated by the superconductor after a transient oscillatory damping; for higher crossing energy, the magnet passes through the superconducting cylinder. The use of a commercial-grade high-speed imaging system, together with video analysis performed using the Tracker software, allows us to attain good precision in space and time measurements. Four sensing coils, mounted inside and outside the superconducting cylinder, allow us to study the magnetic flux variations associated with the magnet’s passage through the superconductor, shedding light on a didactically relevant topic such as the behaviour of magnetic field lines in the presence of a superconductor. The critical discussion of the experimental data allows undergraduate university students to grasp useful insights into the basic phenomenology of superconductivity as well as relevant conceptual topics such as the difference between the Meissner effect and Faraday-like ‘perfect’ induction.

  6. Software-codec-based full motion video conferencing on the PC using visual pattern image sequence coding

    NASA Astrophysics Data System (ADS)

    Barnett, Barry S.; Bovik, Alan C.

    1995-04-01

    This paper presents a real-time full motion video conferencing system based on the Visual Pattern Image Sequence Coding (VPISC) software codec. The prototype system hardware is comprised of two personal computers, two camcorders, two frame grabbers, and an Ethernet connection. The prototype system software has a simple structure. It runs under the Disk Operating System, and includes a user interface, a video I/O interface, an event-driven network interface, and a free-running or frame-synchronous video codec that also acts as the controller for the video and network interfaces. Two video coders have been tested in this system. Simple implementations of Visual Pattern Image Coding and VPISC have both proven to support full motion video conferencing with good visual quality. Future work will concentrate on expanding this prototype to support the motion-compensated version of VPISC, as well as encompassing point-to-point modem I/O and multiple network protocols. The application will be ported to multiple hardware platforms and operating systems. The motivation for developing this prototype system is to demonstrate the practicality of software-based real-time video codecs. Furthermore, software video codecs are not only cheaper, but are more flexible system solutions because they enable different computer platforms to exchange encoded video information without requiring on-board protocol-compatible video codec hardware. Software-based solutions enable true low-cost video conferencing that fits the `open systems' model of interoperability that is so important for building portable hardware and software applications.

  7. Data Management Rubric for Video Data in Organismal Biology.

    PubMed

    Brainerd, Elizabeth L; Blob, Richard W; Hedrick, Tyson L; Creamer, Andrew T; Müller, Ulrike K

    2017-07-01

    Standards-based data management facilitates data preservation, discoverability, and access for effective data reuse within research groups and across communities of researchers. Data sharing requires community consensus on standards for data management, such as storage and formats for digital data preservation, metadata (i.e., contextual data about the data) that should be recorded and stored, and data access. Video imaging is a valuable tool for measuring time-varying phenotypes in organismal biology, with particular application for research in functional morphology, comparative biomechanics, and animal behavior. The raw data are the videos, but videos alone are not sufficient for scientific analysis. Nearly endless videos of animals can be found on YouTube and elsewhere on the web, but these videos have little value for scientific analysis because essential metadata such as true frame rate, spatial calibration, genus and species, weight, age, etc. of organisms, are generally unknown. We have embarked on a project to build community consensus on video data management and metadata standards for organismal biology research. We collected input from colleagues at early stages, organized an open workshop, "Establishing Standards for Video Data Management," at the Society for Integrative and Comparative Biology meeting in January 2017, and then collected two more rounds of input on revised versions of the standards. The result we present here is a rubric consisting of nine standards for video data management, with three levels within each standard: good, better, and best practices. The nine standards are: (1) data storage; (2) video file formats; (3) metadata linkage; (4) video data and metadata access; (5) contact information and acceptable use; (6) camera settings; (7) organism(s); (8) recording conditions; and (9) subject matter/topic. 
The first four standards address data preservation and interoperability for sharing, whereas standards 5-9 establish minimum metadata standards for organismal biology video, and suggest additional metadata that may be useful for some studies. This rubric was developed with substantial input from researchers and students, but still should be viewed as a living document that should be further refined and updated as technology and research practices change. The audience for these standards includes researchers, journals, and granting agencies, and also the developers and curators of databases that may contribute to video data sharing efforts. We offer this project as an example of building community consensus for data management, preservation, and sharing standards, which may be useful for future efforts by the organismal biology research community. © The Author 2017. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology.

  8. Data Management Rubric for Video Data in Organismal Biology

    PubMed Central

    Brainerd, Elizabeth L.; Blob, Richard W.; Hedrick, Tyson L.; Creamer, Andrew T.; Müller, Ulrike K.

    2017-01-01

    Synopsis Standards-based data management facilitates data preservation, discoverability, and access for effective data reuse within research groups and across communities of researchers. Data sharing requires community consensus on standards for data management, such as storage and formats for digital data preservation, metadata (i.e., contextual data about the data) that should be recorded and stored, and data access. Video imaging is a valuable tool for measuring time-varying phenotypes in organismal biology, with particular application for research in functional morphology, comparative biomechanics, and animal behavior. The raw data are the videos, but videos alone are not sufficient for scientific analysis. Nearly endless videos of animals can be found on YouTube and elsewhere on the web, but these videos have little value for scientific analysis because essential metadata such as true frame rate, spatial calibration, genus and species, weight, age, etc. of organisms, are generally unknown. We have embarked on a project to build community consensus on video data management and metadata standards for organismal biology research. We collected input from colleagues at early stages, organized an open workshop, “Establishing Standards for Video Data Management,” at the Society for Integrative and Comparative Biology meeting in January 2017, and then collected two more rounds of input on revised versions of the standards. The result we present here is a rubric consisting of nine standards for video data management, with three levels within each standard: good, better, and best practices. The nine standards are: (1) data storage; (2) video file formats; (3) metadata linkage; (4) video data and metadata access; (5) contact information and acceptable use; (6) camera settings; (7) organism(s); (8) recording conditions; and (9) subject matter/topic. 
The first four standards address data preservation and interoperability for sharing, whereas standards 5–9 establish minimum metadata standards for organismal biology video, and suggest additional metadata that may be useful for some studies. This rubric was developed with substantial input from researchers and students, but still should be viewed as a living document that should be further refined and updated as technology and research practices change. The audience for these standards includes researchers, journals, and granting agencies, and also the developers and curators of databases that may contribute to video data sharing efforts. We offer this project as an example of building community consensus for data management, preservation, and sharing standards, which may be useful for future efforts by the organismal biology research community. PMID:28881939

  9. Video conference quality assessment based on cooperative sensing of video and audio

    NASA Astrophysics Data System (ADS)

    Wang, Junxi; Chen, Jialin; Tian, Xin; Zhou, Cheng; Zhou, Zheng; Ye, Lu

    2015-12-01

    This paper presents a method for video conference quality assessment based on cooperative sensing of video and audio. A proposed video quality evaluation method is used to assess the quality of each video frame: the frame is separated into a noise image and a filtered image by bilateral filtering, which, much like the human visual system, acts as a low-pass filter. Audio frames are evaluated with the PEAQ algorithm, and the two results are integrated to evaluate the overall video conference quality. A video conference database was built to test the performance of the proposed method. The objective results correlate well with MOS scores, indicating that the proposed method is effective in assessing video conference quality.

  10. Health Education and Symptom Flare Management Using a Video-Based m-Health System for Caring Women with IC/BPS.

    PubMed

    Lee, Ming-Huei; Wu, Huei-Ching; Tseng, Chien-Ming; Ko, Tsung-Liang; Weng, Tang-Jun; Chen, Yung-Fu

    2018-06-10

    To assess the effectiveness of a video-based m-health system providing physician-narrated videos for health education and symptom self-management in patients with IC/BPS. An m-health system was designed to provide videos for weekly health education and symptom flare self-management. The O'Leary-Sant index, a VAS scale, and the SF-36 health survey were administered to evaluate disease severity and quality of life (QoL), respectively. A total of 60 IC/BPS patients were recruited and randomly assigned to either a control group (30 patients) or a study group (30 patients) in sequence, depending on the order of their visits to our urological clinic. Patients in both groups received regular treatments, while those in the study group received an additional video-based intervention. Statistical analyses compared outcomes between baseline and post-intervention for both groups, and the outcomes of the video-based intervention were also compared with the text-based intervention of our previous study. After the video-based intervention, patients in the study group showed significant improvement in all disease severity and QoL assessments except the VAS pain scale, whereas no significant change was found in the control group. Moreover, the study group exhibited more significant net improvements than the control group in 7 SF-36 constructs, the exception being mental health. Limitations include the short intervention duration (8 weeks) and the different study periods of the text-based and video-based interventions. Video-based intervention is effective in improving the QoL of IC/BPS patients and outperforms text-based intervention even over a short intervention period. Copyright © 2018. Published by Elsevier Inc.

  11. High-Speed Video Analysis in a Conceptual Physics Class

    NASA Astrophysics Data System (ADS)

    Desbien, Dwain M.

    2011-09-01

    The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software. Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting and also allows real-world situations to be analyzed. Many motions are too fast to be captured easily at the standard video frame rate of 30 frames per second (fps) employed by most video cameras. This paper discusses using a consumer camera that can record high-frame-rate video in a college-level conceptual physics class. In particular, model rockets are used to determine the acceleration during the boost period right at launch and compare it to a simple model of the expected acceleration.
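
    Extracting kinematics from a tracked high-speed video amounts to numerical differentiation of the frame-by-frame positions. A minimal sketch (the synthetic free-fall track and the 240 fps frame rate are illustrative assumptions, not data from the paper):

```python
# Illustrative sketch: velocity and acceleration from frame-by-frame
# positions tracked in high-speed video (synthetic data, hypothetical fps).

def derivatives(y, fps):
    """Central-difference velocity and acceleration from positions y
    sampled at a constant frame rate fps (frames per second)."""
    dt = 1.0 / fps
    v = [(y[i + 1] - y[i - 1]) / (2 * dt) for i in range(1, len(y) - 1)]
    a = [(y[i + 1] - 2 * y[i] + y[i - 1]) / dt**2 for i in range(1, len(y) - 1)]
    return v, a

# Synthetic free-fall track at 240 fps: y = 0.5 * g * t^2
fps, g = 240.0, 9.81
y = [0.5 * g * (i / fps) ** 2 for i in range(10)]
v, a = derivatives(y, fps)
print(a[0])  # -> 9.81 (central differences are exact for quadratic motion)
```

    Real tracks carry positioning noise that second differences amplify, so smoothing before differentiating is usually advisable.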

  12. Surgical gesture classification from video and kinematic data.

    PubMed

    Zappella, Luca; Béjar, Benjamín; Hager, Gregory; Vidal, René

    2013-10-01

    Much of the existing work on automatic classification of gestures and skill in robotic surgery is based on dynamic cues (e.g., time to completion, speed, forces, torque) or kinematic data (e.g., robot trajectories and velocities). While videos could be equally or more discriminative (e.g., videos contain semantic information not present in kinematic data), they are typically not used because of the difficulties associated with automatic video interpretation. In this paper, we propose several methods for automatic surgical gesture classification from video data. We assume that the video of a surgical task (e.g., suturing) has been segmented into video clips corresponding to a single gesture (e.g., grabbing the needle, passing the needle) and propose three methods to classify the gesture of each video clip. In the first one, we model each video clip as the output of a linear dynamical system (LDS) and use metrics in the space of LDSs to classify new video clips. In the second one, we use spatio-temporal features extracted from each video clip to learn a dictionary of spatio-temporal words, and use a bag-of-features (BoF) approach to classify new video clips. In the third one, we use multiple kernel learning (MKL) to combine the LDS and BoF approaches. Since the LDS approach is also applicable to kinematic data, we also use MKL to combine both types of data in order to exploit their complementarity. Our experiments on a typical surgical training setup show that methods based on video data perform equally well, if not better, than state-of-the-art approaches based on kinematic data. In turn, the combination of both kinematic and video data outperforms any other algorithm based on one type of data alone. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. NREM Arousal Parasomnias and Their Distinction from Nocturnal Frontal Lobe Epilepsy: A Video EEG Analysis

    PubMed Central

    Derry, Christopher P.; Harvey, A. Simon; Walker, Matthew C.; Duncan, John S.; Berkovic, Samuel F.

    2009-01-01

    Study Objectives. To describe the semiological features of NREM arousal parasomnias in detail and identify features that can be used to reliably distinguish parasomnias from nocturnal frontal lobe epilepsy (NFLE). Design. Systematic semiological evaluation of parasomnias and NFLE seizures recorded on video-EEG monitoring. Patients. 120 events (57 parasomnias, 63 NFLE seizures) from 44 subjects (14 males). Interventions. The presence or absence of 68 elemental clinical features was determined in parasomnias and NFLE seizures. Qualitative analysis of behavior patterns and ictal EEG was undertaken. Statistical analysis was undertaken using established techniques. Results. Elemental clinical features strongly favoring parasomnias included: interactive behavior, failure to wake after the event, and indistinct offset (all P < 0.001). Cluster analysis confirmed differences in both the frequency and combination of elemental features in parasomnias and NFLE. A diagnostic decision tree generated from these data correctly classified 94% of events. While sleep stage at onset was discriminatory (82% of seizures occurred during stage 1 or 2 sleep, with 100% of parasomnias occurring from stage 3 or 4 sleep), ictal EEG features were less useful. Video analysis of parasomnias identified three principal behavioral patterns: arousal behavior (92% of events); non-agitated motor behavior (72%); distressed emotional behavior (51%). Conclusions. Our results broadly support the concept of confusional arousals, somnambulism and night terrors as prototypical behavior patterns of NREM parasomnias, but as a hierarchical continuum rather than distinct entities. Our observations provide an evidence base to assist in the clinical diagnosis of NREM parasomnias, and their distinction from NFLE seizures, on semiological grounds. Citation: Derry CP; Harvey AS; Walker MC; Duncan JS; Berkovic SF. NREM arousal parasomnias and their distinction from nocturnal frontal lobe epilepsy: a video EEG analysis. 
SLEEP 2009;32(12):1637-1644. PMID:20041600

  14. VideoANT: Extending Online Video Annotation beyond Content Delivery

    ERIC Educational Resources Information Center

    Hosack, Bradford

    2010-01-01

    This paper expands the boundaries of video annotation in education by outlining the need for extended interaction in online video use, identifying the challenges faced by existing video annotation tools, and introducing VideoANT, a tool designed to create text-based annotations integrated within the timeline of a video hosted online. Several…

  15. STS-107 Debris Characterization Using Re-entry Imaging

    NASA Technical Reports Server (NTRS)

    Raiche, George A.

    2009-01-01

    Analysis of amateur video of the early reentry phases of the Columbia accident is discussed. With poor video quality and little theoretical guidance, the analysis team estimated mass and acceleration ranges for the debris shedding events observed in the video. Camera calibration and optical performance issues are also described.

  16. Magnetic Braking: A Video Analysis

    ERIC Educational Resources Information Center

    Molina-Bolivar, J. A.; Abella-Palacios, A. J.

    2012-01-01

    This paper presents a laboratory exercise that introduces students to the use of video analysis software and the Lenz's law demonstration. Digital techniques have proved to be very useful for the understanding of physical concepts. In particular, the availability of affordable digital video offers students the opportunity to actively engage in…

  17. Video Analysis of Muscle Motion

    ERIC Educational Resources Information Center

    Foster, Boyd

    2004-01-01

    In this article, the author discusses how video cameras can help students in physical education and sport science classes successfully learn and present anatomy and kinesiology content at levels. Video analysis of physical activity is an excellent way to expand student knowledge of muscle location and function, planes and axes of motion, and…

  18. Real-time lens distortion correction: speed, accuracy and efficiency

    NASA Astrophysics Data System (ADS)

    Bax, Michael R.; Shahidi, Ramin

    2014-11-01

    Optical lens systems suffer from nonlinear geometric distortion. Optical imaging applications such as image-enhanced endoscopy and image-based bronchoscope tracking require correction of this distortion for accurate localization, tracking, registration, and measurement of image features. Real-time capability is desirable for interactive systems and live video. The use of a texture-mapping graphics accelerator, which is standard hardware on current motherboard chipsets and add-in video graphics cards, to perform distortion correction is proposed. Mesh generation for image tessellation, an error analysis, and performance results are presented. It is shown that distortion correction using commodity graphics hardware is substantially faster than using the main processor and can be performed at video frame rates (faster than 30 frames per second), and that the polar-based method of mesh generation proposed here is more accurate than a conventional grid-based approach. Using graphics hardware to perform distortion correction is not only fast and accurate but also efficient, as it frees the main processor for other tasks, which is an important issue in some real-time applications.
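
    The per-vertex correction that such a system evaluates at mesh vertices (with the graphics hardware interpolating the texture between them) can be sketched with a polynomial radial model; the coefficients k1 and k2 below are hypothetical placeholders, not values from the paper:

```python
# Minimal sketch of a per-vertex radial undistortion mapping.
# The distortion coefficients k1, k2 are hypothetical examples.

def undistort_point(x, y, cx, cy, k1=0.1, k2=0.01):
    """Map a distorted image point to its corrected position using a
    polynomial radial model: r_corrected = r * (1 + k1*r^2 + k2*r^4),
    with (cx, cy) the distortion centre."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + dx * scale, cy + dy * scale

# The image centre is unchanged; off-centre points move radially outward.
print(undistort_point(0.5, 0.5, cx=0.5, cy=0.5))  # -> (0.5, 0.5)
```

    Evaluating this mapping only at tessellation vertices and letting the rasterizer interpolate is what makes the GPU approach fast; the paper's polar mesh layout then controls where the interpolation error lands.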

  19. Video-based teleradiology for intraosseous lesions. A receiver operating characteristic analysis.

    PubMed

    Tyndall, D A; Boyd, K S; Matteson, S R; Dove, S B

    1995-11-01

    Private dental practitioners currently lack immediate access to off-site expert diagnostic consultants for unusual radiographic findings or radiographic quality assurance issues. Teleradiology, a system for transmitting radiographic images, offers a potential solution to this problem. Although much research has been done to evaluate the feasibility and utilization of teleradiology systems in medical imaging, little research on dental applications has been performed. In this investigation, 47 panoramic films, with an equal distribution of images showing intraosseous jaw lesions and no disease, were viewed by a panel of observers using both teleradiology and conventional viewing methods. The teleradiology system consisted of an analog video-based system simulating remote radiographic consultation between a general dentist and a dental imaging specialist; conventional viewing used traditional viewbox methods. Observers were asked to identify the presence or absence of 24 intraosseous lesions and to determine their locations. No statistically significant differences between modalities or observers were identified at the 0.05 level. The results indicate that viewing intraosseous lesions on video-based panoramic images is equivalent to conventional viewbox viewing.

  20. Human visual system-based smoking event detection

    NASA Astrophysics Data System (ADS)

    Odetallah, Amjad D.; Agaian, Sos S.

    2012-06-01

    Human action analysis (e.g. smoking, eating, and phoning) is an important task in application domains such as video surveillance, video retrieval, and human-computer interaction systems. Smoke detection is a crucial task in many video surveillance applications and could greatly raise the level of safety of urban areas, public parks, airplanes, hospitals, schools and other settings. The detection task is challenging since there is no prior knowledge about the object's shape, texture and color; in addition, its visual features change under different lighting and weather conditions. This paper presents a new scheme for detecting human smoking events, or small amounts of smoke, in a sequence of images. In the developed system, motion detection and background subtraction are combined with motion-region saving, skin-based image segmentation, and smoke-based image segmentation to capture potential smoke regions, which are further analyzed to decide on the occurrence of smoking events. Experimental results show the effectiveness of the proposed approach. The developed method is also capable of detecting small smoking events within uncertain actions, across various cigarette sizes, colors, and shapes.
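
    The motion-detection front end of such a pipeline can be sketched as simple background subtraction by frame differencing; the tiny synthetic grayscale frames and the threshold value below are illustrative only, not the paper's method in full:

```python
# Hedged sketch of background subtraction by frame differencing:
# flag pixels that differ from a background model by more than a
# threshold (real systems add filtering, region saving, and the
# skin-/smoke-based segmentation stages described in the abstract).

def moving_mask(background, frame, thresh=20):
    """Per-pixel boolean mask of candidate motion regions."""
    return [[abs(f - b) > thresh for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

bg    = [[10, 10, 10],
         [10, 10, 10]]
frame = [[10, 90, 10],      # one bright "moving" pixel
         [10, 10, 10]]
mask = moving_mask(bg, frame)
print(mask[0])  # -> [False, True, False]
```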

  1. Semantic-based surveillance video retrieval.

    PubMed

    Hu, Weiming; Xie, Dan; Fu, Zhouyu; Zeng, Wenrong; Maybank, Steve

    2007-04-01

    Visual surveillance produces large amounts of video data. Effective indexing and retrieval from surveillance video databases are very important. Although there are many ways to represent the content of video clips in current video retrieval algorithms, there still exists a semantic gap between users and retrieval systems. Visual surveillance systems supply a platform for investigating semantic-based video retrieval. In this paper, a semantic-based video retrieval framework for visual surveillance is proposed. A cluster-based tracking algorithm is developed to acquire motion trajectories. The trajectories are then clustered hierarchically using spatial and temporal information to learn activity models. A hierarchical structure of semantic indexing and retrieval of object activities, where each individual activity automatically inherits all the semantic descriptions of the activity model to which it belongs, is proposed for accessing video clips and individual objects at the semantic level. The proposed retrieval framework supports various queries, including queries by keywords, multiple-object queries, and queries by sketch. For multiple-object queries, succession and simultaneity restrictions, together with depth-first and breadth-first orders, are considered. For sketch-based queries, a method for matching trajectories drawn by users to spatial trajectories is proposed. The effectiveness and efficiency of our framework are tested in a crowded traffic scene.

  2. Power-rate-distortion analysis for wireless video communication under energy constraint

    NASA Astrophysics Data System (ADS)

    He, Zhihai; Liang, Yongfang; Ahmad, Ishfaq

    2004-01-01

    In video coding and streaming over wireless communication networks, power-demanding video encoding runs on mobile devices with a limited energy supply. To analyze, control, and optimize the rate-distortion (R-D) behavior of a wireless video communication system under an energy constraint, we need to develop a power-rate-distortion (P-R-D) analysis framework, which extends traditional R-D analysis by including another dimension: power consumption. Specifically, in this paper, we analyze the encoding mechanism of typical video encoding systems and develop a parametric video encoding architecture that is fully scalable in computational complexity. Using dynamic voltage scaling (DVS), a hardware technology recently developed in CMOS circuit design, this complexity scalability can be translated into power-consumption scalability of the video encoder. We investigate the rate-distortion behaviors of the complexity control parameters and establish an analytic framework to explore the P-R-D behavior of the video encoding system. Both theoretically and experimentally, we show that, using this P-R-D model, the encoding system is able to automatically adjust its complexity control parameters to match the available energy supply of the mobile device while maximizing picture quality. The P-R-D model provides a theoretical guideline for system design and performance optimization in wireless video communication under energy constraints, especially over wireless video sensor networks.
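
    One simple parametric surface in the spirit of a P-R-D model can make the trade-off concrete; the exponential form and the constants below are illustrative assumptions, not the authors' fitted model:

```python
# Illustrative P-R-D surface (assumed form, not the paper's model):
# distortion falls as either the bit rate R or the normalized
# encoding power P increases.
import math

def distortion(R, P, sigma2=1.0, lam=4.0, gamma=2.0 / 3.0):
    """D(R, P) = sigma^2 * exp(-lam * R * P^gamma)."""
    return sigma2 * math.exp(-lam * R * P ** gamma)

def best_power(R, power_budget, grid=100):
    """Pick the power level in (0, budget] that minimizes distortion."""
    candidates = [power_budget * i / grid for i in range(1, grid + 1)]
    return min(candidates, key=lambda P: distortion(R, P))

# With a model monotone in P, spending the full budget minimizes distortion;
# the paper's point is that the model lets the encoder make this choice
# automatically as the available energy changes.
print(best_power(R=1.0, power_budget=0.5))  # -> 0.5
```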

  3. A system for endobronchial video analysis

    NASA Astrophysics Data System (ADS)

    Byrnes, Patrick D.; Higgins, William E.

    2017-03-01

    Image-guided bronchoscopy is a critical component in the treatment of lung cancer and other pulmonary disorders. During bronchoscopy, a high-resolution endobronchial video stream facilitates guidance through the lungs and allows for visual inspection of a patient's airway mucosal surfaces. Despite the detailed information it contains, little effort has been made to incorporate recorded video into the clinical workflow. Follow-up procedures often required in cancer assessment or asthma treatment could significantly benefit from effectively parsed and summarized video. Tracking diagnostic regions of interest (ROIs) could potentially better equip physicians to detect early airway-wall cancer or improve asthma treatments, such as bronchial thermoplasty. To address this need, we have developed a system for the postoperative analysis of recorded endobronchial video. The system first parses an input video stream into endoscopic shots, derives motion information, and selects salient representative key frames. Next, a semi-automatic method for CT-video registration creates data linkages between a CT-derived airway-tree model and the input video. These data linkages then enable the construction of a CT-video chest model comprised of a bronchoscopy path history (BPH) - defining all airway locations visited during a procedure - and texture-mapping information for rendering registered video frames onto the airway-tree model. A suite of analysis tools is included to visualize and manipulate the extracted data. Video browsing and retrieval are facilitated through a video table of contents (TOC) and a search query interface. The system provides a variety of operational modes and additional functionality, including the ability to define regions of interest. We demonstrate the potential of our system using two human case study examples.

  4. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.

    PubMed

    Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki

    2017-12-09

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
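
    The final reconstruction step described above inverts the standard fog model I = J*t + A*(1 - t) once the transmission map t and atmospheric light A are known. A per-pixel sketch with synthetic values (the flooring of t is a common safeguard in defogging work, not necessarily the authors' exact choice):

```python
# Hedged sketch of fog-model inversion for a single pixel:
# observed intensity I, transmission t, atmospheric light A.

def defog_pixel(I, t, A, t_min=0.1):
    """Recover scene radiance J = (I - A) / t + A; t is floored at
    t_min to avoid amplifying noise where transmission is near zero."""
    t = max(t, t_min)
    return (I - A) / t + A

# Round-trip check: fog a known radiance, then recover it.
J_true, t, A = 0.6, 0.5, 0.9
I = J_true * t + A * (1 - t)
print(round(defog_pixel(I, t, A), 6))  # -> 0.6
```

    The stereo contribution of the paper lies in how t is estimated (from the disparity map derived via optical flow), not in this inversion, which is shared with single-image dehazing methods.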

  5. Analysis of Video-Based Microscopic Particle Trajectories Using Kalman Filtering

    PubMed Central

    Wu, Pei-Hsun; Agarwal, Ashutosh; Hess, Henry; Khargonekar, Pramod P.; Tseng, Yiider

    2010-01-01

    Abstract The fidelity of the trajectories obtained from video-based particle tracking determines the success of a variety of biophysical techniques, including in situ single cell particle tracking and in vitro motility assays. However, the image acquisition process is complicated by system noise, which causes positioning error in the trajectories derived from image analysis. Here, we explore the possibility of reducing the positioning error by the application of a Kalman filter, a powerful algorithm to estimate the state of a linear dynamic system from noisy measurements. We show that the optimal Kalman filter parameters can be determined in an appropriate experimental setting, and that the Kalman filter can markedly reduce the positioning error while retaining the intrinsic fluctuations of the dynamic process. We believe the Kalman filter can potentially serve as a powerful tool to infer a trajectory of ultra-high fidelity from noisy images, revealing the details of dynamic cellular processes. PMID:20550894
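
    A minimal 1-D sketch of the filtering idea (the random-walk state model and the noise variances below are illustrative, not the optimal parameters the authors determine experimentally):

```python
# Hedged sketch: 1-D Kalman filter smoothing noisy particle positions.
# State model x_k = x_{k-1} + w (process variance q);
# measurement  z_k = x_k + v      (measurement variance r).

def kalman_smooth(measurements, q=1e-4, r=0.05**2):
    """Return filtered position estimates for a noisy 1-D track."""
    x, p = measurements[0], 1.0       # initial state estimate and variance
    estimates = []
    for z in measurements:
        p = p + q                     # predict: variance grows
        k = p / (p + r)               # Kalman gain
        x = x + k * (z - x)           # update with measurement residual
        p = (1.0 - k) * p             # posterior variance shrinks
        estimates.append(x)
    return estimates

# A constant true position observed with noise is pulled toward truth.
noisy = [1.02, 0.97, 1.05, 0.99, 1.01, 0.96, 1.04]
print(kalman_smooth(noisy)[-1])
```

    The trade-off the abstract highlights shows up in q and r: too small a q over-smooths genuine dynamics, which is why the paper stresses determining the filter parameters in a controlled experimental setting.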

  6. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor

    PubMed Central

    Park, Jinho; Park, Hasil

    2017-01-01

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system. PMID:29232826

  7. Minimally invasive surgical video analysis: a powerful tool for surgical training and navigation.

    PubMed

    Sánchez-González, P; Oropesa, I; Gómez, E J

    2013-01-01

    Analysis of minimally invasive surgical videos is a powerful tool to drive new solutions for achieving reproducible training programs, objective and transparent assessment systems and navigation tools to assist surgeons and improve patient safety. This paper presents how video analysis contributes to the development of new cognitive and motor training and assessment programs as well as new paradigms for image-guided surgery.

  8. Knowledge of Algebra for Teaching: A Framework of Knowledge and Practices

    ERIC Educational Resources Information Center

    McCrory, Raven; Floden, Robert; Ferrini-Mundy, Joan; Reckase, Mark D.; Senk, Sharon L.

    2012-01-01

    Defining what teachers need to know to teach algebra successfully is important for informing teacher preparation and professional development efforts. Based on prior research, analysis of video, interviews with teachers, and analysis of textbooks, we define categories of knowledge and practices of teaching for understanding and assessing teachers'…

  9. Bringing in the Bard: Shakespearean Plays as Context for Instrumental Analysis Projects

    ERIC Educational Resources Information Center

    Kloepper, Kathryn D.

    2015-01-01

    Scenes from the works of William Shakespeare were incorporated into individual and group projects for an upper-level chemistry class, instrumental analysis. Students read excerpts from different plays and then viewed a corresponding video clip from a stage or movie production. Guided-research assignments were developed based on these scenes. These…

  10. The impact of complete denture making instructional videos on self-directed learning of clinical skills.

    PubMed

    Kon, Haruka; Botelho, Michael George; Bridges, Susan; Leung, Katherine Chiu Man

    2015-04-01

The aim of this research was to evaluate the effectiveness of a clinical instructional video with a structured worksheet for independent self-study in a complete denture program. Forty-seven multilingual dental students completed a task by watching a subtitled instructional video on clinical complete denture procedures. After completion, students evaluated their learning experience, and 11 students participated in focus group interviews to provide further insight. A mixed-methods approach to data collection and analysis provided descriptive statistical results, and a grounded theory approach to coding identified key concepts and categories from the qualitative data. Over 70% of students had favorable opinions of the learning experience and indicated that the speed and length of the video were appropriate. Highly positive and conflicting negative comments regarding the use of subtitles showed preferences both for subtitles over audio and vice versa. The video resource was considered valuable because its replay and review functions allowed better visualization of the procedures, making it a good recap tool for the clinical demonstration and a better revision aid than textbooks. Students believed that being able to view the videos at will would supplement their self-study. Despite the positive response, the videos were not considered a replacement for live clinical demonstrations. While students preferred live demonstrations over the clinical videos, they recognized them as supplemental learning material for self-study because of their ease of access and their usefulness for revision and pre-clinical preparation. Copyright © 2015 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.

  11. The Use of Video Cases in a Multimedia Learning Environment for Facilitating High School Students' Inquiry into a Problem from Varying Perspectives

    NASA Astrophysics Data System (ADS)

    Zydney, Janet Mannheimer; Grincewicz, Amy

    2011-12-01

    This study investigated the connection between the use of video cases within a multimedia learning environment and students' inquiry into a socio-scientific problem. The software program was designed based on principles from the Cognitive Flexibility Theory (CFT) and incorporated video cases of experts with differing perspectives. Seventy-nine 10th-grade students in an urban high school participated in this study. After watching the expert videos, students generated investigative questions and reflected on how their ideas changed over time. This study found a significant correlation between the time students spent watching the expert videos and their ability to consider the problem's perspectives as well as their ability to integrate these perspectives within their questions. Moreover, problem-solving ability and time watching the videos were detected as possible influential predictors of students' consideration of the problem's perspectives within their questions. Although students watched all video cases in equivalent ways, one of the video cases, which incorporated multiple perspectives as opposed to just presenting one perspective, appeared most influential in helping students integrate the various perspectives into their own thinking. A qualitative analysis of students' reflections indicated that many students appreciated the complexity, authenticity, and ethical dimensions of the problem. It also revealed that while the majority of students thought critically about the problem, some students still had naïve or simplistic ways of thinking. This study provided some preliminary evidence that offering students the opportunity to watch videos of different perspectives may influence them to think in alternative ways about a complex problem.

  12. Automated UAV-based video exploitation using service oriented architecture framework

    NASA Astrophysics Data System (ADS)

    Se, Stephen; Nadeau, Christian; Wood, Scott

    2011-05-01

    Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for troop protection, situational awareness, mission planning, damage assessment, and others. Unmanned Aerial Vehicles (UAVs) gather huge amounts of video data but it is extremely labour-intensive for operators to analyze hours and hours of received data. At MDA, we have developed a suite of tools that can process the UAV video data automatically, including mosaicking, change detection and 3D reconstruction, which have been integrated within a standard GIS framework. In addition, the mosaicking and 3D reconstruction tools have also been integrated in a Service Oriented Architecture (SOA) framework. The Visualization and Exploitation Workstation (VIEW) integrates 2D and 3D visualization, processing, and analysis capabilities developed for UAV video exploitation. Visualization capabilities are supported through a thick-client Graphical User Interface (GUI), which allows visualization of 2D imagery, video, and 3D models. The GUI interacts with the VIEW server, which provides video mosaicking and 3D reconstruction exploitation services through the SOA framework. The SOA framework allows multiple users to perform video exploitation by running a GUI client on the operator's computer and invoking the video exploitation functionalities residing on the server. This allows the exploitation services to be upgraded easily and allows the intensive video processing to run on powerful workstations. MDA provides UAV services to the Canadian and Australian forces in Afghanistan with the Heron, a Medium Altitude Long Endurance (MALE) UAV system. On-going flight operations service provides important intelligence, surveillance, and reconnaissance information to commanders and front-line soldiers.

  13. Automated UAV-based mapping for airborne reconnaissance and video exploitation

    NASA Astrophysics Data System (ADS)

    Se, Stephen; Firoozfam, Pezhman; Goldstein, Norman; Wu, Linda; Dutkiewicz, Melanie; Pace, Paul; Naud, J. L. Pierre

    2009-05-01

Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for force protection, situational awareness, mission planning, damage assessment and others. UAVs gather huge amounts of video data, but it is extremely labour-intensive for operators to analyse hours and hours of received data. At MDA, we have developed a suite of tools towards automated video exploitation including calibration, visualization, change detection and 3D reconstruction. Ongoing work aims to improve the robustness of these tools and automate the process as much as possible. Our calibration tool extracts and matches tie-points in the video frames incrementally to recover the camera calibration and poses, which are then refined by bundle adjustment. Our visualization tool stabilizes the video, expands its field-of-view and creates a geo-referenced mosaic from the video frames. It is important to identify anomalies in a scene, which may include detecting improvised explosive devices (IEDs), but comparing video clips for differences manually is tedious and difficult. Our change detection tool allows the user to load two video clips taken from two passes at different times and flags any changes between them. 3D models are useful for situational awareness, as it is easier to understand a scene by visualizing it in 3D. Our 3D reconstruction tool creates calibrated photo-realistic 3D models from video clips taken from different viewpoints, using both semi-automated and automated approaches. The resulting 3D models also allow distance measurements and line-of-sight analysis.

  14. A method of intentional movement estimation of oblique small-UAV videos stabilized based on homography model

    NASA Astrophysics Data System (ADS)

    Guo, Shiyi; Mai, Ying; Zhao, Hongying; Gao, Pengqi

    2013-05-01

The airborne video streams of small UAVs are commonly plagued by distracting jitter and shaking, disorienting rotations, noisy and distorted images, and other unwanted motion. These problems collectively make it very difficult for observers to obtain useful information from the video. Because of the small payload of small UAVs, improving image quality by means of electronic image stabilization is a priority. When a small UAV makes a turn, however, its flight characteristics make the video prone to becoming oblique, which poses considerable difficulties for electronic image stabilization. The homography model performs well for oblique image motion estimation but makes intentional motion estimation challenging. In this paper, we therefore focus on stabilizing video while small UAVs bank and turn, assuming the UAV flies along an arc of fixed turning radius. After a series of experimental analyses of the flight characteristics and turning paths of small UAVs, we present a new method for estimating the intentional motion, in which the path of the frame center is used to fit the video's motion track. Meanwhile, dynamic mosaicking of the image sequence compensates for the limited field of view. Finally, the proposed algorithm was applied to and validated on actual airborne videos. The results show that the proposed method effectively stabilizes oblique video from small UAVs.
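The core idea above is to split the observed frame-center path into intentional motion (the smooth turning trajectory) and residual jitter to be compensated. The paper fits the path to an arc of fixed radius; as a simpler, hedged stand-in for that step, here is a moving-average separation (function and parameter names are ours):

```python
def separate_motion(centers, window=5):
    """Split a 1D frame-center path into intentional motion (moving
    average over `window` frames) and residual jitter.  A schematic
    substitute for the arc-fitting step described in the paper."""
    half = window // 2
    intentional, jitter = [], []
    for i, c in enumerate(centers):
        seg = centers[max(0, i - half):i + half + 1]
        avg = sum(seg) / len(seg)
        intentional.append(avg)   # smooth, deliberate camera motion
        jitter.append(c - avg)    # high-frequency shake to remove
    return intentional, jitter
```

Stabilization then warps each frame by the negated jitter so only the intentional motion remains.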

  15. Activity recognition using Video Event Segmentation with Text (VEST)

    NASA Astrophysics Data System (ADS)

    Holloway, Hillary; Jones, Eric K.; Kaluzniacki, Andrew; Blasch, Erik; Tierno, Jorge

    2014-06-01

    Multi-Intelligence (multi-INT) data includes video, text, and signals that require analysis by operators. Analysis methods include information fusion approaches such as filtering, correlation, and association. In this paper, we discuss the Video Event Segmentation with Text (VEST) method, which provides event boundaries of an activity so that related messages and video clips can be compiled for future use. VEST infers meaningful activities by clustering multiple streams of time-sequenced multi-INT intelligence data and derived fusion products. We discuss exemplar results that segment raw full-motion video (FMV) data by using extracted commentary message timestamps, FMV metadata, and user-defined queries.

  16. Effect of Stereoscopic Anaglyphic 3-Dimensional Video Didactics on Learning Neuroanatomy.

    PubMed

    Goodarzi, Amir; Monti, Sara; Lee, Darrin; Girgis, Fady

    2017-11-01

The teaching of neuroanatomy in medical education has historically been based on didactic instruction, cadaveric dissections, and intraoperative experience for students. Multiple novel 3-dimensional (3D) modalities have recently emerged. Among these, stereoscopic anaglyphic video is easily accessible and affordable; however, its effects have not yet been formally investigated. This study aimed to investigate whether 3D stereoscopic anaglyphic video instruction in neuroanatomy could improve learning for content-naive students, as compared with 2-dimensional (2D) video instruction. A single-site controlled prospective case control study was conducted at the School of Education. Content knowledge was assessed at baseline, followed by the presentation of an instructional neuroanatomy video. Participants viewed the video in either 2D or 3D format and then completed a written test of skull base neuroanatomy. Pretest and post-test performances were analyzed with independent Student's t-tests and analysis of covariance. Our study was completed by 249 subjects. At baseline, the 2D (n = 124, F = 97) and 3D groups (n = 125, F = 96) were similar, although the 3D group was older by 1.7 years (P = 0.0355) and the curricula of participating classes differed (P < 0.0001). Average scores for the 3D group were higher for both pretest (2D, M = 19.9%, standard deviation [SD] = 12.5% vs. 3D, M = 23.9%, SD = 14.9%, P = 0.0234) and post-test performances (2D, M = 68.5%, SD = 18.6% vs. 3D, M = 77.3%, SD = 18.8%, P = 0.003), but the magnitude of improvement across groups did not reach statistical significance (2D, M = 48.7%, SD = 21.3%, vs. 3D, M = 53.5%, SD = 22.7%, P = 0.0855). Incorporation of 3D video instruction into curricula without careful integration is insufficient to promote learning over 2D video. Published by Elsevier Inc.

  17. Subjective evaluation of H.265/HEVC based dynamic adaptive video streaming over HTTP (HEVC-DASH)

    NASA Astrophysics Data System (ADS)

    Irondi, Iheanyi; Wang, Qi; Grecos, Christos

    2015-02-01

The Dynamic Adaptive Streaming over HTTP (DASH) standard is becoming increasingly popular for real-time adaptive HTTP streaming of internet video in response to unstable network conditions. Integration of DASH streaming techniques with the new H.265/HEVC video coding standard is a promising area of research. The performance of HEVC-DASH systems has been previously evaluated by a few researchers using objective metrics; however, subjective evaluation would provide a better measure of the user's Quality of Experience (QoE) and of the overall performance of the system. This paper presents a subjective evaluation of an HEVC-DASH system implemented in a hardware testbed. Previous studies in this area have focused on the current H.264/AVC (Advanced Video Coding) or H.264/SVC (Scalable Video Coding) codecs, and moreover, there has been no established standard test procedure for the subjective evaluation of DASH adaptive streaming. In this paper, we define a test plan for HEVC-DASH with a carefully justified data set employing longer video sequences, sufficient to demonstrate the bitrate switching operations in response to various network condition patterns. We evaluate the end user's real-time QoE online by investigating the perceived impact of delay, different packet loss rates, fluctuating bandwidth, and different DASH video stream segment sizes on a video streaming session, using different video sequences. The Mean Opinion Score (MOS) results give an insight into the performance of the system and the expectations of the users. The results from this study show the impact of different network impairments and different video segments on users' QoE, and further analysis and study may help in optimizing system performance.

  18. An Economic Evaluation of a Video- and Text-Based Computer-Tailored Intervention for Smoking Cessation: A Cost-Effectiveness and Cost-Utility Analysis of a Randomized Controlled Trial

    PubMed Central

    Stanczyk, Nicola E.; Smit, Eline S.; Schulz, Daniela N.; de Vries, Hein; Bolman, Catherine; Muris, Jean W. M.; Evers, Silvia M. A. A.

    2014-01-01

    Background Although evidence exists for the effectiveness of web-based smoking cessation interventions, information about the cost-effectiveness of these interventions is limited. Objective The study investigated the cost-effectiveness and cost-utility of two web-based computer-tailored (CT) smoking cessation interventions (video- vs. text-based CT) compared to a control condition that received general text-based advice. Methods In a randomized controlled trial, respondents were allocated to the video-based condition (N = 670), the text-based condition (N = 708) or the control condition (N = 721). Societal costs, smoking status, and quality-adjusted life years (QALYs; EQ-5D-3L) were assessed at baseline and at six- and twelve-month follow-up. The incremental costs per abstinent respondent and per QALY gained were calculated. To account for uncertainty, bootstrapping techniques and sensitivity analyses were carried out. Results No significant differences were found across the three conditions regarding demographics, baseline values of outcomes, and societal costs over the three months prior to baseline. Analyses using prolonged abstinence as the outcome measure indicated that from a willingness to pay of €1,500, the video-based intervention was likely to be the most cost-effective treatment, whereas from a willingness to pay of €50,400, the text-based intervention was likely to be the most cost-effective. With regard to cost-utilities, when quality of life was used as the outcome measure, the control condition had the highest probability of being the most preferable treatment. Sensitivity analyses yielded comparable results. Conclusion The video-based CT smoking cessation intervention was the most cost-effective treatment for smoking abstinence after twelve months, varying the willingness to pay per abstinent respondent from €0 up to €80,000. With regard to cost-utility, the control condition seemed to be the most preferable treatment. More time will probably be required to assess changes in quality of life. Future studies with longer follow-up periods are needed to investigate whether cost-utility results regarding quality of life may change in the long run. Trial Registration Nederlands Trial Register NTR3102 PMID:25310007
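The willingness-to-pay thresholds above come from a standard incremental cost-effectiveness comparison. A minimal sketch of the two quantities involved follows; the numbers in the test are hypothetical, not the trial's data.

```python
def icer(cost_new, cost_ref, effect_new, effect_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of effect (e.g. per additional abstinent respondent or per QALY)."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

def net_benefit(wtp, cost, effect):
    """Net monetary benefit at a given willingness to pay (wtp) per unit
    of effect; the treatment with the highest net benefit is preferred."""
    return wtp * effect - cost
```

Varying `wtp` and re-ranking the conditions by net benefit is what produces statements like "from a willingness to pay of €1,500, the video-based intervention was likely to be the most cost-effective."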

  20. Video pulse rate variability analysis in stationary and motion conditions.

    PubMed

    Melchor Rodríguez, Angel; Ramos-Castro, J

    2018-01-29

    In the last few years, some studies have measured heart rate (HR) or heart rate variability (HRV) parameters using a video camera. This technique focuses on measuring the small changes in skin colour caused by blood perfusion. To date, most of these works have obtained HRV parameters in stationary conditions, and there are practically no studies that obtain these parameters in motion scenarios with an in-depth statistical analysis. In this study, a video pulse rate variability (PRV) analysis is conducted by measuring the pulse-to-pulse (PP) intervals in stationary and motion conditions. First, given the importance of the sampling rate in a PRV analysis and the low frame rate of commercial cameras, we analysed two camera models to evaluate their performance in the measurements. We then propose a selective tracking method using the Viola-Jones and KLT algorithms, with the aim of carrying out a robust video PRV analysis in stationary and motion conditions. Data and results of the proposed method are contrasted with those reported in the state of the art. The webcam achieved better results in the performance analysis of the video cameras. In stationary conditions, high correlation values were obtained in PRV parameters, with results above 0.9. The PP time series achieved an RMSE (mean ± standard deviation) of 19.45 ± 5.52 ms (1.70 ± 0.75 bpm). In the motion analysis, most of the PRV parameters also achieved good correlation results, though with lower values than in stationary conditions. The PP time series presented an RMSE of 21.56 ± 6.41 ms (1.79 ± 0.63 bpm). The statistical analysis showed good agreement between the reference system and the proposed method. In stationary conditions, the results of PRV parameters were improved by our method in comparison with data reported in related works. An overall comparative analysis of PRV parameters in motion conditions was more limited owing to the scarcity of comparable studies or to insufficient data analysis in those available. Based on the results, the proposed method could provide a low-cost, contactless and reliable alternative for measuring HR or PRV parameters in non-clinical environments.
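PRV parameters of the kind reported above are computed from the pulse-to-pulse interval series. As a hedged sketch, here are two standard time-domain measures; the interval values in the test are illustrative, not data from the study.

```python
from math import sqrt

def sdnn(pp_ms):
    """Standard deviation of the pulse-to-pulse intervals (in ms):
    a measure of overall variability."""
    mean = sum(pp_ms) / len(pp_ms)
    return sqrt(sum((x - mean) ** 2 for x in pp_ms) / len(pp_ms))

def rmssd(pp_ms):
    """Root mean square of successive interval differences (in ms):
    a measure of short-term, beat-to-beat variability."""
    diffs = [b - a for a, b in zip(pp_ms, pp_ms[1:])]
    return sqrt(sum(d * d for d in diffs) / len(diffs))
```

The abstract's point about sampling rate matters here: at a low camera frame rate, each PP interval is quantized coarsely, which inflates the error of exactly these statistics.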

  1. Video Games: A Human Factors Guide to Visual Display Design and Instructional System Design

    DTIC Science & Technology

    1984-04-01

    Electronic video games have many of the same technological and psychological characteristics that are found in military computer-based systems. For...both of which employ video games as experimental stimuli, are presented here. The first research program seeks to identify and exploit the...characteristics of video games in the design of game-based training devices. The second program is designed to explore the effects of electronic video display

  2. A Macintosh-Based Scientific Images Video Analysis System

    NASA Technical Reports Server (NTRS)

    Groleau, Nicolas; Friedland, Peter (Technical Monitor)

    1994-01-01

    A set of experiments was designed at MIT's Man-Vehicle Laboratory in order to evaluate the effects of zero gravity on the human orientation system. During many of these experiments, the movements of the eyes are recorded on high quality video cassettes. The images must be analyzed off-line to calculate the position of the eyes at every moment in time. To this aim, I have implemented a simple inexpensive computerized system which measures the angle of rotation of the eye from digitized video images. The system is implemented on a desktop Macintosh computer, processes one play-back frame per second and exhibits adequate levels of accuracy and precision. The system uses LabVIEW, a digital output board, and a video input board to control a VCR, digitize video images, analyze them, and provide a user friendly interface for the various phases of the process. The system uses the Concept Vi LabVIEW library (Graftek's Image, Meudon la Foret, France) for image grabbing and displaying as well as translation to and from LabVIEW arrays. Graftek's software layer drives an Image Grabber board from Neotech (Eastleigh, United Kingdom). A Colour Adapter box from Neotech provides adequate video signal synchronization. The system also requires a LabVIEW driven digital output board (MacADIOS II from GW Instruments, Cambridge, MA) controlling a slightly modified VCR remote control used mainly to advance the video tape frame by frame.

  3. Video-Based Big Data Analytics in Cyberlearning

    ERIC Educational Resources Information Center

    Wang, Shuangbao; Kelly, William

    2017-01-01

    In this paper, we present a novel system, inVideo, for video data analytics, and its use in transforming linear videos into interactive learning objects. InVideo is able to analyze video content automatically without the need for initial viewing by a human. Using a highly efficient video indexing engine we developed, the system is able to analyze…

  4. Evaluation of architectures for an ASP MPEG-4 decoder using a system-level design methodology

    NASA Astrophysics Data System (ADS)

    Garcia, Luz; Reyes, Victor; Barreto, Dacil; Marrero, Gustavo; Bautista, Tomas; Nunez, Antonio

    2005-06-01

    Trends in multimedia consumer electronics aim to deliver digital video and audio to users through low-cost mobile devices connected to data broadcasting networks with limited bandwidth. An emergent broadcasting network is the digital audio broadcasting network (DAB), which provides CD-quality audio transmission together with robustness and efficiency techniques that allow good-quality reception in motion conditions. This paper focuses on the system-level evaluation of different architectural options to allow low-bandwidth digital video reception over DAB, based on video compression techniques. Profiling and design space exploration techniques are applied to the ASP MPEG-4 decoder in order to find the best HW/SW partition given the application and platform constraints. An innovative SystemC-based system-level design tool, called CASSE, is used for modelling, exploration and evaluation of different ASP MPEG-4 decoder HW/SW partitions. System-level trade-offs and quantitative data derived from this analysis are also presented in this work.

  5. Real-time action recognition using a multilayer descriptor with variable size

    NASA Astrophysics Data System (ADS)

    Alcantara, Marlon F.; Moreira, Thierry P.; Pedrini, Helio

    2016-01-01

    Video analysis technology has become less expensive and more powerful in terms of storage resources and resolution capacity, promoting progress in a wide range of applications. Video-based human action detection has been used for several tasks in surveillance environments, such as forensic investigation, patient monitoring, medical training, accident prevention, and traffic monitoring, among others. We present a method for action identification based on adaptive training of a multilayer descriptor applied to a single classifier. Cumulative motion shapes (CMSs) are extracted according to the number of frames present in the video. Each CMS is employed as a self-sufficient layer in the training stage but belongs to the same descriptor. A robust classification is achieved through individual responses of classifiers for each layer, and the dominant result is used as a final outcome. Experiments are conducted on five public datasets (Weizmann, KTH, MuHAVi, IXMAS, and URADL) to demonstrate the effectiveness of the method in terms of accuracy in real time.
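The combination step above, in which each descriptor layer's classifier votes and the dominant result becomes the final outcome, can be sketched as a simple majority over the per-layer responses. This is a schematic reading of the "dominant result" step, not the authors' exact implementation.

```python
from collections import Counter

def dominant_action(layer_responses):
    """Return the action label predicted by the most layers.
    Ties resolve to the label that first reached the top count."""
    return Counter(layer_responses).most_common(1)[0][0]
```

Because each cumulative motion shape layer classifies independently, a few noisy layers are outvoted by the majority, which is what makes the combined decision robust.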

  6. Uniform Local Binary Pattern Based Texture-Edge Feature for 3D Human Behavior Recognition.

    PubMed

    Ming, Yue; Wang, Guangchao; Fan, Chunxiao

    2015-01-01

    With the rapid development of 3D somatosensory technology, human behavior recognition has become an important research field, and human behavior feature analysis has evolved from traditional 2D features to 3D features. To improve the performance of human activity recognition, a human behavior recognition method is proposed that is based on hybrid texture-edge local pattern coding for feature extraction and on the integration of information from RGB and depth videos. The paper mainly focuses on background subtraction in RGB and depth video sequences of behaviors, extraction and integration of history images of the behavior outlines, feature extraction, and classification. The new method achieves rapid and efficient recognition of behavior videos. A large number of experiments show that the proposed method offers faster speed and a higher recognition rate, and that it is robust to different environmental colors, lighting, and other factors. Meanwhile, the mixed texture-edge uniform local binary pattern feature can be used in most 3D behavior recognition settings.
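The uniform local binary pattern underlying the texture-edge feature thresholds each pixel's eight circular neighbors against the center and keeps only "uniform" codes, those with at most two 0/1 transitions around the circle. A minimal sketch (pixel values in the test are made up):

```python
def lbp_code(center, neighbors):
    """8-bit LBP code: bit i is set when neighbor i >= the center pixel."""
    return sum((1 << i) for i, n in enumerate(neighbors) if n >= center)

def is_uniform(code):
    """Uniform patterns have at most two 0/1 transitions when the 8-bit
    code is traversed circularly; they capture edges, spots and flats."""
    bits = [(code >> i) & 1 for i in range(8)]
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2
```

In practice, the 58 uniform codes each get their own histogram bin and all non-uniform codes share one bin, which keeps the texture descriptor compact.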

  7. Having mentors and campus social networks moderates the impact of worries and video gaming on depressive symptoms: a moderated mediation analysis

    PubMed Central

    2014-01-01

    Background Easy access to the internet has spawned a wealth of research investigating the effects of its use on depression. However, one limitation of many previous studies is that they disregard the interactive mechanisms of risk and protective factors. The aim of the present study was to investigate a resilience model of the relationships between worry, daily internet video game playing, daily sleep duration, mentors, social networks and depression, using a moderated mediation analysis. Methods 6068 Korean undergraduate and graduate students participated in this study. The participants completed a web-based mental health screening questionnaire including the Beck Depression Inventory (BDI) and information about number of worries, number of mentors, number of campus social networks, daily sleep duration, daily amount of internet video game playing and daily amount of internet searching on a computer or smartphone. A moderated mediation analysis was carried out using the PROCESS macro, which allowed the inclusion of mediators and moderator in the same model. Results The results showed that the daily amount of internet video game playing and daily sleep duration partially mediated the association between the number of worries and the severity of depression. In addition, the mediating effect of the daily amount of internet video game playing was moderated by both the number of mentors and the number of campus social networks. Conclusions The current findings indicate that the negative impact of worry on depression through internet video game playing can be buffered when students seek out a number of mentors and campus social networks. Interventions should therefore target individuals who have a higher number of worries but seek only a few mentors or campus social networks. Social support via campus mentorship and social networks ameliorates the severity of depression in university students. PMID:24884864

  8. Use of Video Analysis System for Working Posture Evaluations

    NASA Technical Reports Server (NTRS)

    McKay, Timothy D.; Whitmore, Mihriban

    1994-01-01

    In a work environment, it is important to identify and quantify the relationship among work activities, working posture, and workplace design. Working posture may impact the physical comfort and well-being of individuals, as well as performance. The Posture Video Analysis Tool (PVAT) is an interactive menu- and button-driven software prototype written in SuperCard (trademark). Human factors analysts are provided with a predefined set of options typically associated with postural assessments and human performance issues. Once options have been selected, the program is used to evaluate working posture and dynamic tasks from video footage. PVAT has been used to evaluate postures from Orbiter missions, as well as from experimental testing of prototype glove box designs. PVAT can be used for video analysis in a number of industries with little or no modification. It can contribute to various aspects of workplace design, such as training, task allocations, procedural analyses, and hardware usability evaluations. The major advantage of the video analysis approach is the ability to gather data non-intrusively in restricted-access environments, such as emergency and operating rooms, contaminated areas, and control rooms. Video analysis also provides the opportunity to conduct preliminary evaluations of existing work areas.

  9. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2001-01-01

    A framework for video content classification using a knowledge-based approach is proposed. The approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system in the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. The current rule set classifies input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method to combine evidence and to handle uncertainty in the features and in the classification results. Good results in a preliminary experiment demonstrate the validity of the proposed approach.
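
The MYCIN-style evidence combination the authors borrow can be illustrated with the classic certainty-factor update rule. This is a generic sketch of MYCIN's combining function, not code from the described CLIPS system, and the example rule values are invented.

```python
def combine_cf(cf1: float, cf2: float) -> float:
    """Combine two MYCIN-style certainty factors, each in [-1, 1]."""
    if cf1 >= 0 and cf2 >= 0:
        # Both support the hypothesis: second factor fills part of the remaining doubt.
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        # Both disconfirm: symmetric rule on the negative side.
        return cf1 + cf2 * (1 + cf1)
    # Conflicting evidence: dampened sum.
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Hypothetical: a color rule (0.4) and a motion rule (0.6) both suggest "commercial".
print(combine_cf(0.4, 0.6))        # ~0.76
print(combine_cf(0.8, -0.4))       # conflicting rules pull the belief back down
```

The combined value never exceeds 1, and the order in which positive evidence arrives does not matter, which is why this rule suits incremental firing of independent rules.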

  11. Relationships between media use, body fatness and physical activity in children and youth: a meta-analysis.

    PubMed

    Marshall, S J; Biddle, S J H; Gorely, T; Cameron, N; Murdey, I

    2004-10-01

    To review the empirical evidence of associations between television (TV) viewing, video/computer game use and (a) body fatness and (b) physical activity, a meta-analysis was conducted. Published English-language studies were located from computerized literature searches, bibliographies of primary studies and narrative reviews, and manual searches of personal archives. Included studies presented at least one empirical association between TV viewing or video/computer game use and body fatness or physical activity among samples of children and youth aged 3-18 y. The main outcome measure was the mean sample-weighted corrected effect size (Pearson r). Based on data from 52 independent samples, the mean sample-weighted effect size between TV viewing and body fatness was 0.066 (95% CI=0.056 to 0.078; total N=44,707). The sample-weighted fully corrected effect size was 0.084. Based on data from six independent samples, the mean sample-weighted effect size between video/computer game use and body fatness was 0.070 (95% CI=-0.048 to 0.188; total N=1,722). The sample-weighted fully corrected effect size was 0.128. Based on data from 39 independent samples, the mean sample-weighted effect size between TV viewing and physical activity was -0.096 (95% CI=-0.112 to -0.080; total N=141,505). The sample-weighted fully corrected effect size was -0.129. Based on data from 10 independent samples, the mean sample-weighted effect size between video/computer game use and physical activity was -0.104 (95% CI=-0.128 to -0.080; total N=119,942). The sample-weighted fully corrected effect size was -0.141. A statistically significant relationship exists between TV viewing and body fatness among children and youth, although it is likely too small to be of substantial clinical relevance. The relationship between TV viewing and physical activity is small but negative. The strength of these relationships remains virtually unchanged even after correcting for common sources of bias known to impact study outcomes. While the total amount of time per day engaged in sedentary behavior is inevitably prohibitive of physical activity, media-based inactivity may be unfairly implicated in recent epidemiologic trends of overweight and obesity among children and youth. Relationships between sedentary behavior and health are unlikely to be explained using single markers of inactivity, such as TV viewing or video/computer game use.
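
The "sample-weighted" effect sizes reported above follow the standard pattern of weighting each study's correlation by its sample size. A minimal sketch, using made-up study values rather than the studies in the review:

```python
import numpy as np

# Hypothetical per-study correlations and sample sizes (not from the review):
r = np.array([0.05, 0.08, 0.10])
n = np.array([1000, 3000, 500])

# Sample-weighted mean effect size: each study's r weighted by its N,
# so large samples dominate the pooled estimate.
weighted_r = np.sum(n * r) / np.sum(n)
print(round(weighted_r, 4))  # -> 0.0756
```

Meta-analyses often apply a Fisher z-transform to each r before pooling and correct for artifacts such as measurement unreliability (the review's "fully corrected" values); the weighting idea stays the same.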

  12. Mindfulness training for smokers via web-based video instruction with phone support: a prospective observational study.

    PubMed

    Davis, James M; Manley, Alison R; Goldberg, Simon B; Stankevitz, Kristin A; Smith, Stevens S

    2015-03-29

    Many smokers are unable to access effective behavioral smoking cessation therapies due to location, financial limitations, schedule, transportation issues, or other reasons. We report results from a prospective observational study in which a promising novel behavioral intervention, Mindfulness Training for Smokers, was provided via web-based video instruction with telephone-based counseling support. Data were collected on 26 low-socioeconomic-status smokers. Participants were asked to watch eight video-based classes describing mindfulness skills and how to use these skills to overcome core challenges in tobacco dependence. Participants received eight weekly phone calls from a smoking cessation coach who provided general support and answered questions about the videos. On the quit day, participants received two weeks of nicotine patches. Participants were a mean of 40.5 years of age and smoked a mean of 16.31 cigarettes per day for 21.88 years, with a mean of 6.81 prior failed quit attempts. Participants completed a mean of 5.55 of 8 online video classes with a mean of 23.33 minutes per login, completed a mean of 3.19 of 8 phone coach calls, and reported a mean meditation practice time of 12.17 minutes per day. Smoking abstinence was defined as self-reported abstinence on a smoking calendar with biochemical confirmation via a carbon monoxide breath test under 7 parts per million. Intent-to-treat analysis demonstrated 7-day point-prevalence smoking abstinence at 4 and 6 months post-quit of 23.1% and 15.4%, respectively. Participants showed a significant pre- to post-intervention increase in mindfulness as measured by the Five-Factor Mindfulness Questionnaire, and a significant pre- to post-intervention decrease on the Anxiety subscale of the Depression Anxiety and Stress Scale. Results suggest that Mindfulness Training for Smokers can be provided via web-based video instruction with phone support, yielding reasonable participant engagement, and that intervention efficacy and mechanism of effect deserve further study. ClinicalTrials.gov: NCT02164656, registration date June 13, 2014.

  13. HealthRecSys: A semantic content-based recommender system to complement health videos.

    PubMed

    Sanchez Bocanegra, Carlos Luis; Sevillano Ramos, Jose Luis; Rizo, Carlos; Civit, Anton; Fernandez-Luque, Luis

    2017-05-15

    The Internet, and its popularity, continues to grow at an unprecedented pace. Watching videos online is very popular: it is estimated that 500 hours of video are uploaded to YouTube, a video-sharing service, every minute and that, by 2019, video formats will comprise more than 80% of internet traffic. Health-related videos are very popular on YouTube, but their quality is always a matter of concern. One approach to enhancing the quality of online videos is to provide additional educational health content, such as websites, to support health consumers. This study investigates the feasibility of building a content-based recommender system that links health consumers to reputable health educational websites from MedlinePlus for a given health video from YouTube. The dataset for this study includes a collection of health-related videos and their available metadata. Semantic technologies (such as SNOMED-CT and Bio-ontology) were used to recommend health websites from MedlinePlus. A total of 26 health professionals participated in evaluating 253 recommended links for a total of 53 videos about general health, hypertension, or diabetes. The relevance of the recommended health websites from MedlinePlus to the videos was measured using information retrieval metrics such as the normalized discounted cumulative gain and precision at K. The majority of websites recommended by our system for health videos were relevant, based on ratings by health professionals. The normalized discounted cumulative gain was between 46% and 90% for the different topics. Our study demonstrates the feasibility of using a semantic content-based recommender system to enrich YouTube health videos. Evaluation with end users, in addition to healthcare professionals, will be required to identify the acceptance of these recommendations in a nonsimulated information-seeking context.
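
The normalized discounted cumulative gain (NDCG) used above to score the recommendations can be sketched as follows; the relevance ratings in the example are hypothetical, not from the study's evaluation.

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k results: higher ranks count more."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """DCG normalized by the DCG of the ideal (descending) ordering, so 1.0 is perfect."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Hypothetical 1-5 relevance ratings for five recommended links, in ranked order:
ratings = [5, 3, 4, 1, 2]
print(round(ndcg_at_k(ratings, 5), 3))  # -> 0.983
```

Because the discount is logarithmic, swapping two mid-list items barely moves the score, while burying a highly relevant link at the bottom penalizes it sharply.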

  14. Music Video: An Analysis at Three Levels.

    ERIC Educational Resources Information Center

    Burns, Gary

    This paper is an analysis of the different aspects of the music video. Music video is defined as having three meanings: an individual clip, a format, or the "aesthetic" that describes what the clips and format look like. The paper examines interruptions, the dialectical tension and the organization of the work of art, shot-scene…

  15. Facilitating Video Analysis for Teacher Development: A Systematic Review of the Research

    ERIC Educational Resources Information Center

    Baecher, Laura; Kung, Shiao-Chuan; Ward, Sarah Laleman; Kern, Kimberly

    2018-01-01

    Video analysis of classroom practice as a tool in teacher professional learning has become ever more widely used, with hundreds of articles published on the topic over the past decade. When designing effective professional development for teachers using video, facilitators turn to the literature to identify promising approaches. This article…

  16. Experience of Adult Facilitators in a Virtual-Reality-Based Social Interaction Program for Children with Autism

    ERIC Educational Resources Information Center

    Ke, Fengfeng; Im, Tami; Xue, Xinrong; Xu, Xinhao; Kim, Namju; Lee, Sungwoong

    2015-01-01

    This phenomenological study explored and described the experiences and perceptions of adult facilitators who facilitated virtual-reality-based social interaction for children with autism. Extensive data were collected from iterative, in-depth interviews; online activities observation; and video analysis. Four salient themes emerged through the…

  17. Evaluating a Web-Based Video Corpus through an Analysis of User Interactions

    ERIC Educational Resources Information Center

    Caws, Catherine G.

    2013-01-01

    As shown by several studies, successful integration of technology in language learning requires a holistic approach in order to scientifically understand what learners do when working with web-based technology (cf. Raby, 2007). Additionally, a growing body of research in computer assisted language learning (CALL) evaluation, design and…

  18. Analysis and Visualization of 3D Motion Data for UPDRS Rating of Patients with Parkinson's Disease.

    PubMed

    Piro, Neltje E; Piro, Lennart K; Kassubek, Jan; Blechschmidt-Trapp, Ronald A

    2016-06-21

    Remote monitoring of Parkinson's Disease (PD) patients with inertial sensors is a relevant method for better assessment of symptoms. We present a new approach to symptom quantification based on motion data: automatic Unified Parkinson Disease Rating Scale (UPDRS) classification combined with an animated 3D avatar that gives the neurologist the impression of having the patient live in front of them. In this study we compared UPDRS ratings of the pronation-supination task derived from: (a) an examination based on video recordings, as a clinical reference; (b) automatic classification; and (c) assessment of the animated 3D avatar. Data were recorded using Magnetic, Angular Rate, Gravity (MARG) sensors with 15 subjects performing a pronation-supination movement of the hand. After preprocessing, the data were classified with a J48 classifier and animated as a 3D avatar. The video recordings of the movements, as well as the 3D avatar, were examined by movement disorder specialists and rated on the UPDRS. The mean agreement between the video-based ratings and (b) the automatically classified UPDRS is 0.48, and between the video-based ratings and (c) the 3D avatar it is 0.47. The 3D avatar is thus similarly suitable to video recordings for assessing the UPDRS on the examined task and will be further developed by the research team.
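
The abstract reports mean agreement between raters without naming the statistic; Cohen's kappa is one common chance-corrected choice for categorical scales like UPDRS items. A minimal sketch on hypothetical ratings (the values below are invented, not the study's data):

```python
import numpy as np

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters' categorical labels."""
    a, b = np.asarray(a), np.asarray(b)
    labels = np.union1d(a, b)
    po = np.mean(a == b)                                           # observed agreement
    pe = sum(np.mean(a == l) * np.mean(b == l) for l in labels)    # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical UPDRS item ratings (0-4): video-based rater vs. automatic classifier.
video = [0, 1, 1, 2, 2, 3, 0, 1]
auto  = [0, 1, 2, 2, 1, 3, 0, 1]
print(round(cohens_kappa(video, auto), 2))  # -> 0.65
```

Raw percent agreement overstates reliability when a few scores dominate; subtracting the chance term `pe` is what makes kappa a fairer summary for ordinal clinical ratings.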

  19. Deep learning architecture for recognition of abnormal activities

    NASA Astrophysics Data System (ADS)

    Khatrouch, Marwa; Gnouma, Mariem; Ejbali, Ridha; Zaied, Mourad

    2018-04-01

    Video surveillance is one of the key areas of computer vision research. The scientific challenge in this field is the implementation of automatic systems that obtain detailed information about the behaviors of individuals and groups. In particular, detecting abnormal movements of groups or individuals requires fine-grained analysis of the frames in a video stream. In this article, we propose a new method to detect anomalies in crowded scenes. We categorize the video in a supervised mode, combined with unsupervised learning based on the autoencoder principle. To construct an informative concept for recognizing these behaviors, we use a representation technique based on the superposition of human silhouettes. Evaluation on the UMN dataset demonstrates the effectiveness of the proposed approach.
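
The reconstruction-error principle behind autoencoder-based anomaly detection can be sketched with a linear stand-in (PCA) in plain NumPy. The features, threshold rule, and data below are illustrative assumptions, not the paper's silhouette-based model: frames whose features the model cannot reconstruct well are flagged as abnormal.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Normal" frames: feature vectors lying near a 1-D subspace (a toy stand-in
# for silhouette features of ordinary crowd motion).
normal = np.outer(rng.normal(size=200), [1.0, 2.0]) + 0.05 * rng.normal(size=(200, 2))

# Fit a linear autoencoder (PCA): encode onto the top principal component, decode back.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
pc = vt[:1]                                    # 1 x 2 principal direction

def reconstruction_error(x):
    code = (x - mean) @ pc.T                   # encode
    recon = code @ pc + mean                   # decode
    return np.linalg.norm(x - recon, axis=-1)

# Threshold from the training (normal-only) errors; the 1.5 margin is an assumption.
threshold = reconstruction_error(normal).max() * 1.5

anomaly = np.array([2.0, -1.0])                # motion pattern off the normal subspace
print(reconstruction_error(anomaly) > threshold)  # -> True
```

A nonlinear autoencoder replaces the projection with learned encoder/decoder networks, but the detection rule is the same: train only on normal behavior and treat large reconstruction error as an anomaly.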

  20. Development and Assessment of a Chemistry-Based Computer Video Game as a Learning Tool

    ERIC Educational Resources Information Center

    Martinez-Hernandez, Kermin Joel

    2010-01-01

    The chemistry-based computer video game is a multidisciplinary collaboration between chemistry and computer graphics and technology fields developed to explore the use of video games as a possible learning tool. This innovative approach aims to integrate elements of commercial video game and authentic chemistry context environments into a learning…
